Mozilla Nederland

The Dutch Mozilla community

Sam Foster: Haiku Reflections: Experiences in Reality

Mozilla planet - ti, 04/04/2017 - 23:34

Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.

This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.

The problem with navigating virtual worlds

One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years, web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.

Imagined space, Hackers-style

As I write this, the display in front of me shows me content framed by a website, which is framed by my browser's UI, which is framed by the operating system's window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…

The flexibility this arrangement brings is truly incredible. But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.

Permission to uni-task

Assigning a single function to a thing - when the capability exists to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking; specialized and general. A synthesizer and an electric piano share origins and overlap in functions, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smart phones, wrist watches still provide a function, and project status and values. There’s a pragmatism and attractive simplicity to dedicating a single task to an object we use. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid into the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.

The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.

Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I'm in the kitchen. But when I'm downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I'm in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I'm at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It's true that the smart phone started as a huge convenience by un-tethering us from the desk to carry our access to information - and its access to us - with us. But, by doing so, we lost some of the ability to walk away; to step out from a conversation or leave work behind.

A concept rendering using one of the proposed form-factors for the Haiku device

Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their home and in their lives, we saw again and again how poor a fit the best of today's solutions were. What began as empowering and liberating has started to infringe on people's freedom to choose how to spend their time.

When I'm spending time on my computer, it's just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.

This is a quote from one of our early interviews. It was a refreshing perspective and sentiments like this - as well as the moments of joy and connectedness that we saw were possible - that helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Mozilla Releases Version 2.4 of CA Certificate Policy

Mozilla planet - ti, 04/04/2017 - 22:10

Mozilla has released version 2.4.1 of Mozilla’s CA Certificate Policy and sent a CA Communication to inform Certification Authorities (CAs) who have root certificates included in Mozilla’s program about new program requirements. Mozilla’s CA Certificate Program governs inclusion of root certificates in Network Security Services (NSS), a set of open source libraries designed to support cross-platform development of security-enabled client and server applications. The NSS root certificate store is not only used in Mozilla products such as the Firefox browser, but is also used by other companies and open-source projects in a variety of applications.

The changes of note in Mozilla’s CA Certificate Policy are as follows:

  • In addition to audit statements, the Certificate Policy (CP) and Certification Practice Statement (CPS) documents need to be submitted to Mozilla each year.
  • As of June 1, 2017, the audit, CP, and CPS documents must be provided in English, translated if necessary.
  • All submitted documentation must be openly licensed (see the policy for the exact options and terms).
  • Version 2.4 of Mozilla’s CA Certificate Policy incorporates by reference the Common CCADB Policy and the Mozilla CCADB Policy.
  • The new Common CA Database (CCADB) Policy makes official a number of existing expectations regarding the CCADB.
  • The applicable versions of some audit criteria have been updated.
  • There are additional requirements on OCSP responses.
  • 64 bits of entropy is required in certificate serial numbers.

The differences in Mozilla’s CA Certificate Policy between versions 2.4 and 2.3 (published December 2016), and between versions 2.4 and 2.2 (published July 2013) may be viewed on Github. Version 2.4.1 contains exactly the same normative requirements as version 2.4 but has been completely reorganized.

The CA Communication has been emailed to the Primary Point of Contact (POC) for each CA in Mozilla’s program, and they have been asked to respond to 14 action items. The full set of action items can be read here. Responses to the survey will be automatically and immediately published via the Common CA Database.

In addition to responding to the action items, we are informing CAs that we are instituting a program requirement that they follow discussions in the mozilla.dev.security.policy forum, which includes discussions about upcoming changes to Mozilla’s CA Certificate Policy, questions and clarification about policy and expectations, root certificate inclusion/change requests, and certificates that are found to be non-compliant with the CA/Browser Forum’s Baseline Requirements or other program requirements. CAs are not required to contribute to those discussions, only to be aware of them. However, we hope CAs will participate and help shape the future of Mozilla’s CA Certificate Program.

With this CA Communication, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

Categorieën: Mozilla-nl planet

Code Simplicity: Effective Engineering Productivity

Mozilla planet - ti, 04/04/2017 - 21:00

Often, people who work on engineering productivity either come into conflict with the developers they are attempting to help, or spend a long time working on some project that ends up not mattering because nobody actually cares about it.

This comes about because the problem that you see that a development team has is not necessarily the problem that they know exists. For example, you could come into the team and see that they have hopelessly complex code and so they can’t write good tests or maintain the system easily. However, the developers aren’t really that aware that they have complex code or that this complexity is causing the trouble that they are having. What they are aware of is something like, “we can only release once a month and the whole team has to stay at work until 10:00 PM to get the release out on the day that we release.”

When engineering productivity workers encounter this situation, some of them just try to ignore the developers’ complaints and just go start refactoring code. This doesn’t really work, for several reasons. The first is that both management and some other developers will resist you, making it more difficult than it needs to be to get the job done. But if just simple resistance was the problem, you could overcome it. The real problem is that you will become unreal and irrelevant to the company, even if you’re doing the best job that anybody’s ever seen. Your management will try to dissuade you from doing your job, or even try to get rid of you. When you’re already tackling technical complexity, you don’t need to also be tackling a whole company that’s opposed to you.

In time, many engineering productivity workers develop an adversarial attitude toward the developers that they are working with. They feel that if the engineers would “just use the tool that I wrote” then surely all would be well. But the developers aren’t using the tool that you wrote, so why does your tool even matter? The problem here is that when you start off ignoring developer complaints (or don’t even find out what problems developers think they have) that’s already inherently adversarial. That is, it’s not that everything started off great and then somehow became this big conflict. It actually started off with a conflict by you thinking that there was one problem and the developers thinking there was a different problem.

And it's not just that the company will be resistant—this situation is also highly demoralizing to the individual engineering productivity worker. In general, people like to get things done. They like for their work to have some result, to have some effect. If you do a bunch of refactoring but nobody maintains the code's simplicity, or you write some tool/framework that nobody uses, then ultimately you're not really doing anything, and that's disheartening.

So what should you do? Well, we’ve established that if you simply disagree with (or don’t know) the problem that developers think they have, then you’ll most likely end up frustrated, demoralized, and possibly even out of a job. So what’s the solution? Should you just do whatever the developers tell you to do? After all, that would probably make them happy and keep you employed and all that.

Well, yes, you will accomplish that (keeping your job and making some people happy)…well, maybe for a little while. You see, this approach is actually very shortsighted. If the developers you are working with knew exactly how to resolve the situation they are in, it’s probable that they would never have gotten themselves into it in the first place. That isn’t always true—sometimes you’re working with a new group of people who have taken over an old codebase, but in that case then usually this new group is the “productivity worker” that I’m talking about, or maybe you are one of these new developers. Or some other situation. But even then, if you only provide the solutions that are suggested to you, you’ll end up with the same problems that I describe in Users Have Problems, Developers Have Solutions. That is, when you work in developer productivity, the developers are your users. You can’t just accept any suggestion they have for how you should implement your solutions. It might make some people happy for a little while, but you end up with a system that’s not only hard to maintain, it also only represents the needs of the loudest users—who are probably not the majority of your users. So then you have a poorly-designed system that doesn’t even have the features its actual users want, which once again leads to you not getting promoted, being frustrated, etc.

Also, there’s a particular problem that happens in this space with developer productivity. If you only provide the solutions that developers specify, you usually never get around to resolving the actual underlying problems. For example, if the developers think the release of their 10-million-lines-of-code monolithic binary is taking too long, and you just spend all your time making the release tools faster, you’re never going to get to a good state. You might get to a better state (somewhat faster releases) but you’ll never resolve the real problem, which is that the binary is just too damn large.

So what, then? Not doing what they say means failing, and doing what they say means only mediocre success. Where’s the middle ground here?

The correct solution is very similar to Users Have Problems, Developers Have Solutions, but it has a few extra pieces. Using this method, I have not only solved significant underlying problems in vast codebases, I have actually changed the development culture of significant engineering organizations. So it works pretty well, when done correctly.

The first thing to do is to find out what problems the developers think they have. Don’t make any judgments. Go around and talk to people. Don’t just ask the managers or the senior executives. They usually say something completely different from what the real software engineers say. Go around and talk to a lot of people who work directly on the codebase. If you can’t get everybody, get the technical lead from each team. And then yes, also do talk to the management, because they also have problems that you want to address and you should understand what those are. But if you want to solve developer problems, you have to find out what those problems are from developers.

There’s a trick that I use here during this phase. In general, developers aren’t very good at saying where code complexity lies if you just ask them directly. Like, if you just ask, “What is too complex?” or “What do you find difficult?” they will think for a bit and may or may not come up with anything. But if you ask most developers for an emotional reaction to the code that they work on or work with, they will almost always have something. I ask questions like, “Is there some part of your job that you find really annoying?” “Is there some piece of code that’s always been frustrating to work with?” “Is there some part of the codebase that you’re afraid to touch because you think you’ll break it?” And to managers, “Is there some part of the codebase that developers are always complaining about?” You can adjust these questions to your situation, and remember that you want to be having a real conversation with developers—not just robotically reading off a list of questions. They are going to say things that you’re going to want more specifics on. You’ll probably want to take notes, and so forth.

After a while of doing this, you’ll start to get the idea that there is a common theme (or a few common themes) between the complaints. If you’ve read my book or if you’ve worked in engineering productivity for a while, you’ll usually realize that the real underlying cause of the problems is some sort of code complexity. But that’s not purely the theme we’re looking for—we could have figured that out without even talking to anybody. We’re looking for something a bit higher level, like “building the binary is slow.” There might be several themes that come up.

Now, you’ll have a bunch of data, and there are a few things you can do with it. Usually engineering management will be interested in some of this information that you’ve collected, and presenting it to them will make you real to the managers and hopefully foster some agreement that something needs to be done about the problem. That’s not necessary to do as part of this solution, but sometimes you’ll want to do it, based on your own judgment of the situation.

The first thing you should do with the data is find some problem that developers know they have, that you know you can do something about in a short period of time (like a month or two) and deliver that solution. This doesn’t have to be life-changing or completely alter the way that everybody works. In fact, it really should not do that. Because the point of this change is to make your work credible.

When you work in engineering productivity, you live or die by your personal credibility.

You see, at some point you need to be able to get down to the real problem. And the only way that you’re going to be able to do that is if the developers find you credible enough to believe you and trust you when you want to make some change. So you need to do something at first to become credible to the team. It’s not some huge, all-out change. It’s something that you know you can do, even if it’s a bit difficult. It helps if it’s something that other people have tried to do and failed, because then you also demonstrate that in fact something can be done about this mess that other people perhaps failed to handle (and then everybody felt hopeless about the whole thing and just decided they’d have to live with the mess forever, and it can’t be fixed and blah blah blah so on and so on).

Once you’ve established your basic credibility by handling this initial problem, then you can start to look at what problem the developers have and what you think the best solution to that would be. Now, often, this is not something you can implement all at once. And this is another important point—you can’t change everything about a team’s culture or development process all at once. You have to do it incrementally, deal with the “fallout” of the change (people getting mad because you changed something, or because it’s all different now, or because your first iteration of the change doesn’t work well) and wait for that to calm down before moving on to the next step. If you tried to change everything all at once, you’d essentially have a rebellion on your hands—a rebellion that would result in the end of your credibility and the failure of all your efforts. You’d be right back in the same pit that the other two, non-working solutions from above end you up in—being demoralized or ineffective. So you have to work in steps. Some teams can accept larger steps, and some can only accept smaller ones. Usually, the larger the team, the more slowly you have to go.

Now, sometimes at this point you run into somebody who is such a curmudgeon that you just can't seem to make forward progress. Sometimes there is some person who is very senior who is either very set in their ways or just kind of crazy. (You can usually tell the latter because the crazy ones are frequently insulting or rude.) How much progress you can make in this case depends partly on your communication skills, partly on your willingness to persist, but also partly on how you go about resolving this situation. In general, what you want to do is find your allies and create a core support group for the efforts you are making. Almost always, the majority of developers want sanity to prevail, even if they aren't saying anything.

Just being publicly encouraging when somebody says they want to improve something goes a long way. Don’t demand that everybody make the perfect change—you’re gathering your “team” and validating the idea that code cleanup, productivity improvements, etc. are valuable. And you have something like a volunteer culture or an open-source project—you have to be very encouraging and kind in order to foster its growth. That doesn’t mean you should accept bad changes, but if somebody wants to make things better, then you should at least acknowledge them and say that’s great.

Sometimes 9 out of 10 people all want to do the right thing, but they are being overruled by the one loud person who they feel they must bow down to or respect beyond rationality, for some reason. So you basically do what you can with the group of people who do support you, and make the progress that you can make that way. Usually, it’s actually even possible to ignore the one loud person and just get on with making things better anyway.

If you ultimately get totally stopped by some senior person, then either (a) you didn’t go about this the right way (meaning that you didn’t follow my recommendations above, there’s some communication difficulty, you’re genuinely trying to do something that would be bad for developers, etc.) or (b) the person stopping you is outright insane, no matter how “normal” they seem.

If you're blocked because you're doing the wrong thing, then figure out what would help developers the most and do that instead. Sometimes this is as simple as doing a better job of communicating with the person who's blocking you. Like, for example, stop being adversarial or argumentative, but listen to what the person has to say and see if you can work with them. Being kind, interested, and helpful goes a long way. But if it's not that, and you're being stopped by a crazy person, and you can't make any progress even with your supporters, then you should probably find another team to work with. It's not worth your sanity and happiness to go up against somebody who will never listen to reason and who is dead set on stopping you at all costs. Go somewhere where you can make a difference in the world rather than hitting your head up against a brick wall forever.

That’s not everything there is to know about handling that sort of situation with a person who’s blocking your work, but it gives you the basics. Persist, be kind, form a group of your supporters, don’t do things that would cause you to lose credibility, and find the things that you can do to help. Usually the resistance will crumble slowly over time, or the people who don’t like things getting better will leave.

So let’s say that you are making progress improving productivity by incremental steps, and you are in some control over any situations that might stop you. Where do you go from there? Well, make sure that you’re moving towards the fundamental problem with your incremental steps. At some point, you need to start changing the way that people write software in order to solve the problem. There is a lot to know about this, which I’ve either written up before or I’ll write up later. But at some point you’re going to need to get down to simplifying code. When do you get to do that? Usually, when you’ve incrementally gotten to the point where there is a problem that you can credibly indicate refactoring as part of the solution to. Don’t promise the world, and don’t say that you’re going to start making a graph of improved developer productivity from the refactoring work that you are going to do. Managers (and some developers) will want various things from you, sometimes unreasonable demands born out of a lack of understanding of what you do (or sometimes from the outright desire to block you by placing unreasonable requirements on your work). No, you have to have some problem where you can say, “Hey, it would be nice to refactor this piece of code so that we can write feature X more easily,” or something like that.

From there, you keep proposing refactorings where you can. This doesn’t mean that you stop working on tooling, testing, process, etc. But your persistence on refactoring is what changes the culture the most. What you want is for people to think “we always clean up code when we work on things,” or “code quality is important,” or whatever it takes to get the culture that you want.

Once you have a culture where things are getting better rather than getting worse, the problem will tend to eventually fix itself over time, even if you don’t work on it anymore. This doesn’t mean you should stop at this point, but at the worst, once everybody cares about code quality, testing, productivity, etc. you’ll see things start to resolve themselves without you having to be actively involved.

Remember, this whole process isn’t about “building consensus.” You’re not going for total agreement from everybody in the group about how you should do your job. It’s about finding out what people know is broken and giving them solutions to that, solutions that they can accept and which improve your credibility with the team, but also solutions which incrementally work toward resolving the real underlying problems of the codebase, not just pandering to whatever developer need happens to be the loudest at the moment. If you had to keep only one thing in mind, it’s:

Solve the problems that people know they have, not the problems you think they have.

One last thing that I’ll point out, is that I’ve talked a lot about this as though you were personally responsible for the engineering productivity of a whole company or a whole team. That’s not always the case—in fact, it’s probably not the case for most people who work in engineering productivity. Some people work on a smaller part of a tool, a framework, a sub-team, etc. This point about solving the problems that are real still applies. Actually, probably most of what I wrote above can be adapted to this particular case, but the most important thing is that you not go off and solve the problem that you think developers have, but that instead you solve a problem that (a) you can prove exists and (b) that the developers know exists. Many of the engineering productivity teams that I’ve worked with have violated this so badly that they have spent years writing tools or frameworks that developers didn’t want, never used, and which the developers actually worked to delete when the person who designed them was gone. What a pointless waste of time!

So don’t waste your time. Be effective. And change the world.

-Max

Categorieën: Mozilla-nl planet

Sam Foster: Reflections on Project Haiku

Mozilla planet - ti, 04/04/2017 - 20:40

I’ve written before on this blog about my current project with Mozilla’s Connected Devices group: Project Haiku. Last week, after close to 9 months of exploration, prototyping and refinement this project was put on hold indefinitely.

So I wanted to take this opportunity - a brief lull before I get caught up in my next project - to reflect on the work and many ideas that Project Haiku produced. There are several angles to look at it from, so I’ll break it down into separate blog posts. In this post I’ll provide a background of the what, when and why as a simple chronological story of the project from start to finish.

Other posts in this series:

Phase 0: Are we solving the right problem?

Back in March 2016, with Firefox OS winding down and most of that team off exploring the field of IoT and the smart home, Liz proposed a vision for a project that would tackle smart home problems in a way that was more grounded in human experience and recognized the diversity of our requirements from technology and our need to have it reflect our values - both aesthetically and practically. I had been experimenting with ideas like the smart mirror and this human-centric direction resonated with me. A team gathered around her proposal and we started digging.

It quickly became clear that the "smart home" box wasn't a useful constraint. Connecting things around the home in a way that felt valuable and reflective of the principles we'd identified for this project was proving elusive. So we stepped back and did some design thinking: are we asking the right question? What do people really want from technology in the context of the home? And which people are we talking about? This led us to a study in which we interviewed a set of teens and retirement-age folks on themes of freedom and independence in the home. You can find more details on the study here.

Connecting people

Of the themes that emerged from this study, we chose to focus on that of connecting people. We saw the same needs repeated over and over: people wanted to share moments, to maintain a presence in each other's lives. At the same time there was a sense of loss of control and growing obligation from smart phones and social media; being spread too thin. Over the next few months, we built test devices to better understand this problem, and conducted further studies, eventually arriving at a simple wearable device that would show real-time status for a small group of friends and family.

Wearable mockup

We were happy to see that a few other companies had arrived at similar conclusions - taking their own journey to get to this point. Products like Ringly and the Goodnight Lamp embodied some of the same thinking. Our idea for a wearable product was very much informed by Mozilla's ethos and mission. In this simple device we were going to implement what amounted to a simple wearable web client capable of monitoring a handful of URLs and "displaying" the changing values supplied by those endpoints as visual light patterns and haptic feedback. We would bring the Mozilla Manifesto (https://www.mozilla.org/about/manifesto/) to the world of connected wearables, and bring both peace of mind and small moments of joy to young people at a time when many in the industry seem intent on exploiting their Fear of Missing Out, and are sometimes cavalier in their handling of privacy and data ownership.

Stumbling and a change in direction

Getting to grips with what it would take to produce this device and re-building momentum lost over the summer break had cost us though. Just as this picture came into focus and we started to take the next steps in the plan, the team was called to account. Our enthusiasm and confidence in the product was not shared by the innovation board. There was some skepticism of our premise - that our audience of teenage girls would want such a thing - despite the research we had done. And there were concerns about our ability to contain the cost and complexity implicit in the small, wearable form-factor. Given the finite resources available to the Connected Devices group and the ambitions of our project relative to the experience and expertise available to us, from the outside it looked like we were heading off into the weeds.

At the same time, another team had concluded an exploratory project with an outside agency and had produced a report echoing many of the needs and values Haiku had identified. They had proposed a (non-mobile) device for the home which would facilitate communication and sharing between friends and family. We decided to put aside the wearable and pick up where this report left off. I've written already about some of this work. We produced a "learning prototype" to home in some more on what people wanted from a device like this, and where we could have the most impact. We adopted a new target audience and use case - communication between kids and grandparents - and assessed priorities and features. We did some technical exploration and landed on what was essentially a WebRTC application, running on an embedded Linux device. The WebRTC architecture was a great fit: private and secure by default, with no need to store or pass personal communications through Mozilla's servers. Each connection is point-to-point and the very personal and private content implicit in the use cases would always be encrypted. With little to no data to store, an open-source codebase for client and services and a minimum of setup, we could minimize the risk/threat of lock-in for the device owner.

Meanwhile, we had questions. How might this device be used? What kinds of messages would these people want to send? Should we store missed messages? Is the device portable or not? We knocked together another prototype, this time using left-over phones from FxOS days to gather data and feedback from a set of grandparents and grandchildren over a couple of week-long studies.

Fleshing out the idea

The culmination of this work was a product definition that included the user market and use cases, the features and principles, as well as details on what we would need to implement and how. We had landed on a concept for a device and service that would give grandparents and their grandchildren an easy, one-touch experience to share moments using audio or emoji. The child would have a dedicated connected device, explicitly and exclusively paired to an app installed on the grandparent’s phone. We observed a magical thing emerging from the simplicity and directness of the experience: kids were able to carry out “conversations” without any assistance from their parents; they could own their relationship with distant loved ones. This was the real value proposition. Project Haiku wasn’t presenting a technical breakthrough as-such, but taking existing technology and fostering joy, confidence and agency using open standards and the infrastructure of the web.

Early device design concept by our industrial design contractor

The process we follow in Connected Devices has a “Gate 1” milestone in which for a project to move forward, it should present a clear picture of what the product will be, demonstrate viability and a market fit, and detail what it will take to get there. It is evaluated against these and other criteria including alignment with the Mozilla mission, and alignment with the collective vision for Connected Devices. In December we presented to the board and found out later that week that we had met the criteria and passed Gate 1. However…

Back-burnered

The "however" was about resources and priorities: people, money and time. We simply couldn't pursue each of the products at this time. In the context of the emerging game plan for Connected Devices, Haiku was not a high priority and other projects that were, were hurting for lack of people to work on them. So Project Haiku is on the back-burner. It's possible, though unlikely, that we'll be able to revisit it and pick development back up later this year. In the meantime, the best we can do is to ensure that work and findings from this project are well documented so the organization and the community have the opportunity to learn what we learned.

To that end, I’ll be putting my thoughts to paper on this blog on a series of topics which Project Haiku touched. As usual with Mozilla, all our code and documents are publicly available. Please find me on IRC in the #haiku channel (irc.mozilla.com) as sfoster, or through my mozilla or personal email (sfoster at mozilla, sam at sam-i-am.com) if you have any questions. I’m also on twitter etc. as samfosteriam

Categorieën: Mozilla-nl planet

Air Mozilla: Webdev Extravaganza: April 2017

Mozilla planet - ti, 04/04/2017 - 19:00

 April 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: “Build Your Own WebExtension Add-on” Campaigns Around the World

Mozilla planet - ti, 04/04/2017 - 17:45

We recently partnered with the Mozilla Open Innovation team to launch an activity that would introduce developers to WebExtensions and guide them through the experience of creating new add-ons with the APIs. The “Build Your Own WebExtension Add-on For Firefox” activity launched in February as part of Mozilla’s Activate campaign to mobilize Mozillians around the world to have impact in key areas of the organization’s mission. This activity will run until the end of 2017.

Mozilla communities in Tamilnadu, Switzerland, and Brazil answered the call-to-action and recently hosted events using the Activate curriculum. To date, 54 people have attended these events, and participants have submitted seven new add-ons to addons.mozilla.org. (If you are curious to see what they have built, take a look at this collection on AMO.)

If you’re interested in hosting an event, read on to find out how our communities have organized their events, and what they would recommend for best practices!

Tamilnadu

Viswaprasanth Ks has been a passionate member of the add-ons community since he started contributing to Mozilla in 2012. He recently led an add-ons track at the Tamilnadu community’s 24 Hour Hackathon, where 25 participants brainstormed and created their own extensions to solve real-world problems.

What we learned

Encourage participants to learn JavaScript and have them start learning extension development from the mdn-web extension repo, recommends Viswaprasanth. Those with less familiarity with HTML and JavaScript might need additional support to complete the activity. Plus, the examples listed in the mdn-web extension repo have been carefully evaluated as being good starting places for beginning developers.

Picture of participants at 24 Hour Hackathon

Photo by Viswaprasanth Ks

Switzerland

Michael Kohler slated this activity for one of Mozilla Switzerland’s monthly meet-ups and tapped long-time add-ons contributor Martin Giger to mentor a group of 10 participants. Attendees found the workshop to be a relaxing introduction to extension development and left the event feeling empowered and confident in their abilities to create add-ons using WebExtensions APIs.

What we learned

Anticipate that it will take 90 minutes to complete Part I of the curriculum. “We used around 90 minutes to get to a working first example, including the intro,” Michael reports. If you are only able to complete Part I during an event, consider scheduling a follow-up event where participants can continue creating extensions in a fun, supportive atmosphere.

Martin Giger speaks at Mozilla Switzerland meet up

Photo by Michael Kohler

Brazil

What can 22 Brazilians and 30 liters of beer accomplish in one day? Quite a bit, according to Andre Garzia’s blog post about his recent event. After a discussion about extension development and a group brainstorming session, participants organized themselves into small groups and worked on ten add-ons.

What we learned

Provide some starter ideas to those who want to go beyond the initial tutorial and build their own original add-on. Andre writes in his post, “We knew from the start that telling people to come out with add-on ideas out of the blue would not be an effective way to engage everybody. People have different ways to come up with ideas and some don’t enjoy coming up with an idea on the spot like this. To help people out, we made a clothesline where we hung add-on ideas up. Each idea had a description, suggested APIs to use and a difficulty/complexity rate. Attendees were encouraged to browse our hanging ideas and take one to implement if they felt like it.”

Note: if you need help developing a list of starter ideas, take a look at this list of requests from users on Discourse.

Printed ideas for add-ons on a clothesline

Photo by Andre Garzia

Have you conducted an add-ons development workshop for your community or are you interested in hosting one? Tell us about it on Discourse!

The add-ons team would like to extend a hearty thank you to Viswaprasanth Ks and Daniele Scasciafratte for providing input and tutorials for the "Build Your Own WebExtension Add-on" activity, and to Michael Kohler, Viswaprasanth Ks, and Andre Garzia for coordinating these events.

Categorieën: Mozilla-nl planet

Daniel Glazman: XUL-based OS X TouchBar #2

Mozilla planet - ti, 04/04/2017 - 16:42

Done. I have added XUL-based support for OS X Touchbar to Postbox and that will certainly ship soon. It's then probably the first Gecko-based application with OS X Touchbar support... All in all, it was quite easy. I only regretted some very strange or sometimes inconsistent design choices on Apple's side. But let's get back to the results... In the code, we have for instance:

code in Postbox's main window

And the result is:

Postbox's main window and touchbar

All in all, quite cool, simple to understand, manipulate, and even extend (on both the XUL and Cocoa side). Was fun to implement :-)

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Webinar

Mozilla planet - ti, 04/04/2017 - 15:30

Reps Webinar Onboarding for new Reps

Categorieën: Mozilla-nl planet

Armen Zambrano: Screencast: How to green up Firefox test jobs on new infrastructure

Mozilla planet - ti, 04/04/2017 - 15:24
In this blog post I go over the basics of investigating if a new platform on the continuous integration system is ready to run Firefox test jobs.

In this case we look at Windows 7 and Windows 10 jobs on TaskCluster.
Some issues are on the actual machines (black screenshots; audio setup) and others are tests that need developer investigation.

You need about 30 minutes to watch these.


This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Categorieën: Mozilla-nl planet

Nicholas Nethercote: Improving the Gecko Profiler

Mozilla planet - ti, 04/04/2017 - 07:52

Over the last three months I landed 167 patches, via 41 Bugzilla bugs, for the Gecko Profiler. These included crash fixes, assertion failure fixes, data race fixes, optimization fixes, and a great many refactorings.

Background

The Gecko Profiler is a profiler built into Firefox. It can be used with the Gecko Profiler Addon to profile Firefox. It also provides the core profiling mechanism that is used by Firefox’s devtools to profile JavaScript content code.

It’s a crucial component.  It was originally written 5 years ago in something of a hurry, because we desperately needed a built-in profiler, one that could give more detailed, custom information than is possible with an external profiler. As I understand it, part of it was imported from V8, so it was a mix of previously existing code and new code. And in the years since it has been extended by multiple people, in a variety of ways that didn’t necessarily mesh well.

As a result, at the start of Q1 it was in pretty bad shape. Crashes and assertion failures were frequent, and the code itself was hard to read and maintain. Markus Stange had recently taken over ownership, and had made some great improvements to the Addon, but the core code needed work as well. So I started digging in to see what I could find and improve. There was a lot!

Fixes

Bug 1331571. The profiler had code for incorporating power consumption estimates from the Intel Power Gadget. Unfortunately, this integration had major flaws: Intel Power Gadget only gives very coarse power consumption estimates; the profiler samples at 1000Hz and is CPU intensive and so is likely to skew the power consumption estimates significantly; and nobody had ever used it meaningfully. So I removed it.

Bug 1317771. The profiler had a “standalone” configuration that allowed it to be used in programs other than Firefox. But it was complex (lots of #ifdef statements) and broken and unlikely to be of use. So I removed it.

Bug 1328369, Bug 1328373: Dead code removal.

Bug 1332577. The public API for the profiler was a mess. It was split across several header files, and most API functions had an “outer” version with a profiler_ prefix that immediately called into an inner version with a mozilla_sampler_ prefix (though there were some inconsistencies). So I combined the various header files into a single file, GeckoProfiler.h, and simplified the API functions into a single level, all consistently named with a profiler_ prefix.
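
To make the old two-level pattern concrete, here is a sketch; the function bodies, and the choice of profiler_pause() as the example, are my own illustration rather than the exact Gecko code:

  // Old layout (illustrative): each public profiler_* function was a thin
  // wrapper around a mozilla_sampler_* twin declared in a different header.
  void mozilla_sampler_pause()
  {
    // ... the actual implementation lived here ...
  }

  void profiler_pause()
  {
    mozilla_sampler_pause();  // the "outer" version only forwarded the call
  }

  // After this bug the indirection is gone: profiler_pause() contains the
  // implementation directly, and every public function is declared in the
  // single header GeckoProfiler.h with a consistent profiler_ prefix.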

Bug 1333296. Even the name of the profiler was a source of confusion. It was originally known as "SPS", which I believe is short for "simple profiling system". At some point that changed to "the Gecko Profiler", although it was also occasionally referred to as "the built-in profiler"! Because of this history, the code was littered with references to SPS. In this bug I updated them all to refer to the Gecko Profiler. (I updated the MDN docs, too. The page name still uses "Built-in Profiler" because I don't know how to change MDN page names.)

Bug 1329684. I removed some mutex wrapper classes that I think were necessary at one point for the “standalone” configuration.

Bug 1328365. Thread-local storage was being used to store a pointer to an object that was only accessed on the main thread, so I changed it to be a global variable. I also renamed some variables whose names referred to a type that had been renamed a long time ago.

Bug 1333655. The profiler had a cross-platform thread abstraction that was clumsy and over-engineered, so I streamlined it.

Bug 1334466. The profiler had a class called Sampler, which I think was imported from V8, and a subclass called GeckoSampler. Both classes were fairly complex, and we only ever instantiated the subclass. The separation merely obscured things, so I merged the subclass into Sampler. Having all that code in a single class and a single module made it much easier to see exactly what it was doing.

Bug 1335595. Two classes, ThreadInfo and ThreadProfile, were used for per-thread data. They were hopelessly intertwined: each one had a pointer to the other, and multiple fields were present (i.e. duplicated) in both of them. So I just merged them.

Bug 1336326. Three minor clean-ups.

Bug 816598. I implemented a memory reporter for the profiler. This was first requested in 2012, and five duplicate bugs had been filed in the interim!

Bug 1126576. I removed some very grotty manual refcounting from the PseudoStack class, which simplified things. A little too much, in fact… I misunderstood how things worked, causing a crash, which I subsequently fixed in bug 1340161.

Bug 1337189. The aforementioned Sampler class was still over-engineered. It only ever had 0 or 1 instantiations, and basically was an unnecessary level of abstraction. In this bug I basically got rid of it by merging it into another file. Which took 27 patches! (One of these patches introduced a regression, which I later fixed in bug 1340327.) At this point a lot of the core code that had previously been spread across multiple files and classes was now in a single file, tools/profiler/core/platform.cpp, and it was becoming increasingly obvious that there was a lot of global state being accessed from multiple threads with insufficient thread synchronization.

Bug 1338957. The profiler tracks which threads are “sleeping” (i.e. blocked on some operation), to support an optimization where it samples sleeping threads in a cheaper-than-normal fashion. It was using two counters and a boolean to track the sleep state of each thread. These three values were accessed from multiple threads; two of them were atomic, and one wasn’t, so the whole setup was very racy. I managed to condense the three values into a single atomic tri-state value, which made things simpler and thread-safe.
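
As a minimal sketch of that condensation (the class, field, and state names here are invented for illustration; the real profiler code uses its own):

  #include <stdint.h>
  #include "mozilla/Atomics.h"

  // One atomic tri-state value replaces the two counters and the boolean.
  class SleepState
  {
    enum : int32_t {
      AWAKE = 0,
      SLEEPING_NOT_OBSERVED = 1,  // asleep, not yet sampled during this sleep
      SLEEPING_OBSERVED = 2       // asleep and already sampled once
    };

    mozilla::Atomic<int32_t> mSleep{AWAKE};

  public:
    // Called by the thread itself.
    void SetSleeping() { mSleep = SLEEPING_NOT_OBSERVED; }
    void SetAwake() { mSleep = AWAKE; }

    // Called by the sampler thread: may it duplicate the previous sample
    // instead of suspending the thread and unwinding its stack?
    bool CanDuplicateLastSample()
    {
      if (mSleep == AWAKE) {
        return false;  // the thread is running; take a normal sample
      }
      if (mSleep.compareExchange(SLEEPING_NOT_OBSERVED, SLEEPING_OBSERVED)) {
        return false;  // first sample of this sleep period; take a full one
      }
      return true;     // already have a sample for this sleep; reuse it
    }
  };

The compareExchange() ensures that only one full sample is taken per sleep period; after that the sampler can cheaply copy the previous sample until the thread wakes up.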

Bug 1339327. I landed eight refactoring patches with no particular common theme, mostly involving renaming things and removing unnecessary stuff.

Bug 1339435. I removed two erroneous assertions that I had added in an earlier patch — two functions that I thought only ran on the main thread turned out to run off the main thread as well.

Bug 1339695. The profiler has a lot of code that is specific to a particular architecture (e.g. x86), OS (e.g. Windows), or platform (e.g. x86/Windows). The #ifdef statements used to select these were massively inconsistent — turns out there are many ways to detect this stuff — so I fixed this up. Among other things, this involved using the nice constants in tools/profiler/core/PlatformMacros.h consistently throughout the profiler’s code. (I fixed a regression — caused by mistyping one of the #ifdef conditions, alas! — from this change in bug 1350211. And another one involving --disable-profiling in bug 1348776.) I also renamed some files that had .cc extensions instead of the usual .cpp because they had (I think) been imported from V8.

Bug 1340928. At this point I had started working on a major change to the handling of the profiler’s core global state. It would inevitably be a big patch, but I wanted it to be as small as possible, so I started aggressively carving off small changes that could be landed separately. This bug featured 16 of them.

Bug 1328378. The profiler has two kinds of samples: periodic, which are taken by a separate thread in response to a timer firing, and synchronous, which a thread takes itself in response to a request via the profiler API. There are a lot of similarities between the two, but also some important differences. This bug took some steps to simplify the messy handling of synchronous samples.

Bug 1344118. I mentioned earlier that the profiler tracks which threads are "sleeping" to support an optimization: when a thread is asleep, we can mostly duplicate its last sample without unwinding its stack. But the optimization was buggy and would become a catastrophic pessimization in certain circumstances, due to what should have been a short O(1)-ish buffer search becoming O(n²)-ish, which would quickly peg one CPU at 100% usage. As far as I can tell, this bug was present in the optimization ever since it was implemented three years ago. (It's possible it wasn't noticed because its effect increases as more threads are profiled, but the profiler defaults to only profiling the main thread and the compositor thread.) The fix was straightforward once the diagnosis was made, and Julian Seward did a follow-up that made the optimization even more effective.

Bug 1342306. In this bug I put almost all of the profiler’s global state into a single class and protected accesses to it with a mutex. Unlike the old code, the new code is simple and obviously thread-safe. The final patch in this bug was much bigger than I would have liked, at 142 KiB, even after I carved off as many precursor patches as I could. Unsurprisingly, there were some follow-up fixes required: bug 1346356 (a leak and a deadlock), bug 1347044 (another deadlock), bug 1348374 (yet another deadlock), and bug 1350967 (surprise! another deadlock).

Bug 1345262. I fixed an assertion failure caused by the profiler and the JS engine having differing views about what functions should be called on what threads.

Bug 1347348. Five more assorted clean-ups.

Bug 1349856. I fixed a minor error involving a call to the profiler from Gecko.

Bug 1346132. I removed the profiler’s bespoke logging system, replacing it with the standard Mozilla one. I also made the logging output more concise and informative.

Bug 1350212. I cleaned up a class and its uses a bit.

Bug 1351523. I reordered one function’s arguments to match the order used in two related functions.

Bug 1351528. I removed some unused values from an enum.

Bug 1348024. I simplified some environment variables used by the profiler.

Bug 1351946. I removed some gnarly code for starting the profiler on B2G.

Bug 1351136. The profiler’s testing coverage is not very good, as can be seen from the numerous regressions I introduced and fixed. So I added a gtest that improves coverage. There’s still room for more test coverage improvement.

Bug 1351963. I further clarified the handling of synchronous vs. periodic samples, and simplified the ownership of some per-thread data structures.

Discussion

I learned some interesting things while doing this work.

Learning a component

Three months ago I knew almost nothing about the profiler’s code. Today I’m a module peer.

At the start of January I had been told that the profiler needed work, and I had assigned myself a Q1 deliverable to “land three improvements to the Gecko Profiler”. I started just by looking for easy, obvious code clean-ups, such as dead code removal, fixing inconsistent naming of things, and removing unnecessary indirections. (These are the kinds of clean-ups you can make with only shallow understanding.) The profiler proved to be a target-rich environment for such changes!

After I made a few dozen such changes I started understanding more deeply how the pieces fit together. (Partly because I’d been staring at the code a lot, and partly because my changes were making the code easier to understand. Refactorings add up.) I started interleaving my easy clean-up patches with ones that required more insight into how the profiler worked. I made numerous mistakes along the way, as the various regression fixes above show. But that’s ok.

I also kept a text file in which I had a list of ideas for things to fix. Every time I saw something that looked like it could be improved, I added it to the file, and I repeatedly checked the file when deciding what to work on next. As my understanding of the code improved, multiple times I realized that items I had written down were wrong, or incomplete, or that seemingly separate things were related. (In fact, I’m still using the file, because I still have numerous things I want to improve.)

Multi-threaded programming basics

Although I first learned C and C++ about 20 years ago, and I have worked at Mozilla for more than 8 years, this was the first time I’ve ever done serious multi-threaded programming, i.e. at a level where I needed a reasonably deep understanding of how various threads can interact. I got the following two great tips from Julian Seward, which helped a lot.

  • Write down pseudocode for each thread.
  • Write down potential worst-case thread operation interleavings (an example is sketched below).
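
As a rough illustration of the second tip, here is the kind of interleaving worth writing down, using invented field names; it is exactly the sort of race that the bug 1338957 fix described above removed:

  Sampled thread                        Sampler thread
  --------------                        --------------
  increment sleep counter
                                        read sleep counter       (sees new value)
                                        read "is sleeping" flag  (sees old value!)
  set "is sleeping" flag

With the sleep state spread across several non-atomic values, the sampler can observe a combination that never existed from the sampled thread's point of view. Writing the interleaving down makes the race visible before it bites.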

I also found it helpful to add comments (or assertions, where appropriate) to the top of functions that indicate which thread or threads they run on. For example:

  void profiler_gathered_OOP_profile()
  {
    MOZ_RELEASE_ASSERT(NS_IsMainThread());
    ...
  }

and:

  void profiler_thread_sleep()
  {
    // This function runs both on and off the main thread.
    ...
  }

A useful idiom: proof-of-lock tokens

I also employed a programming idiom that turned out to be extremely helpful. Most of the global profiler state is in a single class called ProfilerState. There is a single instance of this class, gPS, and a single mutex that protects it, gPSMutex. To guarantee that no code is able to access gPS's contents without first locking the mutex, for every field in ProfilerState there is a getter and a setter, both of which require a "proof-of-lock" token, which takes the form of a const PS::AutoLock&, where PS::AutoLock is an RAII type that locks and unlocks a mutex.

For example, consider this function, which checks if the profiler is paused.

bool profiler_is_paused()
{
  PS::AutoLock lock(gPSMutex);

  if (!gPS->IsActive(lock)) {
    return false;
  }

  return gPS->IsPaused(lock);
}

The PS::AutoLock locks the mutex. IsActive() and IsPaused() both access fields within gPS, and so they are passed lock, which serves as the proof-of-lock value. IsPaused() and SetIsPaused() are implemented as follows.

bool IsPaused(const PS::AutoLock&) const { return mIsPaused; }

void SetIsPaused(const PS::AutoLock&, bool aIsPaused) { mIsPaused = aIsPaused; }

Neither function actually uses the proof-of-lock token. Nonetheless, any function that calls a ProfilerState getter or setter must either lock gPSMutex, or have an ancestor that does. This idiom has two very nice benefits.

  • You can’t access gPS‘s contents without having first locked gPSMutex. (Well, it is possible to subvert the protection, but not by accident.)
  • It’s obvious that all functions that have a proof-of-lock argument are called only while gPSMutex is locked.

Functions that are called from multiple places sometimes must be split in two: an outer function in which the mutex is initially unlocked, and an inner function that takes a proof-of-lock token. This isn’t hard, though.
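
A minimal sketch of that split, reusing the gPS, gPSMutex and PS::AutoLock names described above; the function names and bodies here are invented for illustration and are not the profiler’s real code.

// Hypothetical example of the outer/inner split. PauseInternal() and
// profiler_example_pause() are made-up names; gPS, gPSMutex, PS::AutoLock
// and SetIsPaused() are the ones discussed above.

// Inner function: may only be called while gPSMutex is held, which the
// proof-of-lock token guarantees.
static void PauseInternal(const PS::AutoLock& aLock)
{
  gPS->SetIsPaused(aLock, true);
}

// Outer function: the public entry point, called with gPSMutex unlocked.
void profiler_example_pause()
{
  PS::AutoLock lock(gPSMutex);
  PauseInternal(lock);
}

// Code that already holds the mutex calls PauseInternal() directly, passing
// along its own token instead of trying to lock gPSMutex a second time.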

Deadlocks vs. data races

After my big change to the profiler’s global state, I had to fix numerous deadlocks. This wasn’t too hard. Deadlocks (caused by too much thread synchronization) are obvious, easy to diagnose, and these ones weren’t hard to fix. It’s useful to contrast them with data races (caused by too little thread synchronization) which typically have subtle effects and are difficult to diagnose.
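
As a generic illustration of the difference (using std::mutex rather than the profiler’s own types, and not one of the actual bugs), re-locking a non-recursive mutex on the same thread is the kind of deadlock that announces itself the first time the code runs, whereas dropping the synchronization instead would fail silently.

#include <mutex>

std::mutex gMutex;      // stands in for something like gPSMutex
int gSharedState = 0;   // stands in for the protected global state

void Inner()
{
  // Deadlock: gMutex is already held by Outer() on this thread, and
  // std::mutex is not recursive, so re-locking it here is undefined
  // behavior and in practice hangs on the spot -- obvious and easy to
  // diagnose.
  std::lock_guard<std::mutex> lock(gMutex);
  gSharedState++;
}

void Outer()
{
  std::lock_guard<std::mutex> lock(gMutex);
  Inner();
}

// The under-synchronized version of the same mistake is the opposite failure
// mode: if Inner() simply skipped the lock, nothing would hang, but two
// threads calling it concurrently could interleave their updates to
// gSharedState, and the damage would only show up later, far from the cause.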

Patch discipline

For this work I wrote a lot of small patches. This is my preferred way to work, for two reasons. First, small patches make life easier for reviewers, which in turn results in faster reviews. Second, small patches make regression hunting easy. It’s always nice when you bisect a regression to a small patch.

Future work and Thanks

The profiler still has plenty of room for improvement, and I’m planning to do more work on it in Q2. In the meantime, if you’ve tried the profiler in the past and had problems, it might be worth trying again. It’s in much better shape now.

Finally, many thanks to Markus Stange for reviewing the majority of the patches and answering lots of questions, and Julian Seward for reviewing most of the remainder and for numerous helpful discussions about threaded programming.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: Kuma Report, March 2017

Mozilla planet - ti, 04/04/2017 - 07:32

Here’s what happened in March in Kuma, the engine of MDN:

  • Shipped content experiments framework
  • Merged read-only maintenance mode
  • Shipped tweaks and fixes

Here’s the plan for April:

  • Clean up KumaScript macro development
  • Improve and maintain CSS quality
  • Ship the sample database

Done in March

Content Experiments Framework

We’re planning to experiment with small, interactive examples at the top of high-traffic reference pages. We want to see the effects of this change, by showing the new content to some of the users, and tracking their behavior. We shipped a new A/B testing framework, using the Traffic Cop library in the browser. We’ll use the framework for the examples experiment, starting in April.

Read-Only Maintenance Mode

We’ve merged a new maintenance mode configuration, which keeps Kuma running when the database connection is read-only. Eventually, this will allow MDN content to remain available when the database is being updated, and lead to new distributed architectures. In the near term, we’ll use it to test our new AWS infrastructure, first against production backups and eventually against off-peak MDN traffic.

Shipped Tweaks and Fixes

Here are some other highlights from the 15 merged Kuma PRs in March:

KumaScript continues to be busy, with 19 merged PRs. There were some PRs from new contributors:

Planned for April

We had a productive work week in Toronto. We decided that we need to make sure we’re paying down our technical debt regularly, while we continue supporting improved features for MDN visitors. Here’s what we’re planning to ship in April:

Clean Up KumaScript Macro Development

KumaScript macros have moved to GitHub, but ghosts of the old way of doing things remain in Kuma, and the development process is still tricky. This month, we’ll tackle some of the known issues:

  • Remove the legacy macros from MDN (stuck in time at November 2016)
  • Remove macro editing from MDN
  • Update macro searching
  • Start on an automated testing framework for KumaScript macros

Improve and Maintain CSS Quality

We’re preparing for some future changes by getting our CSS in order. One of the strategies will be to define style rules for our CSS and use stylelint to check that the existing code complies. We can then enforce the style rules by detecting violations in pull requests.

Ship the Sample Database

The Sample Database has been promised every month since October 2016, and has slipped every month. We don’t want to break the tradition: the sample database will ship in April. See PR 4076 for the remaining tasks.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 176

Mozilla planet - ti, 04/04/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's Crate of the Week is fst, which contains Finite State Transducers and assorted algorithms that use them (e.g. fuzzy text search). Thanks to Jules Kerssemakers for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

114 pull requests were merged in the last week.

New Contributors
  • Alan Stoate
  • aStoate
  • Donnie Bishop
  • GAJaloyan
  • Jörg Thalheim
  • Malo Jaffré
  • Micah Tigley
  • Nick Sweeting
  • Phil Ellison
  • raph
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I gave my company's Embedded C training course this morning. It's amazing how much more sense C makes when you explain it in Rust terms.

theJPster in #rust-embedded.

Thanks to Oliver Schneider for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 03 Apr 2017

Mozilla planet - mo, 03/04/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Air Mozilla: Gecko Profiler Introduction

Mozilla planet - mo, 03/04/2017 - 18:34

Gecko Profiler Introduction Ehsan Akhgari: Gecko Profiler Introduction

Categorieën: Mozilla-nl planet

Carsten Book: Sheriffing@Mozilla – Sheriffing and Backouts

Mozilla planet - mo, 03/04/2017 - 16:09

Hi,

Keeping the code trees [1] green (meaning free of build failures, test failures and regressions, and with intermittent test failures kept to a minimum) is the daily goal of sheriffing.

To reach this goal, we sometimes have to back out (revert) changes made by developers. While this is a part of our job, we don’t do it lightly or without reason.

Backouts happen mostly for:
-> Bustage (i.e. Firefox no longer successfully builds)
-> Test failures caused by a specific change
-> Issues reported by the community, like startup crashes or severe regressions (these backouts often lead to new nightly builds being created as well)
-> Performance regressions or memory leaks
-> Issues that block merges, like merge conflicts (for example, for a mozilla-inbound to mozilla-central merge)

For our primary integration repositories (where our developers land most of their changes), our workflow depends on which repository the problem is on.

Mozilla-Inbound

-> Close Mozilla-Inbound if needed (preventing developers from landing any further changes until the problem is resolved)

-> Try to notify the responsible developer so that they are aware of the problem caused by their patch

-> If possible, we accept follow-up patches to fix the problem. This allows us to fail forward and avoid running extra jobs that require more CPU time and therefore increase costs.

-> If we don’t get a response from the developer within a short timeframe (around 5 minutes), we back out the change and comment in the bug with a reason for the backout (for example, including a link to the failure log) and a needinfo to the assignee, to make sure the bug doesn’t get lost.

Autoland

-> Changesets that cause problems are backed out immediately – no follow-ups as described above are possible (only the sheriffs can push manually to autoland)

In any case, backouts are never meant to be personal, and it’s part of our job to try our best to keep our trees open for developers. We also try to provide as much information as possible in the bug about why we backed out a change.

Of course, we also make mistakes, and it could be that we backed out changesets that were innocent (for example, in a case where it’s not 100% clear what caused the problem), but we try our best.

If you have feedback or ideas on how we can make things better, let me know.

Cheers,
– Tomcat

 

[1] Trees: The tree contains the source code as well as the code required to build each project on supported platforms (Linux, Windows, macOS, etc.) and tests for various areas. Sheriffs take care of Firefox code trees like mozilla-central, mozilla-inbound, autoland, mozilla-aurora, mozilla-beta and mozilla-esr45/52 – our primary tool, Treeherder, can be found here

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Migrating ColorZilla to WebExtensions

Mozilla planet - mo, 03/04/2017 - 13:24

ColorZilla lets you get a color reading from any point in your browser, quickly make adjustments to it, and paste it into another program. It also generates gradients and more, making it an indispensable add-on for designers and artists.

For more resources on updating your extension, please check out MDN. You can also contact us via these methods.

Can you provide a short background on your add-on? What does it do, when was it created, and why was it created?

ColorZilla is one of the earliest Firefox add-ons—in fact, it’s the 271st Firefox add-on ever created (currently there are over 18,000 add-ons available on AMO). The first version was released almost 13 years ago in September 2004. ColorZilla was created to help designers and web developers with color-related tasks—it had the first-ever browser-based eyedropper, which allowed picking colors from any location in the browser and included a sophisticated Photoshop-like color-picker that could perform various color manipulations. Over the years the add-on gained recognition with millions of users, won awards and was updated with many advanced features, such as DOM color analyzers, gradient editors etc.

What add-on technologies or APIs were used to build your add-on?

Because the core of the ColorZilla codebase was written in the very early days, it used fairly low-level APIs and services.

Initially, ColorZilla relied on native XPCOM components for color sampling from the browser window. The first release included a Windows XPCOM module with a following release adding native XPCOM modules for MacOSX and Linux. After a few years, when new APIs became available, the native XPCOM part was eliminated and replaced with a Canvas JavaScript-based solution that didn’t require any platform-specific modules.

Beyond color sampling, ColorZilla used low-level Firefox XPCOM services for file system access (to save color palettes etc), preferences, extension management etc. It also accessed the browser content DOM directly in order to analyze DOM colors etc.

Why did you decide to transition your add-on to WebExtensions APIs?

There were two major reasons. The first was Firefox moving from a single process to multi-process Electrolysis (e10s): with add-ons no longer able to directly access web content, supporting e10s would have required refactoring large portions of the ColorZilla code base. The second was that, with ColorZilla for Chrome released in 2012, there was a need to maintain two completely separate code bases and to implement new features and capabilities for both. Using WebExtensions allowed seamless support of e10s and code sharing with ColorZilla for Chrome, minimizing the amount of overhead and maintenance and maximizing the effort that could be invested in innovation and new capabilities.

Walk us through the process of how you made the transition. How was the experience of finding WebExtensions APIs to replace legacy APIs? What are some advantages and limitations?

Because ColorZilla for Chrome was already available on the market for about 5 years and because WebExtensions are largely based on Chrome extension APIs, the most natural path was to back-port the Chrome version to Firefox instead of porting the legacy Firefox extension code base to WebExtensions.

The first step of that process was to bring all the WebExtensions APIs used in the code up to their latest versions, as ColorZilla for Chrome was using some older or deprecated Chrome APIs, and the Firefox implementation of WebExtensions is based on the latest APIs and doesn’t include the older versions. One such example is updating the older chrome.extension.onRequest API to browser.runtime.onMessage.

The next step was to make all the places that hard-coded Chrome (in the UI, URLs, etc.) flexible enough to detect the current browser. The final step was to bridge various gaps in implementation or semantics between Chrome and Firefox. For example, it’s not possible to programmatically copy to the clipboard from background scripts in Firefox. Another example is the browser.extension.isAllowedFileSchemeAccess API, which has slightly different semantics: in Chrome the script cannot access local files, while in Firefox it cannot open them but can still access them.

WebExtensions, as both a high-level and multi-browser set of APIs, has some limitations. One example that affected ColorZilla is that the main add-on button allows only one action. The “browser action” cannot have both a main button action and a drop-down menu with more options (known as a “menu-button” in the pre-WebExtensions world). With only one action available when users click the main button, there was a need to come up with creative UI solutions that combine showing a menu of available options with auto-starting the color sampling, which lets users click on the web content and get a color reading immediately. This and other limitations often require add-on developers not just to port their add-ons to new APIs, but to re-think their UI and functionality.

The huge advantages of the final WebExtensions-based ColorZilla is that it’s both future-proof, supporting new and future versions of Firefox, and multi-browser, supporting Chrome, Edge and other browsers with a single code base.

Note: This bug is meant to expand the capability of menu-buttons in the browserAction API.

What, if anything, is different about your add-on now that it is a WebExtension? Were you able to transition with all the features intact?

The majority of the functionality was successfully transitioned. The UI/UX of the add-on is somewhat different and some users did need to adjust to that, but all the top features (and more!) are there in the new WebExtensions version.

What advice would you give other legacy add-on developers?

First, I suggest going over the WebExtensions API and capabilities and doing a feasibility analysis of whether the legacy add-on functionality can be supported with WebExtensions. Some legacy add-ons leverage low-level APIs and access or modify Firefox in a very deep or unique way, which wouldn’t be possible with WebExtensions. Then, if the functionality can be supported, I suggest mapping the UI/UX of the legacy add-on to the new sets of WebExtensions requirements and paradigms—browser actions, popup windows etc. Following implementation, I suggest extensive testing across different platforms and configurations—depending on the complexity of the add-on, the porting process can introduce a range of issues and quirks. Finally, once the new WebExtensions-based version is released, my advice is to be ready to listen to user feedback and bug reports and quickly release new versions and address issues, to minimize the window of instability for users.

Anything else you’d like to add?

One piece of advice for Mozilla is to better support developers’ and users’ transition to WebExtensions: the process is quite effort-intensive for developers, and the user-facing issues, quirks and instabilities that might be introduced by these changes can be frustrating for both add-on authors and their users. One thing Mozilla could improve, beyond supporting the developer community, is to really shorten the add-on review times and work with developers to shorten the cycle between user bug reports, developer fixes and the release of those fixes to users. This would really minimize the window of instability for users and make the entire process of moving the Firefox add-on ecosystem to WebExtensions much smoother. My advice for add-on authors on this front is to engage with the AMO editors, understand the review process and work together to make it as fast and smooth as possible.

The post Migrating ColorZilla to WebExtensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet
