Mozilla Nederland — De Nederlandse Mozilla-gemeenschap (The Dutch Mozilla community)

Mozilla Announces the Release of Firefox 47 with new HTML5 Video Support - HTML Goodies

News collected via Google - Thu, 16/06/2016 - 02:49

Mozilla Announces the Release of Firefox 47 with new HTML5 Video Support
HTML Goodies
Mozilla today released the stable version of Firefox 47, which had been in beta since April this year. The new HTML5 video features include support for Google's Widevine CDM for DRM-protected HTML5 video streams. Video streaming services like ...

Categorieën: Mozilla-nl planet

Anjana Vakil: I want to mock with you

Mozilla planet - to, 16/06/2016 - 02:00

This post brought to you from Mozilla’s London All Hands meeting - cheers!

When writing Python unit tests, sometimes you want to just test one specific aspect of a piece of code that does multiple things.

For example, maybe you’re wondering:

  • Does object X get created here?
  • Does method X get called here?
  • Assuming method X returns Y, does the right thing happen after that?

Finding the answers to such questions is super simple if you use mock: a library which “allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.” Since Python 3.3 it’s available simply as unittest.mock, but if you’re using an earlier Python you can get it from PyPI with pip install mock.
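For instance, a small compatibility shim at the top of a test module (just one common convention, not something the post prescribes) lets the same tests run on both:

try:
    from unittest import mock  # Python 3.3+: mock ships in the standard library
except ImportError:
    import mock  # earlier Pythons: pip install mock

# Either way, the API is the same:
m = mock.Mock(return_value=42)
assert m() == 42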

So, what are mocks? How do you use them?

Well, in short I could tell you that a Mock is a sort of magical object that’s intended to be a doppelgänger for some object in your code that you want to test. Mocks have special attributes and methods you can use to find out how your test is using the object you’re mocking. For example, you can use Mock.called and .call_count to find out if and how many times a method has been called. You can also manipulate Mocks to simulate functionality that you’re not directly testing, but that is necessary for the code you’re testing. For example, you can set Mock.return_value to pretend that a function gave you some particular output, and make sure that the right thing happens in your program.
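For example, here is a tiny self-contained sketch of my own (not from the post), pretending some fetch function returns a canned value:

from unittest.mock import Mock

fetch = Mock(return_value={"status": "ok"})  # stand-in for a function we don't want to really call

assert not fetch.called                      # nothing has used it yet
result = fetch("https://example.com")
assert fetch.called                          # it has now been called...
assert fetch.call_count == 1                 # ...exactly once
assert result == {"status": "ok"}            # and it returned the canned value
fetch.assert_called_with("https://example.com")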

But honestly, I don’t think I could give a better or more succinct overview of mocks than the Quick Guide, so for a real intro you should go read that. While you’re doing that, I’m going to watch this fantastic Michael Jackson video:

Oh you’re back? Hi! So, now that you have a basic idea of what makes Mocks super cool, let me share with you some of the tips/tricks/trials/tribulations I discovered when starting to use them.

Patches and namespaces

tl;dr: Learn where to patch if you don’t want to be sad!

When you import a helper module into a module you’re testing, the tested module gets its own namespace for the helper module. So if you want to mock a class from the helper module, you need to mock it within the tested module’s namespace.

For example, let’s say I have a Super Useful helper module, which defines a class HelperClass that is So Very Helpful:

# helper.py
class HelperClass():
    def __init__(self):
        self.name = "helper"

    def help(self):
        helpful = True
        return helpful

And in the module I want to test, tested, I instantiate the Incredibly Helpful HelperClass, which I imported from helper.py:

# tested.py
from helper import HelperClass

def fn():
    h = HelperClass()  # using tested.HelperClass
    return h.help()

Now, let’s say that it is Incredibly Important that I make sure that a HelperClass object is actually getting created in tested, i.e. that HelperClass() is being called. I can write a test module that patches HelperClass, and check the resulting Mock object’s called property. But I have to be careful that I patch the right HelperClass! Consider test_tested.py:

# test_tested.py
import tested
from mock import patch

# This is not what you want:
@patch('helper.HelperClass')
def test_helper_wrong(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called  # Fails! I mocked the wrong class, am sad :(

# This is what you want:
@patch('tested.HelperClass')
def test_helper_right(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called  # Passes! I am not sad :)

OK great! If I patch tested.HelperClass, I get what I want.

But what if the module I want to test uses import helper and helper.HelperClass(), instead of from helper import HelperClass and HelperClass()? As in tested2.py:

# tested2.py
import helper

def fn():
    h = helper.HelperClass()
    return h.help()

In this case, in my test for tested2 I need to patch the class with patch('helper.HelperClass') instead of patch('tested.HelperClass'). Consider test_tested2.py:

# test_tested2.py
import tested2
from mock import patch

# This time, this IS what I want:
@patch('helper.HelperClass')
def test_helper_2_right(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called  # Passes! I am not sad :)

# And this is NOT what I want!
# Mock will complain: "module 'tested2' does not have the attribute 'HelperClass'"
@patch('tested2.HelperClass')
def test_helper_2_wrong(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called

Wonderful!

In short: be careful of which namespace you’re patching in. If you patch whatever object you’re testing in the wrong namespace, the object that’s created will be the real object, not the mocked version. And that will make you confused and sad.

I was confused and sad when I was trying to mock the TestManifest.active_tests() function to test BaseMarionetteTestRunner.add_test, and I was trying to mock it in the place it was defined, i.e. patch('manifestparser.manifestparser.TestManifest.active_tests').

Instead, I had to patch TestManifest within the runner.base module, i.e. the place where it was actually being called by the add_test function, i.e. patch('marionette.runner.base.TestManifest.active_tests').

So don’t be confused or sad, mock the thing where it is used, not where it was defined!

Pretending to read files with mock_open

One thing I find particularly annoying is writing tests for modules that have to interact with files. Well, I guess I could, like, write code in my tests that creates dummy files and then deletes them, or (even worse) just put some dummy files next to my test module for it to use. But wouldn’t it be better if I could just skip all that and pretend the files exist, and have whatever content I need them to have?

It sure would! And that’s exactly the type of thing mock is really helpful with. In fact, there’s even a helper called mock_open that makes it super simple to pretend to read a file. All you have to do is patch the builtin open function and pass mock_open(read_data="my data") to the patch, so that the open in the code you’re testing only pretends to open a file with that content, instead of actually doing it.
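Here is a minimal sketch of my own (read_config is a made-up function; note that the builtin lives at builtins.open on Python 3 but at __builtin__.open on Python 2, which is what the author's test below patches):

from unittest.mock import patch, mock_open

def read_config(path):  # hypothetical code under test
    with open(path) as f:
        return f.read()

def test_read_config_pretends_to_open_a_file():
    # 'builtins.open' is the Python 3 name; on Python 2 patch '__builtin__.open' instead
    with patch('builtins.open', mock_open(read_data='my data')) as m:
        assert read_config('settings.txt') == 'my data'
        m.assert_called_once_with('settings.txt')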

To see it in action, you can take a look at a (not necessarily great) little test I wrote that pretends to open a file and read some data from it:

def test_nonJSON_file_throws_error(runner):
    with patch('os.path.exists') as exists:
        exists.return_value = True
        with patch('__builtin__.open', mock_open(read_data='[not {valid JSON]')):
            with pytest.raises(Exception) as json_exc:
                runner._load_testvars()  # This is the code I want to test, specifically to be sure it throws an exception
            assert 'not properly formatted' in json_exc.value.message

Gotchya: Mocking and debugging at the same time

See that patch('os.path.exists') in the test I just mentioned? Yeah, that’s probably not a great idea. At least, I found it problematic.

I was having some difficulty with a similar test, in which I was also patching os.path.exists to fake a file (though that wasn’t the part I was having problems with), so I decided to set a breakpoint with pytest.set_trace() to drop into the Python debugger and try to understand the problem. The debugger I use is pdb++, which just adds some helpful little features to the default pdb, like colors and sticky mode.

So there I am, merrily debugging away at my (Pdb++) prompt. But as soon as I entered the patch('os.path.exists') context, I started getting weird behavior in the debugger console: complaints about some ~/.fancycompleterrc.py file and certain commands not working properly.

It turns out that at least one module pdb++ was using (e.g. fancycompleter) was getting confused about file(s) it needs to function, because of checks for os.path.exists that were now all messed up thanks to my ill-advised patch. This had me scratching my head for longer than I’d like to admit.

What I still don’t understand (explanations welcome!) is why I still got this weird behavior when I tried to change the test to patch 'mymodule.os.path.exists' (where mymodule.py contains import os) instead of just 'os.path.exists'. Based on what we saw about namespaces, I figured this would restrict the mock to only mymodule, so that pdb++ and related modules would be safe - but it didn’t seem to have any effect whatsoever. But I’ll have to save that mystery for another day (and another post).
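For what it's worth, one possible explanation (my guess, not from the post, using a stand-in module since I don't have the real mymodule): patch() resolves its target string attribute by attribute, so 'mymodule.os.path.exists' walks from mymodule to the very same os.path module object that everyone else uses, and the attribute it replaces is os.path.exists itself, which is global after all. The namespace trick only helps when the name is bound directly in your module, e.g. from os.path import exists. A sketch:

import os
import sys
import types
from unittest.mock import patch

# Stand-in for mymodule, which simply does `import os`
mymodule = types.ModuleType('mymodule')
mymodule.os = os
sys.modules['mymodule'] = mymodule

assert mymodule.os is os  # the import gave mymodule a reference, not a copy

with patch('mymodule.os.path.exists', return_value=True):
    # patch() walked mymodule -> os -> os.path and replaced `exists` on
    # os.path itself, so the fake is visible to every module in the process.
    assert os.path.exists('/definitely/not/a/real/path') is True

assert os.path.exists('/definitely/not/a/real/path') is False  # restored on exit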

Still, lesson learned: if you’re patching a commonly used function, like, say, os.path.exists, don’t forget that once you’re inside that mocked context, you no longer have access to the real function at all! So keep an eye out, and mock responsibly!
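If you really do need to patch something that ubiquitous, one workaround (a sketch of my own, not from the post) is to hang on to the real function and delegate to it for everything except the case you want to fake:

import os
from unittest.mock import patch

real_exists = os.path.exists  # grab a reference before patching

def fake_exists(path):
    if path == '/pretend/this/exists':  # only this path is faked
        return True
    return real_exists(path)  # everything else (e.g. the debugger's own checks) stays real

with patch('os.path.exists', side_effect=fake_exists):
    assert os.path.exists('/pretend/this/exists') is True
    assert os.path.exists('/definitely/not/a/real/path') is False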

Mock the night away

Those are just a few of the things I’ve learned in my first few weeks of mocking. If you need some bedtime reading, check out these resources that I found helpful:

I’m sure mock has all kinds of secrets, magic, and superpowers I’ve yet to discover, but that gives me something to look forward to! If you have mock-foo tips to share, just give me a shout on Twitter!

Categorieën: Mozilla-nl planet

Chris Ilias: Who uses voice control?

Mozilla planet - to, 16/06/2016 - 00:55

Voice control (Siri, Google Now, Amazon Echo, etc.) is not a very useful feature to me, and I wonder if I’m in the minority.

Why it is not useful:

  • I live with other people.
    • Sometimes one of the people I live with, or I myself, may be sleeping. If someone speaks out loud to the TV or phone, that might wake the other up.
    • Even when everyone is awake, that doesn’t mean we are together. It annoys me when someone talks to the TV while watching basketball. I don’t want to find out how annoying it would be to listen to someone in another room tell the TV or their phone what to do.
  • I work with other people.
    • If I’m having lunch, and a co-worker wants to look something up on his/her phone, I don’t want to hear them speak their queries out loud. I actually have coworkers that use their phones as boomboxes to listen to music while eating lunch, as if no-one else can hear it, or everyone has the same taste in music, or everyone wants to listen to music at all during lunch.

The only times I use Siri are:

  • When I am in the car.
  • When I am speaking with others in a social setting, like a pub, and we want to look something up pertaining to the conversation.
  • When I’m alone

When I saw Apple introduce tvOS, the dependence on Siri turned me off from upgrading my Apple TV.

Am I in the minority here?

I get the feeling I’m not. I cannot recall anyone I know using Siri for anything other than entertainment with friends. Controlling devices with your voice in public must be Larry David’s worst nightmare.

Categorieën: Mozilla-nl planet

Jen Kagan: day 18: the case of the missing vine api and the add-on sdk

Mozilla planet - wo, 15/06/2016 - 23:17

i’m trying to add vine support to min-vid and realizing that i’m still having a hard time wrapping my head around the min-vid program structure.

what does adding vine support even mean? it means that when you’re on vine.co, i want you to be able to right click on any video on the website, send it to the corner of your browser, and watch vines in the corner while you continue your browsing.

i’m running into a few obstacles. one obstacle is that i can’t find an official vine api. what’s an api? it stands for “application programming interface” (maybe? i think? no, i’m not googling) and i don’t know the official definition, but my unofficial definition is that the api is documentation i need from vine about how to access and manipulate content they have on their website. i need to be able to know the pattern for structuring video URLs. i need to know what functions to call in order to autoplay, loop, pause, and mute their videos. since this doesn’t exist in an official, well-documented way, i made a gist of their embed.js file, which i think/hope maybe controls their embedded videos, and which i want to eventually make sense of by adding inline comments.

another obstacle is that mozilla’s add-on sdk is really weirdly structured. i wrote about this earlier and am still sketching it out. here’s what i’ve gathered so far:

  • the page you navigate to in your browser is a page. with its own DOM. this is the page DOM.
  • the firefox add-on is made of content scripts (CS) and page scripts (PS).
  • the CS talks to the page DOM and the PS talks to the page DOM, but the CS and the PS don’t talk to each other.
  • with min-vid, the CS is index.js. this controls the context menu that comes up when you right click, and it’s the thing that tells the panel to show itself in the corner.
  • the two PS’s in min-vid are default.html and controls.js. the default.html PS loads the stuff inside the panel. the controls.js PS lets you control the stuff that’s in the panel.

so far, i can get the vine video to show up in the panel, but only after i’ve sent a youtube video to the panel. i can’t get the vine video to show up on its own, and i’m not sure why. this makes me sad. here is a sketch:

(sketch image)

Categorieën: Mozilla-nl planet

Doug Belshaw: Why we need 'view source' for digital literacy frameworks

Mozilla planet - wo, 15/06/2016 - 20:01

Apologies if this post comes across as a little jaded, but as someone who wrote their doctoral thesis on this topic, I had to stifle a yawn when I saw that the World Economic Forum have defined 8 digital skills we must teach our children.

World Economic Forum - digital skills

In a move so unsurprising that it’s beyond pastiche, they’ve also coined a new term:

Digital intelligence or “DQ” is the set of social, emotional and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life.

I don’t mean to demean what is obviously thoughtful and important work, but I do wonder how (and by whom!) this was put together. They’ve got an online platform which helps develop the skills they’ve identified as important, but it’s difficult to fathom why some things were included and others left out.

An audit-trail of decision-making is important, as it reveals both the explicit and implicit biases of those involved in the work, as well as lazy shortcuts they may have taken. I attempted to do this in my work as lead of Mozilla’s Web Literacy Map project through the use of a wiki, but even that could have been clearer.

What we need is the equivalent of ‘view source’ for digital literacy frameworks. Specifically, I’m interested in answers to the following 10 questions:

  1. Who worked on this?
  2. What’s the goal of the organisation(s) behind this?
  3. How long did you spend researching this area?
  4. What are you trying to achieve through the creation of this framework?
  5. Why do you need to invent a new term? Why do very similar (and established) words and phrases not work well in this context?
  6. How long is this project going to be supported for?
  7. Is your digital literacy framework versioned?
  8. If you’ve included skills, literacies, and habits of mind that aren’t obviously 'digital’, why is that?
  9. What were the tough decisions that you made? Why did you come down on the side you did?
  10. What further work do you need to do to improve this framework?

I’d be interested in your thoughts and feedback around this post. Have you seen a digital literacy framework that does this well? What other questions would you add?

Note: I haven’t dived into the visual representation of digital literacy frameworks. That’s a whole other can of worms…

Get in touch! I’m @dajbelshaw and you can email me: hello@dynamicskillset.com

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: A Step Forward for Net Neutrality in the U.S.

Mozilla planet - wo, 15/06/2016 - 16:55

We’re thrilled to see the D.C. Circuit Court upholding the FCC’s historic net neutrality order, and the agency’s authority to continue to protect Internet users and businesses from throttling and blocking. Protecting openness and innovation is at the core of Mozilla’s mission. Net neutrality supports a level playing field, critical to ensuring a healthy, innovative, and open Web.

Leading up to this ruling Mozilla filed a joint amicus brief with CCIA supporting the order, and engaged extensively in the FCC proceedings. We filed a written petition, provided formal comments along the way, and engaged our community with a petition to Congress. Mozilla also organized global teach-ins and a day of action, and co-authored a letter to the President.

We’re glad to see this development and we remain steadfast in our position that net neutrality is a critical factor to ensuring the Internet is open and accessible. Mozilla is committed to continuing to advocate for net neutrality principles around the world.

Categorieën: Mozilla-nl planet

Daniel Stenberg: No websockets over HTTP/2

Mozilla planet - wo, 15/06/2016 - 13:55

There is no websockets for HTTP/2.

By this, I mean that there’s no way to negotiate or upgrade a connection to websockets over HTTP/2 like there is for HTTP/1.1 as expressed by RFC 6455. That spec details how a client can use Upgrade: in a HTTP/1.1 request to switch that connection into a websockets connection.

Note that websockets is not part of the HTTP/1 spec, it just uses a HTTP/1 protocol detail to switch an HTTP connection into a websockets connection. Websockets over HTTP/2 would similarly not be a part of the HTTP/2 specification but would be separate.

(As a side-note, that Upgrade: mechanism is the same mechanism a HTTP/1.1 connection can get upgraded to HTTP/2 if the server supports it – when not using HTTPS.)


Draft

There was once a draft submitted that describes how websockets over HTTP/2 could’ve been done. It didn’t get any particular interest in the IETF HTTP working group back then and, as far as I’ve seen, there has been very little general interest in any group to pick up this dropped ball and continue running. It just didn’t go any further.

This is important: the lack of websockets over HTTP/2 is because nobody has produced a spec (and implementations) to do websockets over HTTP/2. Those things don’t happen by themselves, they actually require a bunch of people and implementers to believe in the cause and work for it.

Websockets over HTTP/2 could of course have the benefit of using only one stream over a connection that could serve regular non-websockets traffic in many other streams at the same time, while websockets upgraded on a HTTP/1 connection uses the entire connection exclusively.

Instead

So what do users do instead of using websockets over HTTP/2? Well, there are several options. You probably either stick to HTTP/2, upgrade from HTTP/1, use Web push or go the WebRTC route!

If you really need to stick to websockets, then you simply have to upgrade to that from a HTTP/1 connection – just like before. Most people I’ve talked to that are stuck really hard on using websockets are app developers that basically only use a single connection anyway so doing that HTTP/1 or HTTP/2 makes no meaningful difference.

Sticking to HTTP/2 pretty much allows you to go back and use the long-polling tricks of the past before websockets was created. They were once rather bad since they would waste a connection and be error-prone since you’d have a connection that would sit idle most of the time. Doing this over HTTP/2 is much less of a problem since it’ll just be a single stream that won’t be used that much so it isn’t that much of a waste. Plus, the connection may very well be used by other streams so it will be less of a problem with idle connections getting killed by NATs or firewalls.

The Web Push API was introduced by the W3C during 2015 and is in many ways a more “webby” way of doing push than the much more manual and “raw” method that websockets is. If you use websockets mostly for push notifications, then this might be a more convenient choice.

Also introduced after websockets, is WebRTC. This is a technique introduced for communication between browsers, but it certainly provides an alternative to some of the things websockets were once used for.

Future

Websockets over HTTP/2 could still be done. The fact that it isn’t done just shows that there isn’t enough interest.

Non-TLS

Recall how browsers only speak HTTP/2 over TLS, while websockets can also be done over plain TCP. In fact, the only way to upgrade a HTTP connection to websockets is using the HTTP/1 Upgrade: header trick, and not the ALPN method for TLS that HTTP/2 uses to reduce the number of round-trips required.

If anyone would introduce websockets over HTTP/2, they would then probably only be possible to be made over TLS from within browsers.

Categorieën: Mozilla-nl planet

David Burns: Selenium WebDriver and Firefox 47

Mozilla planet - wo, 15/06/2016 - 00:14

With the release of Firefox 47, the extension-based FirefoxDriver is no longer working. A change in Firefox caused the browser to crash when Selenium started it. This has been fixed, but the fix has to go through the release process, which is slow (to make sure we don't break anything else), so hopefully that version is due for release next week or so.

This does not mean that your tests need to stop working entirely as there are options to keep them working.

Marionette

Firstly, you can use Marionette, the Mozilla version of FirefoxDriver, to drive Firefox. This has been in Firefox since about version 24, and we have been slowly bringing it up to Selenium's level, working around Mozilla priorities. Currently Marionette is passing ~85% of the Selenium test suite.

I have written up some documentation on how to use Marionette on MDN

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

It would be great if we could raise bugs.

Firefox 45 ESR or Firefox 46

If you don't want to worry about Marionette, the other option is to downgrade to Firefox 45, preferably the ESR, as it won't update to 47; it will update to Firefox 52 in about 6-9 months' time, at which point you will need to use Marionette.

Marionette will be turned on by default from Selenium 3, which is currently being worked on by the Selenium community. Ideally when Firefox 52 comes around you will just update to Selenium 3 and, fingers crossed, all works as planned.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Nastiness Works

Mozilla planet - ti, 14/06/2016 - 23:59

One thing I experienced many times at Mozilla was users pressuring developers with nastiness --- ranging from subtle digs to vitriolic abuse, trying to make you feel guilty and/or trigger an "I'll show you! (by fixing the bug)" response. I know it happens in most open-source projects; I've been guilty of using it myself.

I particularly dislike this tactic because it works on me. It really makes me want to fix bugs. But I also know one shouldn't reward bad behavior, so I feel bad fixing those bugs. Maybe the best I can do is call out the bad behavior, fix the bug, and avoid letting that same person use that tactic again.

Perhaps you're wondering "what's wrong with that tactic if it gets bugs fixed?" Development resources are finite so every bug or feature is competing with others. When you use nastiness to manipulate developers into favouring your bug, you're not improving quality generally, you're stealing attention away from other issues whose proponents didn't stoop to that tactic and making developers a little bit miserable in the process. In fact by undermining rational triage you're probably making quality worse overall.

Categorieën: Mozilla-nl planet

Mitchell Baker: Expanding Mozilla’s Boards

Mozilla planet - ti, 14/06/2016 - 22:48

This post was originally published on the Mozilla Blog.

Watch the presentation of Mozilla’s Boards expansion on AirMozilla: https://air.mozilla.org/moco-mofo-board-development/

In a post earlier this month, I mentioned the importance of building a network of people who can help us identify and recruit potential Board level contributors and senior advisors. We are also currently working to expand both the Mozilla Foundation and Mozilla Corporation Boards.

The role of a Mozilla Board member

I’ve written a few posts about the role of the Board of Directors at Mozilla.

At Mozilla, we invite our Board members to be more involved with management, employees and volunteers than is generally the case. It’s not that common for Board members to have unstructured contacts with individuals or even sometimes the management team. The conventional thinking is that these types of relationships make it hard for the CEO to do his or her job. We feel differently. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

We also prefer a reasonably extended “get to know each other” period for our Board members. Sometimes I hear people speak poorly of extended process, but I feel it’s very important for Mozilla.  Mozilla is an unusual organization. We’re a technology powerhouse with a broad Internet openness and empowerment mission at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the Internet industry.

It’s important that our Board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open Internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

I want all our Board members to understand that “empowering people” encompasses “user communities” but is much broader for Mozilla. Mozilla should be a resource for the set of people who care about the open Internet. We want people to look to Mozilla because we are such an excellent resource for openness online, not because we hope to “leverage our community” to do something that benefits us.

These sorts of distinctions can be rather abstract in practice. So knowing someone well enough to be comfortable about these takes a while. We have a couple of ways of doing this. First, we have extensive discussions with a wide range of people. Board candidates will meet the existing Board members, members of the management team, individual contributors and volunteers. We’ve also been piloting ways to work with potential Board candidates. We’ve done that with Cathy Davidson, Ronaldo Lemos, Katharina Borchert and Karim Lakhani. We’re not sure we’ll be able to do it with everyone, and we don’t see it as a requirement. We do see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It helps us feel comfortable including someone at this senior level of stewardship.

What does a Mozilla Board member look like

Job descriptions often get long and wordy. We have those too but, for the search of new Board members, we’ve tried something else this time: a visual role description.

Board member job description for Mozilla Corporation

Board member job description for Mozilla Foundation

Here is a short explanation of how to read these visuals:

  • The horizontal lines speak to things that every Board member should have. For instance, to be a Board member, you have to care about the mission and you have to have some cultural sense of Mozilla, etc. They are a set of things that are important for each and every candidate. In addition, there is a set of things that are important for the Board as a whole. For instance, we could put international experience in there, or whether the candidate is a public spokesperson. We want some of that but it is not necessary that every Board member has that.
  • In the vertical green columns, we have the particular skills and expertise that we are looking for at this point.
  • We would expect the horizontal lines not to change too much over time and the vertical lines to change depending on who joins the Board and who leaves.

I invite you to look at these documents and provide input on them. If you have candidates that you believe would be good Board members, send them to the boarddevelopment@mozilla.com mailing list. We will use real discretion with the names you send us.

We’ll also be designing a process for how to broaden participation in the process beyond other Board members. We want to take advantage of the awareness and the cluefulness of the organization. That will be part of a future update.

Categorieën: Mozilla-nl planet


Robert O'Callahan: "Safe C++ Subset" Is Vapourware

Mozilla planet - mo, 13/06/2016 - 22:28

In almost every discussion of Rust vs C++, someone makes a comment like:

the subset of C++14 that most people will want to use and the guidelines for its safe use are already well on their way to being defined ... By following the guidelines, which can be verified statically at compile time, the same kind of safeties provided by Rust can be had from C++, and with less annotation effort.

This promise is vapourware. In fact, it's classic vapourware in the sense of "wildly optimistic claim about a future product made to counter a competitor". (Herb Sutter says in comments that this wasn't designed with the goal of "countering a competitor" so I'll take him at his word (though it's used that way by others). Sorry Herb!)

(FWIW the claim quoted above is actually an overstatement of the goals of the C++ Core Guidelines to which it refers, which say "our design is a simpler feature focused on eliminating leaks and dangling only"; Rust provides important additional safety properties such as data-race freedom. But even just the memory safety claim is vapourware.)

To satisfy this claim, we need to see a complete set of statically checkable rules and a plausible argument that a program adhering to these rules cannot exhibit memory safety bugs. Notably, languages that offer memory safety are not just claiming you can write safe programs in the language, nor that there is a static checker that finds most memory safety bugs; they are claiming that code written in that language (or the safe subset thereof) cannot exhibit memory safety bugs.

AFAIK the closest to this C++ gets is the Core Guidelines Lifetimes I and II document, last updated December 2015. It contains only an "informal overview and rationale"; it refers to "Section III, analysis rules (forthcoming this winter)", which apparently has not yet come forth. (I'm pretty sure they didn't mean the New Zealand winter.) The informal overview shows a heavy dependence on alias analysis, which does not inspire confidence because alias analysis is always fragile. The overview leaves open critical questions about even trivial examples. Consider:

unique_ptr<int> p;

void foo(const int& v) {
    p = nullptr;
    cout << v;
}

void bar() {
    p = make_unique<int>(7);
    foo(*p);
}

Obviously this program is unsafe and must be forbidden, but what rule would reject it? The document says
  • In the function body, by default a Pointer parameter param is assumed to be valid for the duration of the function call and not depend on any other parameter, so at the start of the function lset(param) = param (its own lifetime) only.
  • At a call site, by default passing a Pointer to a function requires that the argument’s lset not include anything that could be invalidated by the function.
Clearly the body of foo is OK by those rules. For the call to foo from bar, it depends on what is meant by "anything that could be invalidated by the function". Does that include anything reachable via global variables? Because if it does, then you can't pass anything reachable from a global variable to any function by reference, which is crippling. But if it doesn't, then what rejects this code?

Update: Herb points out that example 7.1 covers a similar situation with raw pointers. That example indicates that anything reachable through a global variable cannot be passed to a function by raw pointer or reference. That still seems like a crippling limitation to me. You can't, for example, copy-construct anything (indirectly) reachable through a global variable:

unique_ptr<Foo> p;
void bar() {
p = make_unique<Foo>(...);
Foo xyz(*p); // Forbidden!
}

This is not one rogue example that is easily addressed. This example cuts to the heart of the problem, which is that understanding aliasing in the face of functions with potentially unbounded side effects is notoriously difficult. I myself wrote a PhD thesis on the subject, one among hundreds, if not thousands. Designing your language and its libraries from the ground up to deal with these issues has been shown to work, in Rust at least, but I'm deeply skeptical it can be bolted onto C++.

Q&A

Aren't clang and MSVC already shipping previews of this safe subset? They're implementing static checking rules that no doubt will catch many bugs, which is great. They're nowhere near demonstrating they can catch every memory safety bug.

Aren't you always vulnerable to bugs in the compiler, foreign code, or mistakes in the safety proofs, so you can never reach 100% safety anyway? Yes, but it is important to reduce the amount of trusted code to the minimum. There are ways to use machine-checked proofs to verify that compilation and proof steps do not introduce safety bugs.

Won't you look stupid when Section III is released? Occupational hazard, but that leads me to one more point: even if and when a statically checked, plausibly safe subset is produced, it will take significant experience working with that subset to determine whether it's viable. A subset that rejects core C++ features such as references, or otherwise excludes most existing C++ code, will not be very compelling (as acknowledged in the Lifetimes document: "Our goal is that the false positive rate should be kept at under 10% on average over a large body of code").

Categorieën: Mozilla-nl planet

Jen Kagan: day 16: helpful git things

Mozilla planet - mo, 13/06/2016 - 20:54

it’s been important for me to get comfortable-ish with git. i’m slowly learning about best practices on a big open source project that’s managed through github.

one example: creating a separate branch for each feature i work on. in the case of min-vid, this means i created one branch to add youtu.be support, a different branch to add to the project’s README, a different branch to work on vine support, etc. that way, if my changes aren’t merged into the main project’s master, i don’t have to re-clone the project. i just keep working on the branch or delete it or whatever. this also lets me bounce between different features if i get stuck on one and need to take a break by working on another one. i keep the workflow on a post-it on my desktop so i don’t have to think about it (a la atul gawande’s so good checklist manifesto):

git checkout master

git pull upstream master
(to get new changes from the main project’s master branch)

git push origin master
(to push new changes up to my own master branch)

git checkout -b [new branch]
(to work on a feature)

npm run package
(to package the add-on before submitting the PR)

git add .

git commit -m '[commit message]'

git push origin [new branch]
(to push my changes to my feature branch; from here, i can submit a PR)

git checkout master

another important git practice: squashing commits so my pull request doesn’t include 1000 commits that muddy the project history with my teensy changes. this is the most annoying thing ever and i always mess it up and i can’t even bear to explain it because this person has done a pretty good job already. just don’t ever ever forget to REBASE ON TOP OF MASTER, people!

last thing, which has been more important on my side project that i’m hosting on gh-pages: updating my gh-pages branch with changes from my master branch. this is crucial because the gh-pages branch, which displays my website, doesn’t automatically incorporate changes i make to my index.html file on my master branch. so here’s the workflow:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[commit message]'

git push origin master
(the previous commands push your changes to your master branch. now, to update your gh-pages branch:)

git checkout gh-pages

git merge master

git push origin gh-pages

yes, that’s it, the end, congrats!

p.s. that all assumes that you already created a gh-pages branch to host your website. if you haven’t and want to, here’s how you do it:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[message]'

git push origin master
(same as before. this is just normal, updating-your-master-branch stuff. so, next:)

git checkout -b gh-pages
(-b creates a new branch, gh-pages names the new branch “gh-pages”)

git push origin gh-pages
(this pushes your new local gh-pages branch, with the changes it inherited from master, up to a gh-pages branch on origin)

yes, that’s it, the end, congrats!

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 134

Mozilla planet - mo, 13/06/2016 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42 and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is petgraph, which provides graph structures and algorithms. Thanks to /u/diwic for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

110 pull requests were merged in the last two weeks.

New Contributors
  • Andrew Brinker
  • Chris Tomlinson
  • Hendrik Sollich
  • Horace Abenga
  • Jacob Clark
  • Jakob Demler
  • James Alan Preiss
  • James Lucas
  • Joachim Viide
  • Mark Côté
  • Mathieu De Coster
  • Michael Necio
  • Morten H. Solvang
  • Wojciech Nawrocki
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Isn’t rust too difficult to be widely adopted?

I believe in people.

Steve Klabnik on TRPLF

Thanks to Steven Allen for the suggestion.

Submit your quotes for next week!

Categorieën: Mozilla-nl planet

The Servo Blog: This Week In Servo 67

Mozilla planet - mo, 13/06/2016 - 02:30

In the last week, we landed 85 PRs in the Servo organization’s repositories.

That number is a bit low this week, due to some issues with our CI machines (especially the OSX boxes) that have hurt our landing speed. Most of the staff are in London this week for the Mozilla All Hands meeting, but we’ll try to look at it.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions
  • glennw upgraded our GL API usage to rely on more GLES3 features
  • ms2ger removed some usage of transmute
  • nox removed some of the dependencies on crates that are very fragile to rust nightly changes
  • nox reduced the number of fonts that we load unconditionally
  • larsberg added the ability to open web pages in Servo on Android
  • anderco fixed some box shadow issues
  • ajeffrey implemented the beginnings of the top level browsing context
  • izgzhen improved the implementation and tests for the file manager thread
  • edunham expanded the ./mach package command to handle desktop platforms
  • daoshengmu implemented TexSubImage2d for WebGL
  • pcwalton fixed an issue with receiving mouse events while scrolling in certain situations
  • danlrobertson continued the quest to build Servo on FreeBSD
  • manishearth reimplemented XMLHttpRequest in terms of the Fetch specification
  • kevgs corrected a spec-incompatibility in Document.defaultView
  • fduraffourg added a mechanism to update the list of public suffixes
  • farodin91 enabled using WindowProxy types in WebIDL
  • bobthekingofegypt prevented some unnecessary echoes of websocket quit messages
New Contributors

There were no new contributors this week.

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

No screenshots this week.

Categorieën: Mozilla-nl planet

Air Mozilla: Hackathon Open Democracy Now Day 2

Mozilla planet - sn, 11/06/2016 - 09:30

Hackathon Open Democracy Now Day 2: the opening hackathon of the Futur en Seine 2016 festival, on the theme of Civic Tech.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Some Dynamic Measurements Of Firefox On x86-64

Mozilla planet - sn, 11/06/2016 - 05:25

This follows up on my previous measurements of static properties of Firefox code on x86-64 with some measurements of dynamic properties obtained by instrumenting code. These are mostly for my own amusement but intuitions about how programs behave at the machine level, grounded in data, have sometimes been unexpectedly useful.

Dynamic properties are highly workload-dependent. Media codecs are more SSE/AVX intensive than regular code so if you do nothing but watch videos you'd expect qualitatively different results than if you just load Web pages. I used a mixed workload that starts Firefox (multi-process enabled, optimized build), loads the NZ Herald, scrolls to the bottom, loads an article with a video, plays the video for several seconds, then quits. It ran for about 30 seconds under rr and executes about 60 billion instructions.

I repeated my register usage analysis, this time weighted by dynamic execution count and taking into account implicit register usage such as push using rsp. The results differ significantly depending on whether you count the consecutive iterations of a repeated string instruction (e.g. rep movsb) as a single instruction execution or as one instruction execution per iteration, so I show both. Unlike the static graphs, these results are for all instructions executed anywhere in the process(es), including JITted code, not just libxul.

  • As expected, registers involved in string instructions get a big boost when you count string instruction repetitions individually. About 7 billion of the 64 billion instruction executions "with string repetitions" are iterations of string instructions. (In practice Intel CPUs can optimize these to execute 64 iterations at a time, under favourable conditions.)
  • As expected, sp is very frequently used once you consider its implicit uses.
  • String instructions aside, the dynamic results don't look very different from the static results. Registers R8 to R11 look a bit more used in this graph, which may be because they tend to be allocated in highly optimized leaf functions, which are more likely to be hot code.

  • The surprising thing about the results for SSE/AVX registers is that they still don't look very different to the static results. Even the bottom 8 registers still aren't frequently used compared to most general-purpose registers, even though I deliberately tried to exercise codec code.
  • I wonder why R5 is the least used bottom-8 register by a significant margin. Maybe these results are dominated by a few hot loops that by chance don't use that register much.

I was also interested in exploring the distribution of instruction execution frequencies:

A dot at position x, y on this graph means that fraction y of all instructions executed at least once is executed at most x times. So, we can see that about 19% of all instructions executed are executed only once. About 42% of instructions are executed at most 10 times. About 85% of instructions are executed at most 1000 times. These results treat consecutive iterations of a string instruction as a single execution. (It's hard to precisely define what it means for an instruction to "be the same" in the presence of dynamic loading and JITted code. I'm assuming that every execution of an instruction at a particular address in a particular address space is an execution of "the same instruction".)
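For what it's worth, the curve being described can be computed from a per-instruction execution-count table along these lines (a sketch of my own, not the tooling used for this post; the counts array is toy data):

import numpy as np

# counts[i] = number of times instruction i (identified by address) was executed
counts = np.array([1, 1, 3, 10, 10, 1000, 160_000_000])

xs = np.sort(counts)
ys = np.arange(1, len(xs) + 1) / len(xs)  # fraction of instructions executed at most xs[k] times

for x, y in zip(xs, ys):
    print(f"{y:.2%} of instructions executed at most {x} times")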

Interestingly, the five most frequently executed instructions are executed about 160M times. Those instructions are for this line, which is simply filling a large buffer with 0xff000000. gcc is generating quite slow code:

132e7b2: cmp %rax,%rdx
132e7b5: je 132e7d1
132e7b7: movl $0xff000000,(%r9,%rax,4)
132e7bf: inc %rax
132e7c2: jmp 132e7b2

That's five instructions executed for every four bytes written. This could be done a lot faster in a variety of different ways --- rep stosd or rep stosq would probably get the fast-string optimization, but SSE/AVX might be faster.
Categorieën: Mozilla-nl planet

Anthony Hughes: London Calling

Mozilla planet - sn, 11/06/2016 - 02:08

I’m happy to share that I will be hosting my first ever All-hands session, Graphics Stability – Tools and Measurements (12:30pm on Tuesday in the Hilton Metropole), at the upcoming Mozilla All-hands in London. The session is intended to be a conversation about taking a more collaborative approach to data-driven decision-making as it pertains to improving Graphics stability.

I will begin by presenting how the Graphics team is using data to tackle the graphics stability problem, reflecting on the problem at hand and the various approaches we’ve taken to date. My hope is this serves as a catalyst to lively discussion for the remainder of the session, resulting in a plan for more effective data-driven decision-making in the future through collaboration.

I am extending an invitation to those outside the Graphics team, to draw on a diverse range of backgrounds and expertise. As someone with a background in QA, I find data analysis an interesting diversion (some may call it an evolution) in my career — it’s something I just fell into after a fairly lengthy and difficult transitional period. While I’ve learned a lot recently, I am an amateur data scientist at best and could certainly benefit from more developed expertise.

I hope you’ll consider being a part of this conversation with the Graphics team. It should prove to be both educational and insightful. If you cannot make it, not to worry, I will be blogging more on this subject after I return from London.

Feel free to reach out to me if you have questions.

Categorieën: Mozilla-nl planet

Karl Dubost: [worklog] Edition 025. Forest and bugs

Mozilla planet - fr, 10/06/2016 - 12:00

In France, in the forest, listening to the sound of leaves on oak trees, in between bugs and preparing for London work week. Tune of the week: Working Class Hero.

Webcompat Life

Progress this week:

Today: 2016-06-14T06:14:05.461238
338 open issues
----------------------
needsinfo        4
needsdiagnosis   92
needscontact     23
contactready     46
sitewait         161
----------------------

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week).

WebCompat.com dev
  • I'm wondering if we are not missing a step once we have contacted a Web site.
Reading List

Follow Your Nose

TODO
  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Categorieën: Mozilla-nl planet

Air Mozilla: Hackathon Open Democracy Now

Mozilla planet - fr, 10/06/2016 - 09:30

Hackathon Open Democracy Now: the opening hackathon of the Futur en Seine 2016 festival, on the theme of Civic Tech.

Categorieën: Mozilla-nl planet
