Planet Mozilla - http://planet.mozilla.org/
The Dutch Mozilla community (Mozilla Nederland)

Justin Dolske: That time of year

Fri, 31/10/2014 - 22:45

Via ZooBorns



Chris Cooper: 10.8 testing disabled by default on Try

Fri, 31/10/2014 - 21:25

If you’ve recently submitted patches to the Mozilla Try server, you may have been dismayed by the turnaround time for your test results. Indeed, last week we had reports from some developers that they were waiting more than 24 hours to get results for a single Try push in the face of backlogs caused by tree closures.

The chief culprit here was Mountain Lion, or OS X 10.8, which is our smallest pool (99) of test machines. It was not uncommon for there to be over 2,000 pending test jobs for Mountain Lion at any given time last week. Once we reach a pending count that high, we cannot make headway until the weekend when check-in volume drops substantially.

In the face of these delays, developers started landing some patches on mozilla-inbound before the corresponding jobs had finished on Try, and worse still, not killing the obsolete pending jobs on Try. That’s just bad hygiene and practice. Sheriffs had to actively look for the duplicate jobs and kill them to help decrease load.

We cannot easily increase the size of the Mountain Lion pool. Apple does not allow you to install older OS X versions on new hardware, so our pool size here is capped at the number of machines we bought when 10.8 was released over 2 years ago or what we can scrounge from resellers.

To improve the situation, we made the decision this week to disable 10.8 testing by default on Try. Developers must now select 10.8 explicitly from the “Restrict tests to platform(s)” list on TryChooser if they want to run Mountain Lion tests. If you have an existing Mac Try build that you’d like to back-fill with 10.8 results, please ping the sheriff on duty (sheriffduty) in #developers or #releng and they can help you out *without* incurring another full Try run.

Please note that we do plan to stand up Yosemite (10.10) testing as a replacement for Mountain Lion early in 2015. This is a stop-gap measure until we’re able to do so.


Mozilla Open Policy & Advocacy Blog: Net Neutrality in the U.S. Reaches a Tipping Point

Fri, 31/10/2014 - 17:19

We’ve spent years working to advance net neutrality all around the world. This year, net neutrality in the United States became a core focus of ours because of the major U.S. court decision striking down the existing Federal Communications Commission (FCC) rules. The pressure for change in the U.S. has continued to grow, fueled by a large coalition of public interest organizations, including Mozilla, and by the voices of millions of individual Americans.

In May, we filed a petition to the FCC to propose a new path forward to adopt strong, user- and innovation-protecting rules once and for all. We followed that up by mobilizing our community, organizing global teach-ins on net neutrality, and submitting initial comments and reply comments to the agency. We also joined a major day of action and co-authored a letter to the President. We care about this issue.

Net neutrality has now reached a tipping point. As the days grow shorter, the meetings on the topic grow longer. We believe the baseline of what we can expect has gone up, and now, rumored likely outcomes all include some element of Title II, or common carrier, protections sought by advocates against significant opposition. We don’t know what that will look like, or whether the baseline has come up enough. Still, we are asking the FCC for what we believe the open Internet needs to ensure a level playing field for user choices and innovation.

Our baseline:
• Strong rules against blocking and discrimination online, to prevent the open, generative Internet from being closed off by gatekeepers in the access service;
• Title II in the “last mile” – the local portion of the network controlled by the Internet access service provider – to help ensure the FCC’s authority to issue net neutrality rules will survive challenge; and
• The same framework and rules applied to mobile as well as fixed access services.

Our Petition focused on the question of where the FCC derives its authority. We told the FCC we support both hybrid classification proposals and reclassification, and, choosing between the two, we prefer reclassification as the simplest, cleanest path forward. But we believe both paths would allow the FCC to adopt the same strong rules to protect the open Internet, and survive court review.

We don’t know where this proceeding will end up. We will continue to do whatever we can to achieve our baseline. Stay tuned.


Mozilla Reps Community: Reps Weekly Call – October 30th 2014

Fri, 31/10/2014 - 14:00

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary
  • Mozfest.
  • Contributor Survey.
  • Community Content Contributor for the new Tiles Project.
  • Firefox OS BUS update.

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141030/

Don’t forget to comment about this call on Discourse and we hope to see you next week!


Tantek Çelik: How URL started as UDI — a brief conversation with @timberners_lee @W3C #TPAC

Fri, 31/10/2014 - 13:41

fun: showed @timberners_lee my post[1] on URL naming history.
priceless: Tim explained how "URL" started as "UDI".

The following is from a conversation I had on 2014-303 during this week's W3C TPAC meetings with Tim Berners-Lee, which he gave me permission to post on my site.

Universal Document Identifier

When Tim developed and implemented the concept / technology we now know as a "URL", he originally called it a "UDI" which stood for:

(U)niversal (D)ocument (I)dentifier

When Tim brought UDI to the IETF (Internet Engineering Task Force) for standardization, they formed a working group to work on it called the "URI working group". Then they objected to the naming of "UDI" and insisted on renaming it.

Universal to Uniform

They objected to "Universal". They said to call it universal was hubris, even if the technology actually was universal in its design that allowed any identification mechanism to define its own scheme.

So the IETF changed "Universal" to "Uniform".

Document to Resource

They objected to "Document" - they said that was too specific and that such things were better, more generally, referred to as "Resources".

Identifier to Locator

Finally they objected to "Identifier", because in their minds these kinds of things were either a "name" OR an "address" (not both).

Thus they deliberately changed "Identifier" to "Locator" because the design of UDIs was that they were an address where you went to retrieve something.

They deliberately called them "Locator" to make them sound less reliable, as a warning not to use them as a "name" to identify something. Because they wanted people to use URNs instead (e.g. DOIs etc.).

URLs Identify Things, UDI Clues

Today, people use URLs to identify things, including documents, companies, and even people. URNs not so much.

Yes, "URL" was previously called "UDI", and the IETF made Tim Berners-Lee rename it.

You can find clues of this background in a surviving copy of the 1994-03-21 draft of the "Uniform Resource Locators (URL)" specification[2], buried in the "Acknowledgments" section:

"The paper url3 had been generated from udi2 in the light of discussion at the UDI BOF meeting at the Boston IETF in July 1992." More Digging

Curiously, the "hypertext form" of the "Uniform Resource Locators (URL)" specification[2] that it mentions, 404s:

http://info.cern.ch/hypertext/WWW/Addressing/URL/Overview.html

However, with a little searching I found the undated, yet apparently even older (likely 1991), "W3 Naming Schemes"[3], which describes URLs / UDIs without mentioning either by name, and which links to "W3 address syntax: BNF"[4], which provides names for the different parts of the "W3 addressing syntax", like:

anchoraddress
        docaddress [ # anchor ]
docaddress
        httpaddress | fileaddress | newsaddress | telnetaddress | prosperoaddress | gopheraddress | waisaddress
httpaddress
        h t t p : / / hostport [ / path ] [ ? search ]

Look familiar? I'm going to have to update my blog post[1].

References
  1. 2011-08-26 How many ways can you slice a URL and name the pieces?
  2. 1994-03-21 Uniform Resource Locators (URL) — A Syntax for the Expression of Access Information of Objects on the Network
  3. 1991 W3 Naming Schemes
  4. 1991 W3 address syntax: BNF

Wil Clouser: Contributing a patch to the Firefox Marketplace from scratch

Fri, 31/10/2014 - 08:00

Jared, Stuart, and Andy recently spent some time focusing on one of the Marketplace's biggest hurdles for new contributors: how do I get all these moving pieces set up and talking to each other?

I haven't written a patch for the Marketplace in a while so I decided to see what all the fuss was about. First up I, of course, read the installation documentation. Ok, I skimmed it, but it looks pretty straightforward.

Step 1: Install Docker

I'm running Ubuntu so that's as easy as:

sudo apt-get install docker.io

To fix permissions (put your own username instead of clouserw):

sudo usermod -a -G docker clouserw

Step 2: Build Dependencies

The steps below had lots of output which I'll skip pasting here, but there were no errors and it only took a few minutes to run.

$ git clone https://github.com/mozilla/wharfie
$ cd wharfie
$ bin/mkt whoami clouserw
$ bin/mkt checkout
$ mkvirtualenv wharfie
$ pip install --upgrade pip
$ pip install -r requirements.txt
$ sudo sh -c "echo 127.0.0.1 mp.dev >> /etc/hosts"

Step 3: Build and run the Docker containers

I ran this seemingly innocent command:

$ fig build

And 9 minutes and a zillion pages of output later I saw a promising message saying it had successfully built. One more command:

$ fig up

and I loaded http://mp.dev/ in my browser:

Screenshot of Marketplace

A weird empty home page, but it's a running version of the Marketplace on my local computer! Success! Although, I'm not sure it counts unless the unit tests pass. Let's see...

$ CTRL-C  # To shutdown the running fig instance
$ fig run --rm zamboni python ./manage.py test --noinput -s --logging-clear-handlers
[...]
Ran 4223 tests in 955.328s
FAILED (SKIP=26, errors=34, failures=17)

Hmm...that doesn't seem good. Apparently there is some work left to get the tests to pass. I'll file bug 1082183 and keep moving. I know Travis-CI will automatically run all the tests on any pull request so any changes I make will still be tested -- depending on the changes you make this might be enough.

Step 4: Let's write some code

If I were new to the Marketplace I'd look at the Contributing docs and follow the links there to find a bug to work on. However, I know Bug 989121 - Upgrade django-csp has been assigned to me for six months so I'm going to do that.

I'll avoid talking about the patch since I'm trying to focus on the how and not the what in this post. The code is all in the /trees/ subdirectory under wharfie, so I'll go there to write my code. A summary of the commands:

$ cd trees/zamboni
$ vi <files>              # Be sure to include unit tests
$ git add <files>
$ git checkout -b 989121  # I name my branches after my bug numbers
$ git commit              # commit messages must include the bug number
$ git push origin 989121

Now my changes are on Github! When I load the repository I committed to in my browser I see a big green button at the top asking if I want to make a pull request.

Github pull request button

I click the button and submit my pull request, which notifies the Marketplace developers that I'd like to merge the changes in. It will also trigger the unit tests and notify me via email if any of them fail. Assuming everything passes, I'm all done.

This flow is still a little rough around the edges, but for an adventurous contributor it's certainly possible. It looks like Bug 1011198 - Get a turn-key marketplace dev-env up and running is tracking progress on making this easier so if you're interested in contributing feel free to follow along and jump in when you're comfortable.


Brian Birtles: After 10 years

Fri, 31/10/2014 - 07:29

Yesterday marked 10 years to the day since I posted my first patch to Bugzilla. It was a small patch to composite SVG images with their background (and not just have a white background).

Since then I’ve contributed to Firefox as a volunteer, an intern, a contractor, and, as of 3 years ago tomorrow, a Mozilla Japan employee.

It’s still a thrill and privilege to contribute to Firefox. I’m deeply humbled by the giants I work alongside who support me like I was one of their own. In the evening when I’m tired from the day I still often find myself bursting into a spontaneous prayer of thanks that I get to work on this stuff.

So here are 8 reflections from the last 10 years. It should have been 10 but I ran out of steam.

Why I joined

  1. I got involved with Firefox because, as a Web developer, I wanted to make the Web platform better. Firefox was in a position of influence and anyone could join in. It was open technically and culturally. XPCOM took a little getting used to but everyone was very supportive.

What I learned

  1. Don’t worry about the boundaries. When I first started hacking on SVG code I would be afraid to touch any source file outside /content/svg/content/src (now, thankfully, dom/svg!). When I started on the SVG working group I would think, “we can’t possibly change that, that’s another working group’s spec!” But when Cameron McCormack joined Mozilla I was really impressed by how he fearlessly fixed things all over the tree. As I’ve become more familiar and confident with Firefox code and Web specs, I’ve stopped worrying about artificial boundaries like folders and working groups and become more concerned with fixing things properly.

  2. Blessed are the peacemakers. It’s really easy to get into arguments on the Internet that don’t help anyone. I once heard a colleague consider how Jesus’ teaching applies to the Internet. He suggested that sometimes when someone makes a fool of us on the Internet the best thing is just to leave it and look like a fool. I find that hard to do and don’t do it often, but I’m always glad when I do.

    Earlier this year another colleague impressed me with her very graceful response to Brendan’s appointment to CEO. I thought it was a great example of peace-making.

  3. Nothing is new. I’ve found these words very sobering:

    What has been will be again,
      what has been done will be done again;
      there is nothing new under the sun.
    Is there anything of which one can say,
      “Look! This is something new”?
    It was here already, long ago;
      it was here before our time. (Ecclesiastes 1:9–10)

    It’s so easy to get caught up in some new technology—I got pretty caught up defending SVG Animation (aka SMIL) for a while when I worked on it. Taking a step back though, that new thing has almost invariably been done before in some form, and it will certainly be superseded in my lifetime. In fact, every bit of code I’ve ever written will almost certainly be either rewritten or abandoned altogether within my lifetime.

    In light of that I try to fixate less on each new technology and more on the process: what kind of person was I when I implemented that (now obsolete) feature? What motivated me to work at it each day? That, I believe, is eternal.

How I hope Mozilla will shape up over the next 10 years

  1. I hope we’ll be the most welcoming community on the Web

    I don’t mean that we’ll give free hugs to new contributors, or that we’ll accept any patch that manages to enter Bugzilla, or we’ll entertain any troublemaker who happens upon #developers. Rather, I hope that anyone who wants to help out finds overwhelming encouragement and enthusiasm and without having to sign up to an ideological agenda first. Something like this interaction.

  2. I hope we’ll stay humble

    I’d love to see Mozilla be known as servants of the Web but when things go well there’s always the danger we’ll become arrogant, less welcoming of others’ ideas, and deaf to our critics. I hope we can celebrate our victories while taking a modest view of ourselves. Who knows, maybe our harshest critics will become some of our most valuable contributors.

  3. I hope we’ll talk less, show more

    By building amazing products through the input of thousands of people around the world we can prove Open works, we can prove you don’t need to choose between privacy and convenience. My initial interest in Mozilla was because of its technical excellence and welcoming community. The philosophy came later.

  4. I hope we’ll make less t-shirts

    Can we do, I don’t know, a shirt once in a while? Socks even? Pretty much anything else!


Daniel Stenberg: Changing networks on Mac with Firefox

Thu, 30/10/2014 - 22:46

Not too long ago I blogged about my work to better deal with changing networks while Firefox is running. That job was basically two parts.

A) generic code to handle receiving such a network-changed event and then

B) a platform specific part that was for Windows that detected such a network change and sent the event

Today I’ve landed yet another fix for part B called bug 1079385, which detects network changes for Firefox on Mac OS X.

I’ve never programmed anything on the Mac before, so this was sort of my christening in this environment. I mean, I’ve written countless POSIX-compliant programs, including curl and friends, that certainly build and run on Mac OS just fine, but I had never before used the Mac-specific APIs to do things.

I got a mac mini just two weeks ago to work on this. Getting it up and prepared, and building my first Firefox from source, took less than three hours all in all. Learning the details of the Mac API world was much more trouble, and I can’t say that I’m mastering it now either, but I did at least figure out how to detect when IP addresses on the interfaces change, and a changed address is a pretty good signal that the network changed somehow.
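
For the curious, here is a minimal sketch (my own illustration, not the actual patch from bug 1079385) of how the SystemConfiguration framework can be asked to call you back when IPv4 configuration changes; compile with -framework SystemConfiguration -framework CoreFoundation:

#include <CoreFoundation/CoreFoundation.h>
#include <SystemConfiguration/SystemConfiguration.h>
#include <stdio.h>

/* Called by the run loop whenever one of the watched keys changes. */
static void NetworkChanged(SCDynamicStoreRef store, CFArrayRef changedKeys, void *info)
{
  (void)store; (void)changedKeys; (void)info;
  printf("network configuration changed\n");
  /* A real implementation would now re-check interface addresses and
     notify its observers (in Firefox: dispatch the network-changed event). */
}

int main(void)
{
  SCDynamicStoreContext ctx = { 0, NULL, NULL, NULL, NULL };
  SCDynamicStoreRef store =
      SCDynamicStoreCreate(NULL, CFSTR("watch-network"), NetworkChanged, &ctx);

  /* Watch the global IPv4 state plus the per-service IPv4 entries, so we
     hear about address changes on any interface. */
  CFStringRef globalKey = SCDynamicStoreKeyCreateNetworkGlobalEntity(
      NULL, kSCDynamicStoreDomainState, kSCEntNetIPv4);
  CFStringRef pattern = SCDynamicStoreKeyCreateNetworkServiceEntity(
      NULL, kSCDynamicStoreDomainState, kSCCompAnyRegex, kSCEntNetIPv4);

  CFArrayRef keys = CFArrayCreate(NULL, (const void **)&globalKey, 1,
                                  &kCFTypeArrayCallBacks);
  CFArrayRef patterns = CFArrayCreate(NULL, (const void **)&pattern, 1,
                                      &kCFTypeArrayCallBacks);
  SCDynamicStoreSetNotificationKeys(store, keys, patterns);

  CFRunLoopSourceRef source = SCDynamicStoreCreateRunLoopSource(NULL, store, 0);
  CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
  CFRunLoopRun();  /* blocks; the callback fires from here */
  return 0;
}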


Nathan Froyd: porting rr to x86-64

Thu, 30/10/2014 - 21:47

(TL;DR: rr from git can record and replay 64-bit programs.  Try it for yourself!)

Over the last several months, I’ve been devoting an ever-increasing amount of my time to making rr able to trace x86-64 programs.  I’ve learned a lot along the way and thought I’d lay out all the major pieces of work that needed to be done to make this happen.

Before explaining the major pieces, it will be helpful to define some terms: the host architecture is the architecture that the rr binary itself is compiled for.  The target architecture is the architecture of the binary that rr is tracing.  These are often equivalent, but not necessarily so: you could be tracing a 64-bit binary with a 64-bit rr (host == target), but then the program starts to run a 32-bit subprocess, which rr also begins to trace (host != target).  And you have to handle both cases in a single rr session, with a single rr binary.  (64-bit rr doesn’t handle the host != target case quite yet, but all the infrastructure is there.)

All of the pieces described below are not new ideas: the major programs you use for development (compiler, linker, debugger, etc.) all have done some variation of what I describe below.  However, it’s not every day that one takes a program written without any awareness of host/target distinctions and endows it with the necessary awareness.

Quite often, a program written exclusively for 32-bit hosts has issues when trying to compile for 64-bit hosts, and rr was no exception in this regard.  Making the code 64-bit clean by fixing all the places that triggered compiler warnings on x86-64, but not on i386, was probably the easiest part of the whole porting effort.  Format strings were a big part of this: writing %llx when you wanted to print a uint64_t, for instance, which assumes that uint64_t is implemented as unsigned long long (not necessarily true on 64-bit hosts).  There were several places where long was used instead of uint32_t.  And there were even places that triggered signed/unsigned comparison warnings on 64-bit platforms only.  (Exercise for the reader: construct code where this happens before looking at the solution.)
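
As a small illustration of that class of fix (my own example, not code from rr), <inttypes.h> provides format macros that pick the right conversion specifier for fixed-width types on every host:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
  uint64_t trace_offset = 0xdeadbeefcafeULL;

  /* Non-portable: assumes uint64_t is unsigned long long, which happens to be
     true on i386 but is not guaranteed; on most 64-bit Linux hosts uint64_t is
     unsigned long, so this line draws a -Wformat warning there. */
  printf("offset = %llx\n", trace_offset);

  /* Portable: PRIx64 expands to the correct conversion specifier for uint64_t
     on whatever host this is compiled for. */
  printf("offset = %" PRIx64 "\n", trace_offset);
  return 0;
}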

Once all the host issues are dealt with, removing all the places where rr assumed semantics or conventions of the x86 architecture was the next step.  In short, all of the code assumed host == target: we were compiled on x86, so that must be the architecture of the program we’re debugging.  How many places actually assumed this, though?  Consider what the very simplified pseudo-code of the rr main recording loop looks like:

while (true) {
  wait for the tracee to make a syscall
  grab the registers at the point of the syscall
  extract the syscall number from the registers            (1)
  switch (syscall_number) {
    case SYS_read:                                         (2)
      extract pointer to the data read from the registers  (3)
      record contents of data
      break;
    case SYS_clock_gettime:
      extract pointers for argument structures from the registers
      record contents of those argument structures         (4)
      break;
    case SYS_mmap:                                         (5)
      ...
    case SYS_mmap2:                                        (6)
      ...
    case SYS_clone:                                        (7)
      ...
    ...
    default:
      complain about an unhandled syscall
  }
  let the tracee resume
}

Every line marked with a number at the end indicates a different instance where host and target differences come into play and/or the code might have assumed x86 semantics.  (And the numbering above is not exhaustive!)  Taking them in order:

  1. You can obtain the registers of your target with a single ptrace call, but the layout of those registers depends on your target.  ptrace returns the registers as a struct user_regs, which differs between targets; the syscall number location obviously differs between different layouts of struct user_regs.
  2. The constant SYS_read refers to the syscall number for read on the host.  If you want to identify the syscall number for the target, you’ll need to do something different.
  3. This instance is a continuation of #1: syscall arguments are passed in different registers for each target, and the locations of those registers differ in size and location between different layouts of struct user_regs.
  4. SYS_clock_gettime takes a pointer to a struct timespec.  How much data should we read from that pointer for recording purposes?  We can’t just use sizeof(struct timespec), since that’s the size for the host, not the target.
  5. Like SYS_read, SYS_mmap refers to the syscall number for mmap on the host, so we need to do something similar to SYS_read here.  But just because two different architectures have a SYS_mmap, it doesn’t mean that the calling conventions for those syscalls at the kernel level are identical.  (This distinction applies to several other syscalls as well.)  SYS_mmap on x86 takes a single pointer argument, pointing to a structure that contains the syscall’s arguments.  The x86-64 version takes its arguments in registers.  We have to extract arguments appropriately for each calling convention.
  6. SYS_mmap2 only exists on x86; x86-64 has no such syscall.  So we have to handle host-only syscalls or target-only syscalls in addition to things like SYS_read.
  7. SYS_clone has four (!) different argument orderings at the kernel level, depending on the architecture, and x86 and x86-64 of course use different argument orderings.  You must take these target differences into account when extracting arguments.  SYS_clone implementations also differ in how they treat the tls parameter, and those differences have to be handled as well.

So, depending on the architecture of our target, we want to use different constants, different structures, and do different things depending on calling conventions or other semantic differences.

The approach rr uses is that the Registers of every rr Task (rr’s name for an operating system thread) has an architecture, along with a few other things like recorded events.  Every structure for which the host/target distinction matters has an arch() accessor.  Additionally, we define some per-architecture classes.  Each class contains definitions for important kernel types and structures, along with enumerations for syscalls and various constants.

Then we try to let C++ templates do most of the heavy lifting.  In code, it looks something like this:

enum SupportedArch {
  x86,
  x86_64,
};

class X86Arch { /* many typedefs, structures, enums, and constants defined... */ };
class X64Arch { /* many typedefs, structures, enums, and constants defined... */ };

#define RR_ARCH_FUNCTION(f, arch, args...)   \
  switch (arch) {                            \
    default:                                 \
      assert(0 && "Unknown architecture");   \
    case x86:                                \
      return f<X86Arch>(args);               \
    case x86_64:                             \
      return f<X64Arch>(args);               \
  }

class Registers {
public:
  SupportedArch arch() const { ... }

  intptr_t syscallno() const {
    switch (arch()) {
      case x86:
        return u.x86.eax;
      case x86_64:
        return u.x64.rax;
    }
  }

  // And so on for argN accessors and so forth...

private:
  union RegisterUnion {
    X86Arch::user_regs x86;
    X64Arch::user_regs x64;
  } u;
};

template <typename Arch>
static void process_syscall_arch(Task* t, int syscall_number) {
  switch (syscall_number) {
    case Arch::read:
      remote_ptr buf = t->regs().arg2();
      // do stuff with buf
      break;

    case Arch::clock_gettime:
      // We ensure Arch::timespec is defined with the appropriate types so it
      // is exactly the size |struct timespec| would be on the target arch.
      remote_ptr tp = t->regs().arg2();
      // do stuff with tp
      break;

    case Arch::mmap:
      switch (Arch::mmap_argument_semantics) {
        case Arch::MmapRegisterArguments:
          // x86-64
          break;
        case Arch::MmapStructArguments:
          // x86
          break;
      }
      break;

    case Arch::mmap2:
      // Arch::mmap2 is always defined, but is a negative number on
      // architectures where SYS_mmap2 isn't defined.
      // do stuff
      break;

    case Arch::clone:
      switch (Arch::clone_argument_ordering) {
        case Arch::FlagsStackParentTLSChild:
          // x86
          break;
        case Arch::FlagsStackParentChildTLS:
          // x86-64
          break;
      }
      break;

    ...
  }
}

void process_syscall(Task* t) {
  int syscall_number = t->regs().syscallno();
  RR_ARCH_FUNCTION(process_syscall_arch, t->arch(), t, syscall_number);
}

The definitions of X86Arch and X64Arch also contain static_asserts to try and ensure that we’ve defined structures correctly for at least the host architecture.  And even now the definitions of the structures aren’t completely bulletproof; I don’t think the X86Arch definitions of some structures are robust on a 64-bit host because of differences in structure field alignment between 32-bit and 64-bit, for instance.  So that’s still something to fix in rr.

Templates handle the bulk of target-specific code in rr.  There are a couple of places where we need to care about how the target implements mmap and other syscalls which aren’t amenable to templates (or, at least, we didn’t use them; it’s probably possible to (ab)use templates for these purposes), and so we have code like:

Task* t = ...
if (has_mmap2_syscall(t->arch())) {
  // do something specifically for mmap2
} else {
  // do something with mmap
}

Finally, various bits of rr’s main binary and its testsuite are written in assembly, so of course those needed to be carefully ported over.

That’s all the major source-code related work that needed to be done. I’ll leave the target-specific runtime work required for a future post.

x86-64 support for rr hasn’t been formally released, but the x86-64 support in the github repository is functional: x86-64 rr passes all the tests in rr’s test suite and is able to record and replay Firefox mochitests.  I will note that it’s not nearly as fast as the x86 version; progress is being made in improving performance, but we’re not quite there yet.

If you’re interested in trying 64-bit rr out, you’ll find the build and installation instructions helpful, with one small modification: you need to add the command-line option -Dforce64bit=ON to any cmake invocations.  Therefore, to build with Makefiles, one needs to do:

git clone https://github.com/mozilla/rr.git
mkdir obj64
cd obj64
cmake -Dforce64bit=ON ../rr
make -j4
make check

Once you’ve done that, the usage instructions will likely be helpful.  Please try it out and report bugs if you find any!


Joel Maher: A case of the weekends?

Thu, 30/10/2014 - 17:29

Case of the Mondays

What was famous 15 years ago as a case of the Mondays has manifested itself in Talos.  In fact, I wonder why I get so many regression alerts on Monday as compared to other days.  The point is more that we have less noise in our Talos data on weekends.

Take for example the test case tresize:

linux32 (in fact we see this on other platforms as well: linux32/linux64/osx10.8/windowsXP):

[Graph: 30 days of linux tresize]

Many other tests exhibit this.  What is different about weekends?  Are there just fewer data points?

I do know our volume of tests goes down on weekends, mostly as a side effect of fewer patches being landed on our trees.

Here are some ideas I have to debug this more:

  • Run massive retrigger scripts for Talos on weekends to validate whether the number of samples is or isn't the problem
  • Reduce the volume of Talos on weekdays to validate whether the overall system load in the datacenter is or isn't the problem
  • Compare the load of the machines (across all branches, including wait times) to the noise we have in certain tests/platforms
  • Look at platforms like Windows 7, Windows 8, and OS X 10.6 as to why they have more noise on weekends or are more stable.  Finding the delta between platforms would help provide answers

If you have ideas on how to uncover this mystery, please speak up.  I would be happy to have this gone and make any automated alerts more useful!



Peter Bengtsson: Shout-out to eventlog

Thu, 30/10/2014 - 17:05

If you do things with the Django ORM and want an audit trail of all changes, you have two options:

  1. Insert some cleverness into a pre_save signal that writes down all changes some way.

  2. Use eventlog and manually log things in your views.

(you have other options too but I'm trying to make a point here)

eventlog is almost embarrassingly simple. It's basically just a model with three fields:

  • User
  • An action string
  • A JSON dump field

You use it like this:

from eventlog.models import log

def someview(request):
    if request.method == 'POST':
        form = SomeModelForm(request.POST)
        if form.is_valid():
            new_thing = form.save()
            log(request.user, 'mymodel.create', {
                'id': new_thing.id,
                'name': new_thing.name,
                # You can put anything JSON
                # compatible in here
            })
            return redirect('someotherview')
    else:
        form = SomeModelForm()
    return render(request, 'view.html', {'form': form})

That's all it does. You then have to do something with it. Suppose you have an admin page that only privileged users can see. You can make a simple table/dashboard with these like this:

from eventlog.models import Log  # the Log model, not the log function

def all_events(request):
    all = Log.objects.all()
    return render(request, 'all_events.html', {'all': all})

And add something like this to all_events.html:

<table>
  <tr>
    <th>Who</th><th>When</th><th>What</th><th>Details</th>
  </tr>
  {% for event in all %}
  <tr>
    <td>{{ event.user.username }}</td>
    <td>{{ event.timestamp | date:"D d M Y" }}</td>
    <td>{{ event.action }}</td>
    <td>{{ event.extra }}</td>
  </tr>
  {% endfor %}
</table>

What I like about it is that it's very deliberate. By putting it into views at very specific points you're making it an audit log of actions, not of data changes.

Projects with overly complex model save signals tend to dig themselves into holes that make things slow and complicated. And it's not unrealistic that you'll then record events that aren't particularly important to review. For example, a cron job that increments a little value or something. It's more interesting to see what humans have done.

I just wanted to thank the Eldarion guys for eventlog. It's beautifully simple and works perfectly for me.


Adam Lofting: Learning about Learning Analytics @ #Mozfest

Thu, 30/10/2014 - 15:53

If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon; Learning Analytics for Good in the Age of Big Data.

We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.


We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):

  1. Practical solutions to measuring learning as it happens online
  2. The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
  3. The research opportunities for publishing and connecting learning data
But, did anyone learn anything in our Learning Analytics session?

Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?

I spoke to people later in the day who told me they learned things. Is that good enough?

As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?

How much did people learn?

This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…


As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address and to link their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.

This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.

Response rate:
  • We had about 20 participants
  • 10 returned the survey (i.e. opted in to ‘tracking’), by answering question 1
  • 5 of those answered question 2
  • 5 gave their email address (not exactly the same 5 who answered both questions)
Here is our Learning Analytics data from our session

[Screenshot: survey responses from the session]

Is that demonstrable impact?

Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…

What, and how much they learned, and if it will be useful later in their life is another matter.

Even with the deliberate choice of question which was almost impossible to not show improvement from start to end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).

Post-it notes and scribbles

If you were at the session and want to jog your memory about what we talked about, I kind-of documented the various things we captured on paper.


Into 2015

I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.

And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.


Tim Taubert: Why including a backup pin in your Public-Key-Pinning header is a good idea

Thu, 30/10/2014 - 14:00

In my last post “Deploying TLS the hard way” I explained how TLS and its extensions (as well as a few HTTP extensions) work and what to watch out for when enabling TLS for your server. One of the HTTP extensions mentioned is HTTP Public-Key-Pinning (HPKP). As a short reminder, the header looks like this:

Public-Key-Pins: pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk="; pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s="; max-age=15768000; includeSubDomains

You can see that it specifies two pin-sha256 values, that is the pins of two public keys. One is the public key of your currently valid certificate and the other is a backup key in case you have to revoke your certificate.

I received a few questions as to why I suggest including a backup pin and what the requirements for a backup key would be. I will try to answer those with a more detailed overview of how public key pinning and TLS certificates work.

How are RSA keys represented?

Let us go back to the beginning and start by taking a closer look at RSA keys:

$ openssl genrsa 4096

The above command generates a 4096 bit RSA key and prints it to the console. Although it says -----BEGIN RSA PRIVATE KEY----- it does not only return the private key but an ASN.1 structure that also contains the public key - we thus actually generated an RSA key pair.

A common misconception when learning about keys and certificates is that the RSA key itself for a given certificate expires. RSA keys however never expire - after all they are just three numbers. Only the certificate containing the public key can expire and only the certificate can be revoked. Keys “expire” or are “revoked” as soon as there are no more valid certificates using the public key, and you threw away the keys and stopped using them altogether.

What does the TLS certificate contain?

When you submit a Certificate Signing Request (CSR) containing your public key to a Certificate Authority, it will issue a valid certificate. That will again contain the public key of the RSA key pair we generated above and an expiration date. Both the public key and the expiration date will be signed by the CA so that modifications of either of the two would render the certificate invalid immediately.

For simplicity I left out a few other fields that X.509 certificates contain to properly authenticate TLS connections, for example your server’s hostname and other details.

How does public key pinning work?

The whole purpose of public key pinning is to detect when the public key of a certificate for a specific host has changed. That may happen when an attacker compromises a CA such that they are able to issue valid certificates for any domain. A foreign CA might also just be the attacker, think of state-owned CAs that you do not want to be able to {M,W}ITM your site. Any attacker intercepting a connection from a visitor to your server with a forged certificate can only be prevented by detecting that the public key has changed.

After the server sent a TLS certificate with the handshake, the browser will look up any stored pins for the given hostname and check whether any of those stored pins match any of the SPKI fingerprints (the output of applying SHA-256 to the public key information) in the certificate chain. The connection must be terminated immediately if pin validation fails.
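
As an illustration of what a pin actually is, here is a small sketch (mine, not from the post) that computes the pin-sha256 value, assuming spki.der already holds the DER-encoded SubjectPublicKeyInfo (for instance exported with "openssl rsa -in key.pem -pubout -outform DER"); link against -lcrypto:

#include <openssl/sha.h>
#include <openssl/evp.h>
#include <stdio.h>

/* Compute base64(SHA-256(SPKI)) - the value that goes into pin-sha256="...". */
int main(void)
{
  FILE *f = fopen("spki.der", "rb");  /* hypothetical filename */
  if (!f) { perror("spki.der"); return 1; }

  unsigned char spki[8192];
  size_t len = fread(spki, 1, sizeof(spki), f);
  fclose(f);

  unsigned char digest[SHA256_DIGEST_LENGTH];
  SHA256(spki, len, digest);

  /* EVP_EncodeBlock writes NUL-terminated base64: ceil(n/3)*4 bytes + NUL. */
  unsigned char b64[((SHA256_DIGEST_LENGTH + 2) / 3) * 4 + 1];
  EVP_EncodeBlock(b64, digest, SHA256_DIGEST_LENGTH);

  printf("pin-sha256=\"%s\"\n", (char *)b64);
  return 0;
}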

If the browser does not find any stored pins for the current hostname then it will directly continue with the usual certificate checks. This might happen if the site does not support public key pinning and does not send any HPKP headers at all, or if this is the first time visiting and the server has not seen the HPKP header yet in a previous visit.

Pin validation should happen as soon as possible and thus before any basic certificate checks are performed. An expired or revoked certificate will be happily accepted at the pin validation stage early in the handshake when any of the SPKI fingerprints of its chain match a stored pin. Only a little later the browser will see that the certificate already expired or was revoked and will reject it.

Pin validation also works for self-signed certificates, but they will of course raise the same warnings as usual as soon as the browser determined they were not signed by a trusted third-party.

What if your certificate was revoked?

If your server was compromised and an attacker obtained your private key you have to revoke your certificate as the attacker can obviously fully intercept any TLS connection to your server and record every conversation. If your HPKP header contained only a single pin-sha256 token you are out of luck until the max-age directive given in the header lets those pins expire in your visitors’ browsers.

Pin validation requires checking the SPKI fingerprints of all certificates in the chain. When for example StartSSL signed your certificate you have another intermediate Class 1 or 2 certificate and their root certificate in the chain. The browser trusts only the root certificate but the intermediate ones are signed by the root certificate. The intermediate certificate in turn signs the certificate deployed on your server and that is called a chain of trust.

To prevent getting stuck after your only pinned key was compromised, you could for example provide the SPKI fingerprint of StartSSL’s Class 1 intermediate certificate. An attacker would now have to somehow get a certificate issued by StartSSL’s Class 1 tier to successfully impersonate you. You are however again out of luck should you decide to upgrade to Class 2 in a month because you decided to start paying for a certificate.

Pinning StartSSL’s root certificate would let you switch Classes any time and the attacker would still have to get a certificate issued by StartSSL for your domain. This is a valid approach as long as you are trusting your CA (really?) and as long as the CA itself is not compromised. In case of a compromise however the attacker would be able to get a valid certificate for your domain that passes pin validation. After the attack was discovered StartSSL would quickly revoke all currently issued certificates, generate a new key pair for their root certificate and issue new certificates. And again we would be out of luck because suddenly pin validation fails and no browser will connect to our site.

Include the pin of a backup key

The safest way to pin your certificate’s public key and be prepared to revoke your certificate when necessary is to include the pin of a second public key: your backup key. This backup RSA key should in no way be related to your first key, just generate a new one.

It is good advice to keep this backup key pair (especially the private key) in a safe place until you need it. Uploading it to the server is dangerous: when your server is compromised you lose both keys at once and have no backup key left.

Generate a pin for the backup key exactly as you did for the current key and include both pin-sha256 values as shown above in the HPKP header. In case the current key is compromised make sure all vulnerabilities are patched and then remove the revoked pin. Generate a CSR for the backup key, let your CA issue a new certificate, and revoke the old one. Upload the new certificate to your server and you are done.

Finally, do not forget to generate a new backup key and include that pin in your HPKP header again. Once a browser successfully establishes a TLS connection the next time, it will see your updated HPKP header and replace any stored pins with the new ones.


Gregory Szorc: Soft Launch of MozReview

Thu, 30/10/2014 - 12:15

We performed a soft launch of MozReview: Mozilla's new code review tool yesterday!

What does that mean? How do I use it? What are the features? How do I get in touch or contribute? These are all great questions. The answers to those and more can all be found in the MozReview documentation. If they aren't, it's a bug in the documentation. File a bug or submit a patch. Instructions to do that are in the documentation.


Kev Needham: things that interest me this week – 29 oct 2014

Thu, 30/10/2014 - 03:13

Quick Update: A couple of people mentioned there are no Mozilla items in here. They’re right, and it’s primarily because the original audience of this type of thing was Mozilla. I’ll make sure I add them where relevant, moving forward.

Every week I put together a bunch of news items I think are interesting to the people I work with, and that’s usually limited to a couple wiki pages a handful of people read. I figured I may as well put it in a couple other places, like here, and see if people are interested. Topics focus on the web, the technologies that power it, and the platforms that make use of it. I work for Mozilla, but these are my own opinions and takes on things.

I try to have three sections:

  • Something to Think About – Something I’m seeing a company doing that I think is important, why I think it’s important, and sometimes what I think should be done about it. Some weeks these won’t be around, because they tend to not show their faces much.
  • Worth a Read – Things I think are worth the time to read if you’re interested in the space as a whole. Limited to three items max, but usually two. If you don’t like what’s in here, tell me why.
  • Notes – Bits and bobs people may or may not be interested in, but that I think are significant, bear watching, or are of general interest.

I’ll throw these out every Wednesday, and standard disclaimers apply – this is what’s in my brain, and isn’t representative of what’s in anyone else’s brain, especially the folks I work with at Mozilla. I’ll also throw a mailing list together if there’s interest, and feedback is always welcome (your comment may get stuck in a spam-catcher, don’t worry, I’ll dig it out).

– k

Something to Think About

Lifehacker posted an article this morning around all the things you can do from within Chrome’s address bar. Firefox can do a number of the same things, but it’s interesting to see the continual improvements the Chrome team has made around search (and service) integration, and also the productivity hacks (like searching your Google drive without actually going there) that people come up with to make a feature more useful than it’s intended design.

Why I think people should care: Chrome’s modifications to the address bar aren’t ground-breaking, nor are they changes that came about overnight. They are a series of iterative changes to a core function that work well with Google’s external services, and focus on increasing utility which, not coincidentally, increases the value and stickiness of the Google experience as a whole. Continued improvements to existing features (and watching how people are riffing on those features) is a good thing, and is something to consider as part of our general product upkeep, particularly around the opportunity to do more with services (both ours, and others) that promote the open web as a platform.

Worth a Read
  • Benedict Evans updated his popular “Mobile Is Eating the World” presentation, and posits that mobile effectively ”is” everything technology today. I think it needs a “Now” at the end, because what he’s describing has happened before, and will happen again. Mobile is a little different currently, mainly because of the gigantic leaps in hardware for fewer dollars that continue to be made as well as carrier subsidies fueling 2-year upgrade cycles. Mobile itself is also not just phones, it’s things other than desktops and laptops that have a network connection. Everything connected is everything. He’s also put together a post on Tablets, PCs and Office that goes a little bit into technology cycles and how things like tablets are evolving to fill more than just media consumption needs, but the important piece he pushes in both places is the concept of network connected screens being the window to your stuff, and the platform under the screen being a commodity (e.g. processing power is improving on every platform to the point the hardware platform is mattering less) that is really simply the interface that better fits the task at hand.
  • Ars Technica has an overview of some of the more interesting changes in Lollipop which focus on unbundling apps and APIs to mitigate fragmentation risk, an enhanced setup process focusing on user experience, and the shift in the Nexus brand from a market-share builder to a premium offering.
  • Google’s Sundar Pichai was promoted last week in a move that solidifies Google’s movement towards a unified, backend-anchored, multi-screen experience. Pichai is a long time Google product person, and has been fronting the Android and Chrome OS (and a couple other related services) teams, and now takes on Google’s most important web properties as well, including Gmail, Search, AdSense, and the infrastructure that runs it. This gives those business units inside Google better alignment around company goals, and shows the confidence Google has in Pichai. Expect further alignment in Google’s unified experience movement through products like Lollipop, Chrome OS, Inbox and moving more Google Account data (and related experiences like notifications and Web Intents) into the cloud, where it doesn’t rely on a specific client and can be shared/used on any connected screen.

Notes
Categorieën: Mozilla-nl planet

Mozilla Release Management Team: Firefox 34 beta3 to beta4

wo, 29/10/2014 - 22:27

  • 38 changesets
  • 64 files changed
  • 869 insertions
  • 625 deletions

Extensions (occurrences): js (16), cpp (16), jsm (9), h (9), java (4), xml (2), jsx (2), html (2), mn (1), mm (1), list (1), css (1)

Modules (occurrences): browser (19), gfx (10), content (8), mobile (6), services (5), layout (4), widget (3), netwerk (3), xpfe (2), toolkit (2), modules (1), accessible (1)

List of changesets:

  • Nicolas Silva: Bug 1083071 - Backout the additional blacklist entries. r=jmuizelaar, a=sledru - 31acf5dc33fc
  • Jeff Muizelaar: Bug 1083071 - Disable D3D11 and D3D9 layers on broken drivers. r=bjacob, a=sledru - 618a12c410bb
  • Ryan VanderMeulen: Backed out changeset 6c46c21a04f9 (Bug 1074378) - 3e2c92836231
  • Cosmin Malutan: Bug 1072244 - Correctly throw the exceptions in TPS framework. r=hskupin a=testonly DONTBUILD - 48e3c2f927d5
  • Mark Banner: Bug 1081959 - "Something went wrong" isn't displayed when the call fails in the connection phase. r=dmose, a=lmandel - 8cf65ccdce3d
  • Jared Wein: Bug 1062335 - Loop panel size increases after switching themes. r=mixedpuppy, a=lmandel - 033942f8f817
  • Wes Johnston: Bug 1055883 - Don't reshow header when hitting the bottom of short pages. r=kats, a=lmandel - 823ecd23138b
  • Patrick McManus: Bug 1073825 - http2session::cleanupstream failure. r=hurley, a=lmandel - eed6613c5568
  • Paul Adenot: Bug 1078354 - Part 1: Make sure we are not waking up an OfflineGraphDriver. r=jesup, a=lmandel - 9d0a16097623
  • Paul Adenot: Bug 1078354 - Part 2: Don't try to measure a PeriodicWave size when an OscillatorNode is using a basic waveform. r=erahm, a=lmandel - b185e7a13e18
  • Gavin Sharp: Bug 1086958 - Back out change to default browser prompting for Beta 34. r=Gijs, a=lmandel - d080a93fd9e1
  • Yury Delendik: Bug 1072164 - Fixes pdf.js for CMYK jpegs. r=bdahl, a=lmandel - d1de09f2d1b0
  • Neil Rashbrook: Bug 1070768 - Move XPFE's autocomplete.css to communicator so it doesn't conflict with toolkit's new global autocomplete.css. r=Ratty, a=lmandel - 78b9d7be1770
  • Markus Stange: Bug 1078262 - Only use the fixed epsilon for the translation components. r=roc, a=lmandel - 2c49dc84f1a0
  • Benjamin Chen: Bug 1079616 - Dispatch PushBlobRunnable in RequestData function, and remove CreateAndDispatchBlobEventRunnable. r=roc, a=lmandel - d9664db594e9
  • Brad Lassey: Bug 1084035 - Add the ability to mirror tabs from desktop to a second screen, don't block browser sources when specified in constraints from chrome code. r=jesup, a=lmandel - 47065beeef20
  • Gijs Kruitbosch: Bug 1074520 - Use CSS instead of hacks to make the forget button lay out correctly. r=jaws, a=lmandel - 46916559304f
  • Markus Stange: Bug 1085475 - Don't attempt to use vibrancy in 32-bit mode. r=smichaud, a=lmandel - 184b704568ff
  • Mark Finkle: Bug 1088952 - Disable "Enable wi-fi" toggle on beta due to missing permission. r=rnewman, a=lmandel - 9fd76ad57dbe
  • Yonggang Luo: Bug 1066459 - Clamp the new top row index to the valid range before assigning it to mTopRowIndex when scrolling. r=kip a=lmandel - 4fd0f4651a61
  • Mats Palmgren: Bug 1085050 - Remove a DEBUG assertion. r=kip a=lmandel - 1cd947f5b6d8
  • Jason Orendorff: Bug 1042567 - Reflect JSPropertyOp properties more consistently as data properties. r=efaust, a=lmandel - 043c91e3aaeb
  • Margaret Leibovic: Bug 1075232 - Record which suggestion of the search screen was tapped in telemetry. r=mfinkle, a=lmandel - a627934a0123
  • Benoit Jacob: Bug 1088858 - Backport ANGLE fixes to make WebGL work on Windows in Firefox 34. r=jmuizelaar, a=lmandel - 85e56f19a5a1
  • Patrick McManus: Bug 1088910 - Default http/2 off on gecko 34 after EARLY_BETA. r=hurley, a=lmandel - 74298f48759a
  • Benoit Jacob: Bug 1083071 - Avoid touching D3D11 at all, even to test if it works, if D3D11 layers are blacklisted. r=Bas, r=jmuizelaar, a=sledru - 6268e33e8351
  • Randall Barker: Bug 1080701 - TabMirror needs to be updated to work with the chromecast server. r=wesj, r=mfinkle, a=lmandel - 0811a9056ec4
  • Xidorn Quan: Bug 1088467 - Avoid adding space for bullet with list-style: none. r=surkov, a=lmandel - 2e54d90546ce
  • Michal Novotny: Bug 1083922 - Doom entry when parsing security info fails. r=mcmanus, a=lmandel - 34988fa0f0d8
  • Ed Lee: Bug 1088729 - Only allow http(s) directory links. r=adw, a=sledru - 410afcc51b13
  • Mark Banner: Bug 1047410 - Desktop client should display Call Failed if an incoming call - d2ef2bdc90bb
  • Mark Banner: Bug 1088346 - Handle "answered-elsewhere" on incoming calls for desktop on Loop. r=nperriault a=lmandel - 67d9122b8c98
  • Mark Banner: Bug 1088636 - Desktop ToS url should use hello.firefox.com not call.mozilla.com. r=nperriault a=lmandel - 45d717da277d
  • Adam Roach [:abr]: Bug 1033579 - Add channel to POST calls for Loop to allow different servers based on the channel. r=dmose a=lmandel - d43a7b8995a6
  • Ethan Hugg: Bug 1084496 - Update whitelist for screensharing r=jesup a=lmandel - 080cfa7f5d79
  • Ryan VanderMeulen: Backed out changeset 043c91e3aaeb (Bug 1042567) for debug jsreftest failures. - 15bafc2978d8
  • Jim Chen: Bug 1066982 - Try to not launch processes on pre-JB devices because of Android bug. r=snorp, a=lmandel - 5a4dfee44717
  • Randell Jesup: Bug 1080755 - Push video frames into MediaStreamGraph instead of waiting for pulls. r=padenot, a=lmandel - 22cfde2bf1ce

Categorieën: Mozilla-nl planet

David Boswell: Please complete and share the contributor survey

wo, 29/10/2014 - 20:53

We are conducting a research project to learn about the values and motivations of Mozilla’s contributors (both volunteers and staff) and to understand how we can improve their experiences.

Part of this effort is a survey for contributors that has just been launched at:

http://www.surveygizmo.com/s3/1852460/Mozilla-Community-Survey

Please take a few minutes to fill this out and then share this link with the communities you work with. Having more people complete this will give us a more complete understanding of how we can improve the experience for all contributors.

We plan to have results from this survey and the data analysis project available by the time of the Portland work week in December.


Categorieën: Mozilla-nl planet

K Lars Lohn: Judge the Project, Not the Contributors

wo, 29/10/2014 - 18:16
I recently read a blog posting titled “The 8 Essential Traits of a Great Open Source Contributor”. I am disturbed by this posting. While clearly not the intended effect, I feel the posting just told a huge swath of people that they are neither qualified nor welcome to contribute to Open Source. The intent of the posting was to say that there is a wide range of skills needed in Open Source: even if a potential contributor feels they lack an essential technical skill, here's an enumeration of other skills that are helpful.
“Over the years, I’ve talked to many people who have wanted to contribute to open source projects, but think that they don’t have what it takes to make a contribution. If you’re in that situation, I hope this post helps you get out of that mindset and start contributing to the projects that matter to you.” See? The author has completely good intentions. My fear is that the posting has the opposite effect. It raises the bar as if it were an ad for a paid technical position. He uses superlatives that say to me, “we are looking for the top people as contributors, not common people”.

Unfortunately, my interpretation of this blog posting is not that a wide range of skills is needed; it communicates that if you contribute, you'd better be great at doing so. In fact, if you do not have all these skills, you cannot be considered great. So where is the incentive to participate? It makes Open Source sound as if it is an invitation to be judged as either great or inadequate.

Ok, I know this interpretation is through my own jaundiced eyes. So to see if it was just a reflection of my own bad day, I shared the blog posting with a couple of colleagues. Both colleagues are women who judge their own skills unnecessarily harshly, but who, in my judgement, are really quite good. I chose these two specifically because I knew both suffer from “imposter syndrome”, a largely unshakable feeling of inadequacy that is quite common among technical people. Both reacted badly to the posting, one saying that it sounded like a job posting for a position she would have no hope of ever landing.

I want to turn this around. Let's not judge the contributors; let's judge the projects instead. In fact, we can take these eight traits and boil them down to one:
Essential trait of a great open source project:
Leaders & processes that can advance the project while marshalling imperfect contributors gracefully.
That's a really tall order. By that standard, my own Open Source projects are not great. However, I feel much more comfortable saying that the project is not great, rather than sorting the contributors.

If I were paying people to work on my project, I'd have no qualms about judging their performance anywhere along a continuum of “great” to “inadequate”. Contributors are NOT employees subject to performance review. In my projects, if someone contributes, I consider both the contribution and the contributor to be “great”. The contribution may not make it into the project, but it was given to me for free, so it is naturally great by that fact alone.

Contribution: Voluntary Gift
Perhaps if the original posting had said, “these are the eight gifts we need” rather than saying that the gifts are traits of people we consider “great”, I would not have been so uncomfortable.
A great Open Source project is one that produces a successful product and is inclusive. An Open Source project that produces a successful product, but is not inclusive, is merely successful.
Categorieën: Mozilla-nl planet

Tim Taubert: Talk: Keeping secrets with JavaScript - An Introduction to the WebCrypto API

wo, 29/10/2014 - 17:00

With the web slowly maturing as a platform, the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API, but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.

Video · Slides · Code

https://github.com/ttaubert/secret-notes
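For readers who just want a feel for the API before watching, here is a minimal sketch (mine, not taken from the talk or the linked repository) of encrypting and decrypting a short note with AES-GCM via the Promise-based crypto.subtle interface; function names like encryptNote are purely illustrative.

  // Minimal sketch: encrypt and decrypt a note with AES-GCM.
  // Assumes a browser with the Promise-based crypto.subtle API and TextEncoder/TextDecoder.
  var encoder = new TextEncoder();
  var decoder = new TextDecoder();

  function encryptNote(text) {
    var iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV for every message
    return crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false,                      // key is not extractable
      ["encrypt", "decrypt"]
    ).then(function (key) {
      return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, encoder.encode(text))
        .then(function (ciphertext) {
          return { key: key, iv: iv, ciphertext: ciphertext };
        });
    });
  }

  function decryptNote(note) {
    return crypto.subtle.decrypt({ name: "AES-GCM", iv: note.iv }, note.key, note.ciphertext)
      .then(function (plaintext) {
        return decoder.decode(plaintext);
      });
  }

  // Round-trip a secret note.
  encryptNote("keep this secret")
    .then(decryptNote)
    .then(function (text) { console.log(text); }); // logs "keep this secret"

A real application would persist or derive the key (for example via importKey or deriveKey) rather than generating a throwaway key per note; the sketch only shows the shape of the calls.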

Categorieën: Mozilla-nl planet
