Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 2 weeks 12 hours ago

Mike Hommey: git-cinnabar experimental features

Sun, 02/04/2017 - 00:54

Since version 0.4.0, git-cinnabar has a few hidden experimental features. Two of them are available in 0.4.0, and a third was recently added on the master branch.

The basic mechanism to enable experimental features is to set a preference in the git configuration to a comma-separated list of features to enable, or all, to enable all of them. That preference is cinnabar.experiments.

Any means to set a git configuration can be used. You can:

  • Add the following to .git/config: [cinnabar] experiments=feature
  • Or run the following command: $ git config cinnabar.experiments feature
  • Or only enable the feature temporarily for a given command: $ git -c cinnabar.experiments=feature command arguments
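For example, $ git config cinnabar.experiments wire,merge enables both the wire and merge features described below, and setting the value to all enables every experimental feature.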

But what features are there?

wire

In order to talk to Mercurial repositories, git-cinnabar normally uses the mercurial python modules. This experimental feature allows accessing Mercurial repositories without using the mercurial python modules. It instead relies on git-cinnabar-helper to connect to the repository through the mercurial wire protocol.

As of version 0.4.0, the feature is automatically enabled when Mercurial is not installed.

merge

Git-cinnabar currently doesn’t allow pushing merge commits. The main reason for this is that generating the correct mercurial data for those merges is tricky, and needs to be gotten right.

In version 0.4.0, enabling this feature allows pushing merge commits as long as the parent commits are available on the mercurial repository. If they aren’t, you need to first push them independently, and then push the merge.

On current master, that limitation doesn’t exist anymore; you can just push everything in one go.

The main caveat with this experimental support for pushing merges is that it currently doesn’t handle the case where a file was moved on one of the branches the same way mercurial would (i.e. the information would be lost to mercurial users).

clonebundles

As of Mercurial 3.6, Mercurial servers can opt in to providing pre-generated bundles which, when clients support them, take CPU load off the server when a clone is performed. Good for servers, and usually good for clients too when they have a fast network connection, because downloading a pre-generated bundle is usually faster than waiting for the server to generate one.

As of a few days ago, the master branch of git-cinnabar supports cloning using those pre-generated bundles, provided the server advertises them (mozilla-central does).
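For illustration, assuming the feature name matches the heading above, a clone using those bundles would look something like this (hg:: being git-cinnabar’s URL prefix for Mercurial repositories):

$ git -c cinnabar.experiments=clonebundles clone hg::https://hg.mozilla.org/mozilla-central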


Mike Hommey: Progress on git-cinnabar memory usage

Sat, 01/04/2017 - 11:45

This all started when I figured out that git-cinnabar was using crazy amounts of memory when cloning mozilla-central. That pointed to memory allocation patterns that triggered a suboptimal behavior in the glibc memory allocator, and, while overall, git-cinnabar wasn’t really abusing memory all things considered, it happened to be realloc()ating way too much.

It also turned out that recent changes on the master branch had made most uses of fast-import synchronous, making the whole process significantly slower.

This is where we started from on 0.4.0:

And on the master branch as of be75326:

An interesting thing to note here is that the glibc allocator runaway memory use was, this time, more pronounced on 0.4.0 than on master. It was the opposite originally, but as I mentioned in the past, ASLR makes it not happen exactly the same way each time.

While I’m here, one thing I failed to mention in the previous posts is that all these measurements were done by cloning a local mercurial clone of mozilla-central, served from localhost via HTTP to eliminate the download time from hg.mozilla.org. And while mozilla-central itself has received new changesets since the first post, the local clone has not been updated, such that all subsequent clone tests I did were cloning the exact same repository under the exact same circumstances.

After the last blog post, I focused on the low-hanging fruit identified so far:

  • Moving the mercurial to git SHA1 mapping to the helper process (Finding a git bug in the process).
  • Tracking mercurial manifest heads in the helper process.
  • Removing most of the synchronous calls to the helper happening during a clone.

And this is how things now look on the master branch as of 35c18e7:

So where does that put us?

  • The overall clone is now about 11 minutes faster than 0.4.0 (and about 50 minutes faster than master as of be75326!)
  • Non-shared memory use of the git-remote-hg process stays well under 2GB during the whole clone, with no spike at the end.
  • git-cinnabar-helper now uses more memory, but the sum of both processes is less than what it used to be, even when compensating for the glibc memory allocator issue. One thing to note is that while the git-cinnabar-helper memory use goes above 2GB at the end of the clone, a very large part is due to the pack window size being 1GB on 64-bit (vs. 32MB on 32-bit). Memory usage should stay well under the 2GB address space limit on a 32-bit system.
  • CPU usage is well above 100% for most of the clone.

On a more granular level:

  • The “Import manifests” phase is now 13 minutes faster than it was in 0.4.0.
  • The “Read and import files” phase is still almost 4 minutes slower than in 0.4.0.
  • The “Import changesets” phase is still almost 2 minutes slower than in 0.4.0.
  • But the “Finalization” phase is now 3 minutes faster than in 0.4.0.

What this means is that there’s still room for improvement. But at this point, I’d rather focus on other things.

Logging all the memory allocations with the python allocator disabled still resulted in a 6.5GB compressed log file, containing 2.6 billion calls to malloc, calloc, free and realloc (down from 2.7 billion in be75326). The number of allocator calls done by the git-remote-hg process is down to 2.25 billion (from 2.34 billion in be75326).

Surprisingly, while more things were moved to the helper, it still made fewer allocations than in be75326: 345 million, down from 363 million. Presumably, this is because the number of commands processed by the fast-import code was reduced.

Let’s now take a look at the various metrics we analyzed previously (the horizontal axis represents the number of allocator calls that happened before the measurement):

A few observations to make here:

  • The allocated memory (requested bytes) is well below what it was, and the spike at the end is entirely gone. It also more closely follows the amount of raw data we’re holding on to (which makes sense, since most of the bookkeeping was moved to the helper).
  • The number of live allocations (allocated memory pointers that haven’t been free()d yet) has gone significantly down as well.
  • The cumulated[*] bytes are now in a much more reasonable range, with the lower bound close to the total amount of data we’re dealing with during the clone, and the upper bound slightly over twice that amount (the upper bound for be75326 is not shown here, but it was around 45TB; less than 7TB is a big improvement).
  • There are fewer allocator calls during the first phases and the “Importing changesets” phase, but more during the “Reading and importing files” and “Importing manifests” phases.

[*] The upper bound is the sum of all sizes ever given to malloc, calloc, realloc etc., and the lower bound is the same, but removing the size of allocations passed as input to realloc (in practical terms, this pretends reallocs never happened and that the final size for a given reallocated pointer is the one that counts).
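For example, for a sequence like p = malloc(100); p = realloc(p, 150); free(p);, the upper bound counts 100 + 150 = 250 bytes, while the lower bound doesn’t count the 100 bytes passed as input to realloc, leaving 150 bytes, as if the buffer had been allocated with its final size from the start.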

So presumably, some of the changes led to more short-lived allocations. Considering python uses its own allocator for sizes smaller than 512 bytes, it’s probably not so much of a problem. But let’s look at the distribution of buffer sizes (including all sizes given to realloc).

(Bucket size is 16 bytes)

What is not completely obvious from the logarithmic scale is that, in fact, 98.4% of the allocations are less than 512 bytes with the current master (35c18e7), and they were 95.5% with be75326. Interestingly, though, in absolute numbers, there are fewer allocations smaller than 512 bytes in current master than in be75326 (1,194,268,071 vs 1,214,784,494). This suggests the extra allocations that happen during some phases are larger than that.

There are clearly fewer allocations across the board (apart from very few exceptions), and close to an order of magnitude fewer allocations larger than 1MiB. In fact, widening the bucket size to 32KiB shows an order of magnitude difference (or close) for most buckets:

An interesting thing to note is how some sizes are largely overrepresented in the data with buckets of 16 bytes, like 768, 1104, 2048, 4128, with other smaller bumps for e.g. 2144, 2464, 2832, 3232, 3696, 4208, 4786, 5424, 6144, 6992, 7920… While some of those are powers of 2, most aren’t, and some of them may actually represent objects sized with a power of 2, but that have an extra PyObject overhead.

While looking at allocation stats, I got to wonder what the lifetimes of those allocations looked like. So I scanned the allocator logs and measured the distance between when an allocation is made and when it is freed, ignoring reallocs.

To give a few examples of what I mean, the following allocation for p gets a lifetime of 0:

void *p = malloc(42); free(p);

The following gets a lifetime of 1:

void *p = malloc(42); void *other = malloc(42); free(p);

And the following gets a lifetime of 1 as well:

void *p = malloc(42); p = realloc(p, 84); free(p);

(that is, it is not counted as two malloc/free pairs)

The further away the free is from the corresponding malloc, the larger the lifetime. And the largest the lifetime can ever be is the total number of allocator function calls minus two, in the hypothetical case the very first allocation is freed as the very last (minus two because we defined the lifetime as the distance).

What comes out of this data:

  • As expected, there are more short-lived allocations in 35c18e7.
  • Around 90% of allocations have a lifetime spanning 10% of the process life or less. Conversely, that still leaves a rather surprisingly large number of allocations with a very large lifetime.
  • Around 80% of allocations have a lifetime spanning 0.01% of the process life or less.
  • The median lifetime is around 0.0000002% (2*10^-7%) of the process life, which, in absolute terms, is around 500 allocator function calls between a malloc and a free.
  • If we consider every imported changeset, manifest and file to require a similar number of allocations, and considering there are about 2.7M of them in total, each spans about 3.7*10^-7%. About 53% of all allocations on be75326 and 57% on 35c18e7 have a lifetime below that. Whenever I get to look more closely at memory usage again, I’ll probably look at the data separately for each individual phase.
  • One surprising fact, that doesn’t appear on the graph because of the logarithmic scale not showing “0” on the horizontal axis, is that 9.3% on be75326 and 7.3% on 35c18e7 of all allocations have a lifetime of 0. That is, whatever the code using them is doing, it’s not allocating or freeing anything else, and not reallocating them either.

All in all, what the data shows is that we’re definitely in a better place now than we used to be a few days ago, and that there is still work to do on the memory front, but:

  • As mentioned in a previous post, there are bigger wins to be had from not keeping manifest data around in memory at all, and importing it directly instead.
  • In time, a lot of the import code is meant to move to the helper, where the constraints are completely different, and it might not be worth spending time now on reducing the memory usage of python code that might go away soon(ish). The situation was bad and necessitated action rather quickly, but we’re now in a place where it’s not as bad anymore.

So at this point, I won’t look any deeper into the memory usage of the git-remote-hg python process, and will instead focus on the planned metadata storage changes. They will make it easier to share the metadata (allowing a faster and more straightforward gecko-dev graft), and will allow importing manifests earlier, which, as mentioned already, will help reduce memory use, but, more importantly, will allow doing more actual work while downloading the data. On slow networks, this is crucial to make clones and pulls faster.


Gervase Markham: Root Store Policy 2.4.1 Published

Fri, 31/03/2017 - 22:11

Version 2.4.1 of Mozilla’s CA Policy has now been published. This document incorporates by reference the Common CCADB Policy 1.0 and the Mozilla CCADB Policy 1.0. Neither of these latter two documents has changed in this revision cycle.

This version has no new normative provisions; it is a rearrangement and reordering of the existing policy 2.4. Diffs against 2.4 are not provided because they are not useful; everything appears to have changed textually, even if nothing has changed normatively.

It’s on days like this that one remembers that making the Internet a better, safer and more secure place often involves doing things which are very mundane. :-) The next job will be to work on version 2.5, of which more later.


Gervase Markham: Happy Birthday, Mozilla!

Fri, 31/03/2017 - 22:08

Mozilla is 19 today :-)


Air Mozilla: Bedrock: From Code to Production

Fri, 31/03/2017 - 21:28

From Code to Production: A presentation on how changes to our flagship website (www.mozilla.org) are made, and how to request them so that they're as high-quality and quick-to-production as...



Mozilla Addons Blog: Friend of Add-ons: Prasanth

Fri, 31/03/2017 - 17:45

Please meet our newest Friend of Add-ons, Prasanth! Prasanth became a Mozillian in 2015 when he joined the Mozilla TamilNadu community and became a Firefox Student Ambassador. Over the last two years, he has contributed to a variety of projects at Mozilla with great enthusiasm. Last year, he organized a group of eleven participants to test featured add-ons for e10s compatibility.

In January, Prasanth became a member of the Add-ons Advisory Board, and has emerged as someone very adept at finding great add-ons to feature. “Prasanth has shown a true talent for identifying great add-ons,” comments Scott DeVaney, Editorial & Campaign Manager for the add-ons team.

In addition to organizing community events and contributing to the Advisory Board, Prasanth is also learning how to write scripts for testing automation and helping contributors participate in QA bugdays.

Of his experience as a contributor at Mozilla, Prasanth says,

“Contributing in an open source community like Mozilla gave me the opportunity to know many great contributors and get their help in developing my skills. It showed me a way to rediscover myself as a person who loves open source philosophy and practices.”

In his spare time, Prasanth enjoys hanging out with friends and watching serials like The Flash and Green Arrow.

Congratulations, Prasanth, and thank you for your contributions to the add-ons community!

Are you a contributor to the add-ons community or know of someone who should be recognized? Please be sure to add them to our Recognition Wiki!

The post Friend of Add-ons: Prasanth appeared first on Mozilla Add-ons Blog.


Karl Dubost: [worklog] Edition 061. Twisted bugs.

Fri, 31/03/2017 - 11:05
webcompat life
webcompat issues
webcompat.com dev
To read

Otsukare!


The Mozilla Blog: U.S. Broadband Privacy Rules: We will Fight to Protect User Privacy

Fri, 31/03/2017 - 01:08

In the U.S., Congress voted to overturn rules that the Federal Communications Commission (FCC) created to protect the privacy of broadband customers. Mozilla supported the creation and enactment of these rules because strong rules are necessary to promote transparency, respect user privacy and support user control.

The Federal Trade Commission has authority over the online industry in general, but these rules were crafted to create a clear policy framework for broadband services where the FTC’s policies don’t apply. They require internet service providers (ISPs) to notify us and get permission from us before any of our information would be collected or shared. ISPs know a lot about us, and this information (which includes your web browsing history) can potentially be shared with third-parties.

We take a stand where we see users lacking meaningful choice with their online privacy because of a lack of transparency and understanding – and, in the case of broadband services, often a lack of options for competitive services. These rules help empower users, and it’s unclear whether remaining laws and policies built around the FCC’s existing consumer privacy framework or other services will be helpful – or whether the current FCC will enforce them.

Now, this is in front of the President to sign or reject, although the White House has already said it “strongly supports” the move and will advise the President to sign. We hope that broadband privacy will be prioritized and protected by the U.S. government, but regardless, we are ready to fight along with you for the right to privacy online.

If these rules are overturned, we will need to be vigilant in monitoring broadband provider practices to demand transparency, and work closely with the FCC to demand accountability. Mozilla – and many other tech companies – strive to deliver better online privacy and security through our products as well as our policy and advocacy work, and that job is never done because technology and threats to online privacy and security are always evolving.

The post U.S. Broadband Privacy Rules: We will Fight to Protect User Privacy appeared first on The Mozilla Blog.


Firefox Nightly: Guest post: India uses Firefox Nightly – A Campaign especially for India

Thu, 30/03/2017 - 18:45

This is a guest post by Biraj Karmakar, who has been active in promoting Mozilla and Mozilla software in India for over 7 years. Biraj is taking the initiative of organizing a series of workshops throughout the country to convince technical people (mozillians or not) who may be interested in getting involved in Mozilla to use Firefox Nightly.

 

 

Fellow mozillians, I am super excited to inform you that very soon we are going to release a new campaign in India called “India uses Firefox Nightly”. Behind this campaign, our mission is to increase Firefox Nightly usage in India.

Why India?

As we all know we have a great Mozilla community around India. We have a large number of dedicated students, developers and evangelists who are really passionate about Mozilla.

We have seen that very few people in India actually know about Firefox Nightly. So we have taken an initiative to run a pilot campaign for Firefox Nightly throughout India.

Firefox Nightly, as a pre-release version of Firefox targeting power-users and core Mozilla contributors, is a glimpse of what the future of Firefox will be for hundreds of millions of people. Having a healthy and strong technical community using and testing Nightly is a great way to easily get involved in Mozilla by providing a constant feedback loop to developers. Here you can test lots of pre-release features.

So it needs a little bit of general promotion, which will help bring a good number of tech-savvy power users who may become new active community members.

Few Key points

  • Time Frame: 2 months max
  • Hashtag: #INUsesFxNightly
  • Event Duration: 3 – 5 hours
  • Total events: 15

Who will join us: We invite students, community members, developers, open source evangelists to run this campaign.

Parts of Campaign

Online activities:

Mozillians spread the message of this campaign around India as well as through social media (Facebook, Twitter, Instagram), blogs, promotional snippets, email, mailing list, website news items etc.

Offline activities:

Here, any community member or open source enthusiast can host one event in their area or join any nearby event to help the organizers. The event can be held at a startup company, school, university, community centre, home, or café.

Goals for this initiative

Impact:

  • 1000 Nightly Installed
  • 20 New Active Contributors

Strength:

  • 30 Mozillians run events (2 mozillians per event)
  • 500 Attendees

 

BTW, have you tried Firefox Nightly yet? Download it now!

 

More details will come soon. Stay tuned!

 

We need many core campaign volunteers who will help us to run this initiative smoothly. If you are interested in joining the campaign team, please let me know.

Have design skills? We need a logo for this campaign, please come and help us here.


Daniel Stenberg: Yes C is unsafe, but…

Thu, 30/03/2017 - 10:04

I posted curl is C a few days ago and it raced on hacker news, reddit and elsewhere and got well over a thousand comments in those forums alone. The blog post has been read more than 130,000 times so far.

Addendum a few days later

Many commenters on my curl is C post took issue with my claim that most of our security flaws aren’t due to curl being written in C. It turned into some sort of CVE counting game in some of the threads.

I think that’s missing the point I was trying to make. Even if 75% of them happened due to us using C, that fact alone would still not be a strong enough reason for me to reconsider our language of choice (at this point in time). We use C for a whole range of reasons as I tried to lay out there, in spite of the security challenges the language brings. We know C has tricky corners and we know we are likely to make more mistakes going forward.

curl is currently one of the most distributed and most widely used software components in the universe, be it open or proprietary, and there are easily way over three billion instances of it running in appliances, servers, computers and devices across the globe. Right now. In your phone. In your car. In your TV. In your computer. Etc.

If we then have had 40, 50 or even 60 security problems because of us using C, throughout our 19 years of history, it really isn’t a whole lot given the scale and time we’re talking about here.

Using another language would’ve caused at least some problems due to that language, plus I feel a need to underscore the fact that none of the memory safe languages anyone would suggest we should switch to have been around for 19 years. A portion of our security bugs were even created in our project before those alternatives you would suggest were available! Let alone as stable and functional alternatives.

This is of course no guarantee that there isn’t still more ugly things to discover or that we won’t mess up royally in the future, but who will throw the first stone when it comes to that? We will continue to work hard on minimizing risks, detecting problems early by ourselves and work closely together with everyone who reports suspected problems to us.

Number of problems as a measurement

The fact that we have 62 CVEs to date (and more will surely follow) is rather proof that we work hard on fixing bugs, that we have an open process that deals with the problems in the most transparent way we can think of and that people are on their toes looking for these problems. You should not rate a project in any way purely based on the number of CVEs – you really need to investigate what lies behind the numbers if you want to understand and judge the situation.

Future

Let me clarify this too: I can very well imagine a future where we transition to another language or attempt various others things to enhance the project further – security wise and more. I’m not really ruling anything out as I usually only have very vague ideas of what the future might look like. I just don’t expect it to be happening within the next few years.

These “you should switch language” remarks are strangely enough from the backseat drivers of the Internet. Those who can tell us with confidence how to run our project but who don’t actually show us any code.

Languages

What perhaps made me most sad in the aftermath of said previous post, is everyone who failed to hold more than one thought at a time in their heads. In my post I wrote 800 words on some of the reasoning behind us sticking to the language C in the curl project. I specifically did not say that I dislike certain other languages or that any of those alternative languages are bad or should be avoided. Please friends, I wrote about why curl uses C. There are many fine languages out there and you should all use them as much as you possibly can, and I will too – but not in the curl project (at the moment). So no, I don’t hate language XXXX. I didn’t say so, and I didn’t imply it either. Don’t put that label on me, thanks.


Karl Dubost: Our (css) discussion is broken

Thu, 30/03/2017 - 03:39

I have seen clever, thoughtful and hardworking people trying to be right about whether CSS is or is not broken. PopCorn. I will not attempt to be right about anything in this post. I'm just sharing a feeling.

Let me steal the image from this provocative tweet/thread. You can also read the thread in these tweets.

CSS in JS

I guess looking at the image I understood how/why the discussions would never reach any resolution. Just let me clarify a few things.

A Bit Of An Introduction

CSS means Cascading Style Sheets. Cascading Style Sheets (CSS) is a simple mechanism for adding style (e.g., fonts, colors, spacing) to Web documents. Words have meanings. It is 20 years old, spec-wise. But the idea came from a proposal on www-talk on October 10, 1994. Fast forward, there is an ongoing effort to formalize the CSS Object Model, aka CSSOM, which defines APIs (including generic parsing and serialization rules) for Media Queries, Selectors, and of course CSS itself.

Let's look at the very recent CSS Grid, currently in Candidate Recommendation phase and starting to be well deployed in browsers. Look carefully at the prose. The word DOM is cited only twice in an example. CSS is a language for describing styling rules.

The Controversy

What people are arguing about (and not discussing or dialoguing about) is not CSS itself, but how to apply the style rules to a document.

  • Some developers prefer to use style elements and files, using the properties of the cascade and specificity (which are useful), aka CSS.
  • Some developers want to apply the style to each individual node in the DOM using JavaScript, to constrain (remove) the cascade and the specificity, because they consider them annoying for their use case.

devtools screenshot

I do not have a clear-cut opinion about it. I don't think CSS is broken. I think it is perfectly usable. But I also completely understand what the developers who want to use JavaScript to set style on elements are doing. It makes me uncomfortable the same way that Flash (circa 2000s) or frames (circa 1995) made me uncomfortable. It is something related to la rouille du Web (The Rusty Web): the perennity of what we create on the Web. I guess in this discussion there are sub-discussions about craft and the love of it, the Web's perennity and the notion of Web industrialization.

One thing that rubs me the wrong way is when people talk about HTML and CSS with the "in JS" term attached. When we manipulate the DOM, create new nodes and associate style with them, we are not doing HTML and CSS anymore. We are basically modifying the DOM, which is a completely different layer. It's easy to see how the concerns are different. When we open a web site made with React, for example, the HTML semantics are often gone. You could use only div and span and it would be exactly the same.

To better express my feeling, let's rephrase this:

You could use only div and span and it would be exactly the same.

It would become.

pronoun verb verb adverb noun conjunction noun conjunction pronoun verb verb adverb determiner noun.

then we would apply some JavaScript to convey meaning on it.

So…

As I said, I don't think I'm adding anything useful to the debates. I'm not a big fan of doing everything as apps through JavaScript, maybe because I'm old or maybe because I value time.

I also would be curious to know, for the advocates of applying styles to nodes in the DOM, whether they have tried the experiment of generating (programmatically) a hierarchical CSS from the DOM. Basically the salmon ladder in the river to go back to the source. I'm not talking about creating a snapshot with plenty of style attributes, but reverse engineering the cascade. At least just as an experiment to understand the two paths and what we could learn from it. It would minimize the CSS selectors, take advantage of the cascade, avoid !important as much as possible, etc.

Otsukare!


Nathan Froyd: on mutex performance and WTF::Lock

Wed, 29/03/2017 - 17:25

One of the things I’ve been doing this quarter is removing Gecko’s dependence on NSPR locks.  Gecko’s (non-recursive) mutexes and condition variables now use platform-specific constructs, rather than indirecting through NSPR.  This change makes things smaller, especially on POSIX platforms, and uses no dynamic memory allocation, so there are fewer untested failure paths.  I haven’t rigorously benchmarked things yet, but I suspect various operations are faster, too.

As I’ve done this, I’ve fielded questions about why we’re not using something like WTF::Lock or the Rust equivalent in parking_lot.  My response has always been some variant of the following: the benchmarks for the WTF::Lock blog post were conducted on OS X.  We have anecdotal evidence that mutex overhead can be quite high on OS X, and that changing locking strategies on OS X can be beneficial.  The blog post also says things like:

One advantage of OS mutexes is that they guarantee fairness: All threads waiting for a lock form a queue, and, when the lock is released, the thread at the head of the queue acquires it. It’s 100% deterministic. While this kind of behavior makes mutexes easier to reason about, it reduces throughput because it prevents a thread from reacquiring a mutex it just released.

This is certainly true for mutexes on OS X, as the measurements in the blog post show.  But fairness is not guaranteed for all OS mutexes; in fact, fairness isn’t even guaranteed in the pthreads standard (which OS X mutexes follow).  Fairness in OS X mutexes is an implementation detail.

These observations are not intended to denigrate the WTF::Lock work: the blog post and the work it describes are excellent.  But it’s not at all clear that the conclusions reached in that post necessarily carry over to other operating systems.

As a partial demonstration of the non-cross-platform applicability of some of the conclusions, I ported WebKit’s lock fairness benchmark to use raw pthreads constructs; the code is available on GitHub.  The benchmark sets up a number of threads that are all contending for a single lock.  The number of lock acquisitions for each thread over a given period of time is then counted.  While both of these qualities are configurable via command-line parameters in WebKit’s benchmark, they are fixed at 10 threads and 100ms in mine, mostly because I was lazy. The output I get on my Mac mini running OS X 10.10.5 is as follows:

1509 1509 1509 1509 1509 1509 1509 1508 1508 1508

Each line indicates the number of lock acquisitions performed by a given thread.  Notice the nearly-identical output for all the threads; this result follows from the fairness of OS X’s mutexes.

The output I get on my Linux box is quite different (aside from each thread performing significantly more lock acquisitions because of differences in processor speed, etc.):

108226 99025 103122 105539 101885 104715 104608 105590 103170 105476

The counts vary significantly between threads: Linux mutexes are not fair by default – and that’s perfectly OK.
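For reference, here’s a minimal sketch of what such a measurement boils down to with raw pthreads; the actual ported benchmark lives in the GitHub repository linked above, so treat this only as an illustration of the idea (10 threads hammer a single default mutex for about 100ms, and each one counts its acquisitions):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 10

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool done;
static unsigned long counts[NTHREADS];

static void *worker(void *arg)
{
  unsigned long *count = arg;
  while (!atomic_load(&done)) {
    pthread_mutex_lock(&lock);
    (*count)++; /* the "work" done while holding the lock */
    pthread_mutex_unlock(&lock);
  }
  return NULL;
}

int main(void)
{
  pthread_t threads[NTHREADS];
  for (int i = 0; i < NTHREADS; i++)
    pthread_create(&threads[i], NULL, worker, &counts[i]);
  usleep(100 * 1000); /* let the threads contend for ~100ms */
  atomic_store(&done, true);
  for (int i = 0; i < NTHREADS; i++)
    pthread_join(threads[i], NULL);
  for (int i = 0; i < NTHREADS; i++)
    printf("%lu ", counts[i]);
  printf("\n");
  return 0;
}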

What’s more, the developers of OS X have recognized this and added a way to make their mutexes non-fair.  In <pthread_spis.h>, there’s an OS X-only function, pthread_mutexattr_setpolicy_np.  (pthread mutex attributes control various qualities of pthread mutexes: normal, recursively acquirable, etc.)  This particular function, supported since OS X 10.7, enables setting the fairness policy of mutexes to either _PTHREAD_MUTEX_POLICY_FAIRSHARE (the default) or _PTHREAD_MUTEX_POLICY_FIRSTFIT.  The firstfit policy is not documented anywhere, but I’m guessing that it’s something akin to the “barging” locks described in the WTF::Lock blog post: the lock is made available to whatever thread happens to get to it first, rather than enforcing a queue to acquire the lock.  (If you’re curious, the code dealing with firstfit policy can be found in Apple’s pthread_mutex.c.)
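Going by that header, opting into the firstfit policy is a matter of setting it on a mutex attribute object before initializing the mutex; a minimal, OS X-only sketch:

#include <pthread.h>
#include <pthread_spis.h>

static void init_firstfit_mutex(pthread_mutex_t *m)
{
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  /* Use the (undocumented) firstfit policy instead of the default
     fairshare policy. */
  pthread_mutexattr_setpolicy_np(&attr, _PTHREAD_MUTEX_POLICY_FIRSTFIT);
  pthread_mutex_init(m, &attr);
  pthread_mutexattr_destroy(&attr);
}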

Running the benchmark on OS X with mutexes configured with the firstfit policy yields quite different numbers:

14627 13239 13503 13720 13989 13449 13943 14376 13927 14142

The variation in these numbers is more akin to what we saw with the non-fair locks on Linux, and what’s more, they’re almost an order of magnitude higher than the fair locks. Maybe we should start using firstfit locks in Gecko!  I don’t know how firstfit policy locks compare to something like WTF::Lock on my Mac mini, but it’s clear that saying simply “OS mutexes are slow” doesn’t tell the whole story. And of course there are other concerns, such as the size required by locks, that motivated the WTF::Lock work.

I have vague plans of doing more benchmarking, especially on Windows, where we may want to use slim reader/writer locks rather than critical sections, and evaluating Rust’s parking_lot on more platforms.  Pull requests welcome.


Mozilla Open Innovation Team: Announcing the Equal Rating Innovation Challenge Winners

Wed, 29/03/2017 - 16:04

Six months ago, we created the Equal Rating Innovation Challenge to add an additional dimension to the important work Mozilla has been leading around the concept of “Equal Rating.” In addition to policy and research, we wanted to push the boundaries and find new ways to provide affordable access to the Internet while preserving net neutrality. An open call for new ideas was the ideal vehicle.

An Open and Engaging Process

The Equal Rating Innovation Challenge was founded on the belief that reaching out to the local expertise of innovators, entrepreneurs, and researchers from all around the world would be the right way for Mozilla to help bring the power of the Internet to the next billion and beyond. It has been a thrilling and humbling experience to see communities engage, entrepreneurs conceive of new ideas, and regulatory, technology, and advocacy groups start new conversations.

Through our Innovation Challenge website, www.equalrating.com, in webinars, conferences, and numerous community events within the six week submission period, we reached thousands of people around the world. Ultimately, we received 98 submissions from 27 countries, which all taken together demonstrates the viability and the potential of the Equal Rating approach. Whereas previously many people believed providing affordable access was the domain of big companies and government, we are now experiencing a groundswell of entrepreneurs and ideas celebrating the power of community to bring all of the Internet to all people.

Moderating a panel discussion with our esteemed Judges at the Equal Rating Conference in New York City

Our diverse expert Judges selected five teams as semifinalists in January. Mozilla staff from around the world provided six weeks of expert mentorship to help the semifinalists hone their projects, and on 9 March at our Equal Rating Conference in New York City, these teams presented their solutions to our panel of Judges. In keeping with Mozilla’s belief in openness and participation, we then had a one-week round of online public voting, the results of which formed part of the Judges’ final deliberation. Today, we are delighted to share the Judges’ decisions on the Equal Rating Innovation Challenge winners.

The Winners

With an almost unanimous vote, the Overall Winner of the Equal Rating Innovation Challenge, receiving US$125,000 in funding, is Gram Marg Solution for Rural Broadband. This project is spearheaded by Professor Abhay Karandikar, Dean (Faculty Affairs) and Institute Chair Professor of Electrical Engineering and Dr Sarbani Banerjee Belur, Senior Project Research Scientist, at Indian Institute of Technology (IIT) Bombay in Mumbai, India.

Dr Sarbani Banerjee Belur (India) presenting Gram Marg Solution for Rural Broadband at Mozilla’s Equal Rating Conference in New York City

Gram Marg, which translates as “roadmap” in Hindi, captured the attention of the Judges and audience by focusing on the urgent need to bring 640,000 rural villages in India online. The team reinforced the incredible potential these communities could achieve if they had online access to e-Governance services, payment and financial services, and general digital information. In order to close the digital divide and empower these rural communities, the Gram Marg team has created an ingenious and “indigenous” technology that utilizes unused white space on the TV spectrum to backhaul data from village wifi clusters to provide broadband access (frugal 5G).

The team of academics and practitioners have created a low-cost and ruggedized TV UHF device that converts a 2.4 Ghz signal to connect villages in even the most difficult terrains. Their journey has been one of resilience and perseverance as they have deployed their solution in 25 pilot villages, all while reducing costs, size, and perfecting their solution. This top prize of the Innovation Challenge is awarded to a solution the Judges recognize as creating a robustly scalable solution — Gram Marg is both technology enabler and social partner, and delivered beyond our hopes.

“All five semifinalists were equally competitive and it was really a challenge to pitch our solution among them. We are humbled by the Judges’ decision to choose our solution as the winner,” Professor Karandikar told us. “We will continue to improve our technology solution to make it more efficient. We are also working on a sustainable business model that can enable local village entrepreneurs to deploy and manage access networks. We believe that a decentralized and sustainable model is the key to the success of a technology solution for connecting the unconnected.”

As “Runner-Up” with a funding award of US$75,000, our Judges selected Afri-Fi: Free Public WiFi, led by Tim Human (South Africa). The project is an extension of the highly awarded and successful Project Isizwe, which offers 500MB of data for free per day, but the key goal of this project is to create a sustainable business model by linking together free wifi networks throughout South Africa and engaging users meaningfully with advertisers so they can “earn” free wifi.

The team presented a compelling and sophisticated way to use consumer data, protect privacy, and bolster entrepreneurship in their solution. “The team has proven how their solution for a FREE internet is supporting thriving communities in South Africa. Their approach towards community building, partnerships, developing local community entrepreneurs and inclusivity, with a goal of connecting some of the most marginalized communities, are all key factors in why they deserve this recognition and are leading the FREE Internet movement in Southern Africa”, concluded Marlon Parker, Founder of Reconstructed Living Labs, on behalf of the jury.

Finally, the “Most Novel” award worth US$30,000 goes to Bruno Vianna (Brazil) and his team from the Free Networks P2P Cooperative. Fueled by citizen science and community technology, this team is building on the energy of the free networks movement in Brazil to tackle the digital divide. Rather than focusing on technology, the Coop has created a financial and logistical model that can be tailored to each village’s norms and community. The team was able to experiment more adventurously with ways to engage communities through “barn-raising” group activities, deploying “open calls” for leadership to reinforce the democratic nature of their approach, and instituting a sense of “play” for the villagers when learning how to use the equipment. The innovative way the team deconstructed the challenge around empowering communities to build their own infrastructure in an affordable and sustainable way proved to be the deciding factor for the Judges.

Semifinalists from left to right: Steve Song (Canada), Freemium Mobile Internet (FMI), Dr Carlos Rey-Moreno (South Africa), Zenzeleni “Do it for yourselves” Networks (ZN), Bruno Vianna (Brazil), Free Networks P2P Cooperative, Tim Genders (South Africa), Afri-Fi: Free Public WiFi, Dr Sarbani Banerjee Belur (India), Gram Marg Solution for Rural Broadband

Enormous thanks to all who participated in this Innovation Challenge through their submissions, engagement in meetups and events, as well as to our expert panel of Judges for their invaluable insights and time, and to the Mozilla mentors who supported the semifinalists in advancing their projects. We also want to thank all who took part in our online community voting. During the week-long period, we received almost 6,000 votes, with Zenzeleni and Gram Marg leading as the top two vote-getters.

Mozilla started this initiative because we believe in the power of collaborative solutions to tackle big issues. We wanted to take action and encourage change. With the Innovation Challenge, we not only highlighted a broader set of solutions, and broadened the dialogue around these issues, but built new communities of problem-solvers that have strengthened the global network of people working toward connecting the next billion and beyond.

At Mozilla, our commitment to Equal Rating through policy, innovation, research, and support of entrepreneurs in the space will continue beyond this Innovation Challenge, but it will take a global community to bring all of the internet to all people. As our esteemed Judge Omobola Johnson, the former Communication Technology Minister of Nigeria and partner at venture capital fund TLcom, commented: “it’s not about the issue of the unconnected, it’s about the people on the ground who make the connection.” We couldn’t agree more!

Visit us on www.equalrating.com, join our community, let your voice be heard. We’re all in this together — and today congratulate our five final teams for their tremendous leadership, vision, and drive. They are the examples of what’s best in all of us!

Announcing the Equal Rating Innovation Challenge Winners was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

