
Planet Mozilla Interns: Michael Sullivan: Forcing memory barriers on other CPUs with mprotect(2)

Mozilla planet - di, 24/02/2015 - 04:36

I have something of an unfortunate fondness for indefensible hacks.

As I discussed in my last post, RCU is a synchronization mechanism that excels at protecting read-mostly data. It is a particularly useful technique in operating system kernels because full control of the scheduler permits many fairly simple and very efficient implementations of RCU.

In userspace, the situation is trickier, but still manageable. Mathieu Desnoyers and Paul E. McKenney have built a Userspace RCU library that contains a number of different implementations of userspace RCU. For reasons I won’t get into, efficient read side performance in userspace seems to depend on having a way for a writer to force all of the reader threads to issue a memory barrier. The URCU library has one version that does this using standard primitives: it sends signals to all other threads; in their signal handlers the other threads issue barriers and indicate so; the caller waits until every thread has done so. This is very heavyweight and inefficient because it requires running all of the threads in the process, even those that aren’t currently executing! Any thread that isn’t scheduled now has no reason to execute a barrier: it will execute one as part of getting rescheduled. Mathieu Desnoyers attempted to address this by adding a membarrier() system call to Linux that would force barriers in all other running threads in the process; after more than a dozen posted patches to LKML and a lot of back and forth, it got silently dropped.

While pondering this dilemma I thought of another way to force other threads to issue a barrier: by modifying the page table in a way that would force an invalidation of the Translation Lookaside Buffer (TLB) that caches page table entries! This can be done pretty easily with mprotect or munmap.

Full details in the patch commit message.

Categorieën: Mozilla-nl planet

Nicholas Nethercote: Fix your damned data races

Mozilla planet - di, 24/02/2015 - 04:27

Nathan Froyd recently wrote about how he has been using ThreadSanitizer to find data races in Firefox, and how a number of Firefox developers — particularly in the networking and JS GC teams — have been fixing these.

This is great news. I want to re-emphasise and re-state one of the points from Nathan’s post, which is that data races are a class of bug that almost everybody underestimates. Unless you have, say, authored a specification of the memory model for a systems programming language, your intuition about the potential impact of many data races is probably wrong. And I’m going to give you three links to explain why.

Hans Boehm’s paper How to miscompile programs with “benign” data races explains very clearly that it’s possible to correctly conclude that a data race is benign at the level of machine code, but it’s almost impossible at the level of C or C++ code. And if you try to do the latter by inspecting the code generated by a C or C++ compiler, you are not allowing for the fact that other compilers (including future versions of the compiler you used) can and will generate different code, and so your conclusion is both incomplete and temporary.

Dmitri Vyukov’s blog post Benign data races: what could possibly go wrong? covers similar ground, giving more examples of how compilers can legitimately compile things in surprising ways. For example, at any point the storage used by a local variable can be temporarily used to hold a different variable’s value (think register spilling). If another thread reads this storage in a racy fashion, it could read the value of an unrelated variable.

Finally, John Regehr’s blog has many posts that show how C and C++ compilers take advantage of undefined behaviour to do surprising (even shocking) program transformations, and how the extent of these transformations has steadily increased over time. Compilers genuinely are getting smarter, and are increasingly pushing the envelope of what a language will let them get away with. And the behaviour of a C or C++ program is undefined in the presence of data races. (This is sometimes called “catch-fire” semantics, for reasons you should be able to guess.)

So, in summary: if you write C or C++ code that deals with threads in Firefox — and that probably includes everybody who writes C or C++ code in Firefox — you should have read at least the first two links I’ve given above. If you haven’t, please do so now. If you have read them and they haven’t made you at least slightly terrified, you should read them again. And if TSan identifies a data race in code that you are familiar with, please take it seriously, and fix it. Thank you.


Planet Mozilla Interns: Michael Sullivan: Why We Fight

Mozilla planet - di, 24/02/2015 - 04:21
Why We Fight, or Why Your Language Needs A (Good) Memory Model, or The Tragedy Of memory_order_consume’s Unimplementability

This, one of the most terrifying technical documents I’ve ever read, is why we fight: https://www.kernel.org/doc/Documentation/RCU/rcu_dereference.txt.

Background

For background, RCU is a mechanism used heavily in the Linux kernel for locking around read-mostly data structures; that is, data structures that are read frequently but fairly infrequently modified. It is a scheme that allows for blazingly fast read-side critical sections (no atomic operations, no memory barriers, not even any writing to cache lines that other CPUs may write to) at the expense of write-side critical sections being quite expensive.

The catch is that writers might be modifying the data structure as readers access it: writers are allowed to modify the data structure (often a linked list) as long as they do not free any removed memory until it is “safe”. Since writers can be modifying data structures as readers are reading from them, without any synchronization between them, we are now in danger of running afoul of memory reordering. In particular, if a writer initializes some structure (say, a routing table entry) and adds it to an RCU protected linked list, it is important that any reader that sees that the entry has been added to the list also sees the writes that initialized the entry! While this will always be the case on the well-behaved x86 processor, architectures like ARM and POWER don’t provide this guarantee.

The simple solution to make the memory order work out is to add barriers on both sides on platforms where it is needed: after initializing the object but before adding it to the list, and after reading a pointer from the list but before accessing its members (including the next pointer). This cost is totally acceptable on the write-side, but is probably more than we are willing to pay on the read-side. Fortunately, we have an out: essentially all architectures (except for the notoriously poorly behaved Alpha) will not reorder instructions that have a data dependency between them. This means that we can get away with only issuing a barrier on the write-side and taking advantage of the data dependency on the read-side (between loading a pointer to an entry and reading fields out of that entry). In Linux this is implemented with the macros “rcu_assign_pointer” (which issues a barrier if necessary, and then writes the pointer) on the write-side and “rcu_dereference” (which reads the value and then issues a barrier on Alpha) on the read-side.

There is a catch, though: the compiler. There is no guarantee that something that looks like a data dependency in your C source code will be compiled as a data dependency. The most obvious way to me that this could happen is by optimizing “r[i ^ i]” or the like into “r[0]”, but there are many other ways, some quite subtle. This document, linked above, is the Linux kernel team’s effort to list all of the ways a compiler might screw you when you are using rcu_dereference, so that you can avoid them.

This is no way to run a railway.

Language Memory Models

Programming by attempting to quantify over all possible optimizations a compiler might perform and avoiding them is a dangerous way to live. It’s easy to mess up, hard to educate people about, and fragile: compiler writers are feverishly working to invent new optimizations that will violate the blithe assumptions of kernel writers! The solution to this sort of problem is that the language needs to provide the set of concurrency primitives that are used as building blocks (so that the compiler can constrain its code transformations as needed) and a memory model describing how they work and how they interact with regular memory accesses (so that programmers can reason about their code). Hans Boehm makes this argument in the well-known paper Threads Cannot be Implemented as a Library.

One of the big new features of C++11 and C11 is a memory model which attempts to make precise what values can be read by threads in concurrent programs and to provide useful tools to programmers at various levels of abstraction and simplicity. It is complicated, and has a lot of moving parts, but overall it is definitely a step forward.

One place it falls short, however, is in its handling of “rcu_dereference” style code, as described above. One of the possible memory orders in C11 is “memory_order_consume”, which establishes an ordering relationship with all operations after it that are data dependent on it. There are two problems here: first, these operations deeply complicate the semantics; the C11 memory model relies heavily on a relation called “happens before” to determine what writes are visible to reads; with consume, this relation is no longer transitive. Yuck! Second, it seems to be nearly unimplementable; tracking down all the dependencies and maintaining them is difficult, and no compiler yet does it; clang and gcc both just emit barriers. So now we have a nasty semantics for our memory model and we’re still stuck trying to reason about all possible optimizations. (There is work being done to try to repair this situation; we will see how it turns out.)

Shameless Plug

My advisor, Karl Crary, and I are working on designing an alternate memory model (called RMC) for C and C++ based on explicitly specifying the execution and visibility constraints that you depend on. We have a paper on it and I gave a talk about it at POPL this year. The paper is mostly about the theory, but the talk tried to be more practical, and I’ll be posting more about RMC shortly. RMC is quite flexible. All of the C++11 model apart from consume can be implemented in terms of RMC (although that’s probably not the best way to use it) and consume style operations are done in a more explicit and more implementable (and implemented!) way.


Mozilla WebDev Community: Beer and Tell – February 2015

Mozilla planet - ma, 23/02/2015 - 22:06

Once a month, web developers from across the Mozilla Project get together to speedrun classic video games. Between runs, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: Refract

Osmose (that’s me!) started off with Refract, a website that can turn any website into an installable application. It does this by generating an Open Web App on the fly that does nothing but redirect to the specified site as soon as it is opened. The name and icon of the generated app are auto-detected from the site, or they can be customized by the user.

Michael Kelly: Sphere Online Judge Utility

Next, Osmose shared spoj, a Python-based command line tool for working on problems from the Sphere Online Judge. The tool lets you list and read problems, as well as create solutions and test them against the expected input and output.

Adrian Gaudebert: Spectateur

Next up was adrian, who shared Spectateur, a tool to run reports against the Crash-Stats API. The webapp lets you set up a data model using attributes available from the API, and then process that data via JavaScript that the user provides. The JavaScript is executed in a sandbox, and the resulting view is displayed at the bottom of the page. Reports can also be saved and shared with others.

Peter Bengtsson: Autocompeter

Peterbe stopped by to share Autocompeter, which is a service for very fast auto-completion. Autocompeter builds upon peterbe’s previous work with fast autocomplete backed by Redis. The site is still not production-ready, but soon users will be able to request an API key to send data to the service for indexing, and Air Mozilla will be one of the first sites using it.

Pomax: inkdb

The ever-productive Pomax returns with inkdb.org, a combination of the many color- and ink-related tools he’s been sharing recently. Among other things, inkdb lets you browse fountain pen inks, map them on a graph based on similarity, and find inks that match the colors in an image. The website is also a useful example of the Mozilla Foundation Client-side Prototype in action.

Matthew Claypotch: rockbot

Lastly, potch shared a web interface for suggesting songs to a Rockbot station. Rockbot currently only has Android and iOS apps, and potch decided to create a web interface to allow people without Rockbot accounts or phones to suggest songs.

No one could’ve anticipated willkg’s incredible speedrun of Mario Paint. When interviewed after his blistering 15 hour and 24 minute run, he refused to answer any questions and instead handed out fliers for the grand opening of his cousin’s Inkjet Cartridge and Unlicensed Toilet Tissue Outlet opening next Tuesday at Shopper’s World on Worcester Road.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!


Nick Cameron: Creating a drop-in replacement for the Rust compiler

Mozilla planet - ma, 23/02/2015 - 21:33
Many tools benefit from being a drop-in replacement for a compiler. By this, I mean that any user of the tool can use `mytool` in all the ways they would normally use `rustc` - whether manually compiling a single file or as part of a complex make project or Cargo build, etc. That could be a lot of work; rustc, like most compilers, takes a large number of command line arguments which can affect compilation in complex and interacting ways. Emulating all of this behaviour in your tool is annoying at best, especially if you are making many of the same calls into librustc that the compiler is.

The kind of things I have in mind are tools like rustdoc or a future rustfmt. These want to operate as closely as possible to real compilation, but have totally different outputs (documentation and formatted source code, respectively). Another use case is a customised compiler. Say you want to add a custom code generation phase after macro expansion, then creating a new tool should be easier than forking the compiler (and keeping it up to date as the compiler evolves).

I have gradually been trying to improve the API of librustc to make creating a drop-in tool easier (many others have also helped improve these interfaces over the same time frame). It is now pretty simple to make a tool which is as close to rustc as you want it to be. In this tutorial I'll show how.

Note/warning: everything I talk about in this tutorial is internal API for rustc. It is all extremely unstable and likely to change often and in unpredictable ways. Maintaining a tool which uses these APIs will be non-trivial, although hopefully easier than maintaining one that does similar things without using them.

This tutorial starts with a very high level view of the rustc compilation process and of some of the code that drives compilation. Then I'll describe how that process can be customised. In the final section of the tutorial, I'll go through an example - stupid-stats - which shows how to build a drop-in tool.

Continue reading on GitHub...


Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - ma, 23/02/2015 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting


Andreas Gal: Search works differently than you may think

Mozilla planet - ma, 23/02/2015 - 19:26

Search is the main way we all navigate the Web, but it works very differently than you may think. In this blog post I will try to explain how it worked in the past, why it works differently today and what role you play in the process.

The services you use for searching, like Google, Yahoo and Bing, are called search engines. The very name suggests that they go through a huge index of Web pages to find every one that contains the words you are searching for. 20 years ago search engines indeed worked this way. They would “crawl” the Web and index it, making the content available for text searches.

As the Web grew larger, searches would often find the same word or phrase on more and more pages. This was starting to make search results less and less useful because humans don’t like to read through huge lists to manually find the page that best matches their search. A search for the word “door” on Google, for example, gives you more than 1.9 billion results. It’s impractical — even impossible — for anyone to look through all of them to find the most relevant page.


Google finds about 1.9 billion results for the search query “door”.

To help navigate the ever growing Web, search engines introduced algorithms to rank results by their relevance. In 1996, two Stanford graduate students, Larry Page and Sergey Brin, discovered a way to use the information available on the Web itself to rank results. They called it PageRank.

Pages on the Web are connected by links. Each link contains anchor text that explains to readers why they should follow the link. The link itself points to another page that the author of the source page felt was relevant to the anchor text. Page and Brin discovered that they could rank results by analyzing the incoming links to a page and treating each one as a vote for its quality. A result is more likely to be relevant if many links point to it using anchor text that is similar to the search terms. Page and Brin founded a search engine company in 1998 to commercialize the idea: Google.

PageRank worked so well that it completely changed the way people interact with search results. Because PageRank correctly offered the most relevant results at the top of the page, users started to pay less attention to anything below that. This also meant that pages that didn’t appear on top of the results page essentially started to become “invisible”: users stopped finding and visiting them.

To experience the “invisible Web” for yourself, head over to Google and try to look through more than just the first page of results. So few users ever wander beyond the first page that Google doesn’t even bother displaying all the 1.9 billion search results it claims to have found for “door.” Instead, the list just stops at page 63, about 100 million pages short of what you would have expected.

Despite reporting over 1.9 billion results, in reality Google’s search results for “door” are quite finite and end at page 63.

With publishers and online commerce sites competing for that small number of top search results, a new business was born: search engine optimization (or SEO). There are many different methods of SEO, but the principal goal is to game the PageRank algorithm in your favor by increasing the number of incoming links to your own page and tuning the anchor text. With sites competing for visitors — and billions in online revenue at stake — PageRank eventually lost this arms race. Today, links and anchor text are no longer useful to determine the most relevant results and, as a result, the importance of PageRank has dramatically decreased.

Search engines have since evolved to use machine learning to rank results. People perform 1.2 trillion searches a year on Google alone — that’s about 3 billion a day and 40,000 a second. Each search becomes part of this massive query stream as the search engine simultaneously “sees” what billions of people are searching for all over the world. For each search, it offers a range of results and remembers which one you considered most relevant. It then uses these past searches to learn what’s most relevant to the average user to provide the most relevant results for future searches.

Machine learning has made text search all but obsolete. Search engines can answer 90% or so of searches by looking at previous search terms and results. They no longer search the Web in most cases — they instead search past searches and respond based on the preferred result of previous users.

This shift from PageRank to machine learning also changed your role in the process. Without your searches — and your choice of results — a search engine couldn’t learn and provide future answers to others. Every time you use a search engine, the search engine uses you to rank its results on a massive scale. That makes you its most important asset.


Filed under: Mozilla

David Tenser: User Success – We’re hiring!

Mozilla planet - ma, 23/02/2015 - 18:18

Just a quick +1 to Roland’s plug for the Senior Firefox Community Support Lead:

  • Ever loved a piece of software so much that you learned everything you
    could about it and helped others with it?
  • Ever coordinated an online community? Especially one around supporting users?
  • Ever measured and tweaked a website’s content so that more folks could find it and learn from it?

Got 2 out of 3 of the above?

Then work with me (since Firefox works closely with my area: Firefox for Android and in the future iOS via cloud services like Sync) and the rest of my colleagues on the fab Mozilla User Success team (especially my fantastic Firefox savvy colleagues over at User Advocacy).

And super extra bonus: you’ll also work with our fantastic community like all Mozilla employees AND Firefox product management, marketing and engineering.

Take a brief detour and head over to Roland’s blog to get a sense of one of the awesome people you’d get to work closely with in this exciting role (trust me, you’ll want to work with Roland!). After that, I hope you know what to do! :)



Ben Kelly: That Event Is So Fetch

Mozilla planet - ma, 23/02/2015 - 16:00

The Service Workers builds have been updated as of yesterday, February 22:

Firefox Service Worker Builds

Notable contributions this week were:

  • Josh Matthews landed Fetch Event support in Nightly. This is important, of course, because without the Fetch Event you cannot actually intercept any network requests with your Service Worker. | bug 1065216
  • Catalin Badea landed more of the Service Worker API in Nightly, including the ability to communicate with the Service Worker using postMessage(). | bug 982726
  • Nikhil Marathe landed some more of his spec implementations to handle unloading documents correctly and to treat activations atomically. | bug 1041340 | bug 1130065
  • Andrea Marchesini landed fixes for FirefoxOS discovered by the team in Paris. | bug 1133242
  • Jose Antonio Olivera Ortega contributed a work-in-progress patch to force Service Worker scripts to update when dom.serviceWorkers.test.enabled is set. | bug 1134329
  • I landed my implementation of the Fetch Request and Response clone() methods. | bug 1073231

As always, please let us know if you run into any problems. Thank you for testing!


Mozilla Release Management Team: Firefox 36 beta10 to rc

Mozilla planet - ma, 23/02/2015 - 15:52

For the RC build, we landed a few last-minute changes. We disabled <meta referrer> because of a last-minute issue, we landed a compatibility fix for addons and, last but not least, some graphics crash fixes.

Note that an RC2 has been built from the same source code in order to tackle the AMD CPU bug (see comment #22).

  • 11 changesets
  • 32 files changed
  • 220 insertions
  • 48 deletions

Occurrences by extension: cpp (6), js (5), jsm (2), ini (2), xml (1), sh (1), json (1), hgtags (1), h (1)

Occurrences by module: mobile (12), gfx (5), browser (5), toolkit (3), dom (2), testing (1), parser (1)

List of changesets:

  • Robert Strong: Bug 945192 - Followup to support Older SDKs in loaddlls.cpp. r=bbondy a=Sylvestre - cce919848572
  • Armen Zambrano Gasparnian: Bug 1110286 - Pin mozharness to 2264bffd89ca instead of production. r=rail a=testing - 948a2c2e31d4
  • Jared Wein: Bug 1115227 - Loop: Add part of the UITour PageID to the Hello tour URLs as a query parameter. r=MattN, a=sledru - 1a2baaf50371
  • Boris Zbarsky: Bug 1134606 - Disable <meta referrer> in Firefox 36 pending some loose ends being sorted out. r=sstamm, a=sledru - 521cf86d194b
  • Milan Sreckovic: Bug 1126918 - NewShSurfaceHandle can return null. Guard against it. r=jgilbert, a=sledru - 89cfa8ff9fc5
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - 2f2abd6ffebb
  • Matt Woodrow: Bug 1127925 - Lazily open shared handles in DXGITextureHostD3D11 to avoid holding references to textures that might not be used. r=jrmuizel, a=sledru - 47ec64cc562f
  • Rob Wu: Bug 1128478 - sdk/panel's show/hide events not emitted if contentScriptWhen != 'ready'. r=erikvold, a=sledru - c2a6bab25617
  • Matt Woodrow: Bug 1128170 - Use UniquePtr for TextureClient KeepAlive objects to make sure we don't leak them. r=jrmuizel, a=sledru - 67d9db36737e
  • Hector Zhao: Bug 1129287 - Fix not rejecting partial name matches for plugin blocklist entries. r=gfritzsche, a=sledru - 7d4016a05dd3
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - a2ffa9047bf4


Adam Lofting: The week ahead 23 Feb 2015

Mozilla planet - ma, 23/02/2015 - 12:40

First, I’ll note that even taking the time to write these short ‘note to self’ type blog posts each week takes time and is harder to do than I expected. Like so many priorities, the long term important things often battle with the short term urgent things. And that’s in a culture where working open is more than just acceptable, it’s encouraged.

Anyway, I have some time this morning sitting in an airport to write this, and I have some time on a plane to catch up on some other reading and writing that hasn’t made it to the top of the todo list for a few weeks. I may even get to post a blog post or two in the near future.

This week, I have face-to-face time with lots of colleagues in Toronto. Which means a combination of planning, meetings, running some training sessions, and working on tasks where timezone parity is helpful. It’s also the design team work week, and though I’m too far gone from design work to contribute anything pretty, I’m looking forward to seeing their work and getting glimpses of the future Webmaker product. Most importantly maybe, for a week like this, I expect unexpected opportunities to arise.

One of my objectives this week is working with Ops to decide where my time is best spent this year to have the most impact, and to set my goals for the year. That will get closer to a metrics strategy this year, improving on last year’s ‘reactive’ style of work.

If you’re following along for the exciting stories of my shed-to-office upgrades: I don’t have much to show today, but I’m building a new desk next, and insulation is my new favourite thing. This photo shows the visible difference in heat loss after fitting my first square of insulation material to the roof.


Mozilla considering blacklist for Superfish certificate - Security.nl

News collected via Google - Mon, 23/02/2015 - 12:04

Mozilla considering blacklist for Superfish certificate
Security.nl
Mozilla is considering putting the Superfish certificate that was installed on Lenovo laptops on a blacklist. This emerges from a discussion on Mozilla's Bugzilla, where developers discuss problems and bugs in Mozilla software. Due to the ...

and more »

The Mozilla Blog: MWC 2015: Experience the Latest Firefox OS Devices, Discover what Mozilla is Working on Next

Mozilla planet - ma, 23/02/2015 - 11:14

Preview TVs and phones powered by Firefox OS and demos such as an NFC payment prototype at the Mozilla booth. Hear Mozilla speakers discuss privacy, innovation for inclusion and future of the internet.

Panasonic unveiled their new line of 4K Ultra HD TVs powered by Firefox OS at their convention in Frankfurt today. The Panasonic 2015 4k UHD (Ultra HD) LED VIERA TV, which will be shipping this spring, will also be showcased at Mozilla’s stand at Mobile World Congress 2015 in Barcelona. Like last year, Firefox OS will take its place in Hall 3, Stand 3C30, alongside major operators and device manufacturers.


Mozilla’s stand at Mobile World Congress 2015, Hall 3, Stand 3C30

In addition to the Panasonic TV and the latest Firefox OS smartphones announced, visitors have the opportunity to learn more about Mozilla’s innovation projects during talks at the “Fox Den” at Mozilla’s stand, Hall 3, Stand 3C30. Just one example from the demo program:

Mozilla, in collaboration with its partners at Deutsche Telekom Innovation Labs (centers in Silicon Valley and Berlin) and T-Mobile Poland, developed the design and implementation of Firefox OS’s NFC infrastructure to enable several applications including mobile payments, transportation services, door access and media sharing. The mobile wallet demo covering ‘MasterCard® Contactless’ technology, together with a few non-payment functionalities, will be showcased in “Fox Den” talks.

Visit www.firefoxos.com/mwc for the full list of topics and schedule of “Fox Den” talks.


How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

Schedule of Events and Speaking Appearances

Hear from Mozilla executives on trending topics in mobile at the following sessions:

‘Digital Inclusion: Connecting an additional one billion people to the mobile internet’ Seminar

Executive Director of the Mozilla Foundation Mark Surman will join a seminar that will explore the barriers and opportunities relating to the growth of mobile connectivity in developing markets, particularly in rural areas.
Date: Monday 2 March 12:00 – 13:30 CET
Location: GSMA Seminar Theatre CC1.1

‘Ensuring User-Centred Privacy in a Connected World’ Panel

Denelle Dixon-Thayer, SVP of business and legal affairs at Mozilla, will take part in a session that explores user-centric privacy in a connected world.
Date: Monday, 2 March 16:00 – 17:30 CET
Location: Hall 4, Auditorium 3

‘Innovation for Inclusion’ Keynote Panel

Mozilla Executive Chairwoman and Co-Founder Mitchell Baker will discuss how mobile will continue to empower individuals and societies.
Date: Tuesday, 3 March 11:15 – 12:45 CET
Location: Hall 4, Auditorium 1 (Main Conference Hall)

‘Connected Citizens, Managing Crisis’ Panel

Mark Surman, Executive Director of the Mozilla Foundation, will contribute to a panel on how mobile technology is playing an increasingly central role in shaping responses to some of the most critical humanitarian problems facing the global community today.
Date: Tuesday, 3 March 14:00 – 15:30 CET
Location: Hall 4, Auditorium 2

‘Defining the Future of the Internet’ Panel

Andreas Gal, CTO at Mozilla, will take part in a session that explores the future of the Internet, bringing together industry leaders to the forefront of the net neutrality debate.
Date: Wednesday, 4 March 15:15 – 16:15 CET
Location: Hall 4, Auditorium 5

More information:

  • Please visit Mozilla and experience Firefox OS in Hall 3, Stand 3C30, at the Fira Gran Via, Barcelona from March 2-5, 2015
  • To learn more about Mozilla at MWC, please visit: www.firefoxos.com/mwc
  • For further details or to schedule a meeting at the show please contact press@mozilla.com
  • For additional resources, such as high-resolution images and b-roll video, visit: https://blog.mozilla.org/press
Categorieën: Mozilla-nl planet

MWC 2015: Experience the Latest Firefox OS Devices, Discover what Mozilla is Working on Next

Mozilla Blog - ma, 23/02/2015 - 11:14

Preview TVs and phones powered by Firefox OS and demos such as an NFC payment prototype at the Mozilla booth. Hear Mozilla speakers discuss privacy, innovation for inclusion and the future of the internet.

Panasonic unveiled their new line of 4K Ultra HD TVs powered by Firefox OS at their convention in Frankfurt today. The Panasonic 2015 4K UHD (Ultra HD) LED VIERA TV, which will be shipping this spring, will also be showcased at Mozilla’s stand at Mobile World Congress 2015 in Barcelona. Like last year, Firefox OS will take its place in Hall 3, Stand 3C30, alongside major operators and device manufacturers.

Mozilla’s stand at Mobile World Congress 2015, Hall 3, Stand 3C30

In addition to the Panasonic TV and the latest Firefox OS smartphones announced, visitors have the opportunity to learn more about Mozilla’s innovation projects during talks at the “Fox Den” at Mozilla’s stand, Hall 3, Stand 3C30. Just one example from the demo program:

Mozilla, in collaboration with its partners at Deutsche Telekom Innovation Labs (centers in Silicon Valley and Berlin) and T-Mobile Poland, developed the design and implementation of Firefox OS’s NFC infrastructure to enable several applications including mobile payments, transportation services, door access and media sharing. The mobile wallet demo covering ‘MasterCard® Contactless’ technology together with a few non-payment functionalities will be showcased in “Fox Den” talks.

Visit www.firefoxos.com/mwc for the full list of topics and schedule of “Fox Den” talks.

How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

Schedule of Events and Speaking Appearances

Hear from Mozilla executives on trending topics in mobile at the following sessions:

‘Digital Inclusion: Connecting an additional one billion people to the mobile internet’ Seminar

Executive Director of the Mozilla Foundation Mark Surman will join a seminar that will explore the barriers and opportunities relating to the growth of mobile connectivity in developing markets, particularly in rural areas.
Date: Monday 2 March 12:00 – 13:30 CET
Location: GSMA Seminar Theatre CC1.1

‘Ensuring User-Centred Privacy in a Connected World’ Panel

Denelle Dixon-Thayer, SVP of business and legal affairs at Mozilla, will take part in a session that explores user-centric privacy in a connected world.
Date: Monday, 2 March 16:00 – 17:30 CET
Location: Hall 4, Auditorium 3

‘Innovation for Inclusion’ Keynote Panel

Mozilla Executive Chairwoman and Co-Founder Mitchell Baker will discuss how mobile will continue to empower individuals and societies.
Date: Tuesday, 3 March 11:15 – 12:45 CET
Location: Hall 4, Auditorium 1 (Main Conference Hall)

‘Connected Citizens, Managing Crisis’ Panel

Mark Surman, Executive Director of the Mozilla Foundation, will contribute to a panel on how mobile technology is playing an increasingly central role in shaping responses to some of the most critical humanitarian problems facing the global community today.
Date: Tuesday, 3 March 14:00 – 15:30 CET
Location: Hall 4, Auditorium 2

‘Defining the Future of the Internet’ Panel

Andreas Gal, CTO at Mozilla, will take part in a session that explores the future of the Internet, bringing together industry leaders to the forefront of the net neutrality debate.
Date: Wednesday, 4 March 15:15 – 16:15 CET
Location: Hall 4, Auditorium 5

More information:

  • Please visit Mozilla and experience Firefox OS in Hall 3, Stand 3C30, at the Fira Gran Via, Barcelona from March 2-5, 2015
  • To learn more about Mozilla at MWC, please visit: www.firefoxos.com/mwc
  • For further details or to schedule a meeting at the show please contact press@mozilla.com
  • For additional resources, such as high-resolution images and b-roll video, visit: https://blog.mozilla.org/press
Categorieën: Mozilla-nl planet

Nick Thomas: FileMerge bug

Mozilla planet - ma, 23/02/2015 - 10:23

FileMerge is a nice diff and merge tool for OS X, and I use it a lot for larger code reviews where lots of context is helpful. It also supports intra-line diff, which comes in pretty handy.

filemerge screenshot

However, in recent releases (at least v2.8, which ships as part of Xcode 6.1), it assumes you want to be merging and shows the bottom merge pane. Adjusting it away doesn’t persist to the next time you use it, *gnash gnash gnash*.

The solution is to open a terminal and offer this incantation:

defaults write com.apple.FileMerge MergeHeight 0

Unfortunately, if you use the merge pane then you’ll have to do that again. Dear Apple, pls fix!

Categorieën: Mozilla-nl planet

Mozilla to ditch Adobe Flash - GMA News

Nieuws verzameld via Google - ma, 23/02/2015 - 09:54

Mozilla to ditch Adobe Flash
GMA News
Adobe's ubiquitous but oft-targeted Flash Player is about to lose support from yet another Internet entity - Mozilla's popular open-source Firefox browser. Mozilla is now experimenting with Project Shumway, a new technology that plays Flash content ...

Categorieën: Mozilla-nl planet

Ahmed Nefzaoui: A Bit Of Consulting: Entering the RTL Web Market [Part 1]

Mozilla planet - ma, 23/02/2015 - 09:34

As I’m writing this article, we (Mozilla and everyone else working together on Firefox OS) have never been closer to releasing a version of the Firefox Operating System with this much RTL (right-to-left UI) support built in. As the person who was responsible for most of it, I can tell you it’s a pretty damn competitive implementation that *no* one else currently has.

That version is v2.2. And as you already know by now (since you’re reading this, I’ll assume you know about Firefox OS), the OS is web-based, which means it relates a lot to the web; in fact it *is* the web we want!

So I decided to write a little blog post for anyone out there who wants to get started extending their web products/websites/services to the RTL Market.
For starters, the RTL Market is anywhere in the world where a country’s native language is written from right to left. This means North Africa, the Middle East and parts of Asia.

RTL Is NOT Translating To Arabic

Localizing for RTL means more than translating your website into Arabic, Farsi, Urdu or Hebrew and calling it a day. It might sound harsh but here’s my advice: Do RTL right or don’t bother. If you half-arse it, it will be obvious, and you will lose money and credibility. Know who you’re talking to and how they use the web and you’ll be that much closer to a meaningful connection with your users.

RTL Is UI, Patterns And More

So there was this one time, after a fresh Ubuntu install where I used Gnome instead of Unity: while personalizing it, I chose “Arabic (Tunisia)” for the time & calendar setting, and suddenly 4:45 PM literally became PM 45:4.
Seriously, WTF ^^ I can’t read that, even though I’m a native Arabic speaker (so a native RTL user). Sure, the time pattern has been flipped, but it’s still WRONG: we don’t read time that way.
So, moral of the story: RTL is not about flipping the UI wherever you see a horizontal list of items near each other.

So I ended up switching to English (UK) for my time and date format. And again, if you do RTL wrong it will be obvious and people will not use it.
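The moral above (don’t blindly mirror everything) is exactly what the Unicode bidirectional algorithm encodes: digits are classified as “European Number” and keep their left-to-right order even inside right-to-left text, so “4:45” must never be flipped. A quick way to inspect the character classes involved, using Python’s stdlib unicodedata:

```python
import unicodedata

# Digits are bidi class "EN" (European Number): even embedded in
# RTL text they keep their left-to-right order.
for ch in "4:45":
    print(ch, unicodedata.bidirectional(ch))

# Arabic letters, by contrast, are class "AL" (Arabic Letter) and
# do run right-to-left.
print("\u0645", unicodedata.bidirectional("\u0645"))  # م (MEEM)
```

Any layout engine (or locale setting) that reorders the digits themselves, rather than just the surrounding text direction, is violating these classes, which is precisely the Gnome clock bug described above.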

Back to the topic: in RTL there are exceptions you should know about before kicking off. Don’t be shy to ask the community, especially the open source community; basically any native speaker can give you valid advice about most of the UI.

Feel free to drop in any questions you’ve got :)

Categorieën: Mozilla-nl planet

Daniel Stenberg: Bug finding is slow in spite of many eyeballs

Mozilla planet - ma, 23/02/2015 - 07:39
“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way, if we just fund the eyeballs to watch. I don’t think that’s the way the saying originally intended.)

Because in reality, many, many bugs are never actually found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years, it doesn’t necessarily mean that any of the people who casually read the code or single-stepped over it will ever discover the flaws in the logic. In the last few years, several world-shaking bugs turned out to have existed for decades before being discovered. In code that had been read by lots of people – over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are the probably most important bugs we’ve had but also because those are the ones we have the most carefully noted meta-data for. Like when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far throughout our history, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they survived about 30 releases each. If bugs truly are shallow, it is still certainly not a fast process.
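The calculation behind that number is straightforward once you have the per-flaw metadata (date introduced, date fixed). A back-of-the-envelope sketch, with made-up example dates standing in for the real curl security tracker data:

```python
from datetime import date

# Hypothetical (introduced, fixed) pairs; the real values come from
# the curl project's per-vulnerability metadata.
flaws = [
    (date(2005, 3, 1), date(2011, 6, 15)),
    (date(2009, 1, 10), date(2014, 2, 20)),
    (date(2012, 7, 4), date(2014, 12, 1)),
]

# Lifetime of each flaw in days, then the mean across all flaws.
lifetimes = [(fixed - introduced).days for introduced, fixed in flaws]
average = sum(lifetimes) / len(lifetimes)
print(f"average days in code: {average:.0f}")
```

Run over the 30 real entries instead of these three stand-ins, this is the computation that yields the 2100-day average quoted above.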

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters that would explain the time they took to get found? Nope, I would say that every single one of them are pretty obvious once you spot them and none of them take a very long time for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs are around longer before they are found, but since the project is also growing older all the time it sort of comes naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of the flaws is growing more slowly than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in number of lines of code but yet the reports were submitted like this (now it goes from oldest to the left and most recent on the right – click for the large version):

vuln-trend

Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013 and we got half of all our reports during the last two years. Clearly we have gotten more eyes on the code or perhaps users pay more attention to problems or are generally more likely to see the security angle of problems? It is hard to say but clearly the frequency of security reports has increased a lot lately. (Note that I here count the report year, not the year we announced the particular problems, as they sometimes were done on the following year if the report happened late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; in the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked them with a bunch of descriptive keywords for each flaw, and then I wrote up a script to see how frequently each keyword was used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords ended up like this, showing number of flaws and the keyword:

9 TLS
9 HTTP
8 cert-check
8 buffer-overflow
6 info-leak
3 URL-parsing
3 openssl
3 NTLM
3 http-headers
3 cookie

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a mistake that sneaks in, and I don’t think 27% of the problems tells us that this is a problem we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.
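A keyword tally like the one above takes only a few lines of Python with collections.Counter. A sketch, where `flaw_keywords` is a hypothetical stand-in for the per-flaw keyword lists described earlier:

```python
from collections import Counter

# Hypothetical per-flaw keyword lists; the real data lives in the
# curl project's vulnerability metadata.
flaw_keywords = [
    ["TLS", "cert-check"],
    ["HTTP", "buffer-overflow"],
    ["TLS", "info-leak"],
]

# Flatten all keyword lists and count occurrences of each keyword.
counts = Counter(kw for kws in flaw_keywords for kw in kws)
for keyword, n in counts.most_common():
    print(n, keyword)
```

With the 30 real flaw entries as input, `most_common(10)` produces exactly the ranked list shown above.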

The discussion following this blog post is on hacker news.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 71

Mozilla planet - ma, 23/02/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

The big news

Rust 1.0.0-alpha.2 was released on Friday, but keep using nightlies. Six more weeks until the beta, which should become 1.0. Only six more weeks.

What's cooking on master?

157 pull requests were merged in the last week, and 15 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • Adam Jacob
  • Alexander Bliskovsky
  • Brian Brooks
  • caipre
  • Darrell Hamilton
  • Dave Huseby
  • Denis Defreyne
  • Elantsev Serj
  • Henrik Schopmans
  • Ingo Blechschmidt
  • Jormundir
  • Lai Jiangshan
  • posixphreak
  • Ryan Riginding
  • Wesley Wiser
  • Will
  • wonyong kim
Approved RFCs

This covers two weeks since last week I wasn't able to review RFCs in time.

New RFCs

Friend of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'friends of the tree'.

This week's friend of the tree was ... Toby Scrace.

"Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could log in to whomever the previously logged in user was regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree."

Quote of the Week

<Manishearth> In other news, I have r+ on rust now :D
<Ms2ger> No good deed goes unpunished

From #servo. Thanks to SimonSapin for the tip.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Categorieën: Mozilla-nl planet

Mozilla mulls Superfish torpedo - The Register

Nieuws verzameld via Google - ma, 23/02/2015 - 03:30

The Register

Mozilla mulls Superfish torpedo
The Register
Mozilla may neuter the likes of Superfish by blacklisting dangerous root certificates revealed less than a week ago to be used in Lenovo laptops. The move will be another blow against Superfish, which is under a sustained barrage of criticism for its ...
Lenovo Releases 'Crapware' Tool To Remove 'Superfish' Hidden Adware – Frontline Desk
Lenovo releases tool to remove Superfish 'crapware' – The Next Digit

alle 75 nieuwsartikelen »
Categorieën: Mozilla-nl planet
