
Daniel Stenberg: curl, 17 years old today

Mozilla planet - vr, 20/03/2015 - 08:04

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

Birthdaycake

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget at the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and even then the tool worked primarily with URLs. I thought it was fun that it partly makes a real English word, “curl”, but also that you could pronounce it “see URL”, as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years is 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998, when the Open Source Initiative was started just the month before curl was born – itself only a few days after the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same, so we had better just move with the rest of the world. These days we’re on GitHub a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to making curl what it is now has grown at a steady pace all through the years, and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company, and today there’s nothing left in Sweden using that name, as it was sold and most employees later moved on to other places. After Frontec I joined Contactor for many years, until I started working for my own company, Haxx (which we had started on the side many years before that), in 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is, however, the first one that actually allows me to spend part of my time on curl and still get paid for it!

The Netscape announcement, made 2 months before curl was born, later led to Mozilla and the Firefox browser. Which is where I work now…

Future

I’m not one of those who spend time gazing toward the horizon, dreaming of future grandness and making up plans for how to get there. I work on stuff right now to make things work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them, whether they will actually ship, or whether they will perhaps be replaced by other things on that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are that, if things don’t change too drastically and we manage to keep up, we will have even more in the future.

1000 million users

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside, the ego boost this level of success brings is beyond imagination. Thinking about how my code has ended up in so many places, and is driving so many little pieces of modern network technology, is truly mind-boggling – when I specifically sit down or get a reason to think about it, at least.

Most days, however, I tear my hair out when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whiny. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…

Categorieën: Mozilla-nl planet

Ian Bicking: A Product Journal: The Evolutionary Prototype

Mozilla planet - vr, 20/03/2015 - 06:00

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series

I came upon a new (for me) term recently: evolutionary prototyping. This is in contrast to the rapid or throwaway prototype.

Another term for the rapid prototype: the “close-ended prototype.” The prototype with a sunset, unlike the evolutionary prototype which is expected to become the final product, even if every individual piece of work will only end up as disposable scaffolding for the final product.

The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it.

The first version of the product, written primarily late at night, was definitely a throwaway prototype. All imperative jQuery UI and lots of copy-and-paste code. It served its purpose. I was able to extend that code reasonably well – and I played with many ideas during that initial stage – but it was unreasonable to ask anyone else to touch it, and even I hated the code when I had stepped away from it for a couple weeks. So most of the code is being rewritten for the next phase.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.

Thinking about this, it’s a lot like the Minimum Viable Product approach. Of which I am skeptical. And maybe I’m skeptical because I see MVP as reductive, encouraging the aggressive stripping down of a product, and in the process encouraging design based on conventional wisdom instead of critical engagement. When people push me in that direction I get cagey and defensive (not a great response on my part, just acknowledging it). The framing of the evolutionary prototype feels more humble to me. I don’t want to focus on the question “how can we most quickly get this into users’ hands?” but instead “what do we know we should build, so we can collect a fuller list of questions we want to answer?”

Categorieën: Mozilla-nl planet

Gregory Szorc: New High Scores for hg.mozilla.org

Mozilla planet - vr, 20/03/2015 - 04:22

It's been a rough week.

The very short summary of events this week is that Firefox's release automation has been performing a denial of service attack against hg.mozilla.org.

On the face of it, this is nothing new. The Firefox release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.

Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of "Firefox release automation was making more requests than it should have and was doing so in a way that increased the chances of a prolonged service outage." This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.

  • On UTC day March 19, hg.mozilla.org transferred 7.4 TiB of data. This is a significant increase from the ~4 TiB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)

  • During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Unavailable errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.

  • During the 0800 UTC hour of March 19, the cluster transferred 776 GiB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GiB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!
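As a rough sanity check of that figure (a sketch only: it assumes decimal gigabytes, which is what the quoted number matches, and it ignores TCP and HTTP overhead):

// 776 "GB" transferred in one hour, converted to an average bit rate
var bytesPerHour = 776e9;
var gbps = (bytesPerHour * 8) / 3600 / 1e9;
console.log(gbps.toFixed(2)); // ~1.72; with binary GiB it comes out closer to 1.85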

We encountered many periods where hg.mozilla.org was operating at more than twice its normal and expected operating capacity, and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to do hosted services (you rarely want to be barely provisioned because you fall over at the slightest change, and you don't want to be grossly over-provisioned because you are wasting money on idle resources).

Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in under such extreme load, and that people are probably making misinformed general assumptions like "Mercurial can't scale." The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.

Categorieën: Mozilla-nl planet

Niko Matsakis: The danger of negative thinking

Mozilla planet - do, 19/03/2015 - 22:59

One of the aspects of language design that I find the most interesting is trying to take time into account. That is, when designing a type system in particular, we tend to think of the program as a fixed, immutable artifact. But of course real programs evolve over time, and when designing a language it’s important to consider what impact the type rules will have on the ability of people to change their programs. Naturally as we approach the 1.0 release of Rust this is very much on my mind, since we’ll be making firmer commitments to compatibility than we ever have before.

Anyway, with that introduction, I recently realized that our current trait system contains a forward compatibility hazard concerned with negative reasoning. Negative reasoning is basically the ability to decide if a trait is not implemented for a given type. The most obvious example of negative reasoning is negative trait bounds, which have been proposed in a rather nicely written RFC. However, what’s perhaps less widely recognized is that the trait system as currently implemented already has some amount of negative reasoning, in the form of the coherence system.

This blog post covers why negative reasoning can be problematic, with a focus on the pitfalls in the current coherence system. This post only covers the problem. I’ve been working on prototyping possible solutions and I’ll be covering those in the next few blog posts.

A goal

Let me start out with an implicit premise of this post. I think it’s important that we be able to add impls of existing traits to existing types without breaking downstream code (that is, causing it to stop compiling, or causing it to do radically different things). Let me give you a concrete example. libstd defines the Range<T> type. Right now, this type is not Copy for various good reasons. However, we might like to make it Copy in the future. It feels like that should be legal. However, as I’ll show you below, this could in fact cause existing code not to compile. I think this is a problem.

(In the next few posts when I start covering solutions, we’ll see that it may be that one cannot always add impls of any kind for all traits to all types. If so, I can live with it, but I think we should try to make it possible to add as many kinds of impls as possible.)

Negative reasoning in coherence today, the simple case

“Coherence” refers to a set of rules that Rust uses to enforce the idea that there is at most one impl of any trait for any given set of input types. Let me introduce an example crate hierarchy that I’m going to be coming back to throughout the post:

libstd
  |
  +-> lib1 --+
  |          |
  +-> lib2 --+
             |
             v
            app

This diagram shows four crates: libstd, two libraries (creatively titled lib1 and lib2), and an application app. app uses both of the libraries (and, transitively, libstd). The libraries are otherwise defined independently from one another. We say that libstd is a parent of the other crates, and that lib[12] are cousins.

OK, so, imagine that lib1 defines a type Carton but doesn’t implement any traits for it. This is a kind of smart pointer, like Box.

// In lib1
struct Carton<T> { }

Now imagine that the app crate defines a type AppType that uses the Debug trait.

// In app
struct AppType { }
impl Debug for AppType { }

At some point, app has a Carton<AppType> that it is passing around, and it tries to use the Debug trait on that:

// In app
fn foo(c: Carton<AppType>) {
    println!("foo({:?})", c); // Error
    ...
}

Uh oh, now we encounter a problem because there is no impl of Debug for Carton<AppType>. But app can solve this by adding such an impl:

// In app
impl Debug for Carton<AppType> { ... }

You might expect this to be illegal per the orphan rules, but in fact it is not, and this is no accident. We want people to be able to define impls on references and boxes to their types. That is, since Carton is a smart pointer, we want impls like the one above to work, just like you should be able to do an impl on &AppType or Box<AppType>.

OK, so, what’s the problem? The problem is that now maybe lib1 notices that Carton should define Debug, and it adds a blanket impl for all types:

// In lib1
impl<T:Debug> Debug for Carton<T> { }

This seems like a harmless change, but now if app tries to recompile, it will encounter a coherence violation.

What went wrong? Well, if you think about it, even a simple impl like

impl Debug for Carton<AppType> { }

contains an implicit negative assertion that no ancestor crate defines an impl that could apply to Carton<AppType>. This is fine at any given moment in time, but as the ancestor crates evolve, they may add impls that violate this negative assertion.

Negative reasoning in coherence today, the more complex case

The previous example was relatively simple in that it only involved a single trait (Debug). But the current coherence rules also allow us to concoct examples that employ multiple traits. For example, suppose that app decided to work around the absence of Debug by defining its own debug protocol. This uses Debug when available, but allows app to add new impls if needed.

// In lib1 (note: no `Debug` impl yet)
struct Carton<T> { }

// In app, before `lib1` added an impl of `Debug` for `Carton`
trait AppDebug { }
impl<T:Debug> AppDebug for T { } // Impl A

struct AppType { }
impl Debug for AppType { }
impl AppDebug for Carton<AppType> { } // Impl B

This is all perfectly legal. In particular, implementing AppDebug for Carton<AppType> is legal because there is no impl of Debug for Carton, and hence impls A and B are not in conflict. But now if lib1 should add the impl of Debug for Carton<T> that it added before, we get a conflict again:

// Added to lib1
impl<T:Debug> Debug for Carton<T> { }

In this case, though, the conflict isn’t that there are two impls of Debug. Instead, adding an impl of Debug caused there to be two impls of AppDebug that are applicable to Carton<AppType>, whereas before there was only one.

Negative reasoning from OIBIT and RFC 586

The conflicts I showed before have one thing in common: the problem is that when we add an impl in the supercrate, they cause there to be too many impls in downstream crates. This is an important observation, because it can potentially be solved by specialization or some other form of conflict resolution – basically a way to decide between those duplicate impls (see below for details).

I don’t believe it is possible today to have the problem where adding an impl in one crate causes there to be too few impls in downstream crates, at least not without enabling some feature-gates. However, you can achieve this easily with OIBIT and RFC 586. This suggests to me that we want to tweak the design of OIBIT – which has been accepted, but is still feature-gated – and we do not want to accept RFC 586.

I’ll start by showing what I mean using RFC 586, because it’s more obvious. Consider this example of a trait Release that is implemented for all types that do not implement Debug:

// In app
trait Release { }
impl<T:!Debug> Release for T { }

Clearly, if lib1 adds an impl of Debug for Carton, we have a problem in app, because whereas before Carton<i32> implemented Release, it now does not.

Unfortunately, we can create this same scenario using OIBIT:

trait Release for .. { }
impl<T:Debug> !Release for T { }

In practice, these sorts of impls are both feature-gated and buggy (e.g. #23072), and there’s a good reason for that. When I looked into fixing the bugs, I realized that this would entail implementing essentially the full version of negative bounds, which made me nervous. It turns out we don’t need conditional negative impls for most of the uses of OIBIT that we have in mind, and I think that we should forbid them before we remove the feature-gate.

Orphan rules for negative reasoning

One thing I tried in researching this post is to apply a sort of orphan condition to negative reasoning. To see what I tried, let me walk you through how the overlap check works today. Consider the following impls:

trait AppDebug { ... }
impl<T:Debug> AppDebug for T { }
impl AppDebug for Carton<AppType> { }

(Assume that there is no impl of Debug for Carton.) The overlap checker would check these impls as follows. First, it would create fresh type variables for T and unify, so that T=Carton<AppType>. Because T:Debug must hold for the first impl to be applicable, and T=Carton<AppType>, that implies that if both impls are to be applicable, then Carton<AppType>: Debug must hold. But by searching the impls in scope, we can see that it does not hold – and thanks to the coherence orphan rules, we know that nobody else can make it hold either. So we conclude that the impls do not overlap.

It’s true that Carton<AppType>: Debug doesn’t hold now – but this reasoning doesn’t take into account time. Because Carton is defined in the lib1 crate, and not the app crate, it’s not under “local control”. It’s plausible that lib1 can add an impl of Debug for Carton<T> for all T or something like that. This is the central hazard I’ve been talking about.

To avoid this hazard, I modified the checker so that it could only rely on negative bounds if either the trait is local or else the type is a struct/enum defined locally. The idea being that the current crate is in full control of the set of impls for either of those two cases. This turns out to work somewhat OK, but it breaks a few patterns we use in the standard library. The most notable is IntoIterator:

// libcore
trait IntoIterator { }
impl<T:Iterator> IntoIterator for T { }

// libcollections
impl<'a,T> IntoIterator for &'a Vec<T> { }

In particular, the final impl there is illegal, because it relies on the fact that &Vec<T>: Iterator does not hold, and the type &Vec<T> is not a struct defined in the local crate (it’s a reference to a struct). The coherence checker here is pointing out that in principle we could add an impl like impl<T:Something> Iterator for &T, which would (maybe) conflict. This pattern is one we definitely want to support, so we’d have to find some way to allow this. (See below for some further thoughts.)

Limiting OIBIT

As an aside, I mentioned that OIBIT as specified today is equivalent to negative bounds. To fix this, we should add the constraint that negative OIBIT impls cannot add additional where-clauses beyond those implied by the types involved. (There isn’t much urgency on this because negative impls are feature-gated.) Therefore, one cannot write an impl like this one, because it would be adding a constraint T:Debug:

trait Release for .. { }
impl<T:Debug> !Release for T { }

However, this would be legal:

struct Foo<T:Debug> { }
trait Release for .. { }
impl<T:Debug> !Release for Foo<T> { }

The reason that this is ok is because the type Foo<T> isn’t even valid if T:Debug doesn’t hold. We could also just skip such “well-formedness” checking in negative impls and then say that there should be no where-clauses at all.

Either way, the important point is that when checking a negative impl, the only thing we have to do is try and unify the types. We could even go farther, and have negative impls use a distinct syntax of some kind.

Still to come

OK, so this post laid out the problem. I have another post or two in the works exploring possible solutions that I see. I am currently doing a bit of prototyping that should inform the next post. Stay tuned.

Categorieën: Mozilla-nl planet

Avi Halachmi: Firefox e10s Performance on Talos

Mozilla planet - do, 19/03/2015 - 19:57

Electrolysis, or e10s, is a Firefox project whose goal is to spread the work of browsing the web over multiple processes. The main initial goal is to separate the UI from web content and reduce negative effects one could have over the other.

e10s is already enabled by default on Firefox Nightly builds, and tabs which run in a different process than the UI are marked with an underline under the tab’s title.

While currently the e10s team’s main focus is correctness more than performance (one bug list and another), we can start collecting performance data and understand roughly where we stand.

jmaher, wlach and myself worked to make Talos run well in e10s Firefox and provide meaningful results. The Talos harness and tests now run well on Windows and Linux, while OS X should be handled shortly (bug 1124728). Session restore tests are still not working with e10s (bug 1098357).

Talos e10s tests run by default on m-c pushes, though Treeherder still hides the e10s results (they can be unhidden from the top right corner of the Treeherder job page).

To compare e10s Talos results with non-e10s we use compare.py, a script which is available in the Talos repository. We’ve improved it recently to make such comparisons more useful. It’s also possible to use the compare-talos web tool.

Here are some numbers on Windows 7 and Ubuntu 32 comparing e10s to non-e10s Talos results of a recent build using compare.py (the output below has been made more readable but the numbers have not been modified).

At the beginning of each line:

  • A plus + means that e10s is better.
  • A minus - means that e10s is worse.

The change % value simply compares the numbers on both sides. For most tests raw numbers are lower-is-better, and therefore a negative percentage means that e10s is better. Tests where higher is better are marked with an asterisk * near the percentage value (and for these values a positive percentage means that e10s is better).

Descriptions of all Talos tests and what their numbers mean.

$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Win7 --master-revision 42afc7ef5ccb

    Windows 7        [ non-e10s ]             [ e10s ]
                     [ results ]   change %   [ results ]
-   tresize                15.1   [  +1.7%]        15.4
-   kraken               1529.3   [  +3.9%]      1589.3
+   v8_7                17798.4   [  +1.6%]*    18080.1
+   dromaeo_css          5815.2   [  +3.7%]*     6033.2
-   dromaeo_dom          1310.6   [  -0.5%]*     1304.5
+   a11yr                 178.7   [  -0.2%]       178.5
++  ts_paint              797.7   [ -47.8%]       416.3
+   tpaint                155.3   [  -4.2%]       148.8
++  tsvgr_opacity         228.2   [ -56.5%]        99.2
-   tp5o                  225.4   [  +5.3%]       237.3
+   tart                    8.6   [  -1.0%]         8.5
+   tcanvasmark          5696.9   [  +0.6%]*     5732.0
++  tsvgx                 199.1   [ -24.7%]       149.8
+   tscrollx                3.0   [  -0.2%]         3.0
--- glterrain               5.1   [+268.9%]        18.9
+   cart                   53.5   [  -1.2%]        52.8
++  tp5o_scroll             3.4   [ -13.0%]         3.0

$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Linux --master-revision 42afc7ef5ccb

    Ubuntu 32        [ non-e10s ]             [ e10s ]
                     [ results ]   change     [ results ]
++  tresize                17.2   [ -25.1%]        12.9
-   kraken               1571.8   [  +2.2%]      1606.6
+   v8_7                19309.3   [  +0.5%]*    19399.8
+   dromaeo_css          5646.3   [  +3.9%]*     5866.8
+   dromaeo_dom          1129.1   [  +3.9%]*     1173.0
-   a11yr                 241.5   [  +5.0%]       253.5
++  ts_paint              876.3   [ -50.6%]       432.6
-   tpaint                197.4   [  +5.2%]       207.6
++  tsvgr_opacity         218.3   [ -60.6%]        86.0
--  tp5o                  269.2   [ +21.8%]       328.0
--  tart                    6.2   [ +13.9%]         7.1
--  tcanvasmark          8153.4   [ -15.6%]*     6877.7
--  tsvgx                 580.8   [ +10.2%]       639.7
++  tscrollx                9.1   [ -16.5%]         7.6
+   glterrain              22.6   [  -1.4%]        22.3
-   cart                   42.0   [  +6.5%]        44.7
++  tp5o_scroll             8.8   [ -12.4%]         7.7

For the most part, the Talos scores are comparable with a few improvements and a few regressions - most of them relatively small. Windows e10s results fare a bit better than Linux results.

Overall, that’s a great starting point for e10s!

A noticeable improvement on both platforms is tp5o-scroll. This test scrolls the top-50 Alexa pages and measures how fast it can iterate with vsync disabled (ASAP mode).

A noticeable regression on Windows is WebGL (glterrain) - Firefox with e10s performs roughly 3x slower than non-e10s Firefox - bug 1028859 (bug 1144906 should also help for Windows).

A supposedly notable improvement is the tsvg-opacity test; however, this test is sometimes too sensitive to underlying platform changes (regardless of e10s), and we should probably keep an eye on it (yet again, e.g. bug 1027481).

We don’t have bugs filed yet for most Talos e10s regressions since we don’t have systems in place to alert us of them, and it’s still not trivial for developers to obtain e10s test results (e10s doesn’t run on try-server yet, and on m-c it also doesn’t run on every batch of pushes). See bug 1144120.

Snappiness is something that both the performance team and the e10s team care deeply about, and so we’ll be working closely together when it comes time to focus on making multi-process Firefox zippy.

Thanks to vladan and mconley for their valuable comments.

Categorieën: Mozilla-nl planet

Air Mozilla: Participation at Mozilla

Mozilla planet - do, 19/03/2015 - 18:00

Participation at Mozilla The Participation Forum

Categorieën: Mozilla-nl planet

Air Mozilla: Participation at Mozilla

Mozilla planet - do, 19/03/2015 - 18:00

Participation at Mozilla The Participation Forum

Categorieën: Mozilla-nl planet

Air Mozilla: Reps weekly

Mozilla planet - do, 19/03/2015 - 17:00

Reps weekly Weekly Mozilla Reps call

Categorieën: Mozilla-nl planet

Mike Conley: The Joy of Coding (Episode 6): Plugins!

Thunderbird - do, 19/03/2015 - 16:13

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken

Notes

Categorieën: Mozilla-nl planet

Mozilla Science Lab: Bullying & Imposter Phenomenon: the Fraught Process of Learning to Code in the Lab

Mozilla planet - do, 19/03/2015 - 16:00

I’ve been speaking and writing for some time now on the importance of communication and collaboration in the scientific community. We have to stop reinventing wheels, and we can’t expect to learn the skills in coding and data management we need from attending one workshop alone; we need to establish customs and venues for the free exchange of ideas, and for practicing the new skills we are trying to become fluent in. Normally, these comments are founded on values of efficiency and sound learning strategies. But lately, I find myself reaching the same conclusions from a different starting point: the vital need for a reality check on how we treat the towering challenge of learning to code.

Everywhere I go, I meet students who tell me the same story: they are terrified of asking for help, or admitting they don’t know something to the other members of their research group – and the more computationally intensive the field, the more intense this aversion becomes around coding. Fears include: “What if my supervisor is disappointed that I asked such a ‘trivial’ question?”; “What if the other students lose respect for me if I admit I don’t know something?”; and, perhaps most disheartening of all, “What if this means I am not cut out for my field?”

If this is what our students are thinking – what have we done, and where has this come from?

There can be, at times, a toxic machismo that creeps into any technical field: a vicious cycle begins when, fearing that admitting ‘ignorance’ will lead to disrepute (perhaps even as lost grants and lost promotions), we dismiss the challenges faced by others, and instill the same fear in our colleagues of admitting they don’t know everything. The arrogant colleague that treats as trivial every problem they don’t have to solve themselves has risen to the level of departmental trope, and it is beginning to cost us in new blood. I remember working on a piece of code once for weeks, for which I received neither feedback nor advice, but only the admonition ‘you should have been able to write that in five minutes’. Should I have? By what divine inspiration, genetic memory, or deal with the devil would I, savant-like, channel a complex and novel algorithm in three hundred seconds, as a new graduate student with absolutely no training in programming?

That rebuke was absurd – and for those less pathologically insensitive than myself, devastating as they accrue, year after year.

Even in the absence of such bullying, we have made things for the new coder double-bleak. The computing demands in almost every field of research are skyrocketing, and the extent to which we train our students in computing continues to stagnate. Think of the signal this sends: computing skills are, apparently, beneath contempt, not even worth speaking of, and so trivial as to not be worth training for. And yet, they are so central to a growing number of fields as to be indispensable. Is it any wonder, then, that so many students and early career researchers feel alienated and isolated in their fields, and doubt themselves for being hobbled in their work when they ‘fail’ to miraculously intuit the skills their establishment has signaled should be obvious?

A couple of years ago, Angelina Fabbro, my friend and mentor as well as noted web developer, wrote a brilliant article on Imposter Phenomenon (aka Imposter Syndrome), which they define as ‘the experience of feeling like a fraud (or impostor) while participating in communities of highly skilled participants even when you are of a level of competence to match those around you.‘ I strongly recommend reading this article, because even though it was written with the tech world in mind, it is one hundred percent applicable to the experience of legions of academics making careers in the current age of adolescence in research coding. The behavior and effects I describe above have contributed to an epidemic of imposter phenomenon in the sciences, particularly surrounding coding and digital acumen and particularly in students. That fear is keeping us in our little silos, making us terrified to come out, share our work, and move forward together; I consider that fear to be one of the biggest obstacles to open science. Also from Fabbro’s article:

‘In the end I wasn’t shocked that the successful people I admired had experienced impostor phenomenon and talked to me about it — I was shocked that I somehow thought the people I see as heroes were somehow exempt from having it… We’re all just doing the best we know how to when it comes to programming, it’s just that some people have more practice coming across as confident than others do. Never mistake confidence for competence, though.’ – Angelina Fabbro

So What Are We Going To Do About It?

The cultural and curricular change around coding for research that ultimately needs to happen to cut to the root of these problems will be, like all institutional change, slow. But what we can do, right now, is to start making spaces at our local universities and labs where students and researchers can get together, struggle with problems, ask each other questions and work together on code in casual, no-bullying, no-stakes safe spaces, welcoming of beginners and where no question is too basic. These are the Study Groups, User’s Groups and Hacky Hours I’ve been talking about, and addressing the problems I described is the other dimension, beyond simple technical skill building, of why they are so important. In my travels, I’ve stumbled across a few; here’s a map:

Study Groups & Hacky Hours

Please, if you’re running a meetup group or something similar for researchers writing code, let me know (bill@mozillafoundation.org) – I’d love to add you to the map and invite you to tell your story here on this blog (see Kathi Unglert’s guest post for a great example). Also, if you’re curious about the idea of small, locally driven study groups, my colleague Noam Ross has assembled a panel Ask Me Anything event on the Mozilla Science Forum, kicking off at 6 PM EDT, Tuesday, March 24. Panelists from several different meetup groups will be available to answer your questions on this thread, from 6-8 PM EDT; more details are on the blog. Don’t forget to also check out our growing collection of lessons and links to curriculum ideas for a study group meetup, if you’d like some material to try working through.

There are tons of ways to do a good meetup – but to start, see if you can get a couple of people you know and trust to hang out once or twice a month, work on some code, and acknowledge that you’re all still learning together. If you can create a space like that, a whole lot of the anxiety and isolation around learning to code for research will fall away, and more people will soon want to join; I’d love to hear your stories, and I hope you’ll join us for the AMA on the 24th.

Categorieën: Mozilla-nl planet

Monica Chew: How do I turn on Tracking Protection? Let me count the ways.

Mozilla planet - do, 19/03/2015 - 15:58

I get this question a lot from various people, so it deserves its own post. Here's how to turn on Tracking Protection in Firefox to avoid connecting to known tracking domains from Disconnect's blocklist:
  1. Visit about:config and turn on privacy.trackingprotection.enabled. Because this works in Firefox 35 or later, this is my favorite method. In Firefox 37 and later, it also works on Fennec.
  2. On Fennec Nightly, visit Settings > Privacy and select the checkbox "Tracking Protection".
  3. Install Lightbeam and toggle the "Tracking Protection" button in the top-right corner. Check out the difference in visiting only 2 sites with Tracking Protection on and off!
  4. On Firefox Nightly, visit about:config and turn on browser.polaris.enabled. This will enable privacy.trackingprotection.enabled and also show the checkbox for it in about:preferences#privacy, similar to the Fennec screenshot above. Because this only works in Nightly and also requires visiting about:config, it's my least favorite option.
  5. Do any of the above and sign into Firefox Sync. Tracking Protection will be enabled on all of your desktop profiles!
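If you prefer setting preferences in a file instead of clicking through about:config, here is a minimal user.js sketch using the pref names from the list above (drop it into your profile directory; the Nightly-only pref is only needed for option 4):

// user.js – enable Tracking Protection via preferences
user_pref("privacy.trackingprotection.enabled", true);
// Nightly only: also shows the checkbox in about:preferences#privacy
user_pref("browser.polaris.enabled", true);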
Categorieën: Mozilla-nl planet

Ben Hearsum: Release Automation Futures: Seamless integration of manual and automated steps

Mozilla planet - do, 19/03/2015 - 15:30

I've written about the history of our Release Automation systems in the past. We've gone from mostly manual releases to almost completely automated since I joined Mozilla. One thing I haven't talked about before is Ship It - our web tool for kicking off releases:



It may be ugly, but having it has meant that we don't have to log on to a single machine to ship a release. A release engineer doesn't even need to be around to start the release process - Release Management has direct access to Ship It to do it themselves. We're only needed to push releases live, and that's something we'd like to fix as well. We're looking at tackling that and other ancillary issues of releases, such as:

  • Improving and expanding validation of release automation inputs (revisions, branches, locales, etc.)
  • Scripting the publishing of Fennec to Google Play
  • Giving Release Managers more direct control over updates
  • Updating metadata (ship dates, versions, locales) about releases
  • Improving security with better authentication (eg, HSMs or other secondary tokens) and authorization (eg, requiring multiple people to push updates)

Rail and I had a brainstorming session about this yesterday and a theme that kept coming up was that most of the things we want to improve are on the edges of release automation: they happen either before the current automation starts, or after the current automation ends. Everything in this list also needs someone to decide that it needs to happen -- our automation can't make the decision about what revision a release should be built with or when to push it to Google Play - it only knows how to do those things after being told that it should. These points where we jump back and forth between humans and automation are a big rough edge for us right now. The way they're implemented currently is very situation-specific, which means that adding new points of human-automation interaction is slow and full of uncertainty. This is something we need to fix in order to continue to ship as fast and effectively as we do.

We think we've come up with a new design that will enable us to deal with all of the current human-automation interactions and any that come up in the future. It consists of three key components:

Workflows

A workflow is a DAG that represents an entire release process. It consists of human steps, automation steps, and potentially other types. An important point about workflows is that they aren't necessarily the same for every release. A Firefox Beta's workflow is different than a Fennec Beta or Firefox Release. The workflow for a Firefox Beta today may look very different than for one a few months from now. The details of a workflow are explicitly not baked into the system - they are part of the data that feeds it. Each node in the DAG will have upstreams, downstreams, and perhaps a list of notifications. The tooling around the workflow will respond to changes in state of each node and determine what can happen next. Much of each workflow will end up being the existing graph of Buildbot builders (eg: this graph of Firefox Beta jobs).
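To make this concrete, here is a minimal sketch (in JavaScript, and purely hypothetical – it is not the actual data format of Ship It, Buildbot, or any of the tools mentioned below) of what a workflow's node list and the "what can happen next" question could look like:

// Hypothetical workflow description: each node knows its type and upstreams.
var workflow = [
  { id: 'tag-release',  type: 'automation', upstreams: [] },
  { id: 'build-sign',   type: 'automation', upstreams: ['tag-release'] },
  { id: 'push-updates', type: 'human',      upstreams: ['build-sign'], notify: ['relman'] }
];

// A node is ready when it hasn't run yet and all of its upstreams are done.
function readyNodes(nodes, done) {
  return nodes.filter(function(node) {
    return done.indexOf(node.id) === -1 &&
           node.upstreams.every(function(up) { return done.indexOf(up) !== -1; });
  });
}

console.log(readyNodes(workflow, ['tag-release'])); // -> the 'build-sign' node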

We're hoping to use existing software for this part. We've looked at Amazon's Simple Workflow Service already, but it doesn't support any dependencies between nodes, so we're not sure if it's going to fit the bill. We're also looking at Taskcluster, which does do dependency management. If anyone knows of anything else that might be useful here, please let me know!

Ship It

As well as continuing to provide a human interface, Ship It will be the API between the workflow tool and humans/automation. When new nodes become ready it makes that information available to automation, or gives humans the option to enact them (depending on node type). It also receives state changes of nodes from automation (eg, build completion events). Ship It may also be given the responsibility of enforcing user ACLs.

Release Runner

Release Runner is the binding between Ship It and the backend parts of the automation. When Ship It is showing automation events ready to start, it will poke the right systems to make them go. When those jobs complete, it will send that information back to Ship It.

This will likely be getting a better name.

This design still needs some more thought and review, but we're very excited to be moving towards a world where humans and machines can integrate more seamlessly to get you the latest Firefox hotness more quickly and securely.

Categorieën: Mozilla-nl planet

Firefox 35 for download: Mozilla browser can chat - T-Online

Nieuws verzameld via Google - do, 19/03/2015 - 14:36

Firefox 35 has seen hardly any changes on the surface; instead, it has mainly been improved technically – the new Mozilla browser has been further sped up and secured. A total of nine security holes are closed. For the ...

Categorieën: Mozilla-nl planet

Andy McKay: Cutting the cord

Mozilla planet - do, 19/03/2015 - 08:00

The CRTC goes on and on about plans for TV, cable and unbundling and all that. Articles are written about how to watch all the things without paying for cable. Few discuss the main point: watching fewer movies and less television.

Television as shown on cable is probably the worst thing in the world. You spend a large amount of money to receive the channels. Then you spend a large portion of that viewing time watching adverts. The limit on adverts in Canada is 12% on specialty channels, not including promotion of their own shows.

Advertising is aspirational and as such depressing. It spends all its time telling you things you should buy, things you should be doing, things you should be spending on your money on and if you do all that... there's just more advertising to get you wanting different things.

It's even worse in the US, where Americans spend on average over four and a half hours a day watching television. How that is even possible, I don't know. Of that, 17 to 18 minutes is adverts. That means they watch somewhere around 46 minutes of adverts a day.

So you pay for advertising. Why would you do that? That is terrible.

Netflix does not have any adverts. If you need to watch more than Netflix, multiple other sources exist, e.g. ctv.ca. And if you need to watch more than that?

Just go do something else. Please go do something else. Read a book, meet your neighbours, play a game with friends, take up a sport... anything but watching that much television and adverts.

I can only hope that cable television dies off, because it's the worst thing ever.

Categorieën: Mozilla-nl planet

Frederic Wenzel: Updating Adobe Flash Without Restarting Firefox

Mozilla planet - do, 19/03/2015 - 08:00

No reason for a Flash upgrade to shut down your entire browser, even if it claims so.

It's 2015, and the love-hate relationship of the Web with Flash has not quite ended yet, though we're getting there. Click-to-play in Firefox makes sure most websites can't run Flash willy-nilly anymore, but most people, myself included, still have it installed, so keeping Flash up-to-date with its frequently necessary security updates is a process well-known to users.

Sadly, the Adobe Flash updater has the nasty habit of asking you to shut down Firefox entirely, or it won't install the update:

 Close Firefox

If you're anything like me, you have dozens of tabs open, half-read articles and a few draft emails open for good measure, and the one thing you don't want to do right now is restart your browser.

Fret not, the updater is lying.

Firefox runs the Flash plugin in an out-of-process plugin container, which is tech talk for: separately from your main Firefox browser.

Sure enough, in a command line window, I can search for a running instance of an application called plugin-container:

Firefox plugin container

Looks complicated, but tells me that Firefox Nightly is running a plugin container with process ID 7602.

Ka-boom

The neat thing is that we can kill that process without taking down the whole browser:

killall plugin-container

Note: killall is the sawed-off shotgun of process management. It'll close any process by the name you hand to it, so use with caution. For a lot more fine-grained control, find the process ID (in the picture above: 7602, but it'll be different for your computer) and then use the kill command on only that process ID (e.g., kill 7602).

This will, of course, stop all the Flash instances you might have running in your browser right now, so don't do it right in the middle of watching a movie on a Flash video site (note: Youtube does not use Flash by default anymore).

Now hit Retry in the Adobe Updater window and sure enough, it'll install the update without requiring you to close your entire browser.

Aaand we're done.

If you were in fact using Flash at the time of the update, you might see this in the browser when you're done:

 Flash plugin crashed

You can just reload the page to restart Flash.

Why won't Adobe do that for you, and instead asks you to close your browser entirely? I don't know. But if the agony of closing your entire browser outweighs the effort of a little command-line magic, now you know how to help yourself.

Hack on, friends.

Categorieën: Mozilla-nl planet

Cameron Kaiser: IonPower now beyond "doesn't suck" stage

Mozilla planet - do, 19/03/2015 - 04:45
Very pleased! IonPower (in Baseline mode) made it through the entire JavaScript JIT test suite tonight (which includes SunSpider and V8 but a lot else besides) without crashing or asserting. It doesn't pass everything yet, so we're not quite to phase 4, but the failures appear to group around some similar areas of code which suggest a common bug, and one of the failures is actually due to that particular test monkeying with JIT options we don't yet support (but will). Getting closer!

TenFourFox 31.6 is on schedule for March 31.

Categorieën: Mozilla-nl planet

Monica Chew: Tracking Protection talk on Air Mozilla

Mozilla planet - do, 19/03/2015 - 01:46
In August 2014, Georgios Kontaxis and I gave a talk on the implementation status of tracking protection in Firefox. At the time the talk was Mozillians only, but now it is public! Please visit Air Mozilla to view the talk, or see the slides below. The implementation status has not changed very much since last August, so most of the information is still pretty accurate.
Categorieën: Mozilla-nl planet

James Long: Backend Apps with Webpack, Part II: Driving with Gulp

Mozilla planet - do, 19/03/2015 - 01:00

In Part I of this series, we configured webpack for building backend apps. With a few simple tweaks, like leaving all dependencies from node_modules alone, we can leverage webpack's powerful infrastructure for backend modules and reuse the same system for the frontend. It's a relief to not maintain two separate build systems.

This series is targeted towards people already using webpack for the frontend. You may find babel's require hook fine for the backend, which is great. You might want to run files through multiple loaders, however, or share code between frontend and backend. Most importantly, you want to use hot module replacement. This is an experiment to reuse webpack for all of that.

In this post we are going to look at more fine-grained control over webpack, and how to manage both frontend and backend code at the same time. We are going to use gulp to drive webpack. This should be a usable setup for a real app.

Some of the responses to Part I criticized webpack as too complicated and not standards-compliant, and said we should be moving to jspm and SystemJS. SystemJS is a runtime module loader based on the ES6 specification. The people behind jspm are doing fantastic work, but all I can say is that they don't have many of the features that webpack users love. A simple example is hot module replacement. I'm sure in the years to come something like webpack will emerge based on the loader specification, and I'll gladly switch to it.

The most important thing is that we start writing ES6 modules. This affects the community a whole lot more than loaders, and luckily it's very simple to do with webpack. You need to use a compiler like Babel that supports modules, which you really want to do anyway to get all the good ES6 features. These compilers will turn ES6 modules into require statements, which can be processed with webpack.
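For example, a single ES6 import line compiles down to something webpack already understands (simplified here – Babel's real output adds interop helpers):

// What we write:
import express from 'express';

// Roughly what Babel emits, which webpack then processes:
var express = require('express');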

I converted the backend-with-webpack repo to use the Babel loader and ES6 modules in the part1-es6 branch, and I will continue to use ES6 modules from here on.

Gulp

Gulp is a nice task runner that makes it simple to automate anything. Even though we aren't using it to transform or bundle modules, it's still useful as a "master controller" to drive webpack, test runners, and anything else you might need to do.

If you are going to use webpack for both frontend and backend code, you will need two separate configuration files. You could manually specify the desired config with --config, and run two separate watchers, but that gets redundant quickly. It's annoying to have two separate processes in two different terminals.

Webpack actually supports multiple configurations. Instead of exporting a single one, you export an array of them and it will run multiple processes for you. I still prefer using gulp instead because you might not want to always run both at the same time.
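For reference, a multi-configuration webpack.config.js is just an exported array (a sketch, assuming the two config objects we build below):

// webpack.config.js – webpack runs a build for each config in the array
module.exports = [frontendConfig, backendConfig];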

We need to convert our webpack usage to use the API instead of the CLI, and make a gulp task for it. Let's start by converting our existing config file into a gulp task.

The only difference is instead of exporting the config, you pass it to the webpack API. The gulpfile will look like this:

var gulp = require('gulp');
var webpack = require('webpack');

var config = { ... };

gulp.task('build-backend', function(done) {
  webpack(config).run(function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }
    done();
  });
});

You can pass a config to the webpack function and you get back a compiler. You can call run or watch on the compiler, so if you wanted to make a build-watch task which automatically recompiles modules on change, you would call watch instead of run.

Our gulpfile is getting too big to show all of it here, but you can check out the new gulpfile.js which is a straight conversion of our old webpack.config.js. Note that we added a babel loader so we can write ES6 module syntax.

Multiple Webpack Configs

Now we're ready to roll! We can create another task for building frontend code, and simply provide a different webpack configuration. But we don't want to manage two completely separate configurations, since there are common properties between them.

What I like to do is create a base config and have others extend from it. Let's start with this:

var DeepMerge = require('deep-merge');

var deepmerge = DeepMerge(function(target, source, key) {
  if(target instanceof Array) {
    return [].concat(target, source);
  }
  return source;
});

// generic
var defaultConfig = {
  module: {
    loaders: [
      {test: /\.js$/, exclude: /node_modules/, loaders: ['babel'] },
    ]
  }
};

if(process.env.NODE_ENV !== 'production') {
  defaultConfig.devtool = 'source-map';
  defaultConfig.debug = true;
}

function config(overrides) {
  return deepmerge(defaultConfig, overrides || {});
}

We create a deep merging function for recursively merging objects, which allows us to override the default config, and we provide a function config for generating configs based off of it.

Note that you can turn on production mode by running the gulp task with NODE_ENV=production prefixed to it. If so, sourcemaps are not generated and you could add plugins for minifying code.

Now we can create a frontend config:

var frontendConfig = config({
  entry: './static/js/main.js',
  output: {
    path: path.join(__dirname, 'static/build'),
    filename: 'frontend.js'
  }
});

This makes static/js/main.js the entry point and bundles everything together at static/build/frontend.js.

Our backend config uses the same technique: customizing the config to be backend-specific. I don't think it's worth pasting here, but you can look at it on github. Now we have two tasks:

function onBuild(done) {
  return function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }

    if(done) {
      done();
    }
  }
}

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

In fact, you could go crazy and provide several different interactions:

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('frontend-watch', function() {
  webpack(frontendConfig).watch(100, onBuild());
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, onBuild());
});

gulp.task('build', ['frontend-build', 'backend-build']);
gulp.task('watch', ['frontend-watch', 'backend-watch']);

watch takes a delay as the first argument, so any changes within 100ms will only fire one rebuild.

You would typically run gulp watch to watch the entire codebase for changes, but you could just build or watch a specific piece if you wanted.

Nodemon

Nodemon is a nice process management tool for development. It starts a process for you and provides APIs to restart it. The goal of nodemon is to watch file changes and restart automatically, but we are only interested in manual restarts.

After installing with npm install nodemon and adding var nodemon = require('nodemon') to the top of the gulpfile, we can create a run task which executes the compiled backend file:

gulp.task('run', ['backend-watch', 'frontend-watch'], function() {
  nodemon({
    execMap: {
      js: 'node'
    },
    script: path.join(__dirname, 'build/backend'),
    ignore: ['*'],
    watch: ['foo/'],
    ext: 'noop'
  }).on('restart', function() {
    console.log('Restarted!');
  });
});

This task also specifies dependencies on the backend-watch and frontend-watch tasks, so the watchers are automatically fired up and code will recompile on change.

The execMap and script options specify how to actually run the program. The rest of the options are for nodemon's watcher, and we actually don't want it to watch anything. That's why ignore is *, watch is a non-existent directory, and ext is a non-existent file extension. Initially I only used the ext option but I ran into performance problems because nodemon was still watching everything in my project.

So how does our program actually restart on change? Calling nodemon.restart() does the trick, and we can do this within the backend-watch task:

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, function(err, stats) {
    onBuild()(err, stats);
    nodemon.restart();
  });
});

Now, when running backend-watch, if you change a file it will be rebuilt and the process will automatically restart.

Our gulpfile is complete. After all this work, you just need to run this to start everything:

gulp run

As you code, everything will automatically be rebuilt and the server will restart. Hooray!

A Few Tips

Better Performance

If you are using sourcemaps, you will notice compilation performance degrades the more files you have, even with incremental compilation (using watchers). This happens because webpack has to regenerate the entire sourcemap of the generated file even if a single module changes. This can be fixed by changing the devtool from source-map to #eval-source-map:

config.devtool = '#eval-source-map';

This tells webpack to process source-maps individually for each module, which it achieves by eval-ing each module at runtime with its own sourcemap. Prefixing it with # tells it you use the //# comment instead of the older //@ style.

Node Variables

I mentioned this in Part I, but some people missed it. Node defines variables like __dirname which are different for each module. This is a downside to using webpack, because we no longer have the node context for these variables, and webpack needs to fill them in.

Webpack has a workable solution, though. You can tell it how to treat these variables with the node configuration entry. You most likely want to set __dirname and __filename to true, which will keep their real values. They default to "mock", which gives them dummy values (meant for browser environments).
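A minimal sketch of what that looks like in a config here (assuming the config() helper defined earlier; the node block is the webpack option in question):

var backendConfig = config({
  // ...entry and output as before...
  node: {
    __dirname: true,
    __filename: true
  }
});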

Until Next Time

Our setup is now capable of building a large, complex app. If you want to share code between the frontend and backend, it's easy to do since both sides use the same infrastructure. We get the same incremental compilation on both sides, and with the #eval-source-map setting, even with a large number of files, modules are rebuilt in under 200ms.

I encourage you to modify this gulpfile to your heart's content. The great thing about webpack and gulp is that it's easy to customize them to your needs, so go wild.

These posts have been building towards the final act. We are now ready to take advantage of the most significant gain of this infrastructure: hot module replacement. React users have enjoyed this via react-hot-loader, and now that we have access to it on the backend, we can live edit backend apps. Part III will show you how to do this.

Thanks to Dan Abramov for reviewing this post.

Categorieën: Mozilla-nl planet

Gen Kanai: Analyse Asia – The Firefox Browser & Mobile OS with Gen Kanai

Mozilla planet - do, 19/03/2015 - 00:53

I had the pleasure to sit down with Bernard Leong, host of the Analyse Asia podcast, after my keynote presentation at FOSSASIA 2015. Please enjoy our discussion on Firefox, Firefox OS in Asia and other related topics.

Analyse Asia with Bernard Leong, Episode 22: The Firefox Browser & Mobile OS with Gen Kanai

 

Categorieën: Mozilla-nl planet
