Mozilla Nederland: the Dutch Mozilla community

Jan de Mooij: Fast arrow functions in Firefox 31

Mozilla planet - Fri, 11/04/2014 - 17:36

Last week I spent some time optimizing ES6 arrow functions. Arrow functions allow you to write function expressions like this:

a.map(s => s.length);

Instead of the much more verbose:

a.map(function(s){ return s.length });

Arrow functions are not just syntactic sugar though, they also bind their this-value lexically. This means that, unlike normal functions, arrow functions use the same this-value as the script in which they are defined. See the documentation for more info.
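For example (an illustrative snippet of my own, not from the original post):

var timer = {
  seconds: 0,
  start: function() {
    // The arrow function has no this-value of its own; it uses the
    // this-value of start(), so this.seconds refers to the timer object.
    setInterval(() => this.seconds++, 1000);
  }
};
timer.start();

Had the callback been written as a normal function expression, it would receive its own this-value (the global object in non-strict mode), and this.seconds++ would never update the timer.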

Firefox has had support for arrow functions since Firefox 22, but they used to be slower than normal functions for two reasons:

  1. Bound functions: SpiderMonkey used to do the equivalent of |arrow.bind(this)| whenever it evaluated an arrow expression. This made arrow functions slower than normal functions because calls to bound functions are currently not optimized or inlined in the JITs. It also used more memory because we’d allocate two function objects instead of one for arrow expressions.
    In bug 989204 I changed this so that we treat arrow functions exactly like normal function expressions, except that we also store the lexical this-value in an extended function slot. Then, whenever this is used inside the arrow function, we get it from the function’s extended slot. This means that arrow functions behave a lot more like normal functions now. For instance, the JITs will optimize calls to them and they can be inlined.
  2. Ion compilation: IonMonkey could not compile scripts containing arrow functions. I fixed this in bug 988993.

With these changes, arrow functions are about as fast as normal functions. I verified this with the following micro-benchmark:

function test(arr) {
  var t = new Date;
  arr.reduce((prev, cur) => prev + cur);
  alert(new Date - t);
}
var arr = [];
for (var i = 0; i < 10000000; i++) {
  arr.push(3);
}
test(arr);

I compared a nightly build from April 1st to today’s nightly and got the following results:
[Chart: benchmark results comparing the April 1st nightly and today's nightly]

We’re 64x faster because Ion is now able to inline the arrow function directly without going through relatively slow bound function code on every call.

Other browsers don’t support arrow functions yet, so they are not used a lot on the web, but it’s important to offer good performance for new features if we want people to start using them. Also, Firefox frontend developers love arrow functions (grepping for “=>” in browser/ shows hundreds of them) so these changes should also help the browser itself :)

Categories: Mozilla-nl planet

Mozilla's Prop. 8 uproar reveals much about tech, gay rights - SFGate

News collected via Google - Fri, 11/04/2014 - 17:36


Mozilla's Prop. 8 uproar reveals much about tech, gay rights
SFGate
When Mozilla CEO Brendan Eich was shown the door after being outed for giving $1,000 to Proposition 8's campaign to ban same-sex marriage in California, it was a turning point for both the gay rights movement and for Silicon Valley. Even before Eich ...
Mozilla CEO's downfall a lesson to all execs: 'Stay boring' (Fortune)
Mozilla's Statement on the Brendan Eich Controversy, Explained (National Review Online blog)
Gay marriage foes outraged at Mozilla CEO flap, call for boycott (Register)
Categories: Mozilla-nl planet

Planet Mozilla Interns: Tiziana Sellitto: Outreach Program For Women a year later

Mozilla planet - Fri, 11/04/2014 - 16:14

A year has passed and a new summer will begin…a new summer for the women who will be chosen and who will soon start GNOME’s Outreach Program for Women.

This summer Mozilla will participate with three different projects, listed here; among them is the Mozilla Bug Wrangler for Desktop QA, the one I applied for last year. It has been a great experience for me, and I want to wish good luck to everyone who submitted an application.

I hope you’ll have a wonderful and productive summer :)

Categories: Mozilla-nl planet

Andrew Halberstadt: Part 2: How to deal with IFFY requirements

Mozilla planet - Fri, 11/04/2014 - 15:24
My last post was basically a very long winded way of saying, "we have a problem". It kind of did a little dance around "why is there a problem" and "how do we fix it", but I want to explore these two questions in a bit more detail. Specifically, I want to return to the two case studies and explore why our test harnesses don't work and why mozharness does work even though both have IFFY (in flux for years) requirements. Then I will explore how to use the lessons learned to improve our general test harness design.

### DRY is not everything

I talked a lot about the [DRY principle][1] in the last article. Basically the conclusion about it was that it is very useful, but that we tend to fixate on it to the point where we ignore other equally useful principles. Having reached this conclusion, I did a quick internet search and found [an article][2] by Joel Abrahamsson arguing the exact same point (albeit much more succinctly than me). Through his article I found out about the [SOLID principles][3] of object oriented design (have I been living under a rock?). They are all very useful guidelines, but there are two that immediately made me think of our test harnesses in a bad way. The first is the [single responsibility principle][4] (which I was delighted to find is meant to mitigate requirement changes) and the second is the [open/closed principle][5].

The single responsibility principle states that a class should only be responsible for one thing, and responsibility for that thing should not be shared with other classes. What is a responsibility? A responsibility is defined as a *reason to change*. To use the wikipedia example, a class that prints a block of text can undergo two changes. The content of the text can change, or the format of the text can change. These are two different responsibilities that should be split out into different classes.

The open/closed principle states that software should be open for extension, but closed for modification. In other words, it should be possible to change the behaviour of the software only by adding new code without needing to modify any existing code. A popular way of implementing this is through abstract base classes. Here the interface is closed for modification, and each new implementation is an extension of that.

Our test harnesses fail miserably at both of these principles. Instead of having several classes each with a well defined responsibility, we have a single class responsible for everything. Instead of being able to add some functionality without worrying about breaking something else, you have to take great pains that your change won't affect some other platform you don't even care about! Mozharness on the other hand, while not perfect, does a much better job at both principles. The concept of actions makes it easy to extend functionality without modifying existing code. Just add a new action to the list! The core library is also much better separated by responsibility. There is a clear separation between general script, build, and testing related functionality.

### Inheritance is evil

This is probably old news to many people, but this is something that I'm just starting to figure out on my own. I like Zed Shaw's [analogy from *Learn Python the Hard Way*][6] the best. Instead of butchering it, here it is in its entirety.

> In the fairy tales about heroes defeating evil villains there's always a dark forest of some kind. It could be a cave, a forest, another planet, just some place that everyone knows the hero shouldn't go. Of course, shortly after the villain is introduced you find out, yes, the hero has to go to that stupid forest to kill the bad guy. It seems the hero just keeps getting into situations that require him to risk his life in this evil forest.
>
> You rarely read fairy tales about the heroes who are smart enough to just avoid the whole situation entirely. You never hear a hero say, "Wait a minute, if I leave to make my fortunes on the high seas leaving Buttercup behind I could die and then she'd have to marry some ugly prince named Humperdink. Humperdink! I think I'll stay here and start a Farm Boy for Rent business." If he did that there'd be no fire swamp, dying, reanimation, sword fights, giants, or any kind of story really. Because of this, the forest in these stories seems to exist like a black hole that drags the hero in no matter what they do.
>
> In object-oriented programming, Inheritance is the evil forest. Experienced programmers know to avoid this evil because they know that deep inside the Dark Forest Inheritance is the Evil Queen Multiple Inheritance. She likes to eat software and programmers with her massive complexity teeth, chewing on the flesh of the fallen. But the forest is so powerful and so tempting that nearly every programmer has to go into it, and try to make it out alive with the Evil Queen's head before they can call themselves real programmers. You just can't resist the Inheritance Forest's pull, so you go in. After the adventure you learn to just stay out of that stupid forest and bring an army if you are ever forced to go in again.
>
> This is basically a funny way to say that I'm going to teach you something you should avoid called Inheritance. Programmers who are currently in the forest battling the Queen will probably tell you that you have to go in. They say this because they need your help since what they've created is probably too much for them to handle. But you should always remember this:
>
> Most of the uses of inheritance can be simplified or replaced with composition, and multiple inheritance should be avoided at all costs.

I had never heard the (apparently popular) term "composition over inheritance". Basically, unless you really really mean it, always go for "X has a Y" instead of "X is a Y". Never do "X is a Y" for the sole purpose of avoiding code duplication. This is exactly the mistake we made in our test harnesses. The Android and B2G runners just inherited everything from the desktop runner, but oops, turns out all three are actually quite different from one another. Mozharness, while again not perfect, does a better job at avoiding inheritance. While it makes heavy use of the mixin pattern (which, yes, is still inheritance) at least it promotes separation of concerns more than classic inheritance.

### Practical Lessons

So this is all well and great, but how can we apply all of this to our automation code base? A smarter way to approach our test harness design would have been to have most of the shared code between the three platforms in a single (relatively) bare-bones runner that *has a* target environment (e.g. desktop Firefox, Fennec or B2G in this case). In this model there is no inheritance, and no code duplication. It is easy to extend without modifying (just add a new target environment) and there are clear and distinct responsibilities between managing tests/results and actually launching them. In fact this is how the gaia team implemented their [marionette-js-runner][7]. I'm not sure if that pattern is common to node's [mocha runner][8] or something of their own design, but I like it. (A sketch of this design appears after this post.)

I'd also like our test harnesses to employ mozharness' concept of actions. Each action could be an as-atomic-as-possible setup step. For example, setting preferences in the profile is a single action. Setting environment variables is another. Parsing a manifest could be a third. Each target environment would consist of a list of actions that are run in a particular order. If code needs to be shared, simply add the corresponding action to whichever targets need it. If not, just don't include the action in the list for targets that don't need it.

My dream end state here is that there is no distinction between test runners and mozharness scripts. They are both trying to do the same thing (perform setup, launch some code, collect results) so why bother wrapping one around the other? The test harness should just *be* a mozharness script and vice versa. This would bring actions into test harnesses, and allow mozharness scripts to live in-tree.

### Conclusion

Is it possible to avoid code duplication with a project that has IFFY requirements? I think yes. But I still maintain it is exceptionally hard. It wasn't until after it was too late and I had a lot of time to think about it that I realized the mistakes we made. And even with what I know now, I don't think I would have fared much better given the urgency and time constraints we were under. Though next time, I think I'll at least have a better chance.

[1]: http://en.wikipedia.org/wiki/DRY_principle
[2]: http://joelabrahamsson.com/the-dry-obsession
[3]: http://en.wikipedia.org/wiki/SOLID
[4]: http://en.wikipedia.org/wiki/Single_responsibility_principle
[5]: http://en.wikipedia.org/wiki/Open/closed_principle
[6]: http://learnpythonthehardway.org/book/ex44.html
[7]: https://github.com/mozilla-b2g/marionette-js-runner
[8]: http://visionmedia.github.io/mocha
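Here is the promised sketch of the composition-plus-actions design from "Practical Lessons", written in JavaScript since the marionette-js-runner cited above is a node project. All names are invented for illustration; this is not code from any actual harness:

    // A target environment knows only how to launch one platform.
    function DesktopFirefox(binary) {
      this.binary = binary;
    }
    DesktopFirefox.prototype.launch = function (profile) {
      console.log('launching ' + this.binary + ' with prefs ' + JSON.stringify(profile.prefs));
    };

    // The runner *has a* target environment rather than *being* one,
    // and runs an ordered list of atomic setup actions.
    function TestRunner(environment, actions) {
      this.environment = environment;
      this.actions = actions;
    }
    TestRunner.prototype.run = function () {
      var profile = { prefs: {} };
      // Sharing code between targets just means including the same
      // action in both lists; no inheritance is involved.
      this.actions.forEach(function (action) { action(profile); });
      this.environment.launch(profile);
    };

    // Compose a desktop runner from reusable actions.
    function setPrefs(profile) { profile.prefs['browser.shell.checkDefaultBrowser'] = false; }
    function parseManifest(profile) { profile.tests = ['test_a.js', 'test_b.js']; }
    new TestRunner(new DesktopFirefox('/usr/bin/firefox'), [setPrefs, parseManifest]).run();

Adding a new platform (B2G, say) would then mean writing a new environment object and choosing its action list, without touching the desktop code: exactly the open/closed behaviour described above.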
Categories: Mozilla-nl planet

Henrik Skupin: Firefox Automation report – week 9/10 2014

Mozilla planet - Fri, 11/04/2014 - 11:28

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 9 and 10. I myself was on vacation for a week, a bit of relaxing before the work on the TPS test framework gets started.

Highlights

In preparation for running Mozmill tests for Firefox Metro in our Mozmill-CI system, Andreea has started to get support for Metro builds and the appropriate tests included.

With help from Henrik we got Mozmill 2.0.6 released. It contains a helpful fix for waitForPageLoad(), which lets you know which page was being loaded and its status in case of a failure. This might help us nail down the intermittent failures when loading remote and even local pages. But the most important part of this release is indeed the support for mozcrash. Even though we cannot have full support yet due to missing symbol files for daily builds on ftp.mozilla.org, we can at least show that a crash happened during a testrun, and let the user know about the local minidump files.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 9 and week 10.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 9 and week 10.

Categories: Mozilla-nl planet

Gervase Markham: 21st Century Nesting

Mozilla planet - Fri, 11/04/2014 - 10:42

Our neighbours have acquired a 21st century bird’s nest:

Not only is it behind a satellite dish but, if you look closely, large parts of it are constructed from the wire ties that the builders (who are still working on our estate) use for tying layers of bricks together. We believe it belongs to a couple of magpies, and it contains six (low-tech) eggs.

I have no idea what effect this has on their reception…

Categories: Mozilla-nl planet

Chris Double: Preventing heartbleed bugs with safe programming languages

Mozilla planet - Fri, 11/04/2014 - 06:00

The Heartbleed bug in OpenSSL has resulted in a fair amount of damage across the internet. The bug itself was quite simple and is a textbook case for why programming in unsafe languages like C can be problematic.

As an experiment to see if a safer systems programming language could have prevented the bug I tried rewriting the problematic function in the ATS programming language. I’ve written about ATS as a safer C before. This gives a real world testcase for it. I used the latest version of ATS, called ATS2.

ATS compiles to C code. The function interfaces it generates can exactly match existing C functions and be callable from C. I used this feature to replace the dtls1_process_heartbeat and tls1_process_heartbeat functions in OpenSSL with ATS versions. These two functions are the ones that were patched to correct the heartbleed bug.

The approach I took was to follow something similar to that outlined by John Skaller on the ATS mailing list:

ATS on the other hand is basically C with a better type system. You can write very low level C like code without a lot of the scary dependent typing stuff and then you will have code like C, that will crash if you make mistakes. If you use the high level typing stuff coding is a lot more work and requires more thinking, but you get much stronger assurances of program correctness, stronger than you can get in Ocaml or even Haskell, and you can even hope for *better* performance than C by elision of run time checks otherwise considered mandatory, due to proof of correctness from the type system. Expect over 50% of your code to be such proofs in critical software and probably 90% of your brain power to go into constructing them rather than just implementing the algorithm. It's a paradigm shift.

I started with wrapping the C code directly and calling that from ATS. From there I rewrote the C code into unsafe ATS. Once that worked I added types to find errors.

I’ve put the modified OpenSSl code in a github fork. The two branches there, ats and ats_safe, represent the latter two stages in implementing the functions in ATS.

I’ll give a quick overview of the different paths I took then go into some detail about how I used ATS to find the errors.

Wrapping C code

I’ve written about this approach before. ATS allows embedding C directly so the first start was to embed the dtls1_process_heartbeat C code in an ATS file, call that from an ATS function and expose that ATS function as the real dtls1_process_heartbeat. The code for this is in my first attempt of d1_both.dats.

Unsafe ATS

The second stage was to write the functions in ATS, but unsafely. This code is a direct translation of the C code, using unsafe ATS features and no additional typechecking. The rewritten d1_both.dats contains this version.

The code is quite ugly but compiles and matches the C version. When installed on a test system it shows the heartbleed bug still. It uses all the pointer arithmetic and hard coded offsets as the C code. Here’s a snippet of one branch of the function:

val buffer = OPENSSL_malloc(1 + 2 + $UN.cast2int(payload) + padding)
val bp = buffer
val () = $UN.ptr0_set<uchar> (bp, TLS1_HB_RESPONSE)
val bp = ptr0_succ<uchar> (bp)
val bp = s2n (payload, bp)
val () = unsafe_memcpy (bp, pl, payload)
val bp = ptr_add (bp, payload)
val () = RAND_pseudo_bytes (bp, padding)
val r = dtls1_write_bytes (s, TLS1_RT_HEARTBEAT, buffer,
                           3 + $UN.cast2int(payload) + padding)
val () = if r >= 0 && ptr_isnot_null (get_msg_callback (s)) then
           call_msg_callback (get_msg_callback (s),
                              1, get_version (s), TLS1_RT_HEARTBEAT,
                              buffer,
                              $UN.cast2uint (3 + $UN.cast2int(payload) + padding),
                              s, get_msg_callback_arg (s))
val () = OPENSSL_free (buffer)

It should be pretty easy to follow this comparing the code to the C version.

Safer ATS

The third stage was adding types to the unsafe ATS version to check that the pointer arithmetic is correct and no bounds errors occur. This version of d1_both.dats fails to compile if certain bounds checks aren’t asserted. If the assertloc at line 123, line 178 or line 193 is removed then a constraint error is produced. This error is effectively preventing the heartbleed bug.

Testable Version

The last stage I did was to implement the fix to the tls1_process_heartbeat function and factor out some of the helper routines so they could be shared. This is in the ats_safe branch, which is where the newer changes are happening. This version removes the assertloc usage and changes to graceful failure so it could be tested on a live site.

I tested this version of OpenSSL and heartbleed test programs fail to dump memory.

The approach to safety

The tls1_process_heartbeat function obtains a pointer to data provided by the sender and the amount of data sent from one of the OpenSSL internal structures. It expects the data to be in the following format:

byte     = hbtype
ushort   = payload length
byte[n]  = bytes of length 'payload length'
byte[16] = padding

The existing C code makes the mistake of trusting the ‘payload length’ the sender supplies and passes that to a memcpy. If the actual length of the data is less than the ‘payload length’ then random data from memory gets sent in the response.

In ATS pointers can be manipulated at will but they can’t be dereferenced or used unless there is a view in scope that proves what is located at that memory address. By passing around views, and subsets of views, it becomes possible to check that ATS code doesn’t access memory it shouldn’t. Views become like capabilities. You hand them out when you want code to have the capability to do things with the memory safely and take it back when it’s done.

Views

To model what the C code does I created an ATS view that represents the layout of the data in memory:

dataview record_data_v (addr, int) =
  | {l:agz} {n:nat | n > 16 + 2 + 1}
      make_record_data_v (l, n) of (ptr l, size_t n)
  | record_data_v_fail (null, 0) of ()

A ‘view’ is like a standard ML datatype but exists at type checking time only. It is erased in the final version of the program so has no runtime overhead. This view has two constructors. The first is for data held at a memory address l of length n. The length is constrained to be greater than 16 + 2 + 1 which is the size of the ‘byte’, ‘ushort’ and ‘padding’ mentioned previously. By putting the constraint here we immediately force anyone creating this view to check the length they pass in. The second constructor, record_data_v_fail, is for the case of an invalid record buffer.

The function that creates this view looks like:

implement get_record (s) = let
  val len = get_record_length (s)
  val data = get_record_data (s)
in
  if len > 16 + 2 + 1 then
    (make_record_data_v (data, len) | data, len)
  else
    (record_data_v_fail () | null_ptr1 (), i2sz 0)
end

Here the len and data are obtained from the SSL structure. The length is checked and the view is created and returned along with the pointer to the data and the length. If the length check is removed there is a compile error due to the constraint we placed earlier on make_record_data_v. Calling code looks like:

val (pf_data | p_data, data_len) = get_record (s)

p_data is a pointer. data_len is an unsigned value and pf_data is our view. In my code the pf_ suffix denotes a proof of some sort (in this case the view) and p_ denotes a pointer.

Proof functions

In ATS we can’t do anything at all with the p_data pointer other than increment, decrement and pass it around. To dereference it we must obtain a view proving what is at that memory address. To get specialized views specific to the data we want, I created some proof functions that convert the record_data_v view to views that provide access to memory. These are the proof functions:

(* These proof functions extract proofs out of the record_data_v to
   allow access to the data stored in the record. The constants for
   the size of the padding, payload buffer, etc. are checked within
   the proofs so that functions that manipulate memory are checked
   that they remain within the correct bounds and use the appropriate
   pointer values *)
prfun extract_data_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (array_v (byte, l, n),
   array_v (byte, l, n) -<lin,prf> record_data_v (l, n))

prfun extract_hbtype_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (byte @ l,
   byte @ l -<lin,prf> record_data_v (l, n))

prfun extract_payload_length_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (array_v (byte, l+1, 2),
   array_v (byte, l+1, 2) -<lin,prf> record_data_v (l, n))

prfun extract_payload_data_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (array_v (byte, l+1+2, n-16-2-1),
   array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l, n))

prfun extract_padding_proof {l:agz} {n:nat} {n2:nat | n2 <= n - 16 - 2 - 1}
  (pf: record_data_v (l, n), payload_length: size_t n2):
  (array_v (byte, l + n2 + 1 + 2, 16),
   array_v (byte, l + n2 + 1 + 2, 16) -<lin,prf> record_data_v (l, n))

Proof functions are run at type checking time. They manipulate proofs. Let’s break down what the extract_hbtype_proof function does:

prfun extract_hbtype_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (byte @ l,
   byte @ l -<lin,prf> record_data_v (l, n))

This function takes a single argument, pf, which is a record_data_v instance for an address l and length n. This proof argument is consumed: once the function is called, the proof cannot be accessed again (it is a linear proof). The function returns two things. The first is a proof byte @ l which says “there is a byte stored at address l”. The second is a linear proof function that takes the first proof we returned, consumes it so it can’t be reused, and returns the original proof we passed in as an argument.

This is a fairly common idiom in ATS. What it does is take a proof, destroy it, return a new one, and provide a way of destroying the new one and bringing back the old one. Here’s how the function is used:

prval (pf, pff) = extract_hbtype_proof (pf_data)
val hbtype = $UN.cast2int (!p_data)
prval pf_data = pff (pf)

prval is a declaration of a proof variable. pf is my idiomatic name for a proof and pff is what I use for proof functions that destroy proofs and return the original.

The !p_data is similar to *p_data in C. It dereferences what is held at the pointer. When this happens in ATS it searches for a proof that we can access some memory at p_data. The pf proof we obtained says we have a byte @ p_data so we get a byte out of it.

A more complicated proof function is:

prfun extract_payload_length_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (array_v (byte, l+1, 2),
   array_v (byte, l+1, 2) -<lin,prf> record_data_v (l, n))

The array_v view represents a contiguous array of memory. The three arguments it takes are the type of data stored in the array, the address of the beginning, and the number of elements. So this function consumes the record_data_v and produces a proof saying there is a two-element array of bytes held at the 1st byte offset from the original memory address held by the record view. Someone with access to this proof cannot access the entire memory buffer held by the SSL record. They can only get the 2 bytes from the 1st offset.

Safe memcpy

One more complicated view:

prfun extract_payload_data_proof {l:agz} {n:nat} (pf: record_data_v (l, n)):
  (array_v (byte, l+1+2, n-16-2-1),
   array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l, n))

This returns a proof for an array of bytes starting at the 3rd byte of the record buffer. Its length is equal to the length of the record buffer less the size of the padding and the first two data items. It’s used during the memcpy call like so:

prval (pf_dst, pff_dst) = extract_payload_data_proof (pf_response)
prval (pf_src, pff_src) = extract_payload_data_proof (pf_data)
val () = safe_memcpy (pf_dst, pf_src |
                      add_ptr1_bsz (p_buffer, i2sz 3),
                      add_ptr1_bsz (p_data, i2sz 3),
                      payload_length)
prval pf_response = pff_dst (pf_dst)
prval pf_data = pff_src (pf_src)

By having a proof that provides access to only the payload data area, we can be sure that the memcpy cannot copy memory outside those bounds. Even though the code does manual pointer arithmetic (the add_ptr1_bsz function) this is safe. An attempt to use a pointer outside the range of the proof results in a compile error.

The same concept is used when setting the padding to random bytes:

prval (pf, pff) = extract_padding_proof (pf_response, payload_length)
val () = RAND_pseudo_bytes (pf |
                            add_ptr_bsz (p_buffer, payload_length + 1 + 2),
                            padding)
prval pf_response = pff (pf)

Runtime checks

The code does runtime checks that constrain the bounds of various length variables:

if payload_length > 0 then
  if data_len >= payload_length + padding + 1 + 2 then
    ...
...

Without those checks a compile error is produced. The original heartbeat flaw was the absence of similar runtime checks. The code as structured can’t suffer from that flaw and still be compiled.

Testing

This code can be built and tested. First step is to install ATS2:

$ tar xvf ATS2-Postiats-0.0.7.tgz
$ cd ATS2-Postiats-0.0.7
$ ./configure
$ make
$ export PATSHOME=`pwd`
$ export PATH=$PATH:$PATSHOME/bin

Then compile the openssl code with my ATS additions:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git checkout -b ats_safe origin/ats_safe
$ ./config
$ make
$ make test

Try changing some of the code, modifying the constraints tests etc, to get an idea of what it is doing.

For testing in a VM, I installed Ubuntu, set up an nginx instance serving an HTTPS site and did something like:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git diff 5219d3dd350cc74498dd49daef5e6ee8c34d9857 >~/safe.patch
$ cd ..
$ apt-get source openssl
$ cd openssl-1.0.1e/
$ patch -p1 <~/safe.patch
  ...might need to fix merge conflicts here...
$ fakeroot debian/rules build binary
$ cd ..
$ sudo dpkg -i libssl1.0.0_1.0.1e-3ubuntu1.2_amd64.deb \
    libssl-dev_1.0.1e-3ubuntu1.2_amd64.deb
$ sudo /etc/init.d/nginx restart

You can then use a heartbleed tester on the HTTPS server and it should fail.

Conclusion

I think the approach of converting unsafe C code piecemeal worked quite well in this instance. Being able to combine existing C code and ATS makes this much easier. I only concentrated on detecting this particular programming error but it would be possible to use other ATS features to detect memory leaks, abstraction violations and other things. It’s possible to get very specific in defining safe interfaces at a cost of complexity in code.

Although I’ve used ATS for production code, this is my first time using ATS2. I may have missed idioms and library functions to make things easier, so try not to judge the verbosity or difficulty of the code based on this experiment. The ATS community is helpful in picking up the language. My approach to doing this was basically to add types and then work through the compiler errors, fixing each one until it builds.

One immediate question becomes “how do you trust your proof?” The record_data_v view and the proof functions that manipulate it define the level of checking that occurs. If they are wrong then the constraints checked by the program will be wrong. It comes down to having a trusted kernel of code (in this case the view and proof functions); users of that kernel can then be trusted to be correct, because incorrect use of the kernel is rejected at compile time, and that is what results in the stronger safety. From an auditing perspective it’s easier to check the small trusted kernel and then know the compiler will make sure pointer manipulations are correct.

The ATS specific additions are in the following files:

Categories: Mozilla-nl planet

Anthony Hughes: Google RMA, or how I finally got to use Firefox OS on Wind Mobile

Mozilla planet - Thu, 10/04/2014 - 22:28

The last 24 hours have really been quite an adventure in debugging. It all started last week when I decided to order a Nexus 5 from Google. It arrived yesterday, on time, and I couldn’t wait to get home to unbox it. Soon after unboxing my new Nexus 5 I would discover that all was not well.

After setting up my Google account and syncing all my data I usually like to try out the camera. This did not go very well. I was immediately presented with a “Camera could not connect” error. After rebooting a couple times the error continued to persist.

I then went to the internet to research my problem and got the usual advice: clear the cache, force quit any unnecessary apps, or do a factory reset. Try as I might, all of these efforts would fail. I actually tried a factory reset three times and that’s where things got weirder.

On the third factory reset I decided to opt out of syncing my data and just try the camera with a completely stock install. However, this time the camera icon was completely missing. It was absent from my home screen and the app drawer. It was absent from the Gallery app. The only way I was able to get the Camera app to launch was to select the camera button on my lock screen.

Now that I finally got to the Camera app I noticed it had defaulted to the front camera, so naturally I tried to switch to the rear. However when I tried this, the icon to switch cameras was completely missing. I tried some third party camera apps but they would just crash on startup.

After a couple hours jumping through these hoops between factory resets I was about to give in. I gave it one last ditch effort and flashed the phone using Google’s stock Android 4.4 APK. It took me about another hour between getting my environment set up and getting the image flashed to the phone. However the result was the same: missing camera icons and crashing all over the place.

It was now past 1am, I had been at this for hours. I finally gave in and called up Google. They promptly sent me an RMA tag and I shipped the phone back to them this morning for a full refund. And so began the next day of my adventure.

I was now at a point where I had to decide what I wanted to do. Was I going to order another Nexus 5 and trust that one would be fine or would I save myself the hassle and just dig out an old Android phone I had lying around?

I remembered that I still had a Nexus S which was perfectly fine, albeit getting a bit slow. After a bit of research on MDN I decided to try flashing the Nexus S to use B2G. I had never successfully flashed any phone to B2G before and I thought yesterday’s events might have been pushing toward this moment.

I followed the documentation, checked out the source code, sat through the lengthy config and build process (this took about 2 hours), and pushed the bits to my phone. I then swapped in my SIM card and crossed my fingers. It worked! It seemed like magic, but it worked. I can again do all the things I want to: make phone calls, take pictures, check email, and tweet to my hearts content; all on a phone powered by the web.

I have to say the process was fairly painless (apart from the hours spent troubleshooting the Nexus 5). The only problem I encountered was a small hiccup in the config.sh process. Fortunately, I was able to work around this pretty easily thanks to Bugzilla. I can’t help but recognize my success was largely due to the excellent documentation provided by Mozilla and the work of developers, testers, and contributors alike who came before me.

I’ve found the process to be pretty rewarding. I built B2G, which I’ve never succeeded at before; I flashed my phone, which I’ve never succeeded at before; and I feel like I learned something new today.

I’ve been waiting a long time to be able to test B2G 1.4 on Wind Mobile, and now I can. Sure I’m sleep deprived, and sure it’s not an “official” Firefox OS phone, but that does not diminish the victory for me; not one bit.


Categories: Mozilla-nl planet

Mozilla's quest for smartphone relevance may hinge on these Firefox OS 2.0 ... - VentureBeat

News collected via Google - Thu, 10/04/2014 - 19:03


Mozilla's quest for smartphone relevance may hinge on these Firefox OS 2.0 ...
VentureBeat
It's been a tough week for the Mozilla Foundation with the controversy over and resignation of cofounder and CEO Brendan Eich. But the organization has forward momentum, as shown by newly-released mockups of the promised interface redo in Firefox OS ...
Firefox OS 2.0 starts emerging from its cocoon (CNET blog)

Categories: Mozilla-nl planet

Gervase Markham: Copyright and Software

Mozilla planet - Thu, 10/04/2014 - 17:37

As part of our discussions on responding to the EU Copyright Consultation, Benjamin Smedberg made an interesting proposal about how copyright should apply to software. With Chris Riley’s help, I expanded that proposal into the text below. Mozilla’s final submission, after review by various parties, argued for a reduced term of copyright for software of 5-10 years, but did not include this full proposal. So I publish it here for comment.

I think the innovation, which came from Benjamin, is the idea that the spirit of copyright law means that proprietary software should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires.

We believe copyright terms should be much shorter for software, and that there should be a public benefit tradeoff for receiving legal protection, comparable to other areas of IP.

We start with the premise that the purpose of copyright is to promote new creation by giving authors an exclusive right, but that this right is necessarily time-limited because the public as a whole benefits from the public domain and the free sharing and reproduction of works. Given this premise, copyright policy has failed in the domain of software. All software has a much, much shorter life than the standard copyright term; by the end of the period, there is no longer any public benefit to be gained from the software entering the public domain, unlike virtually all other categories of copyrighted works. There is already more obsolete software out there than anyone can enumerate, and software as a concept is barely even 50 years old, so none of it is in the public domain. Any which did fall into the public domain after 50 or 70 years would be useful to no-one, as it would have been written for systems long obsolete.

We suggest two ideas to help the spirit of copyright be more effectively realized in the software domain.

Proprietary software (that is, software for which the source code is not immediately available for reuse anyway) should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires. Unlike a book, which can be read and copied by anyone at any stage before or after its copyright expires, software is often distributed as binary code which is intelligible to computers but very hard for humans to understand. Therefore, in order for software to properly fall into the public domain at the end of the copyright term, the source code (the human-readable form) needs to be made available at that time – otherwise, the spirit of copyright law is not achieved, because the public cannot truly benefit from the copyrighted material. An escrow system would be ideal to implement this.

This is also similar to the tradeoff between patent law and trade secret protection; you receive a legal protection for your activity in exchange for making it available to be used effectively by the broader public at the end of that period. Failing to take that tradeoff risks the possibility that someone will reverse engineer your methods, at which point they are unprotected.

Separately, the term of software copyright protection should be made much shorter (through international processes as relevant), and fixed for software products. We suggest that 14 years is the most appropriate length. This would mean that, for example, Windows XP would enter the public domain in August 2015, which is a year after Microsoft ceases to support it (and so presumably no longer considers it commercially viable). Members of the public who wish to continue to run Windows XP therefore have an interest in the source code being available so technically-capable companies can support them.

Categories: Mozilla-nl planet

Christie Koehler: An Explanation of the Heartbleed bug for Regular People

Mozilla planet - Thu, 10/04/2014 - 17:16

I’ve put this explanation together for those who want to understand the Heartbleed bug, how it fits into the bigger picture of secure internet browsing, and what you can do to mitigate its effects.

HTTPS vs HTTP (padlock vs no padlock)

When you are browsing a site securely, you use https and you see a padlock icon in the url bar. When you are browsing insecurely you use http and you do not see a padlock icon.

Firefox url bar for HTTPS site (above) and non-HTTPS (below).

HTTPS relies on something called SSL/TLS.

Understanding SSL/TLS

SSL stands for Secure Sockets Layer and TLS stands for Transport Layer Security. TLS is the later version of the original, proprietary, SSL protocol developed by Netscape. Today, when people say SSL, they generally mean TLS, the current, standard version of the protocol.

Public and private keys

The TLS protocol relies heavily on public-key or asymmetric cryptography. In this kind of cryptography, two separate but paired keys are required: a public key and a private key. The public key is, as its name suggests, shared with the world and is used to encrypt plain-text data or to verify a digital signature. (A digital signature is a way to authenticate identity.) A matching private key, on the other hand, is used to decrypt data and to generate digital signatures. A private key should be safeguarded and never shared. Many private keys are protected by pass-phrases, but merely having access to the private key means you can likely use it.
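To make this concrete, here is a small sketch using Node.js’s built-in crypto module (my own illustration, not part of the original explanation; the one-shot sign/verify calls need Node 12 or later):

const crypto = require('crypto');

// Generate a matched key pair: the public half may be shared freely,
// the private half must be kept secret.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

// Anyone can encrypt with the public key...
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('secret'));
// ...but only the private key holder can decrypt.
console.log(crypto.privateDecrypt(privateKey, encrypted).toString()); // 'secret'

// The private key generates a digital signature; the public key verifies it.
const sig = crypto.sign('sha256', Buffer.from('a message'), privateKey);
console.log(crypto.verify('sha256', Buffer.from('a message'), publicKey, sig)); // true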

Authentication and encryption

The purpose of SSL/TLS is to authenticate and encrypt web traffic.

Authenticate in this case means “verify that I am who I say I am.” This is very important because when you visit your bank’s website in your browser, you want to feel confident that you are visiting the web servers of — and thereby giving your information to — your actual bank and not another server claiming to be your bank. This authentication is achieved using something called certificates that are issued by Certificate Authorities (CA). Wikipedia explains thusly:

The digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the public key that is certified. In this model of trust relationships, a CA is a trusted third party that is trusted by both the subject (owner) of the certificate and the party relying upon the certificate.

In order to obtain a valid certificate from a CA, website owners must submit, at minimum, their server’s public key and demonstrate that they have access to the website (domain).

Encrypt in this case means “encode data such that only authorized parties may decode it.” Encrypting internet traffic is important for sensitive or otherwise private data because it is trivially easy to eavesdrop on internet traffic. Information transmitted without SSL is usually sent in plain text and as such is clearly readable by anyone. This might be acceptable for general internet browsing. After all, who cares who knows which NY Times article you are reading? But it is not acceptable for a range of private data including user names, passwords and private messages.

Behind the scenes of an SSL/TLS connection

When you visit a website with HTTPS enabled, a multi-step process occurs so that a secure connection can be established. During this process, the server and client (browser) send messages back and forth in order to a) authenticate the server’s (and sometimes the client’s) identity and b) negotiate what encryption scheme, including which cipher and which key, they will use for the session. Identities are authenticated using the digital certificates mentioned previously.

When all of that is complete, the secure connection is established and the server and client send traffic back and forth to each other.

All of this happens without you ever knowing about it. Once you see your bank’s login screen the process is complete, assuming you see the padlock icon in your browser’s url bar.
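For the curious, here is a tiny Node.js sketch of a client doing exactly this (my own illustration; the tls module performs the whole negotiation for you):

const tls = require('tls');

// The handshake (certificate check and cipher negotiation) completes
// before the 'secureConnect' callback fires.
const socket = tls.connect(443, 'example.com', { servername: 'example.com' }, () => {
  console.log('server certificate valid:', socket.authorized);
  console.log('negotiated cipher:', socket.getCipher().name);
  socket.end();
});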

Keepalives and Heartbeats

Even though establishing an SSL connection happens almost imperceptibly to you, it does have an overhead in terms of computer and network resources. To minimize this overhead, network connections are often kept open and active until a given timeout threshold is exceeded. When that happens, the connection is closed. If the client and server wish to communicate again, they need to re-negotiate the connection and re-incur the overhead of that negotiation.

One way to forestall a connection being closed is via keepalives. A keepalive message is used to tell a server “Hey, I know I haven’t used this connection in a little while, but I’m still here and I’m planning to use it again really soon.”
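TCP offers the same idea at a lower layer. As a small Node.js illustration of my own (this is TCP keepalive, a cousin of the TLS-level heartbeat described next):

const net = require('net');

const socket = net.connect(80, 'example.com', () => {
  // Ask the OS to probe the peer after 60 seconds of silence so the
  // idle connection is kept open instead of timing out.
  socket.setKeepAlive(true, 60000);
});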

Keepalive functionality was added to the TLS protocol specification via the Heartbeat Extension. Instead of “Keepalives,” they’re called “Heartbeats,” but they do basically the same thing.

Specification vs Implementation

Let’s pause for a moment to talk about specifications vs implementations. A protocol is a defined way of doing something. In the case of TLS, that something is encrypted network communications. When a protocol is standardized, it means that a lot of people have agreed upon the exact way that protocol should work, and this way is outlined in a specification. The specification for TLS is collaboratively developed, maintained and promoted by the standards body Internet Engineering Task Force (IETF). A specification in and of itself does not do anything. It is a set of documents, not a program. In order for a specification to do something, it must be implemented by programmers.

OpenSSL implementation of TLS

OpenSSL is one implementation of the TLS protocol. There are others, including the open source GnuTLS as well as proprietary implementations. OpenSSL is a library, meaning that it is not a standalone software package, but one that is used by other software packages. These include the very popular webserver Apache.

The Heartbleed bug only applies to webservers with SSL/TLS enabled, and only those using specific versions of the open source OpenSSL library, because the bug relates to an error in the code of that library, specifically the heartbeat extension code. It is not related to any errors in the TLS specification or in any of the underlying cipher suites.

Usually this would be good news. However, because OpenSSL is so widely used, particularly the affected versions, this simple bug has tremendous reach in terms of the number of servers and therefore the number of users it potentially affects.

What the heartbeat extension is supposed to do

The heartbeat extension is supposed to work as follows:

  • A client sends a heartbeat message to the server.
  • The message contains two pieces of data: a payload and the size of that payload. The payload can be anything up to 64KB.
  • When the server receives the heartbeat message, it is to add a bit of extra data to it (padding) and send it right back to the client.

Pretty simple, right? Heartbeat isn’t supposed to do anything other than let the server and client know they are each still there and accepting connections.

What the heartbeat code actually does

In the code for affected versions (1.0.1-1.0.1f) of the OpenSSL heartbeat extension, the programmer(s) made a simple but horrible mistake: They failed to verify the size of the received payload. Instead, they accepted what the client said was the size of the payload and returned this amount of data from memory, thinking it should be returning the same data it had received. Therefore, a client could send a payload of 1KB, say it was 64KB and receive that amount of data back, all from server memory.

If that’s confusing, try this analogy: Imagine you are my bank. I show up and make a deposit. I say the deposit is $64, but you don’t actually verify this amount. Moments later I request a withdrawal of the $64 I say I deposited. In fact, I really only deposited $1, but since you never checked, you have no choice but to give me $64, $63 of which doesn’t actually belong to me.
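In code terms, the flaw looks roughly like the following JavaScript sketch (my own illustration; the real code is C inside OpenSSL):

// Buggy: trusts the payload length claimed by the client.
function heartbeatResponse(memory, payloadOffset, claimedLength) {
  // If fewer bytes than claimedLength actually arrived, this slice
  // runs past the real payload into unrelated server memory.
  return memory.slice(payloadOffset, payloadOffset + claimedLength);
}

// Fixed: check the claim against the bytes actually received.
function heartbeatResponseFixed(memory, payloadOffset, claimedLength, receivedLength) {
  if (claimedLength > receivedLength) {
    return null; // silently discard the malformed heartbeat
  }
  return memory.slice(payloadOffset, payloadOffset + claimedLength);
}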

And this is exactly how someone could exploit this vulnerability. What comes back from memory doesn’t belong to the client that sent the heartbeat message, but it’s given a copy of it anyway. The data returned is random, but would be data that the OpenSSL library had been storing in memory. This can include pre-encryption (plain-text) data, such as your user names and passwords. It could also technically be your server’s private key (because that is used in the securing process) and/or your server’s certificate (which is also not something you should share).

The ability to retrieve a server’s private key is very bad because that private key could be used to decrypt all past, present and future traffic to the server. The ability to retrieve a server’s certificate is also bad because it gives the ability to impersonate that server.

This, coupled with the widespread use of OpenSSL, is why this bug is so terribly bad. Oh, and it gets worse…

Taking advantage of this vulnerability leaves no trace

What’s worse is that logging isn’t part of the Heartbeat extension. Why would it be? Keepalives happen all the time and generally do not represent transmission of any significant data. There’s no reason to take up valuable time accessing the physical disk or taking up storage space to record that kind of information.

Because there is no logging, there is no trace left when someone takes advantage of this vulnerability.

The code that introduced this bug has been part of OpenSSL for 2+ years. This means that any data you’ve communicated to servers with this bug since then has the potential to be compromised, but there’s no way to determine definitively whether it was.

This is why most of the internet is collectively freaking out.

What do server administrators need to do?

Server (website) administrators need to, if they haven’t already:

  1. Determine whether or not their systems are affected by the bug. (test)
  2. Patch and/or upgrade affected systems. (This will require a restart)
  3. Revoke and reissue keys and certificates for affected systems.

Furthermore, I strongly recommend you enable Perfect forward secrecy to safeguard data in the event that a private key is compromised:

When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session. So intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.
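As a rough sketch of what this means in practice, a Node.js server can be told to prefer ephemeral (ECDHE) key exchange; the certificate file names here are hypothetical, and real deployments would configure the equivalent in nginx or Apache:

const tls = require('tls');
const fs = require('fs');

// ECDHE cipher suites negotiate a fresh, ephemeral session key per
// connection, so a later theft of server-key.pem cannot decrypt
// previously recorded traffic.
const server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),   // hypothetical paths
  cert: fs.readFileSync('server-cert.pem'),
  ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:!RC4:!MD5',
  honorCipherOrder: true,
});
server.listen(8443);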

What do users (like me) need to do?

The most important thing regular users need to do is change your passwords on critical sites that were vulnerable (but only after they’ve been patched). Do you need to change all of your passwords everywhere? Probably not. Read You don’t need to change all your passwords for some good tips.

Additionally, if you’re not already using a password manager, I highly recommend LastPass, which is cross-platform and works on pretty much every device. Yesterday LastPass announced they are helping users to know which passwords they need to update and when it is safe to do so.

If you do end up trying LastPass, check out my guide for setting it up with two-factor auth.

Further Reading


If you like visuals, check out this great video showing how the Heartbleed exploit works.

If you’re interested in learning more about networking, I highly recommend Ilya Grigorik‘s High Performance Browser Networking, which you can also read online for free.

If you want some additional technical details about Heartbleed (including actual code!) check out these posts:

Oh, and you can listen to Kevin and me talk about Heartbleed on In Beta episode 96, “A Series of Mathy Things.”

Conclusion

Categories: Mozilla-nl planet

Henrik Skupin: Firefox Automation report – week 7/8 2014

Mozilla planet - Thu, 10/04/2014 - 10:36

The current work load is still affecting the time I have for getting out our automation status reports. The current updates are a bit old but still worth mentioning. So let’s get them out.

Highlights

As mentioned in my last report, we had issues with Mozmill on Windows while running our restart tests. During a restart of Firefox, Mozmill wasn’t waiting long enough to detect that the old process was gone, so a new instance was started immediately. Sadly that process failed with the profile already in use error. The reason here was broken handling of process.poll() in mozprocess, which Henrik fixed in bug 947248. Thankfully no other Mozmill release was necessary. We only re-created our Mozmill environments with the new mozprocess version.

With the ongoing pressure of getting automated tests implemented for the Metro mode of Firefox, our Softvision automation team members concentrated their work on adding new library modules and enhancing existing ones. Another big step here was the addition of Metro support to our Mozmill dashboard by Andrei. With that we got new filter settings to only show results for Metro builds.

When Henrik noticed one morning that our Mozmill-CI staging instance was running out of disk space, he did some investigation and saw that we were using the identical settings from production for the build retention policy. Without having more than 40GB of disk space we were trying to keep the latest 100 builds for each job. This certainly won’t work, so Cosmin worked on reducing the number of retained builds to 5. Besides this we were also able to fix our l10n testrun for localized builds of Firefox Aurora.

Given that we stopped support for Mozmill 1.5 and continued with Mozmill 2.0 a while ago, we totally missed updating our Mozmill tests documentation on MDN. Henrik worked on getting all of it up-to-date, so new community members won’t struggle anymore.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 7 and week 8.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda and notes from the Firefox Automation meetings of week 7 and week 8.

Categories: Mozilla-nl planet

Firefox OS and Medic Mobile use the Web to Connect the World to Healthcare

Mozilla Blog - Thu, 10/04/2014 - 08:01

At Mozilla, we are dedicated to putting the power of the Web in people’s hands in support of our mission to promote openness, innovation and opportunity on the Web. We’re pleased to announce today that we’re partnering with Medic Mobile, a leader in mHealth solutions for developing countries, to take that mission to the world’s most remote and underserved communities.

Medic Mobile pioneered the use of feature phones and text messaging to connect remote communities to health care. Their technology tracks outbreaks, schedules maternal health visits, monitors medicine stock, and connects villages with hospitals. The organization works in 20 countries spanning Africa, Asia and Latin America, and supports Community Health Workers overseeing 5 million people. Medic Mobile’s Hope Phones campaign, a cellphone recycling initiative, is one of many ways that the organization raises funds for new technology needed in the field.

Medic Mobile is the first community health app for Firefox OS and is now available in Firefox Marketplace. This early version allows health workers to document measles outbreaks from their phones, and more features will be added with future updates.

“We see harnessing the Web as our next big step, bringing location data, images, visualizations and analytics to the fingertips of Community Health Workers. Firefox OS is device-agnostic and provides a Web-based, low-cost platform for the developing world,” said Josh Nesbit, CEO of Medic Mobile.

“We look forward to expanding Medic Mobile’s reach through Firefox OS, and will continue pursuing regionalized and localized community apps like Medic Mobile that make the Web even more useful and relevant to people,” added Mary Ellen Muckerman, VP of Brand Strategy and Services at Mozilla.

We’re proud to partner with Medic Mobile, aligning our non-profit missions of using technology to change people’s lives.

For more information:


Categories: Mozilla-nl planet

Byron Jones: happy bmo push day!

Mozilla planet - Thu, 10/04/2014 - 07:21

the following changes have been pushed to bugzilla.mozilla.org:

  • [924265] add an alert date field/mechanism to the “Infrastructure & Operations” product
  • [987765] Updates to Developer Documentation product’s “form.doc” form and configuration
  • [891757] “Additional hours worked” displayed inline with comments is now superfluous
  • [986590] Confusing error message when not finding reviewer
  • [987940] arbitrary product name (text) injection in guided workflow
  • [988175] Needinfo dropdown should include “myself”
  • [961843] Reset password token leakage
  • [984505] Link component and product to browse for other bugs in this category
  • [987521] flag activity api needs to prohibit requests which return the entire table
  • [990982] Use <optgroup/> to group products into classifications in the product drop-down on show_bug.cgi (see the sketch after this list)
  • [991477] changing a tracking flag’s value doesn’t result in the value being updated on bugs
  • [988414] The drop down menu icon next to the user name is not visible in Chrome on Mac OSX
  • [994467] remove “anyone” from the needinfo menu
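
One of these changes is easy to picture in code. Bug 990982 groups the product drop-down on show_bug.cgi by classification using <optgroup/>; the DOM sketch below shows the general technique. The classification map and product names are made up for illustration, and this is not BMO's actual code.

// Build a product <select> grouped by classification via <optgroup>.
// The data below is invented for the example.
var classifications = {
  'Client Software': ['Firefox', 'Firefox for Android'],
  'Other': ['bugzilla.mozilla.org']
};

var select = document.createElement('select');
Object.keys(classifications).forEach(function (classification) {
  var group = document.createElement('optgroup');
  group.label = classification; // the non-selectable group heading
  classifications[classification].forEach(function (product) {
    var option = document.createElement('option');
    option.value = product;
    option.textContent = product;
    group.appendChild(option);
  });
  select.appendChild(group);
});
document.body.appendChild(select);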

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categories: Mozilla-nl planet

Selena Deckelmann: Python Core Summit: notes from my talk today

Mozilla planet - Wed, 09/04/2014 - 22:03

I gave a short talk today about new coders and contributors to developer documentation. Here are my notes!

Me: Selena Deckelmann
Data Architect, Mozilla
Major contributor to PostgreSQL, PyLadies organizer in Portland, OR

Focusing on Documentation, Teaching and Outreach

Two main forks of thought around teaching and outreach:

  1. Brand new coders: PyLadies, Software Carpentry and universities are the main communities represented
  2. New contributors to Python and its ecosystem

1. Brand new coders: PyLadies, Software Carpentry and universities are the main communities represented

(a) Information architecture of the website

Where do you go if you are a teacher or want to teach a workshop? Totally unclear on python.org. The site could really use a section for this, or a dedicated microsite.

Version 2 vs. 3 is very confusing for new developers. Most workshops default to 2; some now require 3. Maybe clearly mark which version each workshop uses. In general, this is a recurring point of confusion for anyone encountering the site for the first time.

Possible solution: Completely separate “brand new coder” tutorial. Jessica McKellar would like to write this.

(b) Packaging and Installation problems: see the earlier long conversation in this meeting about this. Many problems are linked to having to compile C code while installing with pip.

(c) New coders can contribute by documenting issues around install and setup. We could make this easier: maybe direct initial reports to Stack Overflow, and then float solutions up to bugs.python.org

2. New contributors to Python and its ecosystem, with a focus on things useful for keeping documentation and tutorials up-to-date and relevant

(a) GNOME Outreach Program for Women: Python is participating!

More people from core should participate as mentors! The PSF is funding 2-3 students this cycle; Twisted has participated for a while and had a great experience. This program is great because:

  • Supports code and non-code contribution
  • Developer community seems very cohesive, participants seem to join communities and stick around
  • Strong diversity support
  • Participants don’t have to be students
  • Participants are paid for 3 months
  • Participants come from geographically diverse communities
  • To participate, applicants must submit a patch or provide some other pre-defined contribution before their application is even accepted

Jessica McKellar and Lynn Root are mentors for Python itself. See them for more details about this round! Selena is a coordinator and former mentor for Mozilla's participation and is also available to answer questions.

(b) Write the Docs conference is a Python-inspired community around documentation.

(c) OpenStack: Anne Gentle and her blog. She is a three-year participant in the OpenStack community and a great resource for information about building a technical documentation community.

(d) Better tooling for contribution could be a great vector for getting new contributors.

  • The wiki is a place for information to go and die (no clear owners, neglected SEO, etc.). Maybe keep tutorials in documentation repos separate from the core code repos.
  • Carefully consider the approval process: put the people most dedicated to maintaining the tutorials in charge of them.

(e) bugs.python.org

Type selection is not relevant to ‘documentation’ errors/fixes. Either remove ‘type’ from the UI or provide relevant types. I recommend removing ‘type’ as a required (or implied required) form field when entering a bug.

The larger issue here is how we design for contribution of docs:

  • What language do we use in our input systems?
  • What workflow do we expect technical writers to follow to get their contributions included?
  • What is the approval process?

Also see the "tooling for contribution" notes above.

Categories: Mozilla-nl planet

Mozilla CEO's downfall a lesson to all execs: 'Stay boring' - Fortune

News collected via Google - Wed, 09/04/2014 - 21:46

Mozilla CEO's downfall a lesson to all execs: 'Stay boring'
Fortune
Here's a recap: Shortly after taking over as CEO of Mozilla, the open-source computing company known for its Firefox web browser, Brendan Eich found himself on the receiving end of public outrage over revelations that he'd donated $1,000 to anti-gay ...
No, Sam Yagan and OkCupid Aren't Hypocrites - Slate Magazine (blog)
Brendan Eich's Resignation: Did Mozilla CEO Step Down Because Of A 'Gay ... - Huffington Post
Mozilla's Statement on the Brendan Eich Controversy, Explained - National Review Online (blog)
Register - USA TODAY - Christian Post
all 414 news articles »
Categories: Mozilla-nl planet

How Facebook's Political Donations Could Make It The Next Mozilla - ThinkProgress

News collected via Google - Wed, 09/04/2014 - 21:42

How Facebook's Political Donations Could Make It The Next Mozilla
ThinkProgress
Facebook shareholders called out the social network for backing politicians whose positions on public policy issues, such as Internet freedom, LGBT rights, and the environment, don't align with Facebook's image, according to filings with the Securities ...
Shareholders slam Facebook over 'incongruent' PAC contributions on gay rights ... - Computerworld New Zealand

all 15 news articles » - Google Nieuws
Categories: Mozilla-nl planet

RedState, Breitbart Boycott Mozilla to Protest CEO's 'Totalitarian' Ousting - Mediaite

News collected via Google - Wed, 09/04/2014 - 19:58

RedState, Breitbart Boycott Mozilla to Protest CEO's 'Totalitarian' Ousting
Mediaite
Conservative sites RedState and Breitbart have both launched boycotts against Mozilla in protest of CEO Brendan Eich being pushed out after his donation to an anti-gay marriage cause went public, displaying big, front-page messages on their sites ...

and more »
Categories: Mozilla-nl planet

Gervase Markham: Recommended Reading

Mozilla planet - Wed, 09/04/2014 - 19:21

This response to my recent blog post is the best post on the Brendan situation that I’ve read from a non-Mozillian. His position is devastatingly understandable.

Categories: Mozilla-nl planet

Joel Maher: is a phone too hard to use?

Mozilla planet - Wed, 09/04/2014 - 18:43

Working at Mozilla, I get to see a lot of great things. One of them is collaborating with my team (as we are almost all remoties), which I have been doing for almost 6 years. Around 3 years ago we switched to using Vidyo as a way to communicate in meetings. This is great: we can see and hear each other. Unfortunately, Heartbleed came out and affected Mozilla's Vidyo servers, so yesterday and today we have been without Vidyo.

Now I am getting meeting cancellation notices. Why are we cancelling meetings? Did meetings not happen 3 years ago? Mozilla actually creates an operating system for a … phone, and in fact our old teleconferencing system is still in place. Personally, I always put Vidyo in the background during meetings and keep IRC in the foreground. Am I in the minority?

I am not advocating scrapping Vidyo; instead, I would like to attend meetings, and if we find they cannot be held without Vidyo, we should cancel them (and not reschedule them).

Meetings existed before Vidyo, and open source existed before GitHub; we don't need the latest and greatest things to function. Pick up a phone and discuss what needs to be discussed.


Categories: Mozilla-nl planet
