Mozilla Nederland
The Dutch Mozilla community

Battle of the software giants: Microsoft, Mozilla, Google, Adobe and Apple in ... - CHIP Online

News collected via Google - Sun, 29/05/2016 - 14:07

CHIP Online

Battle of the software giants: Microsoft, Mozilla, Google, Adobe and Apple in ...
CHIP Online
But which vendor is actually the big software kingpin here? Whose software has been downloaded most often? One thing is clear: it has to be one of the absolute software giants. That is why we put the heavyweights Microsoft, Mozilla, Google, ...

Categories: Mozilla-nl planet

What is the Role of Mozilla’s Executive Chair?

Mitchell Baker - Sat, 28/05/2016 - 01:53

What does the Executive Chair do at Mozilla? This question comes up frequently in conversations with people inside and outside of Mozilla. I want to answer that question and clearly define my role at Mozilla. The role of Executive Chair is unique and entails many different responsibilities. In particular at Mozilla, the Executive Chair is something more than the well-understood role of “Chairman of the Board.” Because Mozilla is a very different sort of organization, the role of Executive Chair can be highly customized and personal. It is not generally an operational role, although I may initiate and oversee some programs and initiatives.

In this post I’ll outline the major areas I’m focused on. In subsequent posts I’ll go into more detail.

#1. Chair the Board

This portion of my role is similar to the more traditional Chair role. At Mozilla, in this capacity I work on mission focus, governance, the development and operation of the Board, and the selection, support and evaluation of the most senior executives. In our case these are Mark Surman, Executive Director of the Mozilla Foundation, and Chris Beard, CEO of the Mozilla Corporation. Less traditionally, this portion of my role includes an ongoing activity I call “weaving all aspects of Mozilla into a whole.” Mozilla is an organizationally complex mixture of communities, legal entities and programs. Unifying this into “one Mozilla” is important.

#2. Represent Mozilla and our mission to Mozillians and the world

This is our version of the general “public spokesperson” role. In this part of my role, I speak at conferences, events and to the media to communicate Mozilla’s message. The goal of this portion of my role is to grow our influence and educate the world on issues that are important to us. This role is particularly important as we transform the company and the products we create, and as we refocus on entirely new challenges to the Open Web, interoperability, privacy and security.

#3. Reform the ways in which Mozilla values are reflected in our culture, management and leadership

This is the core of the work that is intensely customized for Mozilla. It is an area where Mozilla looks to me for leadership, and for which I have a unique vision. Mozilla’s core DNA is a mix of the open source/free software movement and the open architecture of the Internet. We were born as a radically open, radically participatory organization, unbound by traditional corporate structure. We played a role in bringing the “open” movement into mainstream consciousness. How does and how can this DNA manifest itself today? How do we better integrate this DNA at our current size? Needless to say, I work hand-in-hand with Chris Beard and Mark Surman in these areas.

#4. Strategically advise Mozilla’s technology and product direction

I’ve played this role for just over 20 years now, working closely with Mozilla’s technologists, individual contributors and leadership. I help us take new directions that might be difficult to chart. And in this role I can take risks that may make us uncomfortable in the shorter term but yield us great value over the longer term. By helping to point us towards the cutting edge of our technology, I reinforce the importance of change and adaptation in how we express our values.

#5. Help Mozilla ideas expand into new contexts

I’ve been working with Mark Surman on this topic since he joined us. We’ve expanded our mission and programs into digital literacy and education, journalism, science, women and technology and now the Mozilla Leadership Network. I have also championed Mozilla’s expanded efforts in public policy. I continue to look at how we can do more (and am always open to suggestions).

So these are the different parts of my role. Hopefully this provides you with a framework for understanding what I do and how I see myself interacting with Mozilla. I’m planning to write a series of posts describing the work underway in these areas. Please send comments, feedback or questions to the “office of the chair” mailing list. And thanks for your interest in Mozilla.

Categories: Mozilla-nl planet

Mozilla throws support behind privacy-boosting GDPR updates - ITProPortal

News collected via Google - Fri, 27/05/2016 - 12:39

ITProPortal

Mozilla throws support behind privacy-boosting GDPR updates
ITProPortal
Mozilla says that GDPR is designed to benefit both companies and the public, and recognises the importance of the fact that it will have an impact far beyond the borders of Europe. Denelle Dixon-Thayer, Chief Legal and Business Officer at Mozilla ...
Mozilla welcomes privacy-boosting GDPR data protection law updates - BetaNews

all 53 news articles »
Categories: Mozilla-nl planet

Mozilla publishes transparency report for 2015 - soeren-hentzschel.at

News collected via Google - Thu, 26/05/2016 - 17:53

Mozilla publishes transparency report for 2015
soeren-hentzschel.at
Mozilla has published a transparency report for the past year. In it, Mozilla addresses questions such as how frequently governments request that user data be handed over, that content be removed, or reported copyright and ...

Categories: Mozilla-nl planet

Court decision ruins the whole Tor FBI Mozilla problem - Inquirer

News collected via Google - Thu, 26/05/2016 - 17:10

Inquirer

Court decision ruins the whole Tor FBI Mozilla problem
Inquirer
SOFTWARE COMPANY Mozilla has failed in its attempt to get the FBI to release information about a Tor vulnerability in a case about child abuse because a judge in the case has dismissed all the evidence. The FBI has declined the opportunity to inform ...

Categories: Mozilla-nl planet

Mozilla welcomes privacy-boosting GDPR data protection law updates - BetaNews

News collected via Google - Wed, 25/05/2016 - 19:18

Mozilla welcomes privacy-boosting GDPR data protection law updates
BetaNews
Denelle Dixon-Thayer, Chief Legal and Business Officer at Mozilla Corporation says that the regulation will help to improve trust and security. She welcomes regulation being updated as the previous data protection directive was drafted when hardly ...

and more »
Categories: Mozilla-nl planet

Mozilla, Reddit, Vimeo, And Others Call On FCC To Publicize Its Zero-Rating Plans - Tom's Hardware

News collected via Google - Tue, 24/05/2016 - 23:16

Mozilla, Reddit, Vimeo, And Others Call On FCC To Publicize Its Zero-Rating Plans
Tom's Hardware
Mozilla, Reddit, Vimeo, DuckDuckGo, Kickstarter, and tens of other companies sent an open letter to the FCC, urging it to adopt some clear and transparent rules against zero-rating, as the telecommunications companies continue to push the limits of net ...

Categories: Mozilla-nl planet

Mozilla translated into indigenous languages - El Siglo de Torreón

News collected via Google - Tue, 24/05/2016 - 21:55

El Siglo de Torreón

Mozilla translated into indigenous languages
El Siglo de Torreón
To combat the idea that certain languages are not compatible with technology, Mozilla is betting on a process of linguistic revitalization, aiming to become the first browser worldwide translated into indigenous languages from across Latin America.

and more » Google News
Categories: Mozilla-nl planet

Reddit, Mozilla, Others Urge FCC To Formally Investigate Broadband Usage ... - Techdirt (blog)

News collected via Google - Tue, 24/05/2016 - 20:47

Motherboard

Reddit, Mozilla, Others Urge FCC To Formally Investigate Broadband Usage ...
Techdirt (blog)
But in a letter sent to FCC Commissioners (pdf) this week, a coalition of companies including Yelp, Vimeo, Foursquare, Kickstarter, Medium, Mozilla and Reddit have urged the agency to launch a more formal -- but also transparent -- probe of ISP ...
Medium, Mozilla, and Kickstarter Signed a Letter Against Zero-Rating - Motherboard
As mobile carriers ramp up bribery program, Internet coalition says no to "zero ... - Boing Boing
Silicon Valley Protests 'Zero-Rating' Schemes, Calls For New FCC Rules - MediaPost Communications

all 10 news articles »
Categories: Mozilla-nl planet

Medium, Mozilla, and Kickstarter Signed a Letter Against Zero-Rating - Motherboard

News collected via Google - Tue, 24/05/2016 - 11:07

Motherboard

Medium, Mozilla, and Kickstarter Signed a Letter Against Zero-Rating
Motherboard
A coalition of leading open internet advocates is pressuring federal regulators to crack down on the controversial broadband industry practice of “zero-rating,” calling it a threat to net neutrality, the principle that all content on the internet ...

and more »
Categories: Mozilla-nl planet

Firefox 46: Mozilla distributes Firefox Hello 1.3 - soeren-hentzschel.at

News collected via Google - Mon, 23/05/2016 - 23:58

soeren-hentzschel.at

Firefox 46: Mozilla distributes Firefox Hello 1.3
soeren-hentzschel.at
Since Firefox 45, Firefox Hello has been implemented as a system add-on. Decoupling it from the Firefox core makes it possible to update individual Firefox components independently of the rest of the browser. With Firefox Hello 1.3, Mozilla is now for the ...

Categories: Mozilla-nl planet

Mozilla Firefox – privacy protection tips, part 3 - PC World.cz

News collected via Google - Mon, 23/05/2016 - 18:52

PC World.cz

Mozilla Firefox – privacy protection tips, part 3
PC World.cz
Now let's take a look at the settings for the address bar, which in Mozilla Firefox is also referred to as the Location Bar. If you do not want Mozilla to suggest websites based on your saved browsing history, bookmarks or open ...

Categories: Mozilla-nl planet

Yunier José Sosa Vázquez: Firefox for iOS improves its security and gets you around the Web faster

Mozilla planet - Mon, 23/05/2016 - 14:39

Last week Mozilla released a new version of Firefox for iOS, and at Mozilla Hispano we show you what's new. Mainly, this release improves people's privacy and security when browsing the Web and brings a more streamlined experience that gives you greater control over your mobile browsing.

What's new in this update?

The iOS Today widget: We know that getting what you are looking for on the Web quickly matters to you, especially on your phone. For that reason, you can now access Firefox through the iOS Today widget to open new tabs or a recently copied link.

The iOS Today widget in Firefox for iOS

The Awesome Bar: From now on, typing in the address bar shows the bookmarks, history entries and search suggestions that match the term you typed. This makes getting to your favorite websites faster and easier.

The Awesome Bar showing bookmarks and search suggestions.

Manage your security: By default, Firefox helps keep you safe by warning you when the connection to a given website is not secure. When you try to access an insecure site, you will see an “error” message warning you that the connection cannot be trusted, protecting you before you reach such pages. With Firefox for iOS, you can temporarily dismiss those error messages for pages you consider “safe” but that Firefox may flag as potentially insecure.

Certificate error in Firefox for iOS

Because the mechanism Apple uses for downloading and installing applications on its phones is very complicated, we cannot offer this version for download from our site. Perhaps later, if this rule changes, we will be able to do so and complete our set of Firefox versions. So, to try out and enjoy these new features added to Firefox for iOS, you must download this update from the App Store.

Download on the App Store

Sources: The Mozilla Blog and Mozilla Press

Categories: Mozilla-nl planet

Niko Matsakis: Unsafe abstractions

Mozilla planet - Mon, 23/05/2016 - 14:17

The unsafe keyword is a crucial part of Rust’s design. For those not familiar with it, the unsafe keyword is basically a way to bypass Rust’s type checker; it essentially allows you to write something more like C code, but using Rust syntax.
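
As a minimal toy sketch of what that bypass looks like (my own illustration, not code from any particular library): taking a raw pointer is ordinary safe Rust, but dereferencing it is only accepted inside an unsafe block.

fn main() {
    let x: u32 = 10;

    // Taking a raw pointer is safe; the compiler just stops tracking it.
    let p: *const u32 = &x;

    // Dereferencing it is not: via `unsafe` we promise the pointer is
    // still valid. Here it is, because `p` still points at `x`.
    let y = unsafe { *p };
    assert_eq!(y, 10);
}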

The existence of the unsafe keyword sometimes comes as a surprise at first. After all, isn’t the point of Rust that Rust programs should not crash? Why would we make it so easy then to bypass Rust’s type system? It can seem like a kind of flaw in the design.

In my view, though, unsafe is anything but a flaw: in fact, it’s a critical piece of how Rust works. The unsafe keyword basically serves as a kind of escape valve – it means that we can keep the type system relatively simple, while still letting you pull whatever dirty tricks you want to pull in your code. The only thing we ask is that you package up those dirty tricks with some kind of abstraction boundary.

This post introduces the unsafe keyword and the idea of unsafety boundaries. It is in fact a lead-in for another post I hope to publish soon that discusses a potential design of the so-called Rust memory model, which is basically a set of rules that help to clarify just what is and is not legal in unsafe code.

Unsafe code as a plugin

I think a good analogy for thinking about how unsafe works in Rust is to think about how an interpreted language like Ruby (or Python) uses C modules. Consider something like the JSON module in Ruby. The JSON bundle includes a pure Ruby implementation (JSON::Pure), but it also includes a re-implementation of the same API in C (JSON::Ext). By default, when you use the JSON bundle, you are actually running C code – but your Ruby code can’t tell the difference. From the outside, that C code looks like any other Ruby module – but internally, of course, it can play some dirty tricks and make optimizations that wouldn’t be possible in Ruby. (See this excellent blog post on Helix for more details, as well as some suggestions on how you can write Ruby plugins in Rust instead.)

Well, in Rust, the same scenario can arise, although the scale is different. For example, it’s perfectly possible to write an efficient and usable hashtable in pure Rust. But if you use a bit of unsafe code, you can make it go faster still. If this is a data structure that will be used by a lot of people or is crucial to your application, this may be worth the effort (so e.g. we use unsafe code in the standard library’s implementation). But, either way, normal Rust code should not be able to tell the difference: the unsafe code is encapsulated at the API boundary.

Of course, just because it’s possible to use unsafe code to make things run faster doesn’t mean you will do it frequently. Just like the majority of Ruby code is in Ruby, the majority of Rust code is written in pure safe Rust; this is particularly true since safe Rust code is very efficient, so dropping down to unsafe Rust for performance is rarely worth the trouble.

In fact, probably the single most common use of unsafe code in Rust is for FFI. Whenever you call a C function from Rust, that is an unsafe action: this is because there is no way the compiler can vouch for the correctness of that C code.
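
For instance, a standard illustration of this (calling the C library's abs) might look like the sketch below; the extern declaration is taken on faith, so the call site has to be wrapped in unsafe.

// Declare a foreign function from the C standard library. The compiler
// cannot verify that this signature matches the real `abs`, so calling it
// is an unsafe action.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    let x = unsafe { abs(-3) };
    assert_eq!(x, 3);
}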

Extending the language with unsafe code

To me, the most interesting reason to write unsafe code in Rust (or a C module in Ruby) is so that you can extend the capabilities of the language. Probably the most commonly used example of all is the Vec type in the standard library, which uses unsafe code so it can handle uninitialized memory; Rc and Arc, which enable shared ownership, are other good examples. But there are also much fancier examples, such as how Crossbeam and deque use unsafe code to implement non-blocking data structures, or Jobsteal and Rayon use unsafe code to implement thread pools.

In this post, we’re going to focus on one simple case: the split_at_mut method found in the standard library. This method is defined over mutable slices like &mut [T]. It takes as argument a slice and an index (mid), and it divides that slice into two pieces at the given index. Hence it returns two subslices: one that ranges from 0..mid, and one that ranges from mid..
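
To make the interface concrete, here is a small usage sketch (my own, using nothing beyond the standard library): the two halves are guaranteed to be disjoint, so both can be mutated while they coexist.

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    {
        // `left` covers indices 0..2 and `right` covers 2.., so holding
        // both mutable slices at the same time is fine.
        let (left, right) = data.split_at_mut(2);
        left[0] = 10;
        right[0] = 30;
    }
    assert_eq!(data, [10, 2, 30, 4, 5]);
}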

You might imagine that split_at_mut would be defined like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        (&mut self[0..mid], &mut self[mid..])
    }
}

If it compiled, this definition would do the right thing, but in fact if you try to build it you will find it gets a compilation error. It fails for two reasons:

  1. In general, the compiler does not try to reason precisely about indices. That is, whenever it sees an index like foo[i], it just ignores the index altogether and treats the entire array as a unit (foo[_], effectively). This means that it cannot tell that &mut self[0..mid] is disjoint from &mut self[mid..]. The reason for this is that reasoning about indices would require a much more complex type system.
  2. In fact, the [] operator is not builtin to the language when applied to a range anyhow. It is implemented in the standard library. Therefore, even if the compiler knew that 0..mid and mid.. did not overlap, it wouldn’t necessarily know that &mut self[0..mid] and &mut self[mid..] return disjoint slices.

Now, it’s plausible that we could extend the type system to make this example compile, and maybe we’ll do that someday. But for the time being we’ve preferred to implement cases like split_at_mut using unsafe code. This lets us keep the type system simple, while still enabling us to write APIs like split_at_mut.

Abstraction boundaries

Looking at unsafe code as analogous to a plugin helps to clarify the idea of an abstraction boundary. When you write a Ruby plugin, you expect that when users from Ruby call into your function, they will supply you with normal Ruby objects and pointers. Internally, you can play whatever tricks you want: for example, you might use a C array instead of a Ruby vector. But once you return values back out to the surrounding Ruby code, you have to repackage up those results as standard Ruby objects.

It works the same way with unsafe code in Rust. At the public boundaries of your API, your code should act as if it were any other safe function. This means you can assume that your users will give you valid instances of Rust types as inputs. It also means that any values you return or otherwise output must meet all the requirements that the Rust type system expects. Within the unsafe boundary, however, you are free to bend the rules (of course, just how free you are is the topic of debate; I intend to discuss it in a follow-up post).

Let’s look at the split_at_mut method we saw in the previous section. For our purposes here, we only care about the public interface of the function, which is its signature:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        // body of the fn omitted so that we can focus on the
        // public interface; safe code shouldn't have to care what
        // goes in here anyway
    }
}

So what can we derive from this signature? To start, split_at_mut can assume that all of its inputs are valid (for safe code, the compiler’s type system naturally ensures that this is true; unsafe callers would have to ensure it themselves). Part of writing the rules for unsafe code will require enumerating more precisely what this means, but at a high-level it’s stuff like this:

  • The self argument is of type &mut [T]. This implies that we will receive a reference that points at some number N of T elements. Because this is a mutable reference, we know that the memory it refers to cannot be accessed via any other alias (until the mutable reference expires). We also know the memory is initialized and the values are suitable for the type T (whatever it is).
  • The mid argument is of type usize. All we know is that it is some unsigned integer.

There is one interesting thing missing from this list, however. Nothing in the API assures us that mid is actually a legal index into self. This implies that whatever unsafe code we write will have to check that.

Next, when split_at_mut returns, it must ensure that its return value meets the requirements of the signature. This basically means it must return two valid &mut [T] slices (i.e., pointing at valid memory, with a length that is not too long). Crucially, since those slices are both valid at the same time, this implies that the two slices must be disjoint (that is, pointing at different regions of memory).

Possible implementations

So let’s look at a few different implementation strategies for split_at_mut and evaluate whether they might be valid or not. We already saw that a pure safe implementation doesn’t work. So what if we implemented it using raw pointers like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // The unsafe block gives us access to raw pointer
        // operations. By using an unsafe block, we are claiming
        // that none of the actions below will trigger
        // undefined behavior.
        unsafe {
            // get a raw pointer to the first element
            let p: *mut T = &mut self[0];

            // get a pointer to the element `mid`
            let q: *mut T = p.offset(mid as isize);

            // number of elements after `mid`
            let remainder = self.len() - mid;

            // assemble a slice from 0..mid
            let left: &mut [T] = from_raw_parts_mut(p, mid);

            // assemble a slice from mid..
            let right: &mut [T] = from_raw_parts_mut(q, remainder);

            (left, right)
        }
    }
}

This is a mostly valid implementation, and in fact fairly close to what the standard library actually does. However, this code is making a critical assumption that is not guaranteed by the input: it is assuming that mid is in range. Nowhere does it check that mid <= len, which means that the q pointer might be out of range, and also means that the computation of remainder might overflow and hence (in release builds, at least by default) wrap around. So this implementation is incorrect, because it requires more guarantees than what the caller is required to provide.

We could make it correct by adding an assertion that mid is a valid index (note that the assert macro in Rust always executes, even in optimized code):

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // check that `mid` is in range:
        assert!(mid <= self.len());

        // as before, with fewer comments:
        unsafe {
            let p: *mut T = &mut self[0];
            let q: *mut T = p.offset(mid as isize);
            let remainder = self.len() - mid;
            let left: &mut [T] = from_raw_parts_mut(p, mid);
            let right: &mut [T] = from_raw_parts_mut(q, remainder);
            (left, right)
        }
    }
}

OK, at this point we have basically reproduced the implementation in the standard library (it uses some slightly different helpers, but it’s the same idea).

Extending the abstraction boundary

Of course, it might happen that we actually wanted to assume that mid is in bounds, rather than checking it. We couldn’t do this for the actual split_at_mut, of course, since it’s part of the standard library. But you could imagine wanting a private helper for safe code that made this assumption, so as to avoid the runtime cost of a bounds check. In that case, split_at_mut is relying on the caller to guarantee that mid is in bounds. This means that split_at_mut is no longer safe to call, because it has additional requirements for its arguments that must be satisfied in order to guarantee memory safety.

Rust allows you to express the idea of a fn that is not safe to call by moving the unsafe keyword out of the fn body and into the public signature. Moving the keyword makes a big difference to the meaning of the function: the unsafety is no longer just an implementation detail of the function, it’s now part of the function’s interface. So we could make a variant of split_at_mut called split_at_mut_unchecked that avoids the bounds check:

impl [T] {
    // Here the **fn** is declared as unsafe; calling such a function is
    // now considered an unsafe action for the caller, because they
    // must guarantee that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;
        let p: *mut T = &mut self[0];
        let q: *mut T = p.offset(mid as isize);
        let remainder = self.len() - mid;
        let left: &mut [T] = from_raw_parts_mut(p, mid);
        let right: &mut [T] = from_raw_parts_mut(q, remainder);
        (left, right)
    }
}

When a fn is declared as unsafe like this, calling that fn becomes an unsafe action: what this means in practice is that the caller must read the documentation of the function and ensure that the conditions the function requires are met. In this case, it means that the caller must ensure that mid <= self.len().

If you think about abstraction boundaries, declaring a fn as unsafe means that it does not form an abstraction boundary with safe code. Rather, it becomes part of the unsafe abstraction of the fn that calls it.

Using split_at_mut_unchecked, we could now re-implement split_at_mut to just layer the bounds check on top:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        assert!(mid <= self.len());

        // By placing the `unsafe` block in the function, we are
        // claiming that we know the extra safety conditions
        // on `split_at_mut_unchecked` are satisfied, and hence calling
        // this function is a safe thing to do.
        unsafe {
            self.split_at_mut_unchecked(mid)
        }
    }

    // **NB:** Requires that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        ... // as above
    }
}

Unsafe boundaries and privacy

Although there is nothing in the language that explicitly connects the privacy rules with unsafe abstraction boundaries, they are naturally interconnected. This is because privacy allows you to control the set of code that can modify your fields, and this is a basic building block to being able to construct an unsafe abstraction.

Earlier we mentioned that the Vec type in the standard library is implemented using unsafe code. This would not be possible without privacy. If you look at the definition of Vec, it looks something like this:

pub struct Vec<T> {
    pointer: *mut T,
    capacity: usize,
    length: usize,
}

Here the field pointer is a pointer to the start of some memory. capacity is the amount of memory that has been allocated and length is the amount of memory that has been initialized.

The vector code is all very careful to maintain the invariant that it is always safe to access the first length elements of the memory that pointer refers to. You can imagine that if the length field were public, this would be impossible: anybody from the outside could go and change the length to whatever they want!

For this reason, unsafety boundaries tend to fall into one of two categories:

  • a single function, like split_at_mut
    • this could include unsafe callees like split_at_mut_unchecked
  • a type, typically contained in its own module, like Vec
    • this type will naturally have private helper functions as well
    • and it may contain unsafe helper types too, as described in the next section

Types with unsafe interfaces

We saw earlier that it can be useful to define unsafe functions like split_at_mut_unchecked, which can then serve as the building block for a safe abstraction. The same is true of types. In fact, if you look at the actual definition of Vec from the standard library, you will see that it looks just a bit different from what we saw above:

pub struct Vec<T> {
    buf: RawVec<T>,
    len: usize,
}

What is this RawVec? Well, that turns out to be an unsafe helper type that encapsulates the idea of a pointer and a capacity:

pub struct RawVec<T> {
    // Unique is actually another unsafe helper type
    // that indicates a uniquely owned raw pointer:
    ptr: Unique<T>,
    cap: usize,
}

What makes RawVec an unsafe helper type? Unlike with functions, the idea of an unsafe type is a rather fuzzy notion. I would define such a type as a type that doesn’t really let you do anything useful without using unsafe code. Safe code can construct RawVec, for example, and even resize the backing buffer, but if you want to actually access the data in that buffer, you can only do so by calling the ptr method, which returns a *mut T. This is a raw pointer, so dereferencing it is unsafe; which means that, to be useful, RawVec has to be incorporated into another unsafe abstraction (like Vec) which tracks initialization.
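
Here is a highly simplified sketch of that layering. The MiniVec type below is hypothetical and far cruder than the real Vec/RawVec pair, but it shows the shape of the idea: the safe get method is only sound because the type privately maintains the invariant that the first len elements behind ptr are initialized.

struct MiniVec<T> {
    ptr: *mut T,
    len: usize,
}

impl<T> MiniVec<T> {
    fn get(&self, i: usize) -> Option<&T> {
        if i < self.len {
            // Sound only because we maintain the invariant that the first
            // `len` elements behind `ptr` are initialized and valid.
            unsafe { Some(&*self.ptr.offset(i as isize)) }
        } else {
            None
        }
    }
}

fn main() {
    // Borrow a plain Vec's buffer just to have valid memory to point at.
    let mut backing = vec![10, 20, 30];
    let v = MiniVec { ptr: backing.as_mut_ptr(), len: backing.len() };
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(7), None);
}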

Conclusion

Unsafe abstractions are a pretty powerful tool. They let you play just about any dirty performance trick you can think of – or access any system capability – while still keeping the overall language safe and relatively simple. We use unsafety to implement a number of the core abstractions in the standard library, including core data structures like Vec and Rc. But because all of these abstractions encapsulate the unsafe code behind their API, users of those modules don’t carry the risk.

How low can you go?

One thing I have not discussed in this post is a lot of specifics about exactly what is legal within unsafe code and not. Clearly, the point of unsafe code is to bend the rules, but how far can you bend them before they break? At the moment, we don’t have a lot of published guidelines on this topic. This is something we aim to address. In fact there has even been a first RFC introduced on the topic, though I think we can expect a fair amount of iteration before we arrive at the final and complete answer.

As I wrote on the RFC thread, my take is that we should be shooting for rules that are human friendly as much as possible. In particular, I think that most people will not read our rules and fewer still will try to understand them. So we should ensure that the unsafe code that people write in ignorance of the rules is, by and large, correct. (This implies also that the majority of the code that exists ought to be correct.)

Interestingly, there is something of a tension here: the more unsafe code we allow, the less the compiler can optimize. This is because it would have to be conservative about possible aliasing and (for example) avoid reordering statements.

In my next post, I will describe how I think that we can leverage unsafe abstractions to actually get the best of both worlds. The basic idea is to aggressively optimize safe code, but be more conservative within an unsafe abstraction (but allow people to opt back in with additional annotations).

Edit note: Tweaked some wording for clarity.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 131

Mozilla planet - Mon, 23/05/2016 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates
  • Systemd Manager. A systemd service manager written in Rust with the GTK-rs wrapper and direct integration with dbus.
  • FLAME. A flamegraph profiling tool for Rust.
  • Jobsteal. A work-stealing fork-join threadpool written in Rust.
  • pest. Simple, efficient parser generator.
Crate of the Week

This week's Crate of the Week is parking_lot, which gives us synchronization primitives (Mutex, RwLock, Condvar and friends) that are both smaller and faster than the standard library's implementations.
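
A minimal sketch of the ergonomic difference, assuming parking_lot is added as a Cargo dependency: lock() hands back the guard directly, so there is no poisoning Result to unwrap as there is with std::sync::Mutex.

extern crate parking_lot;

use parking_lot::Mutex;

fn main() {
    let counter = Mutex::new(0u32);

    {
        // parking_lot's lock() returns the guard directly: no poisoning,
        // so there is no Result to unwrap.
        let mut guard = counter.lock();
        *guard += 1;
    }

    assert_eq!(*counter.lock(), 1);
}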

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last two weeks.

New Contributors
  • Daniel Campoverde [alx741]
  • mark-summerfield
  • Postmodern
  • Rémy Rakic
  • Robert Habermeier
  • Val Vanderschaegen
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 64

Mozilla planet - Mon, 23/05/2016 - 02:30

In the last week, we landed 99 PRs in the Servo organization’s repositories.

This is a week of many additions to the Servo team - a Research Assistant and two full-time hires!

Kyle Headley is joining us from U. Colorado at Boulder, where he is working with Matthew Hammer on incremental computation. He’s going to be working as a Research Assistant this summer, helping us find ways we can use incremental computation to improve the performance of Servo. He’s kheadley on IRC.

Manish Goregaokar is a long-time Servo contributor, initially participating in the first round of Google Summer of Code with Servo. He has mostly worked on DOM-related issues and Rust itself, but is looking forward to working on new things. He is currently working remotely from Mumbai, but will be relocating to the San Francisco office later this year. He is manishearth on IRC.

Diane Hosfelt previously did network and protocol analysis for the Department of Defense, and will start out working on Servo’s networking (an area sorely in need of some expert work!). Diane is working remotely from the UK. She is dd0x68 on IRC.

Welcome to the team, everybody!

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions
  • Manish added support for submit button data in form submissions
  • Jack made Servo DPI-aware on Windows
  • nox hoisted out a channel creation to reduce the number of channels and threads Servo creates
  • larsberg enabled AppVeyor/Windows testing on ipc-channel
  • dati implemented Included Services support for WebBluetooth
  • ajeffrey reduced the number of threads used in our scheduler
  • rzambre changed the profiler file output from CSV to TSV format
  • emilio added support for constants in classes in geckolib
  • ms2ger implemented reporting of panics in web worker threads
  • bholley added basic support for Gecko atoms
  • mbrubeck optimized text shaping for ASCII text
  • KiChjang implemented support for -moz-user-* CSS longhands in geckolib
  • jdm created markers for network and JS-related events in the timeline profiler
  • izgzhen filled in many missing pieces related to file inputs in forms
  • fduraffourg ported a large set of HTML/JS tests for cookie handling to Rust unit tests
  • wafflespeanut improved the usability of the highfive automated tests
  • creativcoder enabled intercepting network requests and synthesizing responses
New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

None this week.

Categories: Mozilla-nl planet

Anjana Vakil: Outreachy: What? How? Why?

Mozilla planet - Mon, 23/05/2016 - 02:00

Today was my first day as an Outreachy intern with Mozilla! What does that even mean? Why is it super exciting? How did I swing such a sweet gig? How will I be spending my summer non-vacation? Read on to find out!

Outreachy logo

What is Outreachy?

Outreachy is a fantastic initiative to get more women and members of other underrepresented groups involved in Free & Open Source Software. Through Outreachy, organizations that create open-source software (e.g. Mozilla, GNOME, Wikimedia, to name a few) take on interns to work full-time on a specific project for 3 months. There are two internship rounds each year, May-August and December-March. Interns are paid for their time, and receive guidance/supervision from an assigned mentor, usually a full-time employee of the organization who leads the given project.

Oh yeah, and the whole thing is done remotely! For a lot of people (myself included) who don’t/can’t/won’t live in a major tech hub, the opportunity to work remotely removes one of the biggest barriers to jumping in to the professional tech community. But as FOSS developers tend to be pretty distributed anyway (I think my project’s team, for example, is on about 3 continents), it’s relatively easy for the intern to integrate with the team. It seems that most communication takes place over IRC and, to a lesser extent, videoconferencing.

What does an Outreachy intern do?

Anything and everything! Each project and organization is different. But in general, interns spend their time…

Coding (or not)

A lot of projects involve writing code, though what that actually entails (language, framework, writing vs. refactoring, etc.) varies from organization to organization and project to project. However, there are also projects that don’t involve code at all, and instead have the intern working on equally important things like design, documentation, or community management.

As for me specifically, I’ll be working on the project Test-driven Refactoring of Marionette’s Python Test Runner. You can click through to the project description for more details, but basically I’ll be spending most of the summer writing Python code (yay!) to test and refactor a component of Marionette, a tool that lets developers run automated Firefox tests. This means I’ll be learning a lot about testing in general, Python testing libraries, the huge ecosystem of internal Mozilla tools, and maybe a bit about browser automation. That’s a lot! Luckily, I have my mentor Maja (who happens to also be an alum of both Outreachy and RC!) to help me out along the way, as well as the other members of the Engineering Productivity team, all of whom have been really friendly & helpful so far.

Traveling

Interns receive a $500 stipend for travel related to Outreachy, which is fantastic. I intend, as I’m guessing most do, to use this to attend conference(s) related to open source. If I were doing a winter round I would totally use it to attend FOSDEM, but there are also a ton of conferences in the summer! Actually, you don’t even need to do the traveling during the actual 3 months of the internship; they give you a year-long window so that if there’s an annual conference you really want to attend but it’s not during your internship, you’re still golden.

At Mozilla in particular, interns are also invited to a week-long all-hands meet up! This is beyond awesome, because it gives us a chance to meet our mentors and other team members in person. (Actually, I doubly lucked out because I got to meet my mentor at RC during “Never Graduate Week” a couple of weeks ago!)

Blogging

One of the requirements of the internship is to blog regularly about how the internship and project are coming along. This is my first post! Though we’re required to write a post every 2 weeks, I’m aiming to write one per week, on both technical and non-technical aspects of the internship. Stay tuned!

How do you get in?

I’m sure every Outreachy participant has a different journey, but here’s a rough outline of mine.

Step 1: Realize it is a thing

Let’s not forget that the first step to applying for any program/job/whatever is realizing that it exists! Like most people, I think, I had never heard of Outreachy, and was totally unaware that a remote, paid internship working on FOSS was a thing that existed in the universe. But then, in the fall of 2015, I made one of my all-time best moves ever by attending the Recurse Center (RC), where I soon learned about Outreachy from various Recursers who had been involved with the program. I discovered it about 2 weeks before applications closed for the December-March 2015-16 round, which was pretty last-minute; but a couple of other Recursers were applying and encouraged me to do the same, so I decided to go for it!

Step 2: Frantically apply at last minute

Applying to Outreachy is a relatively involved process. A couple months before each round begins, the list of participating organizations/projects is released. Prospective applicants are supposed to find a project that interests them, get in touch with the project mentor, and make an initial contribution to that project (e.g. fix a small bug).

But each of those tasks is pretty intimidating!

First of all, the list of participating organizations is long and varied, and some organizations (like Mozilla) have tons of different projects available. So even reading through the project descriptions and choosing one that sounds interesting (most of them do, at least to me!) is no small task.

Then, there’s the matter of mustering up the courage to join the organization/project’s IRC channel, find the project mentor, and talk to them about the application. I didn’t even really know what IRC was, and had never used it before, so I found this pretty scary. Luckily, I was at RC, and one of my batchmates sat me down and walked me through IRC basics.

However, the hardest and most important part is actually making a contribution to the project at hand. Depending on the project, this can be long & complicated, quick & easy, or anything in between. The level of guidance/instruction also varies widely from project to project: some are laid out clearly in small, hand-holdy steps, others are more along the lines of “find something to do and then do it”. Furthermore, prerequisites for making the contribution can be anything from “if you know how to edit text and send an email, you’re fine” to “make a GitHub account” to “learn a new programming language and install 8 million new tools on your system just to set up the development environment”. All in all, this means that making that initial contribution can often be a deceptively large amount of work.

Because of all these factors, for my application to the December-March round I decided to target the Mozilla project “Contribute to the HTML standard”. In addition to the fact that I thought it would be awesome to contribute to such a fundamental part of the web, I chose it because the contribution itself was really simple: just choose a GitHub issue with a beginner-friendly label, ask some questions via GitHub comments, edit the source markup file as needed, and make a pull request. I was already familiar with GitHub so it was pretty smooth sailing.

Once you’ve made your contribution, it’s time to write the actual Outreachy application. This is just a plain text file you fill out with lots of information about your experience with FOSS, your contribution to the project, etc. In case it’s useful to anyone, here’s my application for the December-March 2015-16 round. But before you use that as an example, make sure you read what happened next…

Step 3: Don’t get in

Unfortunately, I didn’t get in to the December-March round (although I was stoked to see some of my fellow Recursers get accepted!). Honestly, I wasn’t too surprised, since my contributions and application had been so hectic and last-minute. But even though it wasn’t successful, the application process was educational in and of itself: I learned how to use IRC, got 3 of my first 5 GitHub pull requests merged, and became a contributor to the HTML standard! Not bad for a failure!

Step 4: Decide to go for it again (at last minute, again)

Fast forward six months: after finishing my batch at RC, I had been looking & interview-prepping, but still hadn’t gotten a job. When the applications for the May-August round opened up, I took a glance at the projects and found some cool ones, but decided that I wouldn’t apply this round because a) I needed a Real Job, not an internship, and b) the last round’s application process was a pretty big time investment which hadn’t paid off (although it actually had, as I just mentioned!).

But as the weeks went by, and the application deadline drew closer, I kept thinking about it. I was no closer to finding a Real Job, and upheaval in my personal life made my whereabouts over the summer an uncertainty (I seem never to know what continent I live on), so a paid, remote internship was becoming more and more attractive. When I broached my hesitation over whether or not to apply to other Recursers, they unanimously encouraged me (again) to go for it (again). Then, I found out that one of the project mentors, Maja, was a Recurser, and since her project was one of the ones I had shortlisted, I decided to apply for it.

Of course, by this point it was once again two weeks until the deadline, so panic once again set in!

Step 5: Learn from past mistakes

This time, the process as a whole was easier, because I had already done it once. IRC was less scary, I already felt comfortable asking the project mentor questions, and having already been rejected in the previous round made it somehow lower-stakes emotionally (“What the hell, at least I’ll get a PR or two out of it!”). During my first application I had spent a considerable amount of time reading about all the different projects and fretting about which one to do, flipping back and forth mentally until the last minute. This time, I avoided that mistake and was laser-focused on a single project: Test-driven Refactoring of Marionette’s Python Test Runner.

From a technical standpoint, however, contributing to the Marionette project was more complicated than the HTML standard had been. Luckily, Maja had written detailed instructions for prospective applicants explaining how to set up the development environment etc., but there were still a lot of steps to work through. Then, because there were so many folks applying to the project, there was actually a shortage of “good-first-bugs” for Marionette! So I ended up making my first contributions to a different but related project, Perfherder, which meant setting up a different dev environment and working with a different mentor (who was equally friendly). By the time I was done with the Perfherder stuff (which turned out to be a fun little rabbit hole!), Maja had found me something Marionette-specific to do, so I ended up working on both projects as part of my application process.

When it came time to write the actual application, I also had the luxury of being able to use my failed December-March application as both a starting point and an example of what not to do. Some of the more generic parts (my background, etc.) were reusable, which saved time. But when it came to the parts about my contribution to the project and my proposed internship timeline, I knew I had to do a much better job than before. So I opted for over-communication, and basically wrote down everything I could think of about what I had already done and what I would need to do to complete the goals stated in the project description (which Maja had thankfully written quite clearly).

In the end, my May-August application was twice as long as my previous one had been. Much of that difference was the proposed timeline, which went from being one short paragraph to about 3 pages. Perhaps I was a bit more verbose than necessary, but I decided to err on the side of too many details, since I had done the opposite in my previous application.

Step 6: Get a bit lucky

Spoiler alert: this time I was accepted!

Although I knew I had made a much stronger application than in the previous round, I was still shocked to find out that I was chosen from what seemed to be a large, competitive applicant pool. I can’t be sure, but I think what made the difference the second time around must have been a) more substantial contributions to two different projects, b) better, more frequent communication with the project mentor and other team members, and c) a much more thorough and better thought-out application text.

But let’s not forget d) luck. I was lucky to have encouragement and support from the RC community throughout both my applications, lucky to have the time to work diligently on my application because I had no other full-time obligations, lucky to find a mentor who I had something in common with and therefore felt comfortable talking to and asking questions of, and lucky to ultimately be chosen from among what I’m sure were many strong applications. So while I certainly did work hard to get this internship, I have to acknowledge that I wouldn’t have gotten in without all of that luck.

Why am I doing this?

Last week I had the chance to attend OSCON 2016, where Mozilla’s E. Dunham gave a talk on How to learn Rust. A lot of the information applied to learning any language/new thing, though, including this great recommendation: When embarking on a new skill quest, record your motivation somewhere (I’m going to use this blog, but I suppose Twitter or a vision board or whatever would work too) before you begin.

The idea is that once you’re in the process of learning the new thing, you will probably have at least one moment where you’re stuck, frustrated, and asking yourself what the hell you were thinking when you began this crazy project. Writing it down beforehand is just doing your future self a favor, by saving up some motivation for a rainy day.

So, future self, let it be known that I’m doing Outreachy to…

  • Write code for an actual real-world project (as opposed to academic/toy projects that no one will ever use)
  • Get to know a great organization that I’ve respected and admired for years
  • Try out working remotely, to see if it suits me
  • Learn more about Python, testing, and automation
  • Gain confidence and feel more like a “real developer”
  • Launch my career in the software industry

I’m sure these goals will evolve as the internship goes along, but for now they’re the main things driving me. Now it’s just a matter of sitting back, relaxing, and working super hard all summer to achieve them! :D

Got any more questions?

Are you curious about Outreachy? Thinking of applying? Confused about the application process? Feel free to reach out to me! Go on, don’t be shy, just use one of those cute little contact buttons and drop me a line. :)

Categories: Mozilla-nl planet

Christian Heilmann: Google IO – A tale of two Googles

Mozilla planet - Mon, 23/05/2016 - 01:59

Google IO main stage with audience

Disclaimer: The following are my personal views and experiences at this year’s Google IO. They are not representative of my employer. Should you want to quote me, please do so as Chris Heilmann, developer.

TL;DR: Is Google IO worth the $900? Yes, if you’re up for networking, getting information from experts and enjoy social gatherings. No, if you expect to be able to see talks. You’re better off watching them from home. The live streaming and recordings are excellent.

Google IO this year left me confused and disappointed. I found a massive gap between the official messaging and the tech on display. I’m underwhelmed with the keynote and the media outreach. The much more interesting work in the breakout sessions, talks and demos excited me. It seems to me that what Google wants to promote and the media to pick up is different to what its engineers showed. That’s OK, but it feels like sales stepping on a developer conference turf.

I enjoyed the messaging of the developer outreach and product owner team in the talks and demos. At times I was wondering if I was at a Google or a Mozilla event. The web and its technologies were front and centre. And there was a total lack of “our product $X leads the way” vibes.

Kudos to everyone involved. The messaging about progressive Web Apps, AMP and even the new Android Instant Apps was honest. It points to a drive in Google to return to the web for good.

Illuminated dinosaur at the after party

The vibe of the event changed a lot since moving out of Moscone Center in San Francisco. Running it on Google’s homestead in Mountain View made the whole show feel more like a music festival than a tech event. It must have been fun for the presenters to stand on the same stage they went to see bands at.

Having smaller tents for the different product and technology groups was great. It invited much more communication than booths. I saw a lot of neat demos. Having experts at hand to talk with about technologies I wanted to learn about was great.

Organisation

Feet in the sun watching a talk at the Amphitheatre

Here are the good and bad things about the organisation:

  • Good: traffic control wasn’t as much of a nightmare I expected. I got there two hours in advance as I anticipated traffic jams, but it wasn’t bad at all. Shuttles and bike sheds helped getting people there.
  • Good: there was no queue at badge pickup. Why I had to have my picture taken and a – somehow sticky – plastic badge printed was a bit beyond me, though. It seems wasteful.
  • Good: the food and beverages were plentiful and applicable. With a group this big it is hard to deliver safe to eat and enjoyable food. The sandwiches, apples and crisps did the trick. The food at the social events was comfort food/fast food, but let’s face it – you’re not at a food fair. I loved that all the packaging was paper and cardboard and there was not too much excess waste in the form of plastics. We also got a reusable water bottle you could re-fill at water dispensers like you have in offices. Given the weather, this was much needed. Coffee and tea was also available throughout the day. We were well fed and watered. I’m no Vegan, and I heard a few complaints about a lack of options, but that may have been personal experiences.
  • Good: the toilets were amazing. Clean, with running water and plenty of paper, mirrors, free sunscreen and no queues. Not what I expected from a music festival surrounding.
  • Great: as it was scorching hot on the first day the welcome pack you got with your badge had a bandana to cover your head, two sachets of sun screen, a reusable water bottle and sunglasses. As a ginger: THANK YOU, THANK YOU, THANK YOU. The helpers even gave me a full tube of sunscreen on re-entry the second day, taking pity on my red skin.
  • Bad: the one thing that was exactly the same as in Moscone was the abysmal crowd control. Except for the huge stage tent number two (called HYDRA - I am on to you, people) all others were far too small. It was not uncommon to stand for an hour in a queue for the talk you wanted to see just to be refused entry as it was full up. Queuing up in the scorching sun isn’t fun for anyone and impossible for me. Hence I missed all but two talks I wanted to see.
  • Good: if you were lucky enough to see a talk, the AV quality was great. The screens were big and readable, all the talks were live transcribed and the presenters audible.

The bad parts

Apart from the terrible crowd control, two things let me down the most. The keynote and a total lack of hardware giveaway – something that might actually be related.

Don’t get me wrong, I found the showering of attendees with hardware excessive at the first few IOs. But announcing something like a massive move into VR with Daydream and Tango without giving developers something to test it on is assuming a lot. Nine hundred dollars plus flying to the US and spending a lot of money on accommodation is a lot for many attendees. Getting something amazing to bring back would be a nice “Hey, thanks”.

There was no announcement at the keynote about anything physical except for some vague “this will be soon available” products. This might be the reason.

My personal translation of the keynote is the following:

We are Google, we lead in machine learning, cloud technology and data insights. Here are a few products that may soon come out that play catch-up with our competition. We advocate diversity and try to make people understand that the world is bigger than the Silicon Valley. That’s why we solve issues that aren’t a problem but annoyances for the rich. All the things we’re showing here are solving issues of people who live in huge houses, have awesome cars and suffer from the terrible ordeal of having to answer text messages using their own writing skills. Wouldn’t it be better if a computer did that for you? Why go and wake up your children with a kiss using the time you won by becoming more effective with our products when you can tell Google to do that for you? Without the kiss that is – for now.

As I put it during the event:

I actually feel poor looking at the #io16 keynote. We have lots of global problems technology can help with. This is pure consumerism.

I stand by this. Hardly anything in the keynote excited me as a developer. Or even as a well-off professional who lives in a city where public transport is a given. The announcement of Instant Apps, the Firebase bits and the new features of Android Studio are exciting. But it all got lost in an avalanche of “Look what’s coming soon!” product announcements without the developer angle. We want to look under the hood. We want to add to the experience and we want to understand how things work. This is how developer events work. Google Home has some awesome features. Where are the APIs for that?

As far as I understand it, there was a glitch in the presentation. But the part where a developer in Turkey used his skills to help the Syrian refugee crisis was borderline insulting. There was no information what the app did, who benefited from it and what it ran on. No information how the data got in and how the data was going to the people who help the refugees. The same goes for using machine learning to help with the issue of blindness. Both were teasers without any meat and felt like “Well, we’re also doing good, so here you go”.

Let me make this clear: I am not criticising the work of any Google engineer, product owner or other worker here. All these things are well done and I am excited about the prospects. I find it disappointing that the keynote was a sales pitch. It did not pay respect to this work and failed to show the workings rather than the final product. IO is advertised as a developer conference, not an end-user oriented sales show. It felt disconnected.

Things that made me happy

Chris Heilmann covered in sunscreen, wearing a bandana in front of Google Loon

  • The social events were great – the concert in the amphitheatre was for those who wanted to go. Outside was a lot of space to have a chat if you’re not the dancing type. The breakout events on the second day were plentiful, all different and arty. The cynic in me sniggered at Burning Man performers (the antithesis of commercialism by design) doing their thing at a commercial IT event, but it gave the whole event a good vibe.
  • Video recording and live streaming – I watched quite a few of the talks I missed the last two days in the gym and I am grateful that Google offers these on YouTube immediately, well described and easy to find in playlists. Using the app after the event makes it easy to see the talks you missed.
  • Boots on the ground – everyone I wanted to meet from Google was there and had time to chat. My questions got honest and sensible answers and there was no hand-waving or over-promising.
  • A good focus on health and safety – first aid tents, sunscreen and wet towels for people to cool down, creature comforts for an outside environment. The organisers did a good job making sure people are safe. Huge printouts of the Code of Conduct also made no qualm about it that antisocial or aggressive behaviour was not tolerated.

Conclusion

Jatinder and me at the keynote

I will go again to Google IO, to talk, to meet, to see product demos and to have people at hand that can give me insight beyond the official documentation. I am likely to not get up early next time to see the keynote though, and I would love to see a better handle on the crowd control. It is frustrating to queue and not be able to see talks at the conference of a company that prides itself on organising huge datasets and having self-driving cars.
Here are a few things that could make this better:

  • Having screening tents with the video and the transcription screens outside the main tents. These don’t even need sound (which is the main outside issue)
  • Use the web site instead of two apps. Advocating progressive web apps and then telling me in the official conference mail to download the Android app was not a good move. Especially as the PWA outperformed the native app at every turn – including usability (the thing native should be much better). It was also not helpful that the app showed the name of the stage but not the number of the tent.
  • Having more places to charge phones would have been good, or giving out power packs. As we were outside all the time and moving I didn’t use my computer at all and did everything on the phone.

I look forward to interacting and working with the tech Google. I am confused about the Google that tries to be in the hands of end users without me being able to crack the product open and learn from how it is done.

Categories: Mozilla-nl planet

Mozilla Gigabit Community Fund Expands to Austin - Silicon Hills News

News collected via Google - Sun, 22/05/2016 - 22:24

Mozilla Gigabit Community Fund Expands to Austin
Silicon Hills News
The fund is a joint initiative among Mozilla, the maker of the Firefox web browser, the National Science Foundation and U.S. Ignite. Mozilla is providing $150,000 in grant funding to Austin projects and tools that use the city's Google Fiber network ...

Categories: Mozilla-nl planet

Mozilla's Firefox surpasses IE/Edge in market share for first time ever - AfterDawn

News collected via Google - Sun, 22/05/2016 - 18:34

AfterDawn

Mozilla's Firefox surpasses IE/Edge in market share for first time ever
AfterDawn
Mozilla's Firefox surpasses IE/Edge in market share for first time ever. According to the most recent StatCounter figures, Firefox has surpassed Internet Explorer/Edge in market share for the first time ever, although all three browsers still have ...

and more »
Categories: Mozilla-nl planet
