
Mozilla Nederland: the Dutch Mozilla community

Eric Shepherd: Smart people + open discussion + epiphany = a better Web for everyone

Mozilla planet - Tue, 19/04/2016 - 17:00

One great thing about watching the future of the Web being planned in the open is that you can see how smart people having open discussions, combined with the occasional sudden realization, epiphany, or unexpected spark of creative genius can make the Web a better place for everyone.

This is something I’m reminded of regularly when I read mailing list discussions about plans to implement new Web APIs or browser features. There are a number of different kinds of discussion that take place on these mailing lists, but the ones that have fascinated me the most lately have been the “Intent to…” threads.

There are three classes of “Intent to…” thread:

  • Intent to implement. This thread begins with an announcement that someone plans to begin work on implementing a new feature. This could be an entire API, or a single new function, or anything in between. It could be a change to how an existing technology behaves, for that matter.
  • Intent to ship. This thread starts with the announcement that a feature or technology which has been implemented, or is in the process of being implemented, will be shipped in a particular upcoming version of the browser.
  • Intent to unship. This thread starts by announcing that a previously shipped feature will be removed in a given release of the software. This usually means rolling back a change that had unexpected consequences.

In each of these cases, discussion and debate may arise. Sometimes the discussion is very short, with a few people agreeing that it’s a good (or bad) idea, and that’s that. Other times, the discussion becomes very lengthy and complicated, with proposals and counter-proposals and debates (and, yes, sometimes arguments) about whether it’s a good idea or how to go about doing it the best way possible.

You know… I just realized that this change could be why the following sites aren’t working on nightly builds… maybe we need to figure out a different way to do this.

This sounds great, but what if we add a parameter to this function so we can make it more useful to a wider variety of content by…

The conversation frequently starts innocuously enough, with general agreement or minor suggestions that might improve the implementation. And then, sometimes, out of nowhere someone points out a devastating, how-the-heck-did-we-miss-that flaw in the design, and the conversation shifts into a debate about the best way to fix it. Result: a better design that works for more people with fewer side effects.

These discussions are part of what makes the process of inventing the Web in the open great. Anyone who has an interest can offer a suggestion or insight that might totally change the shape of things to come. And by announcing upcoming changes in threads such as these, developers make it easier than ever to get involved in the design of the Web as a platform.

Mozilla is largely responsible for the design process of the Web being an open one. Before our global community became a force to be reckoned with, development crawled along inside the walls of one or two corporate offices. Now, dozens of companies and millions of people are active participants in the design of the Web and its APIs. It’s a legacy that every Mozillian—past, present, and future—can be very proud of.

Categories: Mozilla-nl planet

Wladimir Palant: Introducing Easy Passwords: the new best way to juggle all those passwords

Mozilla planet - Tue, 19/04/2016 - 13:13

“The password system is broken” – I don’t know how often I’ve heard that phrase already. Yes, passwords suck. Nobody can be expected to remember passwords for dozens of websites. Websites enforcing arbitrary complexity rules (“between 5 and 7 characters, containing at least two upper-case letters and a dog’s name”) don’t make it any better. So far I’ve heard of three common strategies for dealing with passwords: write them down, use the same one everywhere, or just hit “forgot password” every time you access the website. None of these is particularly secure or recommendable, and IMHO neither are the suggestions to derive passwords via more or less complicated manual algorithms.

As none of the password-killing solutions has gained significant traction so far, password managers still seem to be the best choice for now. However, these often have the disadvantage of either relying on a third-party service which you have to trust, or storing your passwords on disk so that you have to trust their crypto. But there is also the ancient idea of deriving individual passwords from a single master password via one-way hashing functions. This is great because the only sensitive piece of data is your master password, and that one you can hopefully just remember.
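To make the one-way idea concrete, here is a toy sketch of the derivation. It deliberately uses Rust’s non-cryptographic DefaultHasher just to show the shape of the scheme; a real tool must use a proper slow key-derivation function (PBKDF2, scrypt or the like), and the function name here is made up:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy illustration only: DefaultHasher is NOT a cryptographic hash,
// and its output is not guaranteed stable across Rust releases.
// A real password generator would use PBKDF2/scrypt/Argon2 instead.
fn derive_password(master: &str, site: &str) -> String {
    let mut h = DefaultHasher::new();
    master.hash(&mut h);
    site.hash(&mut h);
    // 16 hex characters, derived one-way: knowing the output for one
    // site tells you nothing about the master password or other sites.
    format!("{:016x}", h.finish())
}

fn main() {
    let a = derive_password("correct horse", "example.com");
    let b = derive_password("correct horse", "example.com");
    let c = derive_password("correct horse", "another.org");
    assert_eq!(a, b); // deterministic: same inputs, same password
    assert_ne!(a, c); // different site, different password
    println!("{} {}", a, c);
}
```

The point is only the shape: everything is recomputable from the master password plus a site name, so nothing sensitive needs to be stored.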

Now, all the existing password generators have significant usability issues. What if I want to have multiple passwords on a single website? What if different websites share the same login credentials (e.g. all the WordPress blogs)? What if you are required to change your password every few months? What if there is some password which I have to use as is rather than replace it with a generated one? How do you deal with that crazy website that doesn’t accept special characters in passwords? Do I have to remember all the websites that I generated passwords for? I haven’t found any solution that answers all these questions. And I’m not even getting started on security; that is a topic for a separate blog post (spoiler: only one out of twenty password generator extensions for Firefox got crypto right).

Easy Passwords login prompt

So last summer I decided to roll my own: Easy Passwords. I’m working on it in my spare time so it took a while until I considered it ready for general use but now you can finally go and install it. You set your master password and then you can generate named passwords for any website. You can adjust password length and character set to match the requirements of the website. And if the generated password absolutely won’t do, you can still store your existing password — it will be encrypted securely, only to be decrypted with your master password.

On most websites your password can be filled in with a single click. And Easy Passwords supports website aliases: for some WordPress blog you can edit the site name into “wordpress.com” — done, you will get WordPress passwords there now. And it can show you all your passwords on a single page, you can even print them as a paper backup. This piece of paper has enough information to recreate all your passwords should your hard drive crash, but it will be useless to anybody who doesn’t know your master password.

It’s not perfect of course. For example, the aliasing functionality isn’t very intuitive and could be improved. I also have a few issues listed in the GitHub project, e.g. I’d like to warn about filling in passwords if the website doesn’t use HTTPS. Also, a secure master password is very important, so it would be nice to implement some kind of security indicator when the master password is set. I wonder what other issues people will come up with; we’ll see.


Karl Dubost: Looking at summary details in HTML5

Mozilla planet - Tue, 19/04/2016 - 05:01

On the dev-platform mailing-list, Ting-Yu Lin has sent an Intent to Ship: HTML5 <details> and <summary> tags. So what about it?

HTML 5.1 specification describes details as:

The details element represents a disclosure widget from which the user can obtain additional information or controls.

which is not that clear. Luckily, the specification has some examples. I put one on codepen (you need Firefox Nightly at this time, or Chrome/Opera, or Safari dev edition to see it). At least the rendering seems to be pretty much the same.

But as usual, the devil is in the details (pun not intended at first). In case the developer wants to hide the triangle, the possibilities are for now not interoperable. Think possible Web compatibility issues here. I created another codepen for testing the different scenarios.

In Blink/WebKit world:

summary::-webkit-details-marker { display: none; }

In Gecko world:

summary::-moz-list-bullet { list-style-type: none; }

or

summary { display: block; }

These work, though summary { display: block; } is a recipe for catastrophes.

Then on the thread there was the proposal of

summary { list-style-type: none; }

which indeed hides the arrow in Gecko, but doesn’t do anything whatsoever in Blink and WebKit. So it’s not really a reliable solution from a Web compatibility point of view.

Then usually I like to look at what people do on GitHub for their projects. So these are a collection of things on the usage of -webkit-details-marker:

details summary::-webkit-details-marker { display: none; }

/* to change the pointer on hover */
details summary { cursor: pointer; }

/* to style the arrow widget on opening and closing */
details[open] summary::-webkit-details-marker { color: #00F; background: #0FF; }

/* to replace the marker with an image */
details summary::-webkit-details-marker:after { content: icon('file.png'); }

/* using content this time for a unicode character */
summary::-webkit-details-marker { display: none; }
details summary::before { content: "►"; }
details[open] summary::before { content: "▼"; }

JavaScript

On JavaScript side, it seems there is a popular shim used by a lot of people: details.js

More reading

Otsukare!


The Rust Programming Language Blog: Introducing MIR

Mozilla planet - Tue, 19/04/2016 - 02:00

We are in the final stages of a grand transformation on the Rust compiler internals. Over the past year or so, we have been steadily working on a plan to change our internal compiler pipeline, as shown here:

Compiler Flowchart

That is, we are introducing a new intermediate representation (IR) of your program that we call MIR: MIR stands for mid-level IR, because the MIR comes between the existing HIR (“high-level IR”, roughly an abstract syntax tree) and LLVM (the “low-level” IR). Previously, the “translation” phase in the compiler would convert from full-blown Rust into machine-code-like LLVM in one rather large step. But now, it will do its work in two phases, with a vastly simplified version of Rust – MIR – standing in the middle.

If you’re not a compiler enthusiast, this all might seem arcane and unlikely to affect you directly. But in reality, MIR is the key to ticking off a number of our highest priorities for Rust:

  • Faster compilation time. We are working to make Rust’s compilation incremental, so that when you re-compile code, the compiler recomputes only what it has to. MIR has been designed from the start with this use-case in mind, so it’s much easier for us to save and reload, even if other parts of the program have changed in the meantime.

    MIR also provides a foundation for more efficient data structures and removal of redundant work in the compiler, both of which should speed up compilation across the board.

  • Faster execution time. You may have noticed that in the new compiler pipeline, optimization appears twice. That’s no accident: previously, the compiler relied solely on LLVM to perform optimizations, but with MIR, we can do some Rust-specific optimizations before ever hitting LLVM – or, for that matter, before monomorphizing code. Rust’s rich type system should provide fertile ground for going beyond LLVM’s optimizations.

    In addition, MIR will uncork some longstanding performance improvements to the code Rust generates, like “non-zeroing” drop.

  • More precise type checking. Today’s Rust compiler imposes some artificial restrictions on borrowing, restrictions which largely stem from the way the compiler currently represents programs. MIR will enable much more flexible borrowing, which will in turn improve Rust’s ergonomics and learning curve.

Beyond these banner user-facing improvements, MIR also has substantial engineering benefits for the compiler:

  • Eliminating redundancy. Currently, because we write all of our passes in terms of the full Rust language, there is quite a lot of duplication. For example, both the safety analyses and the backend which produces LLVM IR must agree about how to translate drops, or the precise order in which match expression arms will be tested and executed (which can get quite complex). With MIR, all of that logic is centralized in MIR construction, and the later passes can just rely on that.

  • Raising ambitions. In addition to being more DRY, working with MIR is just plain easier, because it contains a much more primitive set of operations than ordinary Rust. This simplification enables us to do a lot of things that were forbiddingly complex before. We’ll look at one such case in this post – non-zeroing drop – but as we’ll see at the end, there are already many others in the pipeline.

Needless to say, we’re excited, and the Rust community has stepped up in a big way to make MIR a reality. The compiler can bootstrap and run its test suite using MIR, and these tests have to pass on every new commit. Once we’re able to run Crater with MIR enabled and see no regressions across the entire crates.io ecosystem, we’ll turn it on by default (or, if you’ll forgive a terrible (wonderful) pun, launch MIR into orbit).

This blog post begins with an overview of MIR’s design, demonstrating some of the ways that MIR is able to abstract away the full details of the Rust language. Next, we look at how MIR will help with implementing non-zeroing drops, a long-desired optimization. If after this post you find you are hungry for more, have a look at the RFC that introduced MIR, or jump right into the code. (Compiler buffs may be particularly interested in the alternatives section, which discusses certain design choices in detail, such as why MIR does not currently use SSA.)

Reducing Rust to a simple core

MIR reduces Rust down to a simple core, removing almost all of the Rust syntax that you use every day, such as for loops, match expressions, and even method calls. Instead, those constructs are translated to a small set of primitives. This does not mean that MIR is a subset of Rust. As we’ll see, many of these primitive operations are not available in real Rust. This is because those primitives could be misused to write unsafe or undesirable programs.

The simple core language that MIR supports is not something you would want to program in. In fact, it makes things almost painfully explicit. But it’s great if you want to write a type-checker or generate assembly code, as you now only have to handle the core operations that remain after MIR translation.

To see what I mean, let’s start by simplifying a fragment of Rust code. At first, we’ll just break the Rust down into “simpler Rust”, but eventually we’ll step away from Rust altogether and into MIR code.

Our Rust example starts out as this simple for loop, which iterates over all the elements in a vector and processes them one by one:

for elem in vec { process(elem); }

Rust itself offers three kinds of loops: for loops, like this one; while and while let loops, that iterate until some condition is met; and finally the simple loop, which just iterates until you break out of it. Each of these kinds of loops encapsulates a particular pattern, so they are quite useful when writing code. But for MIR, we’d like to reduce all of these into one core concept.
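The equivalence among the three forms is easy to check in ordinary Rust. Here is a small self-contained sketch, where the `process` step is replaced by collecting elements so the three versions can be compared directly:

```rust
// The same iteration written three ways; each returns the collected elements.
fn with_for(vec: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    // `for` loop: the most common form.
    for elem in vec {
        out.push(*elem);
    }
    out
}

fn with_while_let(vec: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    // `while let` loop over an explicit iterator.
    let mut iter = vec.iter();
    while let Some(elem) = iter.next() {
        out.push(*elem);
    }
    out
}

fn with_loop(vec: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    // Bare `loop`, iterating until we `break` out.
    let mut iter = vec.iter();
    loop {
        match iter.next() {
            Some(elem) => out.push(*elem),
            None => break,
        }
    }
    out
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(with_for(&v), with_while_let(&v));
    assert_eq!(with_while_let(&v), with_loop(&v));
    println!("{:?}", with_loop(&v)); // [1, 2, 3]
}
```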

A for loop in Rust works by converting a value into an iterator and then repeatedly calling next on that iterator. That means that we can rewrite the for loop we saw before into a while let loop that looks like this:

let mut iterator = vec.into_iter();
while let Some(elem) = iterator.next() {
    process(elem);
}

By applying this rewriting, we can remove all for loops, but that still leaves multiple kinds of loops. So next we can imagine rewriting all while let loops into a simple loop combined with a match:

let mut iterator = vec.into_iter();
loop {
    match iterator.next() {
        Some(elem) => process(elem),
        None => break,
    }
}

We’ve already eliminated two constructs (for loops and while loops), but we can go further still. Let’s turn from loops for a bit to look at the method calls that we see. In Rust, method calls like vec.into_iter() and iterator.next() are also a kind of syntactic sugar. These particular methods are defined in traits, which are basically pre-defined interfaces. For example, into_iter is a method in the IntoIterator trait. Types which can be converted into iterators implement that trait and define how the into_iter method works for them. Similarly, next is defined in the Iterator trait. When you write a method call like iterator.next(), the Rust compiler automatically figures out which trait the method belongs to based on the type of the iterator and the set of traits in scope. But if we prefer to be more explicit, we could instead invoke the methods in the trait directly, using function call syntax:

// Rather than `vec.into_iter()`, we are calling
// the function `IntoIterator::into_iter`. This is
// exactly equivalent, just more explicit.
let mut iterator = IntoIterator::into_iter(vec);
loop {
    // Similarly, `iterator.next()` can be rewritten
    // to make clear which trait the `next` method
    // comes from. We see here that the `.` notation
    // was also adding an implicit mutable reference,
    // which is now made explicit.
    match Iterator::next(&mut iterator) {
        Some(elem) => process(elem),
        None => break,
    }
}

At this point, we’ve managed to reduce the set of language features for our little fragment quite a bit: we now only use loop loops and we don’t use method calls. But we could reduce the set of concepts further if we moved away from loop and break and towards something more fundamental: goto. Using goto we could transform the previous code example into something like this:

let mut iterator = IntoIterator::into_iter(vec);
loop:
    match Iterator::next(&mut iterator) {
        Some(elem) => {
            process(elem);
            goto loop;
        }
        None => {
            goto break;
        }
    }
break:
    ...

We’ve gotten pretty far in breaking our example down into simpler constructs. We’re not quite done yet, but before we go further it’s worth stepping back a second to make a few observations:

Some MIR primitives are more powerful than the structured construct they replace. Introducing the goto keyword is a big simplification in one sense: it unifies and replaces a large number of control-flow keywords. goto completely replaces loop, break, and continue, but it also allows us to simplify if and match as well (we’ll see more on match in particular in a bit). However, this simplification is only possible because goto is a more general construct than loop, and it’s something we would not want to introduce into the language proper, because we don’t want people to be able to write spaghetti-like code with complex control-flow that is hard to read and follow later. But it’s fine to have such a construct in MIR, because we know that it will only be used in particular ways, such as to express a loop or a break.

MIR construction is type-driven. We saw that all method calls like iterator.next() can be desugared into fully qualified function calls like Iterator::next(&mut iterator). However, doing this rewrite is only possible with full type information, since we must (for example) know the type of iterator to determine which trait the next method comes from. In general, constructing MIR is only possible after type-checking is done.

MIR makes all types explicit. Since we are constructing MIR after the main type-checking is done, MIR can include full type information. This is useful for analyses like the borrow checker, which require the types of local variables and so forth to operate, but also means we can run the type-checker periodically as a kind of sanity check to ensure that the MIR is well-formed.

Control-flow graphs

In the previous section, I presented a gradual “deconstruction” of a Rust program into something resembling MIR, but we stayed in textual form. Internally to the compiler, though, we never “parse” MIR or have it in textual form. Instead, we represent MIR as a set of data structures encoding a control-flow graph (CFG). If you’ve ever used a flow-chart, then the concept of a control-flow graph will be pretty familiar to you. It’s a representation of your program that exposes the underlying control flow in a very clear way.

A control-flow graph is structured as a set of basic blocks connected by edges. Each basic block contains a sequence of statements and ends in a terminator, which defines how the blocks are connected to one another. When using a control-flow graph, a loop simply appears as a cycle in the graph, and the break keyword translates into a path out of that cycle.
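The structure just described is easy to sketch in Rust. The type and field names below are hypothetical, not the compiler’s actual data structures; the point is just that each block carries its statements plus a terminator, and the graph’s edges can be read off the terminators:

```rust
// Hypothetical CFG data structures, loosely following the shape
// described above (statements kept as strings for illustration).
#[derive(Debug, PartialEq)]
enum Terminator {
    Goto { target: usize },            // unconditional jump to another block
    SwitchInt { targets: Vec<usize> }, // multi-way branch (e.g. from a match)
    Return,                            // exit the function
}

struct BasicBlock {
    statements: Vec<String>,
    terminator: Terminator,
}

struct Cfg {
    blocks: Vec<BasicBlock>,
}

impl Cfg {
    // The edges out of a block are read off its terminator.
    fn successors(&self, b: usize) -> Vec<usize> {
        match &self.blocks[b].terminator {
            Terminator::Goto { target } => vec![*target],
            Terminator::SwitchInt { targets } => targets.clone(),
            Terminator::Return => vec![],
        }
    }
}

fn main() {
    // A loop appears as a cycle: block 1 jumps back to block 0,
    // and the `break` path is the edge from block 0 to block 2.
    let cfg = Cfg {
        blocks: vec![
            BasicBlock {
                statements: vec!["tmp = next(iter)".into()],
                terminator: Terminator::SwitchInt { targets: vec![1, 2] },
            },
            BasicBlock {
                statements: vec!["process(elem)".into()],
                terminator: Terminator::Goto { target: 0 },
            },
            BasicBlock {
                statements: vec![],
                terminator: Terminator::Return,
            },
        ],
    };
    assert_eq!(cfg.successors(0), vec![1, 2]);
    assert_eq!(cfg.successors(1), vec![0]); // the back-edge forms the loop
    println!("ok");
}
```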

Here is the running example from the previous section, expressed as a control-flow graph:

Control-flow-graph

Building a control-flow graph is typically a first step for any kind of flow-sensitive analysis. It’s also a natural match for LLVM IR, which is also structured into control-flow graph form. The fact that MIR and LLVM correspond to one another fairly closely makes translation quite straight-forward. It also eliminates a vector for bugs: in today’s compiler, the control-flow graph used for analyses is not necessarily the same as the one which results from LLVM construction, which can lead to incorrect programs being accepted.

Simplifying match expressions

The example in the previous section showed how we can reduce all of Rust’s loops into, effectively, gotos in the MIR, and how we can remove method calls in favor of explicit calls to trait functions. But it glossed over one detail: match expressions.

One of the big goals in MIR was to simplify match expressions into a very small core of operations. We do this by introducing two constructs that the main language does not include: switches and variant downcasts. Like goto, these are things that we would not want in the base language, because they can be misused to write bad code; but they are perfectly fine in MIR.

It’s probably easiest to explain match handling by example. Let’s consider the match expression we saw in the previous section:

match Iterator::next(&mut iterator) {
    Some(elem) => process(elem),
    None => break,
}

Here, the result of calling next is of type Option<T>, where T is the type of the elements. The match expression is thus doing two things: first, it is determining whether this Option was a value with the Some or None variant. Then, in the case of the Some variant, it is extracting the value elem out.

In normal Rust, these two operations are intentionally coupled, because we don’t want you to read the data from an Option unless it has the Some variant (to do otherwise would be effectively a C union, where reads are not checked for correctness).

In MIR, though, we separate the checking of the variant from the extracting of the data. I’m going to give the equivalent of MIR here first in a kind of pseudo-code, since there is no actual Rust syntax for these operations:

loop:
    // Put the value we are matching on into a temporary variable.
    let tmp = Iterator::next(&mut iterator);

    // Next, we "switch" on the value to determine which variant it has.
    switch tmp {
        Some => {
            // If this is a Some, we can extract the element out
            // by "downcasting". This effectively asserts that
            // the value `tmp` is of the Some variant.
            let elem = (tmp as Some).0;

            // The user's original code:
            process(elem);

            goto loop;
        }
        None => {
            goto break;
        }
    }
break:
    ...

Of course, the actual MIR is based on a control-flow-graph, so it would look something like this:

Loop-break control-flow graph

Explicit drops and panics

So now we’ve seen how we can remove loops, method calls, and matches from the MIR, replacing them with simpler equivalents. But there is still one key area we can simplify. Interestingly, it’s something that happens almost invisibly in the code today: running destructors and cleanup in the case of a panic.

In the example control-flow-graph we saw before, we were assuming that all of the code would execute successfully. But in reality, we can’t know that. For example, any of the function calls that we see could panic, which would trigger the start of unwinding. As we unwind the stack, we would have to run destructors for any values we find. Figuring out precisely which local variables should be freed at each point of panic is actually somewhat complex, so we would like to make it explicit in the MIR: this way, MIR construction has to figure it out, but later passes can just rely on the MIR.

The way we do this is two-fold. First, we make drops explicit in the MIR. Drop is the term we use for running the destructor on a value. In MIR, whenever control-flow passes a point where a value should be dropped, we add in a special drop(...) operation. Second, we add explicit edges in the control-flow graph to represent potential panics, and the cleanup that we have to do.

Let’s look at the explicit drops first. If you recall, we started with an example that was just a for loop:

for elem in vec { process(elem); }

We then transformed this for loop to explicitly invoke IntoIterator::into_iter(vec), yielding a value iterator, from which we extract the various elements. Well, this value iterator actually has a destructor, and it will need to be freed (in this case, its job is to free the memory that was used by the vector vec; this memory is no longer needed, since we’ve finished iterating over the vector). Using the drop operation, we can adjust our MIR control-flow-graph to show explicitly where the iterator value gets freed. Take a look at the new graph, and in particular what happens when a None variant is found:

Drop control-flow graph

Here we see that, when the loop exits normally, we will drop the iterator once it has finished. But what about if a panic occurs? Any of the function calls we see here could panic, after all. To account for that, we introduce panic edges into the graph:

Panic control-flow graph

Here we have introduced panic edges onto each of the function calls. By looking at these edges, you can see that if the call to next or process should panic, then we will drop the variable iterator; but if the call to into_iter panics, the iterator hasn’t been initialized yet, so it should not be dropped.

One interesting wrinkle: we recently approved RFC 1513, which allows an application to specify that panics should be treated as calls to abort, rather than triggering unwinding. If the program is being compiled with “panic as abort” semantics, then this too would be reflected in the MIR, as the panic edges and handling would simply be absent from the graph.

Viewing MIR on play

At this point, we’ve reduced our example into something fairly close to what MIR actually looks like. If you’d like to see for yourself, you can view the MIR for our example on play.rust-lang.org. Just follow this link and then press the “MIR” button along the top. You’ll wind up seeing the MIR for several functions, so you have to search through to find the start of the example fn. (I won’t reproduce the output here, as it is fairly lengthy.) In the compiler itself, you can also enable graphviz output.

Drops and stack flags

By now I think you have a feeling for how MIR represents a simplified Rust. Let’s look at one example of where MIR will allow us to implement a long-awaited improvement to Rust: the shift to non-zeroing drop. This is a change to how we detect when destructors must execute, particularly when values are only sometimes moved. This move was proposed (and approved) in RFC 320, but it has yet to be implemented. This is primarily because doing it on the pre-MIR compiler was architecturally challenging.

To better understand what the feature is, consider this function send_if, which conditionally sends a vector to another thread:

fn send_if(data: Vec<Data>) {
    // If `some_condition` returns *true*, then ownership of `data`
    // moves into the `send_to_other_thread` function, and hence
    // we should not free it (the other thread will free it).
    if some_condition(&data) {
        send_to_other_thread(data);
    }

    post_send();

    // If `some_condition` returned *false*, the ownership of `data`
    // remains with `send_if`, which means that the `data` vector
    // should be freed here, when we return.
}

The key point, as indicated in the comments, is that we can’t know statically whether we ought to free data or not. It depends on whether we entered the if or not.

To handle this scenario today, the compiler uses zeroing. Or, more accurately, overwriting. What this means is that, if ownership of data is moved, we will overwrite the stack slot for data with a specific, distinctive bit pattern that is not a valid pointer (we used to use zeroes, so we usually call this zeroing, but we’ve since shifted to something different). Then, when it’s time to free data, we check whether it was overwritten. (As an aside, this is roughly the same thing that the equivalent C++ code would do.)

But we’d like to do better than that. What we would like to do is to use boolean flags on the stack that tell us what needs to be freed. So that might look something like this:

fn send_if(data: Vec<Data>) {
    let mut data_is_owned = true;

    if some_condition(&data) {
        send_to_other_thread(data);
        data_is_owned = false;
    }

    post_send();

    // Free `data`, but only if we still own it:
    if data_is_owned {
        mem::drop(data);
    }
}

Of course, you couldn’t write code like this in Rust. You’re not allowed to access the variable data after the if, since it might have been moved. (This is yet another example of where we can do things in MIR that we would not want to allow in full Rust.)

Using boolean stack flags like this has a lot of advantages. For one, it’s more efficient: instead of overwriting the entire vector, we only have to set the one flag. But also, it’s easier to optimize: imagine that, through inlining or some other means, the compiler was able to determine that some_condition would always be true. In that case, standard constant propagation techniques would tell us that data_is_owned is always false, and hence we can just optimize away the entire call to mem::drop, resulting in tighter code. See RFC 320 for more details on that.

However, implementing this optimization properly on the current compiler architecture is quite difficult. With MIR, it becomes relatively straightforward. The MIR control-flow-graph tells us explicitly where values will be dropped and when. When MIR is first generated, we assume that dropping moved data has no effect – roughly like the current overwriting semantics. So this means that the MIR for send_if might look like this (for simplicity, I’ll ignore unwinding edges).

Non-zeroing drop example

We can then transform this graph by identifying each place where data is moved or dropped and checking whether any of those places can reach one another. In this case, the send_to_other_thread(data) block can reach drop(data). This indicates that we will need to introduce a flag, which can be done rather mechanically:

Non-zeroing drop with flags

Finally, we can apply standard compiler techniques to optimize this flag (but in this case, the flag is needed, and so the final result would be the same).
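The “can a move reach a drop?” check described above is plain graph reachability over the control-flow graph. Here is a minimal sketch; the block layout in the comments is a hypothetical reading of `send_if`, not the compiler’s actual MIR:

```rust
use std::collections::VecDeque;

// Breadth-first reachability over a CFG given as adjacency lists:
// can we get from block `from` to block `to` by following edges?
fn reachable(succ: &[Vec<usize>], from: usize, to: usize) -> bool {
    let mut seen = vec![false; succ.len()];
    let mut queue = VecDeque::from([from]);
    while let Some(b) = queue.pop_front() {
        if b == to {
            return true;
        }
        if !seen[b] {
            seen[b] = true; // guard against cycles (loops in the CFG)
            queue.extend(&succ[b]);
        }
    }
    false
}

fn main() {
    // Hypothetical block layout for `send_if`:
    //   0: entry, tests some_condition   -> 1 (then) or 2 (join)
    //   1: send_to_other_thread(data)    -> 2
    //   2: post_send(); drop(data)       -> (return)
    let succ = vec![vec![1, 2], vec![2], vec![]];
    // The move in block 1 can reach the drop in block 2,
    // so a drop flag must be introduced:
    assert!(reachable(&succ, 1, 2));
    println!("flag needed: {}", reachable(&succ, 1, 2));
}
```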

Just to drive home why MIR is useful, let’s consider a variation on the send_if function called send_if2. This variation checks some condition and, if it is met, sends the data to another thread for processing. Otherwise, it processes it locally:

fn send_if2(data: Vec<Data>) {
    if some_condition(&data) {
        send_to_other_thread(data);
        return;
    }

    process(&data);
}

This would generate MIR like:

Control-flow graph for send_if2

As before, we still generate the drops of data in all cases, at least to start. Since there are still moves that can later reach a drop, we could now introduce a stack flag variable, just as before:

send_if2 with flags

But in this case, if we apply constant propagation, we can see that at each point where we test data_is_owned, we know statically whether it is true or false, which would allow us to remove the stack flag and optimize the graph above, yielding this result:

Optimized send_if2
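In source terms, the optimized result is roughly equivalent to the following sketch. Stand-in types and stubs again, not the compiler’s output; `send_to_other_thread` leaks its argument to model the move to another thread.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts local drops, so the two paths are observable.
static LOCAL_DROPS: AtomicUsize = AtomicUsize::new(0);

struct Data(i32);

impl Drop for Data {
    fn drop(&mut self) {
        LOCAL_DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn some_condition(data: &[Data]) -> bool {
    data.first().map_or(false, |d| d.0 > 0)
}

// Stand-in: pretend another thread now owns the data.
fn send_to_other_thread(data: Vec<Data>) {
    std::mem::forget(data);
}

fn process(_data: &[Data]) {}

fn send_if2(data: Vec<Data>) {
    if some_condition(&data) {
        // The flag was statically false here, so no local drop is
        // emitted: ownership has moved to the other thread.
        send_to_other_thread(data);
        return;
    }
    process(&data);
    // The flag was statically true here, so `data` is dropped
    // unconditionally, with no runtime check.
    drop(data);
}
```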

Conclusion

I expect the use of MIR to be quite transformative in terms of what the compiler can accomplish. By reducing the language to a core set of primitives, MIR opens the door to a number of language improvements. We looked at drop flags in this post. Another example is improving Rust’s lifetime system to leverage the control-flow-graph for better precision. But I think there will be many applications that we haven’t foreseen. In fact, one such example has already arisen: Scott Olson has been making great strides developing a MIR interpreter miri, and the techniques it is exploring may well form the basis for a more powerful constant evaluator in the compiler itself.

The transition to MIR in the compiler is not yet complete, but it’s getting quite close. Special thanks go out to Simonas Kazlauskas (nagisa) and Eduard-Mihai Burtescu (eddyb), who have both had a particularly large impact on pushing MIR towards the finish line. Our initial goal is to switch our LLVM generation to operate exclusively from the MIR. Work is also proceeding on porting the borrow checker. After that, I expect we will port a number of other pieces of the compiler that are currently using the HIR. If you’d be interested in contributing, look for issues tagged with A-mir or ask around in the #rustc channel on IRC.

Categorieën: Mozilla-nl planet

Chris Cooper: RelEng & RelOps Weekly highlights - April 18, 2016

Mozilla planet - mo, 18/04/2016 - 20:28

SF2 Balrog character select portrait

“My update requests have your blood on them.”

This is release candidate week, traditionally one of the busiest times for releng. Your patience is appreciated.

Improve Release Pipeline:

Varun began work on improving Balrog’s backend to make multifile responses (such as GMP) easier to understand and configure. Historically it has been hard for releng to enlist much help from the community due to the access restrictions inherent in our systems. Kudos to Ben for finding suitable community projects in the Balrog space, and then more importantly, finding the time to mentor Varun and others through the work.

Improve CI Pipeline:

With build promotion well underway for the upcoming Firefox 46 release, releng is switching gears and jumping into the TaskCluster migration with both feet. Kim and Mihai will be working full-time on migration efforts, and many others within releng have smaller roles. There is still a lot of work to do just to migrate all existing Linux workloads into TaskCluster, and that will be our focus for the next 3 months.

Release:

We started doing the uplifts for the Firefox 46 release cycle late last week. Release candidate builds should be starting soon. As mentioned above, this is the first non-beta release of Firefox to use the new build promotion process.

Last week, we shipped Firefox and Fennec 45.0.2 and 46.0b10, Firefox 45.0.2esr and Thunderbird 45.0. For further details, check out the release notes here:

See you next week!


Air Mozilla: Mozilla Weekly Project Meeting, 18 Apr 2016

Mozilla planet - mo, 18/04/2016 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


Nathan Froyd: rr talk post-mortem

Mozilla planet - mo, 18/04/2016 - 18:43

On Wednesday last week, I gave an invited talk on rr to a group of interested students and faculty at Rose-Hulman. The slides I used are available, though I doubt they make a lot of sense without the talk itself to go with them. Things I was pleased with:

  • I didn’t overrun my time limit, which was pretty satisfying.  I would have liked to have an hour (40 minutes talk/20 minutes for questions or overrun), but the slot was for a standard class period of 50 minutes.  I also wanted to leave some time for questions at the end, of which there were a few. Despite the talk being scheduled for the last class period of the day, it was well-attended.
  • The slides worked well.  My slides are inspired by Lawrence Lessig’s style of presenting, which I also used for my lightning talk in Orlando.  It forces you to think about what you’re putting on each slide and make each slide count.  (I realize I didn’t use this for my Gecko onboarding presentation; I’m not sure if the Lessig method would work for things like that.  Maybe at the next onboarding…)
  • The level of sophistication was just about right, and I think the story approach to creating rr helped guide people through the presentation.  At least, it didn’t look as though many people were nodding off or completely confused, despite rr being a complex systems-heavy program.

Most of the above I credit to practicing the talk repeatedly.  I forget where I heard it, but a rule of thumb I use for presentations is 10 hours of prep time minimum (!) for every 1 hour of talk time.  The prep time always winds up helping: improving the material, refining the presentation, and boosting my confidence giving the presentation.  Despite all that practice, opportunities for improvement remain:

  • The talk could have used any amount of introduction on “here’s how debuggers work”.  This is kind of old hat to me, but I realized after the fact that to many students (perhaps even some faculty), blithely asserting that rr can start and stop threads at will, for instance, might seem mysterious.  A slide or two on the differences between how rr record works vs. how rr replay works and interacts with GDB would have been clarifying as well.
  • The above is an instance where a diagram or two might have been helpful.  I dislike putting diagrams in my talks because I dislike the thought of spending all that time to find a decent, simple app for drawing things, actually drawing them, and then exporting a non-awful version into a presentation.  It’s just a hurdle that I have to clear once, though, so I should just get over it.
  • Checkpointing and the actual mechanisms by which rr can run forwards or backwards in your program got short shrift and should have been explained in a little more detail.  (Diagrams again…)  Perhaps not surprisingly, the checkpointing material got added later during the talk prep and therefore didn’t get practiced as much.
  • The demo received very little practice (I’m sensing a theme here) and while it was able to show off a few of rr’s capabilities, it wasn’t very polished or impressive.  Part of that is due to rr mysteriously deciding to cease working on my virtual machine, but part of that was just my own laziness and assuming things would work out just fine at the actual talk.  Always practice!

Allen Wirfs-Brock: Slide Bit: From Chaos

Mozilla planet - mo, 18/04/2016 - 18:25

fromchaos

At the beginning of a new computing era, it’s fairly easy to sketch a long-term vision of the era. All it takes is knowledge of current technical trajectories and a bit of imagination. But it’s impossible to predict any of the essential details of how it will actually play out.

Technical, business, and social innovation is rampant in the early years of a new era. Chaotic interactions drive the churn of innovation. The winners that will emerge from this churn are unpredictable. Serendipity is as much a factor as merit. But eventually, the stable pillars of the new era will emerge from the chaos. There are no guarantees of success, but for innovators, right now is your best opportunity to impact the ultimate form of the Ambient Computing Era.


Kartikaya Gupta: Using multiple keyboards

Mozilla planet - mo, 18/04/2016 - 16:32

When typing on a laptop keyboard, I find that my posture tends to get very closed and hunched. To fix this I resurrected an old low-tech solution I had for this problem: using two keyboards. Simply plug in an external USB keyboard, and use one keyboard for each hand. It's like a split keyboard, but better, because you can position it wherever you want to get a posture that's comfortable for you.

I used to do this on a Windows machine back when I was working at RIM and it worked great. Recently I tried to do it on my Mac laptop, but ran into the problem where the modifier state from one keyboard didn't apply to the other keyboard. So holding shift on one keyboard and T on the other wouldn't produce an uppercase T. This was quite annoying, and it seems to be an OS-level thing. After some googling I found Karabiner which solves this problem. Well, really it appears to be a more general keyboard customization tool, but the default configuration also combines keys across keyboards which is exactly what I wanted. \o/

Of course, changing your posture won't magically fix everything - moving around regularly is still the best way to go, but for me personally, this helps a bit :)


QMO: Firefox 47.0 Aurora Testday Results

Mozilla planet - mo, 18/04/2016 - 10:42

Hello Mozillians!

As you may already know, last Friday – April 15th – we held a new successful Testday event, for Firefox 47.0 Aurora.

Results:

We’d like to take this opportunity to say a big THANK YOU to Teodora Vermesan, Chandrakant Dhutadmal, gaby2300, Moin Shaikh, Juan David Patiño, Luis Fernandez, Vignesh Kumar, Ilse Macías, Iryna Thompson and to our amazing Bangladesh QA Community: Hossain Al Ikram, Rezaul Huque Nayeem, Azmina Akter Papeya, Saheda Reza Antora, Raihan Ali, Khalid Syfullah Zaman, Sajedul Islam, Samad Talukdar, John Sujoy, Saddam Hossain, Asiful Kabir Heemel, Roman Syed, Md. Tanvir Ahmed, Md Rakibul Islam, Anik Roy, Kazi Nuzhat Tasnem, Sauradeep Dutta, Kazi Sakib Ahmad, Maruf Rahman, Shanewas Niloy, Tanvir Rahman, Tazin Ahmed, Mohammed Jawad Ibne Ishaque, A.B.M.Nashrif , Fahim, Mohammad Maruf Islam, akash, Zayed News, Forhad Hossain, Md.Tarikul Islam Oashi, Sajal Ahmed, Fahmida Noor, Mubina Rahaman Jerin and Md.Faysal Alam Riyad for getting involved in this event and making Firefox as best as it could be.

Also, many thanks to all our active moderators.

Keep an eye on QMO for upcoming events! 


Emily Dunham: Persona and third-party cookies in Firefox

Mozilla planet - mo, 18/04/2016 - 09:00
Persona and third-party cookies in Firefox

Although its front page claims we’ve deprecated Persona, it’s the only way to log into the statusboard and Air Mozilla. For a long time, I was unable to log into any site using Persona from Firefox 43 and 44 because of an error about my browser not being configured to accept third-party cookies.

The support article on the topic says that checking the “always accept cookies” box should fix the problem. I tried setting “accept third-party cookies” to “Always”, and yet the error persisted. (setting the top-level history configuration to “always remember history” didn’t affect the error either).

Fortunately, there’s also an “Exceptions” button by the “Accept cookies from sites” checkbox. Editing the exceptions list to universally allow “http://persona.org” lets me use Persona in Firefox normally.

_static/persona-exception.png

That’s the fix, but I don’t know whose bug it is. Did Firefox mis-balance privacy against convenience? Is the “always accept third-party cookies” setting’s failure to accept a cookie without an exception some strange edge case of a broken regex? Is Persona in the wrong for using a design that requires third-party cookies at all? Who knows!


Frédéric Harper: The Day I Wanted to Kill Myself

Mozilla planet - mo, 18/04/2016 - 02:54
My semicolon tattoo

My semicolon tattoo

A little more than a year ago, my life was perfect. At least, from my own point of view. I was engaged to the most formidable woman I ever knew and we were living in our beautiful spacious condominium with our kids, our three cats. People were paying me very well to share my passion about technology and travel all over the world to help developers succeed with their projects. My friends and family were an important part of my life and even if I wasn’t the healthiest man on earth, I had no real health issues. I was happy and I couldn’t ask for more… Until my world collapsed.

I was 4500 kilometers away from home when I learned that the woman of my life, the one I spent one fourth of my young existence with, was leaving me. That was the end of my world! As if it wasn’t enough to lose the person you share your life with, some people I considered friends ran away from me: sad Fred is no fun, and obviously, when there is a separation, people feel the need to “take a side”. Right before, I had realized that the company I was working for wasn’t the right one for me, so I decided to resign: I wasn’t able to deliver as I should with everything else in the equation. Of course, I had no savings, and it’s at that exact moment that we had water damage in our building and I had to pay out a couple of thousands for the renovation. During that period, I sank deeply and very quickly: someone called Depression knocked at my door.

For months, I was going deeper into the rabbit hole. Everything was hard to achieve, and uninteresting: even taking my shower was a real exploit. I was staying home, doing nothing except eating shitty food, gaining weight and watching Netflix. I’ve always considered myself a social beast, but even just seeing my best friend was painful and unpleasant. I didn’t want to talk to people at all. I didn’t want to see people. I didn’t need help, even if I felt I was a failure. My life was a failure. During that period, my “happiest” moments were when I was at a bar, drinking: alcohol was making me numb, making me forget that I was swimming in the dark all day long. Obviously, that tasty nectar called beer wasn’t helping me at all: it was taking me deeper than I was, and like any stupid human, I was trying to get back my love in the most ineffective way ever, to stay polite with myself. On top of that, even with good will, everyone was giving me shitty advice (side note: in that situation, the only thing you should do is be there for the other person – you don’t know what they are going through, and please, don’t give advice, just be there). That piece of shit I was seeing in the mirror couldn’t have been me: I was strong. I’ve always done everything in my life to be happy: why was I not able to make the necessary changes to get back on my feet? Something was pulling me to the bottom and putting weight on my shoulders. I wasn’t happy anymore, my life wasn’t valuable anymore. Maybe the solution was to kill myself?

Seriously, why live in a world where the woman I wanted to spend the rest of my life with, the woman who wanted to spend the rest of her life with me, was running away from me? Why live in a world where people were spitting on me, not literally, in that lovely world that is social media? Why live in a world where the job I thought I was born for was maybe not made for me? You know that thing called impostor syndrome? I wasn’t happy and I wasn’t seeing the light at the end of the tunnel. I had no more strength left. I hadn’t had a proper night of sleep in weeks, nor a healthy meal in what felt like forever. I won’t even talk about exercise… I was practically dead already, so one night, I drank like never before, and had the marvellous idea to nearly harass my former fiancée: I wanted her back. She turned off her phone, and that was it: I decided it was the end. I was an asshole. I had enough. I wasn’t able to take any more of that shit that is life. Fortunately, I blacked out, being truly intoxicated, before doing anything irreparable… until the cops knocked at my door. They were there to check if I was still alive. Two cops, at my door, wanting to see if I was alive. Can you imagine? I’m pretty sure you can’t. I was shaking and nearly crying: they were ready to smash my door in, and I answered just a second before they did. I had reached a point where people who still cared about me were worried enough to call the police. Can you imagine again? Worrying the people you love so much that they need to take drastic actions like this? I was terrified. I. Was. Terrified. Not about the cops, but about me… I was at a breaking point! Fuck…

At that exact moment, I decided I needed to try to take care of myself. I started to see a psychologist twice a week. My doctor prescribed me antidepressants and pills to help me sleep a bit. Until then, I hadn’t taken any medicine for my severe attention deficit disorder (ADD – ADHD, with hyperactivity, in my case) that was diagnosed years ago, but I asked my doctor to add this to the cocktail of pills she was giving me. I also forced myself to see my close friends and I stopped taking anything containing alcohol. It was a complete turnaround: anything that was helping me to see some light out of that terrible time of my life was part of my plan. Actually, I didn’t have any plan, I just wanted to run away from that scariest part of me. I even started to write in a personal diary every time I had a difficult thought in my mind, which was more than once daily. It wasn’t easy. I wasn’t happy, but I was scared. I was scared to get back to that moment when the only plausible idea was to end my life. The fright was bigger than the sadness, trust me. Baby steps were made to go forward. It was, and still is, the biggest challenge I have ever had in my life.

One evening, I was with my best friend at a Jean Leloup show: for a small moment, the first time in months, I was having fun. I was smiling! And I started to cry… I realized that if I had killed myself, I wouldn’t have been able to be there, with a man who has loved me as a friend for eighteen years and supported me like no one during that difficult time. I wouldn’t have been able to be there, singing and dancing to the music I love so much… At that exact moment, I knew I was starting to slowly get back on my feet. I knew that it wasn’t only the right thing to do, it was the thing to do. Thanks to my parents, my friends and the health professionals, I was finally feeling like my life was improving. It was a work in progress, but I was going in the right direction.

Still today, life isn’t easy. Life is continuing to throw rocks at me, like my mother getting a diagnosis of Alzheimer’s and, this week, a cancer. I’m still trying to fix parts of my life, trying to find myself, but I can smile now, most of the time. It’s a constant battle, but I now know it’s worth it. Anyhow, I have a mental illness and I’m not ashamed of it anymore: I’m not ashamed anymore of what happened! I’m putting all the effort I can into making my life better. Again, it’s not easy, but one small step at a time, I’m getting better. Since then, there has been a semicolon tattoo on my wrist (picture above) to remind me that life is precious. That my life is precious. I could’ve ended my life, like an author ending a sentence with a comma, but I chose not to: my story isn’t over…

P.S.: If you have suicidal thoughts or feel like you are going through what I’ve lived through, please call a friend. If you don’t want to or can’t, call Suicide Action Montreal at 514-723-4000 or check the hotline number in your country. You deserve better. You deserve to live!


The Servo Blog: This Week In Servo 60

Mozilla planet - mo, 18/04/2016 - 02:30

In the last week, we landed 120 PRs in the Servo organization’s repositories.

We have cancelled the weekly meeting. In order to ensure we still make the same or more information available, This Week in Servo will be extended to include planning/status information, call out any other ad-hoc meetings that occur during the week, and mention any notable threads on the mailing list.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions
  • KiChjang implemented the CORS Preflight Fetch algorithm
  • mbrubeck fixed some margin collapse code in layout
  • larsberg added a Windows buildbot
  • notriddle avoided propagation of floated layout elements into or out of absolutely positioned ones
  • mrobinson removed the concept of stacking levels for display lists
  • edunham packaged up our tidy script and published it to PyPI
  • ajeffrey added panic messages to failures
  • fitzgen made the profiling data take the stdout lock to avoid jumbled output
  • manish upgraded the Rust compiler to 2016-04-12
  • bholley avoided in-memory movement of Gecko style structs
  • manish reduced the number of threads Servo uses just for panic handling
  • izgzhen implemented the first parts of window.performance.timing
  • danlrobertson implemented flexbox reordering
  • pcwalton disallowed margins from collapsing through block formatting contexts in layout
  • kaksmet fixed sandboxing on OSX
  • frewsxcv implemented rowIndex and sectionRowIndex on <tr>
  • nox continued the SpiderMonkey update, ./mach test-css now passes on the smup branch
  • yoava333 corrected a Windows panic when resolving file URLs
  • jack toggled more SpiderMonkey options that improve performance on JS benchmarks
  • dzbarsky enabled a huge swath of WebGL conformance tests after a heroic struggle
  • DDEFISHER implemented support for responding to basic HTTP authorization requests
  • perlun extracted the monolithic parts of our Mako-based code generation into separate Python files
New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

The University of Szeged team continues their awesome work on WebBluetooth support, with a neat demo video!

Meetings and Mailing List

Last week, we had a meeting with the Firefox developer tools team discussing protocol plans, Devtools.html, architectural design, and some potential future features brainstorming.


Karl Dubost: [worklog] Ganesh, remover of obstacles

Mozilla planet - mo, 18/04/2016 - 01:01

Earthquake in Kumamoto (green circle is where I live). For people away from Japan, it always sounds like a scary thing. This one was large, but Japan is a long series of islands and where I live was far from the earthquake. If you want a comparison, it’s a bit like people in Japan worrying about people in New York because of an earthquake in Miami, or about people in Oslo because of an earthquake in Athens. Time to remove any obstacles, so Ganesh.

Tune of the week: Deva Shree Ganesha - Agneepath.

Webcompat Life

Progress this week:

Today: 2016-04-18T07:52:17.560261
374 open issues
----------------------
needsinfo        3
needsdiagnosis   126
needscontact     25
contactready     95
sitewait         122
----------------------

You are welcome to participate

Preparing the London agenda.

After reducing the incoming bugs to nearly 0 every day in the previous weeks, I have now reduced the needscontact queue to around 25 issues. Most of the remaining ones are for Microsoft, Opera, or the Firefox OS community to deal with. My next target is the contactready queue, around 100 issues.

Posted my git/github workflow.

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat development

They are invisible

Some people in the Mozilla community are publishing on Medium, which means their content will disappear one day, when Medium is bought or simply sunset. It’s a bit sad. Their content is also, most of the time, not syndicated elsewhere.

Reading List
  • A must-watch video about the Web and why it is cool to develop for the Web. All along the talk I was thinking: when will native finally catch up with the Web?
  • A debugging thought process
Follow Your Nose

TODO
  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!


web-ext: Mozilla tool for developing WebExtensions - soeren-hentzschel.at

News collected via Google - snein, 17/04/2016 - 22:23

web-ext: Mozilla tool for developing WebExtensions
soeren-hentzschel.at
With web-ext, Mozilla has introduced a new command-line tool that is helpful for developing the so-called WebExtensions, the new standard for Firefox add-ons. In August 2015, Mozilla had first officially ...

and more »Google News

Armen Zambrano: Project definition: Give Treeherder the ability to schedule TaskCluster jobs

Mozilla planet - snein, 17/04/2016 - 18:01
This is a project definition that I put up for GSoC 2016. This helps students to get started researching the project.

The main things I give in here are:

  • Background
    • Where we came from, where we are and we are heading towards
  • Goal
    • Use case for developers
  • Breakdown of components
    • Rather than all aspects being mixed and not logically separate

NOTE: This project has a few parts that carry risks and could change the implementation. It depends on close collaboration with dustin.

-----------------------------------
Mentor: armenzg
IRC: #ateam channel

Give Treeherder the ability to schedule TaskCluster jobs

This work will enable "adding new jobs" on Treeherder to work with pushes lacking TaskCluster jobs (our new continuous integration system). Read this blog post to know how the project was built for Buildbot jobs (our old continuous integration system).
The main work for this project is tracked in bug 1254325.
In order for this to work we need the following pieces:

A - Generate data source with all possible tasks

B - Teach Treeherder to use the artifact
  • This will require close collaboration with Treeherder engineers
  • This work can be done locally with a Treeherder instance
  • It can also be deployed to the “staging” version of Treeherder to do tests
  • Alternative mentor for this section: camd
C - Teach pulse_actions to listen for requests from Treeherder
  • pulse_actions is a pulse listener of Treeherder actions
  • You can see pulse_actions’ workflow in here
  • Once part B is completed, we will be able to listen for messages requesting certain TaskCluster tasks to be scheduled and we will schedule those tasks on behalf of the user
  • RISK: Depending on whether the TaskCluster actions project is completed on time, we might instead make POST requests to an API

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen Zambrano: Project definition: SETA re-write

Mozilla planet - snein, 17/04/2016 - 17:54
As an attempt to attract candidates to GSoC, I wanted to make sure that the possible projects were achievable, rather than leading students down a path of pain and struggle. It also helps me picture the order in which it makes the most sense to accomplish things.

It was also a good exercise for students, who had to read and ask questions about what was not clear, and it gave them lots to read about the project.

I want to share this and another project definition in case it is useful for others.

----------------------------------
We want to rewrite SETA to be easy to deploy through Heroku and to support TaskCluster (our new continuous integration system) [0].
Please read this document carefully before starting to ask questions. There is high interest in this project and it is burdensome to have to re-explain it to every new prospective student.
Main mentor: armenzg (#ateam)
Co-mentor: jmaher (#ateam)
Please read jmaher’s blog post carefully [1] before reading any further.
Now that you have read jmaher’s blog post, I will briefly go into some specifics.

SETA reduces the number of jobs that get scheduled on a developer’s push. A job is every single letter you see on Treeherder. For every developer’s push there is a number of these jobs scheduled. On every push, Buildbot [6] decides what to schedule depending on the data that it fetched from SETA [7].
The purpose of this project is two-fold:
  1. Write SETA as an independent project that is:
    1. maintainable
    2. more reliable
    3. automatically deployed through Heroku app
  2. Support TaskCluster, our new CI (continuous integration system)

NOTE: The current code of SETA [2] lives within a repository called ouija.
Ouija does the following for SETA:
  1. It has a cronjob which kicks in every 12 hours to scrape information about jobs from every push
  2. It puts the information about jobs (which it grabs from Treeherder) into a database

SETA then queries the database to determine which jobs should be scheduled. SETA chooses jobs that are good at reporting issues introduced by developers. SETA has its own set of tables and adds the data there for quick reference.
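The selection SETA performs can be thought of as a set-cover problem over historical data. A hedged sketch of the idea, in Rust for illustration (SETA itself is Python, and the job names below are hypothetical): each past regression records which jobs detected it, and we keep a small set of jobs that would still have caught every one.

```rust
use std::collections::HashSet;

// Greedy cover: walk the regressions and keep a job for each one that is
// not already covered by a previously kept job. A real implementation
// would pick the most valuable job rather than the first listed.
fn pick_jobs<'a>(regressions: &[Vec<&'a str>]) -> HashSet<&'a str> {
    let mut kept: HashSet<&'a str> = HashSet::new();
    for detected_by in regressions {
        // Already covered: some kept job detects this regression.
        if detected_by.iter().any(|j| kept.contains(j)) {
            continue;
        }
        // Not covered yet: keep one of the jobs that detected it.
        if let Some(j) = detected_by.first() {
            kept.insert(*j);
        }
    }
    kept
}
```

Every job not in the kept set is a candidate to be skipped on most pushes, which is exactly the saving SETA is after.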
Involved pieces for this project:
  1. Get familiar with deploying apps and using databases in Heroku
  2. Host SETA in Heroku instead of http://alertmanager.allizom.org/seta.html
    1. https://bugzilla.mozilla.org/show_bug.cgi?id=1253020
  3. Teach SETA about TaskCluster
    1. https://bugzilla.mozilla.org/show_bug.cgi?id=1243123
  4. Change the gecko decision task to reliably use SETA [5][6]
    1. If the SETA service is not available we should fall back to run all tasks/jobs
  5. Document how SETA works and auto-deployments of docs and Heroku
    1. Write automatically generated documentation
    2. Add auto-deployments to Heroku and readthedocs
  6. Add tests for SETA
    1. Add tox/travis support for tests and flake8
  7. Re-write SETA using ActiveData [3] instead of using data collected by Ouija
    1. https://bugzilla.mozilla.org/show_bug.cgi?id=1253028
  8. Make the current CI (Buildbot) use the new SETA Heroku service
    1. https://bugzilla.mozilla.org/show_bug.cgi?id=1252568
  9. Create SETA data for per-test information instead of per-job information (stretch goal)
    1. On Treeherder we have jobs that contain tests
    2. Tests re-order between those different chunks
    3. We want to run jobs at a per-directory level or per-manifest
  10. Add priorities into SETA data (stretch goal)
    1. Priority 1 gets triggered on every push
    2. Priority 2 gets triggered on every Y pushes

[0] http://docs.taskcluster.net/
[1] https://elvis314.wordpress.com/tag/seta/
[2] https://github.com/dminor/ouija/blob/master/tools/seta.py
[3] http://activedata.allizom.org/tools/query.html
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1243123
[5] https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&filter-searchStr=gecko
[6] testing/taskcluster/mach_commands.py#l280
[7] http://hg.mozilla.org/build/buildbot-configs/file/default/mozilla-tests/config_seta.py
[8] http://alertmanager.allizom.org/seta.html

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Frédéric Wang: OpenType MATH in HarfBuzz

Mozilla planet - sn, 16/04/2016 - 23:23

TL;DR:

  • Work is in progress to add OpenType MATH support in HarfBuzz and will be instrumental for many math rendering engines relying on that library, including browsers.

  • For stretchy operators, an efficient way to determine the required number of glyphs and their overlaps has been implemented and is described here.

In the context of the Igalia browser team’s effort to implement MathML support using TeX rules and OpenType features, I have started implementing OpenType MATH support in HarfBuzz. This table from the OpenType standard is made of three subtables:

  • The MathConstants table, which contains layout constants. For example, the thickness of the fraction bar of \frac{a}{b}.

  • The MathGlyphInfo table, which contains glyph properties. For instance, the italic correction indicating how slanted an integral is, e.g. to properly place the subscript in $\displaystyle\int_{D}$.

  • The MathVariants table, which provides larger size variants for a base glyph or data to build a glyph assembly. For example, either a larger parenthesis or an assembly of U+239B, U+239C, U+239D to write something like:

      $\left(\frac{\frac{\frac{a}{b}}{\frac{c}{d}}}{\frac{\frac{e}{f}}{\frac{g}{h}}}\right.$

Code to parse this table was added to Gecko and WebKit two years ago. The existing code to build glyph assemblies in these Web engines was adapted to use the MathVariants data instead of only private tables. However, as we will see below, the MathVariants data to build glyph assemblies is more general, with an arbitrary number of glyphs or with additional constraints on glyph overlaps. There are also various fallback mechanisms for old fonts and other bugs that I think we could get rid of when we move to OpenType MATH fonts only.

In order to add MathML support in Blink, it would be very easy to import the OpenType MATH parsing code from WebKit. However, after discussions with some Google developers, it seems that the best option is to add support for this table directly in HarfBuzz. Since this library is used by Gecko, by WebKit (at least the GTK port) and by many other applications such as Servo, XeTeX or LibreOffice, it makes sense to share the implementation to improve math rendering everywhere.

The idea for HarfBuzz is to add an API to

  1. Expose data from the MathConstants and MathGlyphInfo.

  2. Shape stretchy operators to some target size with the help of the MathVariants.

It is then up to a higher-level math rendering engine (e.g. a TeX or MathML rendering engine) to beautifully display mathematical formulas using this API. The design choice for exposing MathConstants and MathGlyphInfo is almost obvious from a reading of the MATH table specification. The choice for the shaping API is a bit more complex and discussion is still in progress. For example, because we want to accept stretching after glyph-level mirroring (e.g. to draw RTL clockwise integrals) we should accept any glyph and not just an input Unicode string, as is the case for other HarfBuzz shaping functions. This shaping also depends on a stretching direction (horizontal/vertical) and on a target size (and Gecko even currently has various ways to approximate that target size). Finally, we should also have a way to expose the italic correction of a glyph assembly and to approximate its preferred width for Web rendering engines.

As I mentioned at the beginning, the data and algorithm to build glyph assemblies are the most complex part of the OpenType MATH table and deserve special interest. The idea is that you have a list of $n\geq 1$ glyphs available to build the assembly. For each $0\leq i\leq n-1$, the glyph $g_{i}$ has advance $a_{i}$ in the stretch direction. Each $g_{i}$ has a straight connector part at its start (of length $s_{i}$) and at its end (of length $e_{i}$) so that we can align the glyphs on the stretch axis and glue them together. Also, some of the glyphs are “extenders”, which means that they can be repeated 0, 1 or more times to make the assembly as large as possible. Finally, the end/start connectors of consecutive glyphs must overlap by at least a fixed value $o_{\mathrm{min}}$ to avoid gaps at some resolutions, but of course without exceeding the length of the corresponding connectors. This gives some flexibility to adjust the size of the assembly and get closer to the target size $t$.

[Figure 0.1: Two adjacent glyphs in an assembly. The glyph $g_{i}$ (advance $a_{i}$, start connector $s_{i}$, end connector $e_{i}$) overlaps the next glyph $g_{i+1}$ (advance $a_{i+1}$, connectors $s_{i+1}$, $e_{i+1}$) by $o_{i,i+1}$.]

To ensure that the width/height is distributed equally and the symmetry of the shape is preserved, the MATH table specification suggests the following iterative algorithm to determine the number of extenders and the connector overlaps needed to reach a minimal target size $t$:

  1. Assemble all parts by overlapping connectors by the maximum amount, and removing all extenders. This gives the smallest possible result.

  2. Determine how much extra width/height can be distributed into all connections between neighboring parts. If that is enough to achieve the size goal, extend each connection equally by changing overlaps of connectors to finish the job.

  3. If all connections have been extended to minimum overlap and further growth is needed, add one of each extender, and repeat the process from the first step.
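As a concrete illustration, the three steps above amount to trying $r=0,1,2,\dots$ until the assembly, at minimum overlap, reaches the target. The sketch below is a hypothetical helper written for clarity, not HarfBuzz code; `advances` and `is_extender` are assumed per-glyph arrays matching the description above.

```python
def naive_min_repetitions(t, o_min, advances, is_extender):
    """Smallest repetition count r such that the assembly, with every
    connector overlapped by the minimum amount o_min, reaches size t.
    A direct, unoptimized reading of the specification's loop."""
    r = 0
    while True:
        # Each glyph copy contributes its advance minus one overlap;
        # the trailing overlap of the last copy is added back once.
        size = o_min
        for a, ext in zip(advances, is_extender):
            size += (r if ext else 1) * (a - o_min)
        if size >= t:
            return r
        r += 1
```

For instance, three glyphs of advance 10 with the middle one an extender, $o_{\mathrm{min}}=2$ and $t=40$ give sizes 18, 26, 34, 42 for $r=0,1,2,3$, so the loop stops at $r=3$.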

We note that at each step, each extender is repeated the same number of times $r\geq 0$. So if $I_{\mathrm{Ext}}$ (respectively $I_{\mathrm{NonExt}}$) is the set of indices $0\leq i\leq n-1$ such that $g_{i}$ is an extender (respectively is not an extender), we have $r_{i}=r$ (respectively $r_{i}=1$). The size we can reach at step $r$ is at most the one obtained with the minimal connector overlap $o_{\mathrm{min}}$, that is

$\sum_{i=0}^{n-1}\left(\sum_{j=1}^{r_{i}}(a_{i}-o_{\mathrm{min}})\right)+o_{\mathrm{min}}=\left(\sum_{i\in I_{\mathrm{NonExt}}}(a_{i}-o_{\mathrm{min}})\right)+\left(\sum_{i\in I_{\mathrm{Ext}}}r(a_{i}-o_{\mathrm{min}})\right)+o_{\mathrm{min}}$

We let $N_{\mathrm{Ext}}=|I_{\mathrm{Ext}}|$ and $N_{\mathrm{NonExt}}=|I_{\mathrm{NonExt}}|$ be the number of extenders and non-extenders. We also let $S_{\mathrm{Ext}}=\sum_{i\in I_{\mathrm{Ext}}}a_{i}$ and $S_{\mathrm{NonExt}}=\sum_{i\in I_{\mathrm{NonExt}}}a_{i}$ be the sums of advances for extenders and non-extenders. If we want the advance of the glyph assembly to reach the minimal size $t$ then

  $S_{\mathrm{NonExt}}-o_{\mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)+r\left(S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}\right)\geq t$

We can assume $S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}>0$, for otherwise we would be in the extreme case where the overlap takes at least the full advance of each extender. Then we obtain

  $r\geq r_{\mathrm{min}}=\max\left(0,\left\lceil\frac{t-S_{\mathrm{NonExt}}+o_{\mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)}{S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}}\right\rceil\right)$
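This closed form can be sketched in a few lines of Python (hypothetical helper names; `advances` and `is_extender` describe the glyphs $g_{i}$ as above):

```python
import math

def min_extender_repetitions(t, o_min, advances, is_extender):
    """Closed-form r_min: the smallest extender repetition count that
    lets the assembly reach the target size t at minimal overlap."""
    s_ext = sum(a for a, ext in zip(advances, is_extender) if ext)
    s_non = sum(a for a, ext in zip(advances, is_extender) if not ext)
    n_ext = sum(is_extender)
    n_non = len(advances) - n_ext
    denom = s_ext - o_min * n_ext
    # Excluded extreme case: the overlap eats the full extender advance.
    assert denom > 0
    return max(0, math.ceil((t - s_non + o_min * (n_non - 1)) / denom))
```

For instance, three glyphs of advance 10 with the middle one an extender, $o_{\mathrm{min}}=2$ and $t=40$ yield $r_{\mathrm{min}}=\lceil 22/8\rceil=3$, matching a step-by-step iteration.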

This provides a first simplification of the algorithm sketched in the MATH table specification: directly start the iteration at step $r_{\mathrm{min}}$. Note that at each step we start at possibly different maximum overlaps and decrease all of them by the same value. It is not clear what to do when one of the overlaps reaches $o_{\mathrm{min}}$ while others can still be decreased. However, the sketched algorithm says all the connectors should reach minimum overlap before the next increment of $r$, which means the target size will indeed be reached at step $r_{\mathrm{min}}$.

One possible interpretation is to stop decreasing the overlaps of the adjacent connectors that have reached minimum overlap and to continue uniformly decreasing the others until all the connectors reach minimum overlap. In that case we may lose equal distribution or symmetry. In practice, this should probably not matter much. So we propose instead the dual option, which should behave more or less the same in most cases: start with all overlaps set to $o_{\mathrm{min}}$ and increase them evenly to reach the same value $o$. By the same reasoning as above we want the inequality

  $S_{\mathrm{NonExt}}-o\left(N_{\mathrm{NonExt}}-1\right)+r_{\mathrm{min}}\left(S_{\mathrm{Ext}}-oN_{\mathrm{Ext}}\right)\geq t$

which can be rewritten

  $S_{\mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-o\left(N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}-1\right)\geq t$

We note that $N=N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}$ is just the exact number of glyphs used in the assembly. If there is only a single glyph, then the overlap value is irrelevant, so we can assume $N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}-1=N-1\geq 1$. This provides the greatest theoretical value for the overlap $o$:

  $o_{\mathrm{min}}\leq o\leq o_{\mathrm{max}}^{\mathrm{theoretical}}=\frac{S_{\mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-t}{N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}-1}$

Of course, we also have to take into account the limit imposed by the start and end connector lengths. So $o_{\mathrm{max}}$ must also be at most $\min(e_{i},s_{i+1})$ for $0\leq i\leq n-2$. But if $r_{\mathrm{min}}\geq 2$ then consecutive copies of an extender are connected to each other, and so $o_{\mathrm{max}}$ must also be at most $\min(e_{i},s_{i})$ for $i\in I_{\mathrm{Ext}}$. To summarize, $o_{\mathrm{max}}$ is the minimum of $o_{\mathrm{max}}^{\mathrm{theoretical}}$, of $e_{i}$ for $0\leq i\leq n-2$, of $s_{i}$ for $1\leq i\leq n-1$ and, when $r_{\mathrm{min}}\geq 2$, of $s_{0}$ (if $0\in I_{\mathrm{Ext}}$) and of $e_{n-1}$ (if $n-1\in I_{\mathrm{Ext}}$).
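All of these caps can be collected in a single pass over the glyphs. The sketch below uses hypothetical names (`starts[i]` and `ends[i]` standing for the connector lengths $s_{i}$ and $e_{i}$) and is an illustration rather than the HarfBuzz implementation:

```python
def max_connector_overlap(o_max_theoretical, starts, ends, is_extender, r_min):
    """Largest usable overlap o, limited by connector lengths."""
    n = len(starts)
    o_max = o_max_theoretical
    # Consecutive glyphs: the end connector of g_i and the start
    # connector of g_{i+1} must both cover the overlap.
    for i in range(n - 1):
        o_max = min(o_max, ends[i], starts[i + 1])
    # Repeated extender copies are also glued to themselves.
    if r_min >= 2:
        for i in range(n):
            if is_extender[i]:
                o_max = min(o_max, starts[i], ends[i])
    return o_max
```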

With the algorithm described above, $N_{\mathrm{Ext}}$, $N_{\mathrm{NonExt}}$, $S_{\mathrm{Ext}}$, $S_{\mathrm{NonExt}}$, $r_{\mathrm{min}}$ and $o_{\mathrm{max}}$ can all be obtained using simple loops on the glyphs $g_{i}$, and so the complexity is $O(n)$. In practice $n$ is small: for existing fonts, assemblies are made of at most three non-extenders and two extenders, that is $n\leq 5$ (incidentally, Gecko and WebKit do not currently support larger values of $n$). This means that all the operations described above can be considered to have constant complexity. This is much better than a naive implementation of the iterative algorithm sketched in the OpenType MATH table specification, which seems to require at worst

  $\sum_{r=0}^{r_{\mathrm{min}}-1}\left(N_{\mathrm{NonExt}}+rN_{\mathrm{Ext}}\right)=N_{\mathrm{NonExt}}r_{\mathrm{min}}+\frac{r_{\mathrm{min}}\left(r_{\mathrm{min}}-1\right)}{2}N_{\mathrm{Ext}}=O(n\times r_{\mathrm{min}}^{2})$

operations and at least $\Omega(r_{\mathrm{min}})$.

One issue is that the number of extender repetitions $r_{\mathrm{min}}$ and the number of glyphs in the assembly $N$ can become arbitrarily large, since the target size $t$ can take large values, e.g. if one writes \underbrace{\hspace{65535em}} in LaTeX. The improvement proposed here does not solve that issue, since setting the coordinates of each glyph in the assembly and painting them requires $\Theta(N)$ operations as well as (in the case of HarfBuzz) a glyph buffer of size $N$. However, such large stretchy operators do not happen in real-life mathematical formulas. Hence, to avoid possible hangs in Web engines, a solution is to impose a maximum limit $N_{\mathrm{max}}$ on the number of glyphs in the assembly, so that the complexity is limited by the size of the DOM tree. Currently, the proposal for HarfBuzz is $N_{\mathrm{max}}=128$. This means that if each assembly glyph is 1em large you won’t be able to draw stretchy operators of size more than 128em, which sounds like a quite reasonable bound. With the above proposal, $r_{\mathrm{min}}$ and so $N$ can be determined very quickly and the cases $N\geq N_{\mathrm{max}}$ rejected, so that we avoid losing time with such edge cases…

Finally, because in our proposal we use the same overlap $o$ everywhere, an alternative for HarfBuzz would be to set the output buffer size to $n$ (i.e. ignore $r-1$ copies of each extender and only keep the first one). This will leave gaps that the client can fix by repeating extenders, as long as $o$ is also provided. Then HarfBuzz math shaping can be done with a complexity in time and space of just $O(n)$, and it will be up to the client to optimize or limit the painting of extenders for large values of $N$…
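Under this scheme, a client receiving the $n$ glyphs, the repetition count $r$ and the single overlap $o$ can lay out the full assembly itself. A sketch of that client-side step (hypothetical helper, not the HarfBuzz API):

```python
def assembly_positions(advances, is_extender, r, o):
    """Offsets of every glyph copy along the stretch axis, plus the
    total stretched size; consecutive copies overlap by o."""
    positions, pen = [], 0
    for a, ext in zip(advances, is_extender):
        for _ in range(r if ext else 1):
            positions.append(pen)
            pen += a - o  # each copy advances the pen by a_i - o
    return positions, pen + o  # add back the last copy's trailing overlap
```

With three glyphs of advance 10, the middle one an extender repeated $r=3$ times and $o=2$, this places five copies at offsets 0, 8, 16, 24, 32 for a total size of 42.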

Categorieën: Mozilla-nl planet

Mark Finkle: Pitching Ideas – It’s Not About Perfect

Mozilla planet - sn, 16/04/2016 - 21:30

I realized a long time ago that I was not the type of person who could create, build & polish ideas all by myself. I need collaboration with others to hone and build ideas. More often than not, I’m not the one who starts the idea. I pick up something from someone else – bend it, twist it, and turn it into something different.

Like many others, I have a problem with ‘fear of rejection’, which kept me from shepherding my ideas from beginning to shipped. If I couldn’t finish the idea myself or share it within my trusted circle, the idea would likely die. I had the most success when sharing ideas with others. I have been working to increase the size of the trusted circle, but it still has limits.

Some time last year, Mozilla was doing some annual planning for 2016 and Mark Mayo suggested creating informal pitch documents for new ideas, and we’d put those into the planning process. I created a simple template and started turning ideas into pitches, sending the documents out to a large (it felt large to me) list of recipients. To people who were definitely outside my circle.

The world didn’t end. In fact, it’s been a very positive experience, thanks in large part to the quality of the people I work with. I don’t get worried about feeling the idea isn’t ready for others to see. I get to collaborate at a larger scale.

Writing the ideas up as pitches also forces me to craft a clear message and define objectives & outcomes. I have 1x1s with a variety of folks during the week, and we end up talking about the idea, allowing me to further build and hone the document before sending it out to a larger group.

I’m hooked! These days, I send out pitches quite often. Maybe too often?

Categorieën: Mozilla-nl planet

Allen Wirfs-Brock: Slide Bite: The Ambient Computing Era

Mozilla planet - sn, 16/04/2016 - 16:41


In the Ambient Computing Era humans live in a rich environment of communicating computer enhanced devices interoperating with a ubiquitous cloud of computer mediated information and services. We don’t even perceive most of the computers we interact with. They are an invisible and indispensable part of our everyday life.

Categorieën: Mozilla-nl planet

Pages