

Firefox Test Pilot: Introducing Firefox Color and Side View

Mozilla planet - Tue, 05/06/2018 - 16:17

We’re excited to launch two new Test Pilot experiments that add power and style to Firefox.


Side View enables you to multitask with Firefox like never before by letting you keep two websites open side by side in the same window.


Firefox Color makes it easy to customize the look and feel of your Firefox browser. With just a few clicks you can create beautiful Firefox themes all your own.

Both experiments are available today from Firefox Test Pilot. Try them out, and don’t forget to give us feedback. You’re helping to shape the future of Firefox!

Introducing Firefox Color and Side View was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Firefox Frontier: It’s A New Firefox Multi-tasking Extension: Side View

Mozilla planet - Tue, 05/06/2018 - 16:02

Introducing Side View! Side View is a Firefox extension that allows you to view two different browser tabs simultaneously, side by side, within the same browser window.

The post It’s A New Firefox Multi-tasking Extension: Side View appeared first on The Firefox Frontier.


Cameron Kaiser: Just call it macOS Death Valley and get it over with

Mozilla planet - Tue, 05/06/2018 - 02:27
What is Apple trying to say with macOS Mojave? That it's dusty, dry, substantially devoid of life in many areas, and prone to severe temperature extremes? Oh, it has dark mode. Ohhh, okay. That totally makes up for everything and all the bugs and all the recurrent lack of technical investment. It's like the anti-shiny.

In other news, besides the 32-bit apocalypse, they just deprecated OpenGL and OpenCL in order to make way for all the Metal apps that people have just been lining up to write. Not that this is any surprise, mind you, given how long Apple's implementation of OpenGL has rotted on the vine. It's a good thing they're talking about allowing iOS apps to run, because there may not be any legacy Mac apps compatible when macOS 10.15 "Zzyzx" rolls around.

Yes, looking forward to that Linux ARM laptop when the MacBook Air "Sierra Forever" wears out. I remember when I was excited about Apple's new stuff. Now it's just wondering what they're going to break next.


Nicholas Nethercote: How to speed up the Rust compiler some more in 2018

Mozilla planet - Tue, 05/06/2018 - 02:05

I recently wrote about some work I’ve done to speed up the Rust compiler. Since then I’ve done some more.

rustc-perf improvements

Since my last post, rustc-perf — the benchmark suite, harness and visualizer — has seen some improvements. First, some new benchmarks were added: cargo, ripgrep, sentry-cli, and webrender. Also, the parser benchmark has been removed because it was a toy program and thus not a good benchmark.

Second, I added support for several new profilers: Callgrind, Massif, rustc’s own -Ztime-passes, and the use of ad hoc eprintln! statements added to rustc. (This latter case is more useful than it might sound; in combination with post-processing it can be very helpful, as we will see below.)
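
As a toy illustration of the eprintln! approach (the label and values here are invented, not real rustc output):

    fn main() {
        let lens = [1usize, 1, 1, 3, 1]; // stand-in for values observed at a hot spot
        for len in &lens {
            // One labelled line per event makes the output easy to grep for.
            eprintln!("BUF-LEN {}", len);
        }
    }

Piping the resulting stderr through sort | uniq -c | sort -rn turns the raw lines into a frequency histogram, which is often all the post-processing that is needed.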

Finally, the graphs shown on the website now have better y-axis scaling, which makes many of them easier to read. Also, there is a new dashboard view that shows performance across rustc releases.

Failures and incompletes

After my last post, multiple people said they would be interested to hear about optimization attempts of mine that failed. So here is an incomplete selection. I suggest that rustc experts read through these, because there is a chance they will be able to identify alternative approaches that I have overlooked.

nearest_common_ancestors 1: I managed to speed up this hot function in a couple of ways, but a third attempt failed. The scope tree is not represented as a typical tree data structure; instead there is a HashMap of child/parent pairs. This means that moving from a child to its parent node requires a HashMap lookup, which is expensive. I spent some time designing and implementing an alternative data structure that stored nodes in a vector, with child-to-parent links represented as indices into that vector, so that child-to-parent moves only required stepping through the vector. It worked, but the speed-up turned out to be very small, and the new code was significantly more complicated, so I abandoned it.
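
A minimal sketch of the vector-based representation I tried, with an invented sentinel for the root (this is not rustc's actual code):

    struct ScopeTree {
        // parent[i] is the index of node i's parent; the root stores usize::MAX.
        parent: Vec<usize>,
    }

    impl ScopeTree {
        // Walking towards the root is plain indexing rather than HashMap lookups.
        fn path_to_root(&self, mut node: usize) -> Vec<usize> {
            let mut path = vec![node];
            while self.parent[node] != usize::MAX {
                node = self.parent[node];
                path.push(node);
            }
            path
        }
    }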

nearest_common_ancestors 2: A different part of the same function involves storing seen nodes in a vector. Searching this unsorted vector is O(n), so I tried instead keeping it in sorted order and using binary search, which gives O(log n) search. However, this change meant that node insertion changed from amortized O(1) to O(n) — instead of a simple push onto the end of the vector, insertion could be at any point, which required shifting all subsequent elements along. Overall this change made things slightly worse.
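
For reference, the sorted-vector scheme looks something like this sketch; the insert call is where the O(n) cost hides:

    // Keeping `v` sorted makes lookups O(log n)...
    fn insert_unique(v: &mut Vec<u32>, x: u32) {
        if let Err(pos) = v.binary_search(&x) {
            // ...but insertion must shift every element after `pos`: O(n).
            v.insert(pos, x);
        }
    }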

PredicateObligation SmallVec: There is a type Vec&lt;PredicateObligation&gt; that is instantiated frequently, and the vectors often have few elements. I tried using a SmallVec instead, which avoids the heap allocations when the number of elements is below a threshold. (A trick I’ve used multiple times.) But this made things significantly slower! It turns out that these Vecs are copied around quite a bit, and a SmallVec is larger than a Vec because the elements are stored inline. Furthermore, PredicateObligation is a large type, over 100 bytes. So what happened was that memcpy calls were inserted to copy these SmallVecs around. The slowdown from the extra function calls and memory traffic easily outweighed the speedup from avoiding the Vec heap allocations.
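
The size difference is easy to demonstrate; a quick check using the smallvec crate (figures are for 64-bit targets):

    use smallvec::SmallVec;
    use std::mem::size_of;

    fn main() {
        // A Vec is always three words (pointer, capacity, length): 24 bytes.
        println!("Vec<u64>: {} bytes", size_of::<Vec<u64>>());
        // A SmallVec must also reserve inline space for its elements, so
        // every copy of it moves considerably more memory.
        println!("SmallVec<[u64; 8]>: {} bytes", size_of::<SmallVec<[u64; 8]>>());
    }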

SipHasher128: Incremental compilation does a lot of hashing of data structures in order to determine what has changed from previous compilation runs. As a result, the hash function used for this is extremely hot. I tried various things to speed up the hash function, including LEB128-encoding of usize inputs (a trick that worked in the past) but I failed to speed it up.

LEB128 encoding: Speaking of LEB128 encoding, it is used a lot when writing metadata to file. I tried optimizing the LEB128 functions by special-casing the common case where the value is less than 128 and so can be encoded in a single byte. It worked, but gave a negligible improvement, so I decided it wasn’t worth the extra complication.
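
The special case is tiny; the encoder looks something like this sketch (not the actual rustc code):

    fn write_leb128(buf: &mut Vec<u8>, mut value: u64) {
        // Fast path: values below 128 encode as themselves in a single byte.
        if value < 0x80 {
            buf.push(value as u8);
            return;
        }
        // General case: 7 bits per byte, high bit set on all but the last byte.
        loop {
            let mut byte = (value & 0x7f) as u8;
            value >>= 7;
            if value != 0 {
                byte |= 0x80;
            }
            buf.push(byte);
            if value == 0 {
                return;
            }
        }
    }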

Token shrinking: A previous PR shrunk the Token type from 32 to 24 bytes, giving a small win. I also tried replacing the Option&lt;ast::Name&gt; in Literal with just ast::Name, using an empty name to represent “no name”. That change reduced Token to 16 bytes, but produced a negligible speed-up and made the code uglier, so I abandoned it.
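
The attraction is that Option&lt;ast::Name&gt; is twice the size of ast::Name itself; a hypothetical sentinel version of the idea (all names invented):

    use std::mem::size_of;

    #[derive(Copy, Clone, PartialEq, Eq)]
    struct Name(u32);

    // Reserve one value to mean "no name". Name(u32) has no spare bit
    // pattern, so Option<Name> doubles in size to store its discriminant.
    const NO_NAME: Name = Name(u32::MAX);

    fn main() {
        assert_eq!(size_of::<Name>(), 4);
        assert_eq!(size_of::<Option<Name>>(), 8);
        assert!(NO_NAME == Name(u32::MAX));
    }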

#50549: rustc’s string interner was structured in such a way that each interned string was duplicated twice. This PR changed it to use a single Rc'd allocation, speeding up numerous benchmark runs, the best by 4%. But just after I posted the PR, @Zoxc posted #50607, which allocated the strings out of an arena, as an alternative. This gave better speed-ups and was landed instead of my PR.

#50491: This PR introduced additional uses of LazyBTreeMap (a type I had previously introduced to reduce allocations) speeding up runs of multiple benchmarks, the best by 3%. But at around the same time, @porglezomp completed a PR that changed BTreeMap to have the same lazy-allocation behaviour as LazyBTreeMap, rendering my PR moot. I subsequently removed the LazyBTreeMap type because it was no longer necessary.

#51281: This PR, by @Mark-Simulacrum, removed an unnecessary heap allocation from the RcSlice type. I had looked at this code because DHAT’s output showed it was hot in some cases, but I erroneously concluded that the extra heap allocation was unavoidable, and moved on! I should have asked about it on IRC.

Wins

#50339: rustc’s pretty-printer has a buffer that can contain up to 55 entries, and each entry is 48 bytes on 64-bit platforms. (The 55 somehow comes from the algorithm being used; I won’t pretend to understand how or why.) Cachegrind’s output showed that the pretty printer is invoked frequently (when writing metadata?) and that the zero-initialization of this buffer was expensive. I inserted some eprintln! statements and found that in the vast majority of cases only the first element of the buffer was ever touched. So this PR changed the buffer to default to length 1 and extend when necessary, speeding up runs for numerous benchmarks, the best by 3%.
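
A simplified sketch of the grow-on-demand idea (the real pretty-printer code is more involved):

    struct PrintBuffer<T: Default> {
        buf: Vec<T>,
    }

    impl<T: Default> PrintBuffer<T> {
        fn new() -> Self {
            // Profiling showed slot 0 is usually the only one touched, so
            // start with a single entry instead of zero-initializing all 55.
            PrintBuffer { buf: vec![T::default()] }
        }

        fn get_mut(&mut self, i: usize) -> &mut T {
            // Extend lazily in the rare cases where deeper slots are needed.
            while self.buf.len() <= i {
                self.buf.push(T::default());
            }
            &mut self.buf[i]
        }
    }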

#50365: I had previously optimized the nearest_common_ancestor function. GitHub user @kirillkh kindly suggested a tweak to the code from that PR which reduced the number of comparisons required. This PR implemented that tweak, speeding up runs of a couple of benchmarks by another 1–2%.

#50391: When compiling certain annotations, rustc needs to convert strings from unescaped form to escaped form. It was using the escape_unicode function to do this, which unnecessarily converts every char to \u{1234} form, bloating the resulting strings greatly. This PR changed the code to use the escape_default function — which only escapes chars that genuinely need escaping — speeding up runs of most benchmarks, the best by 13%. It also slightly reduced the on-disk size of produced rlibs, in the best case by 15%.
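
The difference between the two std functions is easy to see in a small demo:

    fn main() {
        let s = "tab\there";
        let unicode: String = s.chars().flat_map(char::escape_unicode).collect();
        let default: String = s.chars().flat_map(char::escape_default).collect();
        println!("{}", unicode); // \u{74}\u{61}\u{62}\u{9}... (every char bloated)
        println!("{}", default); // tab\there (only the tab actually needs escaping)
    }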

#50525: Cachegrind showed that even after the previous PR, the above string code was still hot, because string interning was happening on the resulting string, which was unnecessary in the common case where escaping didn’t change the string. This PR added a scan to determine if escaping is necessary, thus avoiding the re-interning in the common case, speeding up a few benchmark runs, the best by 3%.
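
The scan only needs to detect whether any character would change under escaping; a sketch of such a check:

    fn needs_escaping(s: &str) -> bool {
        // escape_default leaves a char unchanged iff it yields exactly one char.
        s.chars().any(|c| c.escape_default().count() != 1)
    }

When this returns false, the escaped string would be identical to the input, so the original (already interned) string can be reused directly.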

#50407: Cachegrind’s output showed that the trivial methods for the simple BytePos and CharPos types in the parser are (a) extremely hot and (b) not being inlined. This PR annotated them so they are inlined, speeding up most benchmarks, the best by 5%.
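
The fix is just an attribute on each trivial method; the pattern looks like this (simplified, not the exact rustc definitions):

    #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
    pub struct BytePos(pub u32);

    impl BytePos {
        // One-line methods like these are called from other crates, and
        // without #[inline] they may not be inlined across crate boundaries.
        #[inline]
        pub fn to_usize(self) -> usize {
            self.0 as usize
        }

        #[inline]
        pub fn from_usize(n: usize) -> BytePos {
            BytePos(n as u32)
        }
    }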

#50564: This PR did the same thing for the methods of the Span type, speeding up incremental runs of a few benchmarks, the best by 3%.

#50931: This PR did the same thing for the try_get function, speeding up runs of many benchmarks, the best by 1%.

#50418: DHAT’s output showed that there were many heap allocations of the cmt type, which is refcounted. Some code inspection and ad hoc instrumentation with eprintln! showed that many of these allocated cmt instances were very short-lived. However, some of them ended up in longer-lived chains, in which the refcounting was necessary. This PR changed the code so that cmt instances were mostly created on the stack by default, and then promoted to the heap only when necessary, speeding up runs of three benchmarks by 1–2%. This PR was a reasonably large change that took some time, largely because it took me five(!) attempts (the final git branch was initially called cmt5) to find the right dividing line between where to use stack allocation and where to use refcounted heap allocation.

#50565: DHAT’s output showed that the dep_graph structure, which is an IndexVec&lt;DepNodeIndex,Vec&lt;DepNodeIndex&gt;&gt;, caused many allocations, and some eprintln! instrumentation showed that the inner Vecs mostly had only a few elements. This PR changed the Vec&lt;DepNodeIndex&gt; to SmallVec&lt;[DepNodeIndex;8]&gt;, which avoids heap allocations when the number of elements is less than 8, speeding up incremental runs of many benchmarks, the best by 2%.

#50566: Cachegrind’s output showed that the hottest part of rustc’s lexer is the bump function, which is responsible for advancing the lexer onto the next input character. This PR streamlined it slightly, speeding up most runs of a couple of benchmarks by 1–3%.

#50818: Both Cachegrind’s and DHAT’s output showed that the opt_normalize_projection_type function was hot and did a lot of heap allocations. Some eprintln! instrumentation showed that there was a hot path through this function that could be explicitly extracted, avoiding unnecessary HashMap lookups and the creation of short-lived Vecs. This PR did just that, speeding up most runs of serde and futures by 2–4%.

#50855: DHAT’s output showed that the macro parser performed a lot of heap allocations, especially on the html5ever benchmark. This PR implemented ways to avoid three of them: (a) by storing a slice of a Vec in a struct instead of a clone of the Vec; (b) by introducing a “ref or box” type that allowed stack allocation of the MatcherPos type in the common case, but heap allocation when necessary; and (c) by using Cow to avoid cloning a PathBuf that is rarely modified. These changes sped up runs of html5ever by up to 10%, and crates.io by up to 3%. I was particularly pleased with these changes because they all involved non-trivial changes to memory management that required the introduction of additional explicit lifetimes. I’m starting to get the hang of that stuff… explicit lifetimes no longer scare me the way they used to. It certainly helps that rustc’s error messages do an excellent job of explaining where explicit lifetimes need to be added.
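
The “ref or box” type is the most reusable of those ideas; a minimal sketch (not the actual rustc type):

    use std::ops::Deref;

    // Either borrows an existing value or owns a boxed one; much like Cow,
    // but without requiring T: Clone.
    enum RefOrBox<'a, T: 'a> {
        Ref(&'a T),
        Boxed(Box<T>),
    }

    impl<'a, T> Deref for RefOrBox<'a, T> {
        type Target = T;

        fn deref(&self) -> &T {
            match *self {
                RefOrBox::Ref(r) => r,
                RefOrBox::Boxed(ref b) => b,
            }
        }
    }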

#50932: DHAT’s output showed that a lot of HashSet instances were being created in order to de-duplicate the contents of a commonly used vector type. Some eprintln! instrumentation showed that most of these vectors only had 1 or 2 elements, in which case the de-duplication can be done trivially without involving a HashSet. (Note that the order of elements within this vector is important, so de-duplication via sorting wasn’t an option.) This PR added special handling of these trivial cases, speeding up runs of a few benchmarks, the best by 2%.
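
The fast path amounts to a couple of direct comparisons; a sketch of order-preserving de-duplication with that special-casing:

    use std::collections::HashSet;
    use std::hash::Hash;

    fn dedup_preserving_order<T: Clone + Eq + Hash>(v: &mut Vec<T>) {
        match v.len() {
            0 | 1 => {} // already unique
            2 => {
                // Trivial case: one comparison, no HashSet allocation.
                if v[0] == v[1] {
                    v.truncate(1);
                }
            }
            _ => {
                // General case: retain the first occurrence of each element.
                let mut seen = HashSet::new();
                v.retain(|x| seen.insert(x.clone()));
            }
        }
    }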

#50981: The compiler does a liveness analysis that involves vectors of indices representing variables and program points. In rare cases these vectors can be enormous; compilation of the inflate benchmark involved one with almost 6 million 24-byte elements, resulting in 143MB of data. This PR changed the type used for the indices from usize to u32, which is still more than large enough, speeding up “clean incremental” builds of inflate by about 10% on 64-bit platforms, as well as reducing their peak memory usage by 71MB.

What’s next?

These improvements, along with those recently done by others, have significantly sped up the compiler over the past month or so: many workloads are 10–30% faster, and some even more than that. I have also seen some anecdotal reports from users about the improvements over recent versions, and I would be interested to hear more data points, especially those involving rustc nightly.

The profiles produced by Cachegrind, Callgrind, and DHAT are now all looking very “flat”, i.e. with very little in the way of hot functions that stick out as obvious optimization candidates. (The main exceptions are the SipHasher128 functions I mentioned above, which I haven’t been able to improve.) As a result, it has become difficult for me to make further improvements via “bottom-up” profiling and optimization, i.e. optimizing hot leaf and near-leaf functions in the call graph.

Therefore, future improvement will likely come from “top-down” profiling and optimization, i.e. observations such as “rustc spends 20% of its time in phase X; how can that be improved?” The obvious place to start is the part of compilation taken up by LLVM. In many debug and opt builds LLVM accounts for up to 70–80% of instructions executed. It doesn’t necessarily account for that much time, because the LLVM parts of execution are parallelized more than the rustc parts, but it is still the obvious place to focus next. I have looked at a small amount of generated MIR and LLVM IR, but there’s a lot to take in. Making progress will likely require a much broader understanding of things than the optimizations described above did, most of which required only a small amount of local knowledge about a particular part of the compiler’s code.

If anybody reading this is interested in trying to help speed up rustc, I’m happy to answer questions and provide assistance as much as I can. The #rustc IRC channel is also a good place to ask for help.


The Rust Programming Language Blog: Announcing Rust 1.26.2

Mozilla planet - Tue, 05/06/2018 - 02:00

The Rust team is happy to announce a new version of Rust, 1.26.2. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.26.2 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.26.2 on GitHub.

What’s in 1.26.2 stable

This patch release fixes a bug in the borrow checker verification of match expressions. This bug was introduced in 1.26.0 with the stabilization of match ergonomics. Specifically, it permitted code which took two mutable borrows of the bar path at the same time.

    let mut foo = Some("foo".to_string());
    let bar = &mut foo;
    match bar {
        Some(baz) => {
            bar.take(); // Should not be permitted, as baz has a unique reference to the bar pointer.
        },
        None => unreachable!(),
    }

1.26.2 will reject the above code with this error message:

    error[E0499]: cannot borrow `*bar` as mutable more than once at a time
     --> src/main.rs:6:9
      |
    5 |         Some(baz) => {
      |              --- first mutable borrow occurs here
    6 |             bar.take(); // Should not be permitted, as baz has a ...
      |             ^^^ second mutable borrow occurs here
    ...
    9 |     }
      |     - first borrow ends here

    error: aborting due to previous error

The Core team decided to issue a point release to minimize the window of time in which this bug in the Rust compiler was present in stable compilers.


Armen Zambrano: Some webdev knowledge gained

Mozilla planet - Mon, 04/06/2018 - 17:24

Earlier this year I had to split a Koa/SPA app into two separate apps. As part of that I switched from webpack to Neutrino.

Through this work I learned a lot about full-stack development (frontend, backend, and deployments for both). I could write a blog post per item; instead, listing everything here beats never getting around to writing any of those posts.

Note, I’m pointing to commits that I believe have enough information to understand what I learned.

Npm packages to the rescue:

Node/backend notes:

Neutrino has been a great ally to me and here’s some knowledge on how to use it:

Heroku was my tool for deployment and here you have some specific notes:


The Mozilla Blog: Mozilla Announces $225,000 for Art and Advocacy Exploring Artificial Intelligence

Mozilla planet - Mon, 04/06/2018 - 16:00
Mozilla’s latest awards will support people and projects that examine the effects of AI on society


At Mozilla, one way we support a healthy internet is by fueling the people and projects on the front lines — from grants for community technologists in Detroit, to fellowships for online privacy activists in Rio.

Today, we are opening applications for a new round of Mozilla awards. We’re awarding $225,000 to technologists and media makers who help the public understand how threats to a healthy internet affect their everyday lives.


Specifically, we’re seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it’s more important than ever to support artwork and advocacy work that educates and engages internet users.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundations. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

What we’re seeking

We’re seeking projects that are accessible to broad audiences and native to the internet, from videos and games to browser extensions and data visualizations.

We will consider projects that are at either the conceptual or prototype phases. All projects must be freely available online and suitable for a non-expert audience. Projects must also respect users’ privacy.

In the past, Mozilla has supported creative media projects like:

Data Selfie, an open-source browser add-on that simulates how Facebook interprets users’ data.

Chupadados, a mix of art and journalism that examines how women and non-binary individuals are tracked and surveilled online.

Paperstorm, a web-based advocacy game that lets users drop digital leaflets on cities — and, in the process, tweet messages to lawmakers.

Codemoji, a tool for encoding messages with emoji and teaching the basics of encryption.

*Privacy Not Included, a holiday shopping guide that assesses the latest gadgets’ privacy and security features.

Details and how to apply

Mozilla is awarding a total of $225,000, with individual awards ranging up to $50,000. Final award amounts are at the discretion of award reviewers and Mozilla staff, but it is currently anticipated that the following awards will be made:

Two $50,000 total prize packages ($47,500 award + $2,500 MozFest travel stipend)

Five $25,000 total prize packages ($22,500 award + $2,500 MozFest travel stipend)

Awardees are selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members include Mozilla staff, current and alumni Mozilla Fellows, and — as available — outside experts. Selection criteria are designed to evaluate the merits of the proposed approach. Diversity in applicant background, past work, and medium are also considered.

Mozilla will accept applications through August 1, 2018, and notify winners by September 15, 2018. Winners will be publicly announced on or around MozFest, which is held October 26-28, 2018.

Applicants should attend the informational webinar on Monday, June 18 (4pm EDT, 3pm CDT, 1pm PDT).

Brett Gaylor is Commissioning Editor at Mozilla

The post Mozilla Announces $225,000 for Art and Advocacy Exploring Artificial Intelligence appeared first on The Mozilla Blog.

