
Mozilla Nederland: The Dutch Mozilla community

Software update: Mozilla Firefox 68.0.2 - Computer - Downloads - Tweakers

News collected via Google - Wed, 14/08/2019 - 17:35

Mozilla has released an update for version 68 of its Firefox web browser, the eleventh release based on Firefox Quantum. With Quantum, the browser has ...

Categories: Mozilla-nl planet

Google and Mozilla to stop showing company name in browser URL bar - Tweakers

News collected via Google - Tue, 13/08/2019 - 19:35

In upcoming versions of their Chrome and Firefox browsers, Google and Mozilla plan to no longer display the Extended Validation certificates of https websites ...

Categories: Mozilla-nl planet

IBM wants to put browser data in a blockchain - Techzine.be

News collected via Google - Tue, 13/08/2019 - 10:27

IBM wil browsergegevens uit de handen van softwarebouwers halen via een blockchain. Voor dat idee wenst de techreus een patent. IBM browserdata ...

Categories: Mozilla-nl planet

Mozilla renames Firefox Quantum to Firefox Browser - Tweakers

News collected via Google - Mon, 12/08/2019 - 16:44

From version 70 onwards, Mozilla will no longer use the name Firefox Quantum for its web browser, but Firefox Browser. Version 70 of the software appears in ...

Categories: Mozilla-nl planet

Workshop: how to browse the internet more safely - Clickx.be

News collected via Google - Wed, 07/08/2019 - 15:41

Viruses and malware: you would rather have nothing to do with them. Yet in most cases you do not choose them yourself. Fortunately, there are plenty of applications that ...

Categories: Mozilla-nl planet

Firefox for Android supports logging in with a fingerprint - Security.nl

News collected via Google - Mon, 05/08/2019 - 16:57

Users of Firefox for Android can now also log in to websites and web applications with their fingerprint, Mozilla reports. The latest ...

Categories: Mozilla-nl planet

Spyware locates victims via Mozilla location service - Security.nl

News collected via Google - Mon, 05/08/2019 - 12:31

Researchers at antivirus company ESET have discovered spyware that steals all kinds of data from systems and locates victims via the location service of ...

Categories: Mozilla-nl planet

Cameron Kaiser: Clean out your fonts, people

Mozilla planet - Sun, 21/07/2019 - 06:06
Someone forwarded me a MacRumours post that a couple of the (useless) telemetry options in TenFourFox managed to escape my notice and should be disabled. This is true and I'll be flagging them off in FPR16. However, another source of slowdowns popped up recently and while I think it's been pointed out it bears repeating.

On startup, and to a lesser extent when browsing, TenFourFox (and Firefox) enumerates the fonts you have installed on your Power Mac so that sites requesting them can use locally available fonts and not download them unnecessarily. The reason for periodically rechecking is that people can, and do, move fonts around and it would be bad if TenFourFox had stale font information particularly for commonly requested ones. To speed this up, I actually added a TenFourFox-specific font directory cache so that subsequent enumerations are quicker. However, the heuristic for determining when fonts should be rescanned is imperfect and when in doubt I always err towards a fresh scan. That means a certain amount of work is unavoidable under normal circumstances.
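The rescan heuristic described above can be sketched as an mtime-based directory cache. This is illustrative Python, not TenFourFox's actual C++ implementation; the class and field names are hypothetical:

```python
import os

class FontDirCache:
    """Cache a directory's font list; rescan only when the directory mtime changes."""

    def __init__(self, directory):
        self.directory = directory
        self.cached_mtime = None
        self.fonts = []

    def enumerate(self):
        mtime = os.stat(self.directory).st_mtime
        # When in doubt (no cache yet, or the mtime changed), err towards a fresh scan.
        if self.cached_mtime != mtime:
            self.fonts = sorted(
                f for f in os.listdir(self.directory)
                if f.lower().endswith((".ttf", ".otf", ".dfont", ".ttc"))
            )
            self.cached_mtime = mtime
        return self.fonts
```

The cost model follows directly: the slow path is proportional to the number of font files, so every disabled or deleted font makes both the fresh scan and the cache population cheaper.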

Thus, the number of fonts you have currently installed directly affects TenFourFox's performance, and TenFourFox is definitely not the only application that needs to know what fonts are installed. If you have a large (as in several hundred) number of font files and particularly if you are not using an SSD, you should strongly consider thinning them out or using some sort of font management system. Even simply disabling the fonts in Font Book will help, because under the hood this will move the font to a disabled location, and TenFourFox and other applications will then not have to track it further.

How many is too many? On my quad G5, I have about 800 font files on my Samsung SSD. This takes about 3-4 seconds to initially populate the cache and then less than a second on subsequent enumerations. However, on a uniprocessor system and especially on systems without an SSD, I would strongly advise getting that number down below one hundred. Leave the fonts in /System/Library/Fonts alone, but on my vanilla Tiger Sawtooth G4 server, /Library/Fonts has just 87 files. Use Font Book to enable fonts later if you need them for stuff you're working on, or, if you know those fonts aren't ever being used, consider just deleting them entirely.

Due to a work crunch I will not be doing much work on FPR16 until August. However, I will be at the Vintage Computer Festival West again August 3 and 4 at the Computer History Museum in Mountain View. I've met a few readers of this blog in past years, and hopefully getting to play with various PowerPC (non-Power Mac), SPARC and PA-RISC laptops and portable workstations will tempt the rest of you. Come by, say hi, and play around a bit with the other great exhibits that aren't as cool as mine.

Categories: Mozilla-nl planet

Patrick Cloke: Celery without a Results Backend

Mozilla planet - Thu, 18/07/2019 - 02:35

The Celery send_task method allows you to invoke a task by name without importing it. [1] There is an undocumented [2] caveat to using send_task: it doesn’t have access to the configuration of the task (from when the task was created using the @task decorator).

Much of this configuration …
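The caveat can be modeled without a broker. The sketch below is plain Python, not Celery's API: a tiny registry stands in for the worker, and the names (task, delay, send_task, ignore_result) are hypothetical stand-ins used only to show that invoking by name never touches the options attached at decoration time:

```python
# Minimal model of the send_task caveat: a registry mapping task names to
# functions plus the options attached by a @task-style decorator.
TASKS = {}

def task(**options):
    def decorator(fn):
        TASKS[fn.__name__] = {"fn": fn, "options": options}
        fn.options = options  # options ride along on the imported task object
        return fn
    return decorator

@task(ignore_result=True)
def add(x, y):
    return x + y

def delay(fn, *args):
    # Calling the imported task object: decorator options are visible to the caller.
    return fn(*args), fn.options

def send_task(name, args):
    # Invoking by name: the caller never imports the task, so it cannot see
    # the options -- it only knows the name string.
    return TASKS[name]["fn"](*args)
```

In real Celery the registry lives on the worker side, which is exactly why the sending side of send_task has nothing to consult.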

Categories: Mozilla-nl planet

Joel Maher: Recent fixes to reduce backlog on Android phones

Mozilla planet - Wed, 17/07/2019 - 21:54

Last week it seemed that all our limited resource machines were perpetually backlogged. I wrote yesterday to provide insight into what we run and some of our limitations. This post will be discussing the Android phones backlog last week specifically.

The Android phones are hosted at Bitbar and we split them into pools (battery testing, unit testing, perf testing) with perf testing being the majority of the devices.

There were 6 fixes made which resulted in significant wins:

  1. Recovered offline devices at Bitbar
  2. Restarted host machines to fix intermittent connection issues at Bitbar
  3. Updated the Taskcluster generic-worker startup script to consume superseded jobs
  4. Rewrote the scheduling script as multi-threaded and used the Bitbar APIs more efficiently
  5. Turned off duplicate jobs that had been on by accident since last month
  6. Removed old taskcluster-worker devices

On top of this there are 3 future wins that could help future-proof this:

  1. Upgrade the Android phones from 8.0 to 9.0 for more stability
  2. Enable power testing on generic USB hubs rather than special hubs that require dedicated devices
  3. Merge all the separate pools together to maximize device utilization

With the fixes in place, we are able to keep up with normal load and expect that future spikes in jobs will be shorter lived, instead of lasting an entire week.

Recovered offline devices at Bitbar
Every day 2-5 devices are offline for some period of time. The Bitbar team finds some on their own and resets them; sometimes we notice them and ask for the devices to be reset. In many cases the devices are hung or have trouble on a reboot (motivation for upgrading to 9.0). I will add that the week prior things started going sideways, and it was a holiday week for many, so fewer people were watching and more devices ended up in various states.

In total we have 40 pixel2 devices in the perf pool (and 37 Motorola G5 devices as well) and 60 pixel2 devices when including the unittest and battery pools. We found that 19 devices were not accepting jobs and needed attention Monday July 8th. For planning purposes it is assumed that 10% of the devices will be offline, in this case we had 1/3 of our devices offline and we were doing merge day with a lot of big pushes running all the jobs.

Restarting host machines to fix intermittent connection issues at Bitbar
At Bitbar we have a host machine with 4 or more docker containers running and each docker container runs Linux with the Taskcluster generic-worker and the tools to run test jobs. Each docker container is also mapped directly to a phone. The host machines are rarely rebooted and maintained, and we noticed a few instances where the docker containers had trouble connecting to the network.  A fix for this was to update the kernel and schedule periodic reboots.

Update Taskcluster generic-worker startup script
When a job is superseded, we shut down the Taskcluster generic-worker and its docker image and clean up. Previously the worker would terminate the job and docker container and then wait for another job to show up (often a 5-20 minute cycle). With the changes made, the Taskcluster generic-worker simply restarts (not the docker container) and quickly picks up the next job.

Rewrite the scheduling script as multi-threaded
This was a big area of improvement. As our jobs increased in volume and spanned a wider range of runtimes, our scheduling tool was iterating through the queue and devices and calling the Bitbar APIs to spin up a worker and hand off a task. That takes a few seconds per job or device, and with 100 devices it could take 10+ minutes to come around and schedule a new job on a device. With the changes made last week (Bug 1563377) jobs now start quickly (<10 seconds), which greatly increases our device utilization.
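The serial-versus-threaded difference is easy to see in miniature. Below is an illustrative Python sketch, not the real scheduling script: assign_job is a hypothetical stand-in for the per-device Bitbar API calls, with the per-call delay scaled down:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def assign_job(device, job):
    """Stand-in for the per-device Bitbar API calls (a few seconds each in reality)."""
    time.sleep(0.01)  # scaled down from seconds for the sketch
    return (device, job)

def schedule_serial(devices, jobs):
    # Old approach: walk devices one by one; with 100 devices this meant
    # 10+ minutes before a freed device saw its next job.
    return [assign_job(d, j) for d, j in zip(devices, jobs)]

def schedule_threaded(devices, jobs):
    # New approach: overlap the API calls so every device is handed a job
    # within seconds, regardless of pool size.
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        return list(pool.map(assign_job, devices, jobs))
```

Because the per-device API calls are independent and I/O-bound, the threaded version's wall time is roughly one API round-trip instead of one per device.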

Turn off duplicate opt jobs and only run PGO jobs
In reviewing what was run by default per push and on try, a big oversight was discovered. When we turned PGO on for Android, all the perf jobs were scheduled for both opt and PGO, when they should have been scheduled only for PGO. This was an easy fix and cut a large portion of the load (Bug 1565644).

Removed old taskcluster-worker devices
Earlier this year we switched to the Taskcluster generic-worker and in the transition had to split devices between the old taskcluster-worker and the new generic-worker (think of downstream branches). Now everything runs on generic-worker, but we still had 4 devices configured with taskcluster-worker sitting idle.

Given all of these changes, we will still have backlogs that on a bad day could take 12+ hours to schedule try tasks, but we feel confident that with the current load, most of the time jobs will start in a reasonable time window and, worst case, we will catch up every day.

A caveat to the last statement: we are enabling webrender reftests on Android, and this will increase the load by a couple of devices' worth per day. Any additional tests that we schedule, or a large series of try pushes, will push us past the tipping point. I suspect buying more devices would resolve many complaints about lag and backlogs. My recommendation is to wait 2 more weeks to see whether these changes have a measurable effect on our backlog. While we wait, it would be good to agree on what an acceptable backlog is, so that when we regularly cross that threshold we can quickly determine the number of devices needed to fix the problem.

Categories: Mozilla-nl planet

The Mozilla Blog: Q&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier

Mozilla planet - Wed, 17/07/2019 - 18:51



When you were in school, you may have gone on a trip to the museum, but you probably never stood next to an erupting volcano, watching molten lava pouring down its sides. As Virtual Reality (VR) grows, learning by going into the educational experience could be the way children will learn — using VR headsets the way we use computers.

This kind of technology holds huge potential for shaping young minds, but as with most technology, not all public schools get the same access. For those who come from underserved communities, the high cost of technology could widen an already existing gap in learning, and in future incomes.

Kai Frazier, Founder of the education startup Curated x Kai, has seen this first-hand. As a history teacher who was once a homeless teen, she’s seen how access to technology can change lives. Her experiences on both ends of the spectrum are one of the reasons why she founded Curated x Kai.

As a teacher trying to make a difference, she began to experiment with VR and was inspired by the responses from her students. Now, she is hard at work building a company that stands for opening up opportunities for all children.

We recently sat down with Frazier to talk about the challenges of launching a startup with no engineering experience, and the life-changing impact of VR in the classroom.

How did the idea to use VR to help address inequality come up?
I was teaching close to Washington D.C., and my school couldn’t afford a field trip to the Martin Luther King Memorial or any of the free museums. Even though it was a short distance from their school, they couldn’t go. Being a teacher, I know how those things correlate to classroom performance.


As a teacher, it must have been difficult for you to see kids unable to tour museums and monuments, especially when most are free in D.C. Was the situation similar when it came to accessing technology?
When I was teaching, it was really rough because I didn’t have much. We had so few laptops that we shared a laptop cart between classrooms. You’d sign up for a laptop for your class, and get the laptop cart once every other week. It got so bad that we went to a ‘bring your own device’ policy. So, if they had a cellphone, they could just use a cellphone because that’s a tiny computer.

And computers are considered essential in today’s schools. We know that most kids like to play with computers, but what about VR? How do educational VR experiences impact kids, especially those from challenging backgrounds or kids that aren’t the strongest students?
The kids that are hardest to reach had such strong responses to Virtual Reality. It awakened something in them that the teachers didn’t know was there. I have teachers in Georgia who work with emotionally-troubled children, and they say they’ve never seen them behave so well. A lot of my students were doing bad in science, and now they’re excited to watch science VR experiences.

Let’s back up a bit and talk about how it all started. How did you come up with the idea for Curated x Kai?
On Christmas day, I went to the MLK Memorial with my camera. I took a few 360-videos, and my students loved it. From there, I would take a camera with me and film in other places. I would think, ‘What could happen if I could show these kids everything from history museums, to the world, to colleges to jobs?’

And is that when you decided to launch?
While chaperoning kids on a college tour to Silicon Valley, I was exposed to Bay Area insights that showed me I wasn’t thinking big enough. That was October 2017, so by November 2017, I decided to launch my company.

When I launched, I didn’t think I was going to have a VR company. I have no technical background. My degrees are in history and secondary education. I sold my house. I sold my car. I sold everything I owned. I bootstrapped everything, and moved across the country here (to the San Francisco Bay Area).

When you got to San Francisco, you tried to raise venture capital but you had a hard time. Can you tell us more about that?
No matter what stage you’re in, VCs (Venture Capitalists) want to know you’re not going to be a large risk. There aren’t many examples of people with my background creating an ed tech VR company, so it feels extremely risky to most. The stat is that there’s a 0.2 percent chance for a black woman to get VC funding. I’m sure that if I looked differently, it would have helped us to scale a lot faster.


You’re working out of the Kapor Center in Oakland right now. What has that experience been like for you?
The Kapor Center is different because they are taking a chance on people who are doing good for the community. It’s been an amazing experience being incubated at their facility.

You’re also collaborating with the Emerging Technologies team here at Mozilla. Can you tell us more about that?
I’m used to a lot of false promises or lures of diversity initiatives. I was doing one program that catered to diverse content creators. When I got the contract, it said they were going to take my IP, my copyright, my patent…they would own it all. Mozilla was the first time I had a tech company treat me like a human being. They didn’t really care too much about my background or what credentials I didn’t have.

And of course many early engineers didn’t have traditional credentials. They were just people interested in building something new. As Curated x Kai developed, you chose Hubs, which is a Mozilla project, as the platform to host the virtual tours and museum galleries. Why Hubs, when there are other, higher-resolution options on the market?
Tools like Hubs allow me to get students interested in VR, so they want to keep going. It doesn’t matter how high-res the tool is. Hubs enables me to create a pipeline so people can learn about VR in a very low-cost, low-stakes arena. It seems like many of the more expensive VR experiences haven’t been tested in schools. They often don’t work, because the WiFi can’t handle it.

That’s an interesting point because for students to get the full benefits of VR, it has to work in schools first. Besides getting kids excited to learn, if you had one hope for educational VR, what would it be?
I hope all VR experiences, educational or otherwise, will incorporate diverse voices and perspectives. There is a serious lack of inclusive design, and it shows, from the content to the hardware. The lack of inclusive design creates an unwelcoming experience, and many are robbed of the excitement of VR. Without that excitement, audiences lack the motivation to come back to the experience, and it hinders growth and adaptation in new communities — including schools.

The post Q&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Daniel Stenberg: curl 7.65.2 fixes even more

Mozilla planet - Wed, 17/07/2019 - 09:39

Six weeks after our previous bug-fix release, we ship a second release in a row with nothing but bug-fixes. We call it 7.65.2. We decided to go through this full release cycle with a focus on fixing bugs (and not merge any new features) since even after 7.65.1 shipped as a bug-fix only release we still seemed to get reports indicating problems we wanted fixed once and for all.

Download curl from curl.haxx.se as always!

Also, I personally had a vacation already planned to happen during this period (and I did) so it worked out pretty good to take this cycle as a slightly calmer one.

Of the numbers below, we can especially celebrate that we’ve now received code commits by more than 700 persons!

Numbers

the 183rd release
0 changes
42 days (total: 7,789)

76 bug fixes (total: 5,259)
113 commits (total: 24,500)
0 new public libcurl function (total: 80)
0 new curl_easy_setopt() option (total: 267)

0 new curl command line option (total: 221)
46 contributors, 25 new (total: 1,990)
30 authors, 19 new (total: 706)
1 security fix (total: 90)
200 USD paid in Bug Bounties

Security

Since the previous release we’ve shipped a security fix. It was special in the way that it wasn’t actually a bug in the curl source code, but in the build procedure for how we made curl builds for Windows. For this report, we paid out a 200 USD bug bounty!

Bug-fixes of interest

As usual I’ve carved out a list with some of the bugs since the previous release that I find interesting and that could warrant a little extra highlighting. Check the full changelog on the curl site.

bindlocal: detect and avoid IP version mismatches in bind

It turned out you could ask curl to connect to an IPv4 site and, if you then asked it to bind to an interface on the local end, it could actually bind to an IPv6 address (or vice versa) and then cause havoc and fail. Now we make sure to stick to the same IP version for both!
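The check amounts to making sure the local and remote addresses belong to the same address family before binding. Here is an illustrative sketch of that idea in Python's socket module, not curl's actual C code:

```python
import socket

def bind_family_matches(target_addr, local_addr):
    """Return True only when the local bind address has the same IP version
    as the remote address we are about to connect to."""
    def family(addr):
        try:
            socket.inet_pton(socket.AF_INET, addr)
            return socket.AF_INET
        except OSError:
            # Not a valid IPv4 literal; require it to parse as IPv6 instead.
            socket.inet_pton(socket.AF_INET6, addr)
            return socket.AF_INET6
    return family(target_addr) == family(local_addr)
```

Refusing the mismatch up front is the fix: binding an IPv6 source address on an IPv4 socket can only fail later, and in confusing ways.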

configure: more --disable switches to toggle off individual features

As part of the recent tiny-curl effort, more parts of curl can be disabled in the build and now all of them can be controlled by options to the configure script. We also now have a test that verifies that all the disabled-defines are indeed possible to set with configure!

(A future version could certainly get a better UI/way to configure which parts to enable/disable!)

http2: call done_sending on end of upload

Turned out a very small upload over HTTP/2 could sometimes end up not getting the “upload done” flag set and it would then just linger around or eventually cause a time-out…

libcurl: Restrict redirect schemes to HTTP(S), and FTP(S)

As a stronger safety-precaution, we’ve now made the default set of protocols that are accepted to redirect to much smaller than before. The set of protocols are still settable by applications using the CURLOPT_REDIR_PROTOCOLS option.
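Conceptually, the precaution is a scheme allow-list consulted before following each redirect. A sketch of that logic in illustrative Python (not libcurl code; the function name is hypothetical):

```python
from urllib.parse import urlsplit

# The smaller default set curl now redirects to; applications can widen it
# again via the CURLOPT_REDIR_PROTOCOLS option.
ALLOWED_REDIRECT_SCHEMES = {"http", "https", "ftp", "ftps"}

def may_follow_redirect(location, allowed=ALLOWED_REDIRECT_SCHEMES):
    """Return True if a redirect Location's scheme is on the allow-list."""
    return urlsplit(location).scheme.lower() in allowed
```

The point of the restriction is that a malicious redirect should not be able to bounce a client into schemes with surprising side effects.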

multi: enable multiplexing by default (again)

Embarrassingly enough this default was accidentally switched off in 7.65.0 but now we’re back to enabling multiplexing by default for multi interface uses.

multi: fix the transfer hashes in the socket hash entries

The handling of multiple transfers on the same socket was flawed and previous attempts to fix them were incorrect or simply partial. Now we have an improved system and in fact we now store a separate connection hash table for each internal separate socket object.

openssl: fix pubkey/signature algorithm detection in certinfo

The CURLINFO_CERTINFO option broke with OpenSSL 1.1.0+, but now we have finally caught up with the necessary API changes and it should now work again just as well independent of which version you build curl to use!

runtests: keep logfiles around by default

Previously, when you ran curl’s test suite, it automatically deleted the log files on success and you had to use runtests.pl -k to prevent that. Starting now, it will erase the log files on start instead of on exit, so they will always be kept when the tests finish, no matter how the run went. Just a convenience thing.

runtests: report single test time + total duration

The output from runtests.pl as it runs the tests, one by one, will now include timing information about each individual test: how long each test took and how much time has been spent on the tests so far. This will help us detect when a specific test suddenly takes a very long time and helps us see how the tests perform in the remote CI build farms etc.

Next?

I truly think we’ve now caught up with the worst problems and can now allow features to get merged again. We have some fun ones in the pipe that I can’t wait to put in the hands of users out there…

Categories: Mozilla-nl planet

Nicholas Nethercote: How to speed up the Rust compiler in 2019

Mozilla planet - Wed, 17/07/2019 - 04:54

I have written previously about my efforts to speed up the Rust compiler in 2016 (part 1, part 2) and 2018 (part 1, part 2, NLL edition). It’s time for an update on the first half of 2019.

Faster globals

libsyntax has three tables in a global data structure, called Globals, storing information about spans (code locations), symbols, and hygiene data (which relates to macro expansion). Accessing these tables is moderately expensive, so I found various ways to improve things.

#59693: Every element in the AST has a span, which describes its position in the original source code. Each span consists of an offset, a length, and a third value that is related to macro expansion. The three fields are 12 bytes in total, which is a lot to attach to every AST element, and much of the time the three fields can fit in much less space. So the compiler used a 4 byte compressed form with a fallback to a hash table stored in Globals for spans that didn’t fit in 4 bytes. This PR changed that to 8 bytes. This increased memory usage and traffic slightly, but reduced the fallback rate from roughly 10-20% to less than 1%, speeding up many workloads, the best by an amazing 14%.
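The compressed-with-fallback idea can be sketched as follows. This is illustrative Python, not rustc's actual encoding (the field widths and tag bit are made up for the sketch); Python's unbounded ints stand in for a fixed-width word:

```python
FALLBACK_TABLE = []  # stands in for the interned span table kept in Globals

def encode_span(offset, length, ctxt, bits_len=16, bits_ctxt=8, bits_offset=40):
    """Pack a span inline when its fields fit; otherwise store it in a side
    table and record only the index (the slow fallback path)."""
    if offset < (1 << bits_offset) and length < (1 << bits_len) and ctxt < (1 << bits_ctxt):
        # Inline form: fields packed together, low tag bit 0.
        return ((offset << (bits_len + bits_ctxt)) | (length << bits_ctxt) | ctxt) << 1
    # Fallback form: index into the side table, low tag bit 1.
    FALLBACK_TABLE.append((offset, length, ctxt))
    return ((len(FALLBACK_TABLE) - 1) << 1) | 1

def decode_span(word, bits_len=16, bits_ctxt=8):
    if word & 1:
        return FALLBACK_TABLE[word >> 1]
    word >>= 1
    return (word >> (bits_len + bits_ctxt),
            (word >> bits_ctxt) & ((1 << bits_len) - 1),
            word & ((1 << bits_ctxt) - 1))
```

Widening the inline form is exactly the PR's trade-off: a bigger word per span, but far fewer trips through the shared fallback table.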

#61253: There are numerous operations that accessed the hygiene data, and often these were called in pairs or trios, thus repeating the hygiene data lookup. This PR introduced compound operations that avoid the repeated lookups. This won 10% on packed-simd, up to 3% on numerous other workloads.

#61484: Similar to #61253, this won up to 2% on many benchmarks.

#60630: The compiler has an interned string type, called Symbol. It used this inconsistently. As a result, lots of comparisons were made between symbols and ordinary strings, which required a lookup of the string in the symbols table and then a char-by-char comparison. A symbol-to-symbol comparison is much cheaper, requiring just an integer comparison. This PR removed the symbol-to-string comparison operations, forcing more widespread use of the symbol type. (Fortunately, most of the introduced symbol uses involved statically-known, pre-interned strings, so there weren’t additional interning costs.) This won up to 1% on various benchmarks, and made the use of symbols more consistent.
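Interning is what turns the char-by-char comparison into an integer comparison. A minimal sketch in illustrative Python (not rustc's implementation):

```python
class Interner:
    """Map each distinct string to a small integer symbol exactly once."""

    def __init__(self):
        self.names = {}     # string -> symbol id
        self.strings = []   # symbol id -> string

    def intern(self, s):
        # Crucially, check the table first: re-interning the same string
        # must return the existing id, not allocate a new one.
        if s not in self.names:
            self.names[s] = len(self.strings)
            self.strings.append(s)
        return self.names[s]

    def resolve(self, sym):
        return self.strings[sym]
```

Once every name is a symbol, equality is `a == b` on two ints, which is why forcing consistent symbol use pays off even without removing any work elsewhere.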

#60815: Similar to #60630, this also won up to 1% on various benchmarks.

#60467, #60910, #61035, #60973: These PRs avoided some more unnecessary symbol interning, for sub-1% wins.

Miscellaneous

The following improvements didn’t have any common theme.

#57719: This PR inlined a very hot function, for a 4% win on one workload.

#58210: This PR changed a hot assertion to run only in debug builds, for a 20%(!) win on one workload.

#58207: I mentioned string interning earlier. The Rust compiler also uses interning for a variety of other types where duplicate values are common, including a type called LazyConst. However, the intern_lazy_const function was buggy and didn’t actually do any interning — it just allocated a new LazyConst without first checking if it had been seen before! This PR fixed that problem, reducing peak memory usage and page faults by 59% on one benchmark.

#59507: The pretty-printer was calling write! for every space of indentation, and on some workloads the indentation level can exceed 100. This PR reduced it to a single write! call in the vast majority of cases, for up to a 7% win on a few benchmarks.
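The fix amounts to batching a per-character write loop into one call, shown here in illustrative Python rather than the compiler's Rust:

```python
import io

def indent_per_space(out, level):
    # Old pattern: one write call per space of indentation.
    for _ in range(level):
        out.write(" ")

def indent_single_write(out, level):
    # New pattern: a single call emitting the whole run of spaces.
    out.write(" " * level)
```

Both produce identical output; the win comes purely from paying the per-call overhead once instead of `level` times.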

#59626: This PR changed the preallocated size of one data structure to better match what was needed in practice, reducing peak memory usage by 20 MiB on some workloads.

#61612: This PR optimized a hot path within the parser, whereby constant tokens were uselessly subjected to repeated “is it a keyword?” tests, for up to a 7% win on programs with large constants.

Profiling improvements

The following changes involved improvements to our profiling tools.

#59899: I modified the output of -Zprint-type-sizes so that enum variants are listed from largest to smallest. This makes it much easier to see outsized variants, especially for enums with many variants.

#62110: I improved the output of the -Ztime-passes flag by removing some uninteresting entries that bloated the output and adding a measurement for the total compilation time.

Also, I improved the profiling support within the rustc-perf benchmark suite. First, I added support for profiling with OProfile. I admit I haven’t used it enough yet to gain any wins. It seg faults about half the time when I run it, which isn’t encouraging.

Second, I added support for profiling with the new version of DHAT. This blog post is about 2019, but it’s worth mentioning some improvements I made with the new DHAT’s help in Q4 of 2018, since I didn’t write a blog post about that period: #55167, #55346, #55383, #55384, #55501, #55525, #55556, #55574, #55604, #55777, #55558, #55745, #55778, #55905, #55906, #56268, #56090, #56269, #56336, #56369, #56737, and (ena crate) #14.

Finally, I wrote up brief descriptions for all the benchmarks in rustc-perf.

Pipelined compilation

The improvements above (and all the improvements I’ve done before that) can be described as micro-optimizations, where I used profiling data to optimize a small piece of code.

But it’s also worth thinking about larger, systemic improvements to Rust compiler speed. In this vein, I worked in Q2 with Alex Crichton on pipelined compilation, a feature that increases the amount of parallelism available when building a multi-crate Rust project by overlapping the compilation of dependent crates. In diagram form, a compilation without pipelining looks like this:

          metadata            metadata
[-libA----|--------][-libB----|--------][-binary-----------]
0s        5s        10s       15s       20s              30s

With pipelined compilation, it looks like this:

          metadata            metadata
[-libA----|--------]
          [-libB----|--------]
                    [-binary-----------]
0s        5s        10s       15s     25s

I did the work on the Rust compiler side, and Alex did the work on the Cargo side.

For more details on how it works, how to use it, and lots of measurements, see this thread. The effects are highly dependent on a project’s crate structure and the compiling machine’s configuration. We have seen speed-ups as high as 1.84x, while some projects see no speed-up at all. At worst, it should make things only negligibly slower, because it’s not causing any additional work, just changing the order in which certain things happen.

Pipelined compilation is currently a Nightly-only feature. There is a tracking issue for stabilizing the feature here.

Future work

I have a list of things I want to investigate in Q3.

  • For pipelined compilation, I want to try pushing metadata creation even earlier in the compiler front-end, which may increase the speed-ups some more.
  • The compiler uses memcpy a lot; not directly, but the generated code uses it for value moves and possibly other reasons. In “check” builds that don’t do any code generation, typically 2-8% of all instructions executed occur within memcpy. I want to understand why this is and see if it can be improved. One possibility is moves of excessively large types within the compiler; another possibility is poor code generation. The former would be easier to fix. The latter would be harder to fix, but would benefit many Rust programs.
  • Incremental compilation sometimes isn’t very effective. On some workloads, if you make a tiny change and recompile incrementally it takes about the same time as a full non-incremental compilation. Perhaps a small change to the incremental implementation could result in some big wins.
  • I want to see if there are other hot paths within the parser that could be improved, like in #61612.

I also have various pieces of Firefox work that I need to do in Q3, so I might not get to all of these. If you are interested in working on these ideas, or anything else relating to Rust compiler speed, please get in touch.

Categories: Mozilla-nl planet

Joel Maher: backlogs, lag, and waiting

Mozilla planet - Tue, 16/07/2019 - 22:21

Many times each week I see a ping on IRC or Slack asking “why are my jobs not starting on my try push?”  I want to talk about why we have backlogs and some things to consider in regards to fixing the problem.

It is a frustrating experience when you have code that you are working on or ready to land and some test jobs have been waiting for hours to run. I personally experienced this over the last 2 weeks while trying to uplift some test-only changes to esr68, where I would get results the next day. In fact, many of us on our team joke that we work weekends and less during the week in order to get try results in a reasonable time.

It would be a good time to cover briefly what we run and where we run it, to understand some of the variables.

In general we run on 4 primary platforms:

  • Linux: Ubuntu 16.04
  • OSX: 10.14.5
  • Windows: 7 (32 bit) + 10 (v1803) + 10 (aarch64)
  • Android: Emulator v7.0, hardware 7.0/8.0

In addition to the platforms, we often run tests in a variety of configs:

  • PGO / Opt / Debug
  • Asan / ccov (code coverage)
  • Runtime prefs: qr (webrender) / spi (socket process) / fission (upcoming)

In some cases a single test can run >90 times for a given change when iterated through all the different platforms and configurations. Every week we are adding many new tests to the system and it seems that every month we are changing configurations somehow.
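To get a feel for how the matrix multiplies out, here is a toy calculation; the lists below are illustrative placeholders, not the real scheduling matrix:

```python
from itertools import product

# Hypothetical platform/config matrix -- the real set is larger,
# per-platform, and changes often.
platforms = ["linux", "osx", "win7-32", "win10", "win10-aarch64",
             "android-emu", "android-hw"]
build_types = ["pgo", "opt", "debug", "asan", "ccov"]
runtime_prefs = ["default", "qr", "spi"]

# Every test potentially runs once per combination.
configs = list(product(platforms, build_types, runtime_prefs))
print(len(configs))  # 7 * 5 * 3 = 105 combinations for a single test
```

Not every combination actually runs (many are skipped per platform), which is why the real number lands around 90 rather than the full cross product.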

In total, from January 1st to June 30th (the first half of this year), Mozilla ran >25M test jobs. In order to do that we need a lot of machines; here is what we have:

  • linux
    • unittests are in AWS – basically unlimited
    • perf tests in data center with 200 machines – 1M jobs this year
  • Windows
    • unittests are in AWS (some require instances with a dedicated GPU, and that is a limited pool)
    • perf tests in data center with 600 machines – 1.5M jobs this year
    • Windows 10 aarch64 – 35 laptops (at Bitbar) that run all unittests and perftests, a new platform in 2019 and 20K jobs this year
    • Windows 10 perf reference (low end) laptop – 16 laptops (at Bitbar) that run select perf tests, 30K jobs this year
  • OSX
    • unittests and perf tests run in data center with 450 mac minis – 380K jobs this year
  • Android
    • Emulators (packet.net fixed pool of 50 hosts with 4 instances/host) – 493K jobs this year; most unittests run here
      • will have a much larger pool in the near future
    • real devices – we have 100 real devices (at Bitbar): 40 Motorola G5s and 60 Google Pixel 2s, running all perf tests and some unittests – 288K jobs this year

You will notice that OSX, some Windows laptops, and Android phones are limited resources, and we need to be careful about what we run on them and ensure our machines and devices are running at full capacity.

These limited-resource machines are where we see jobs scheduled and not starting for a long time. We call this backlog; it could also be referred to as lag. While it would be great to point to a public graph showing our backlog, we don’t have great resources that are uniform between all machine types. Here is a view of what we have internally for the Android devices:

What typically happens is that when a developer pushes their code to the try server to run all the tests, many jobs finish in a reasonable amount of time, but jobs scheduled on resource-constrained hardware (such as Android phones) typically have a larger lag, which results in frustration.

How do we manage the load:

  1. reduce the number of jobs
  2. ensure tooling and infrastructure is efficient and fully operational

I would like to talk about how to reduce the number of jobs. This is really important when dealing with limited resources, but we shouldn’t ignore it on other platforms either. The things to tweak are:

  1. what tests are run and on what branches
  2. what frequency we run the tests at
  3. what gets scheduled on try server pushes

For 1, we would like to run everything everywhere, but that isn’t possible, so one of our tricks is to run things on mozilla-central (the branch we ship nightlies off of) and not on our integration branches. A side effect is that a regression isn’t seen for a longer period of time, and finding a root cause can be more difficult. One recent fix: when PGO was enabled for Android we were running both regular tests and PGO tests at the same time for all revisions. We only ship PGO and only need to test PGO, so the jobs were cut in half with a simple fix.

Looking at 2, frequency matters as well. Many tests are for information or comparison only, not for tracking per commit. Running most tests once per day or even once per week will still give a signal, while our most diverse and effective tests run more frequently.

The last option, 3, is where all developers have a chance to spoil the fun for everyone else. One thing is different for try pushes: they are scheduled on the same test machines as our release and integration branches, but they are put in a separate queue that runs at priority 2. Basically, if any new jobs get scheduled on an integration branch, the next available devices will pick those up, and your try push will have to wait until all integration jobs for that device are finished. This keeps our trees open more frequently (if we have 50 commits with no tests run, we could be backing out changes from 12 hours ago, which may already have shipped or may have bitrotted, complicating the backout). One other aspect of this is that there are >10K jobs one could possibly run when scheduling a try push, and knowing what to run is hard. Many developers know what to run, and some over-schedule, either out of difficulty in job selection or out of being overly cautious.
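The two-level queue described above behaves like a plain priority queue. A toy sketch (the job names are made up) shows why a try job scheduled first can still run last:

```python
import heapq
from itertools import count

# Toy model of the scheduling described above: integration-branch jobs run
# at priority 1, try jobs at priority 2, so any pending integration job is
# always picked up before any try job.
queue = []
order = count()  # tie-breaker: FIFO within the same priority

def schedule(priority, job):
    heapq.heappush(queue, (priority, next(order), job))

# A developer's try push arrives first...
schedule(2, "try: android perf tests")
# ...but integration jobs scheduled later still jump ahead of it.
schedule(1, "autoland: android perf tests")
schedule(1, "mozilla-central: android perf tests")

run_order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(run_order)
# The try job runs last even though it was scheduled first.
```

With a 1000+ job integration backlog, that trailing try job can wait hours, which is exactly the lag developers complain about.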

Keeping all of this in mind, I often see pushes to our try server scheduling what looks to be way too many jobs on hardware. Once someone does this, everybody else who wants to get their 3 jobs run has to wait in line behind the queue of jobs (many times 1000+), which often only get run overnight for North America.

I would encourage developers pushing to try to really question whether they need all jobs, or just a sample of the possible jobs. With tools like ./mach try fuzzy, ./mach try chooser, or ./mach try empty it is easier to schedule what you need instead of issuing blanket commands that run everything. I also encourage everyone to cancel old try pushes when a second try push has been performed to fix errors from the first one – that alone saves a lot of unnecessary jobs from running.



Hacks.Mozilla.Org: MDN’s First Annual Web Developer & Designer Survey

Mozilla planet - Tue, 16/07/2019 - 17:04

Today we are launching the first edition of the MDN Developer & Designer Needs Survey. Web developers and designers, we need to hear from you! This is your opportunity to tell us about your needs and frustrations with the web. In fact, your participation will influence how browser vendors like Mozilla, Google, Microsoft, and Samsung prioritize feature development.

Goals of the MDN Developer Needs Survey

With the insights we gather from your input, we aim to:

  • Understand and prioritize the needs of web developers and designers around the world. If you create sites or services using HTML, CSS and/or JavaScript, we want to hear from you.
  • Understand how browser vendors should prioritize feature development to address your needs and challenges.
  • Set a baseline to evaluate how your needs change year over year.

The survey results will be published on MDN Web Docs (MDN) as an annual report. In addition, we’ll include a prioritized list of needs, with an analysis of priorities by geographic region.

It’s easy to take part. The MDN survey has been designed in collaboration with the major browser vendors mentioned above, and the W3C. We expect it will take approximately 20 minutes to complete. This is your chance to make yourself heard. Results and learnings will be shared publicly on MDN later this year.

Please participate

You’re invited to take the survey here. Your participation can help move the web forward, and make it a better experience for everyone. Thank you for your time and attention!

Take the MDN survey!

The post MDN’s First Annual Web Developer & Designer Survey appeared first on Mozilla Hacks - the Web developer blog.


Rabimba: ARCore and Arkit: What is under the hood : Anchors and World Mapping (Part 1)

Mozilla planet - Tue, 16/07/2019 - 12:49
Reading Time: 7 min
Some of you know I have recently been experimenting a bit more with WebXR rather than WebVR, and when we talk about mobile Mixed Reality, ARKit and ARCore play a pivotal role in mapping and understanding the environment inside our applications.
I am planning to write a series of blog posts on how you can start developing WebXR applications now and play with them, starting with the basics and then moving on to different features. But before that, I wanted to pen down this series on how "world mapping" actually works in ARCore and ARKit, so that we have a better understanding of the Mixed Reality capabilities of the devices we will be working with.
Mapping: feature detection and anchors

Creating apps that work seamlessly with ARCore/ARKit requires a little bit of knowledge about the algorithms that work behind the scenes, and that involves knowing about anchors.

What are anchors?

Anchors are your virtual markers in the real world. As a developer, you anchor any virtual object to the real world, and that 3D model stays glued to its physical location. Anchors also get updated over time depending on the new information the system learns. For example, if you anchor a Pikachu 2 ft away from you and then actually walk towards your Pikachu, and the system realises the actual distance is 2.1 ft, it will compensate for that. In real life we have a static coordinate system where every object has its own x, y, z coordinates. Anchors in devices like HoloLens override the rotation and position of the transform component.

How do anchors stick?

If we follow the documentation for Google ARCore, anchors are attached to things called "trackables", which are feature points and planes in an image. Planes are essentially clustered feature points. You can take a more in-depth look at what Google says an ARCore anchor does by reading their really nice Fundamentals. But for our purposes, we first need to understand what exactly these feature points are.

Feature points: through the eyes of computer vision

Feature points are distinctive markers in an image that an algorithm can use to track specific things in that image. Normally any distinctive pattern, such as a T-junction or a corner, is a good candidate. Alone they are not enough to distinguish between each other and reliably place a marker on an image, so the neighbouring pixels are also analyzed and saved as a descriptor.

Now, a good anchor should have reliable feature points attached to it. The algorithm must be able to find the same physical space under different viewpoints. It should be able to accommodate changes in:
  • Camera Perspective
  • Rotation
  • Scale
  • Lighting
  • Motion blur and noise
Reliable feature points

This is an open research problem with multiple solutions on offer, each with its own set of problems. One of the most popular algorithms stems from a paper by David G. Lowe in IJCV, called Scale Invariant Feature Transform (SIFT). A follow-up work which claims even better speed, Speeded Up Robust Features (SURF) by Bay et al., was published at ECCV in 2006. Though both of them are patented at this point.
Microsoft HoloLens doesn't really need to do all this heavy lifting, since it can rely on an extensive amount of sensor data, especially depth data from its infrared sensors. However, ARCore and ARKit don't enjoy those privileges and have to work with 2D images. Though we cannot say for sure which algorithm is actually used by ARKit or ARCore, we can try to replicate the process with a patent-free algorithm to understand how it actually works.
D.I.Y. feature detection and keypoint descriptors

To understand the process we will use an algorithm by Leutenegger et al. called BRISK. To detect features we must follow a multi-step process. A typical algorithm would adhere to the following two steps:
  1. Keypoint detection: Detecting keypoints can be as simple as just detecting corners, which essentially evaluates the contrast between neighbouring pixels. A common way to do that in a large-scale image is to blur the image to smooth out pixel contrast variation and then do edge detection on it. The rationale is that you would normally want to detect a tree or a house as a whole to achieve reliable tracking, instead of every single twig or window. SIFT and SURF adhere to this approach. However, for real-time scenarios blurring adds a compute penalty which we don't want. In their paper "Machine Learning for High-Speed Corner Detection", Rosten and Drummond proposed a method called FAST, which analyzes the circular surrounding of each pixel p. If enough connected pixels in that circle are sufficiently brighter or darker than the centre pixel, the algorithm has found a corner. (Image credits: Rosten E., Drummond T. (2006), Machine Learning for High-Speed Corner Detection, Computer Vision – ECCV 2006.) Now back to BRISK: out of the 16-pixel circle, 9 consecutive pixels must be brighter or darker than the central one. BRISK also uses down-sized images, allowing it to achieve better invariance to scale.
  2. Keypoint descriptor: The primary property of the detected keypoints should be that they are unique. The algorithm should be able to find the same feature in a different image with a different viewpoint and lighting. BRISK compares the brightness of different pixel pairs surrounding the centre keypoint and concatenates the results into a 512-bit string. (Image credits: S. Leutenegger, M. Chli and R. Y. Siegwart, "BRISK: Binary Robust Invariant Scalable Keypoints", 2011 International Conference on Computer Vision.) As we can see from the sample figure, the blue dots form the concentric circles and the red circles indicate the individually sampled areas; based on these, the results of the brightness comparisons are determined. BRISK also ensures rotational invariance, computed from the largest gradients between pairs of samples that lie far apart.
Test out the code

To test out the algorithm we will use the reference implementation available in OpenCV, which we can install with pip (including its Python bindings).
As test images, I used two pictures I had previously captured for my talks at All Things Open and a TechSpeaker meetup: an evening shot of the Louvre, which should have enough corners as well as overlapping edges, and a picture of my friend in a beer garden taken in portrait mode, to see how the algorithm fares against the blur already present in the image.

Original Image Link
Original Image Link

Visualizing the feature points

We use the small code snippet below to run BRISK on the two images above and visualize the feature points.

What the code does:
  1. Loads the JPEG into a variable and converts it to grayscale
  2. Initializes BRISK and runs it. We use the paper-suggested 4 octaves and an increased threshold of 70. This ensures we get a smaller number of highly reliable keypoints. (As we will see below, we still got a lot.)
  3. Uses detectAndCompute() to get two arrays from the algorithm, holding both the keypoints and their descriptors
  4. Draws the keypoints at their detected positions, indicating keypoint size and orientation through circle diameter and angle; the DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS flag does that.


As you can see, for the Louvre most of the keypoints sit on the castle edges and other visible edges, with almost none on the floor. With the portrait-mode picture of my friend it's more diverse, but it also shows some false positives, like the reflection on the glass. Considering how BRISK works, this is normal.
Conclusion

In this first part of understanding what powers ARCore/ARKit, we looked at how basic feature detection works behind the scenes and tried our hand at replicating it. These spatial anchors are vital for "glueing" virtual objects to the real world. This demo and hands-on should help you understand why it is a good idea to design your apps so that users place objects in areas where the device has a chance to create anchors. Placing a lot of objects on a single non-textured smooth plane may produce an inconsistent experience, since it's just too hard to detect keypoints on a plain surface (now you know why); as a result, the objects may sometimes drift away if tracking is not good enough. If your app design encourages placing objects near the corners of a floor or table, the app has a much better chance of working reliably. Also, Google's guide on anchor placement is an excellent read.
Their short recommendation is:
  • Release spatial anchors if you don’t need them. Each anchor costs CPU cycles which you can save
  • Keep the object close to the anchor. 
In our next post, we will see how we can use this knowledge to do basic SLAM.

Update: The next post lives here: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html


This Week In Rust: This Week in Rust 295

Mozilla planet - Tue, 16/07/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is overloadable, a crate that provides you with the capability to overload your functions in a similar style to C# or C++, including support for meta attributes, type parameters and constraints, and visibility modifiers. Thanks to Stevensonmt for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

235 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa · Asia Pacific · Europe · North America · South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is 5 languages stacked on top of each other, except that instead of ending up like 5 children under a trenchcoat, they end up like the power rangers.

reuvenpo on /r/rust

Thanks to Jelte Fennema for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


The Firefox Frontier: 100,985,047 have been invited to the Evite data breach “party”

Mozilla planet - Mon, 15/07/2019 - 22:31

Did you get an invitation to the latest data breach? Over the weekend it was disclosed that Evite, the online invitation platform that has sent more than a few birthday … Read more

The post 100,985,047 have been invited to the Evite data breach “party” appeared first on The Firefox Frontier.


Kartikaya Gupta: The Gecko Hacker's Guide to Taskcluster

Mozilla planet - Mon, 15/07/2019 - 16:22

Don't panic.

I spent a good chunk of this year fiddling with taskcluster configurations in order to get various bits of continuous integration stood up for WebRender. Taskcluster configuration is very flexible and powerful, but can also be daunting at first. This guide is intended to give you a mental model of how it works, and how to add new jobs and modify existing ones. I'll try and cover things in detail where I believe the detail would be helpful, but in the interest of brevity I'll skip over things that should be mostly obvious by inspection or experimentation if you actually start digging around in the configurations. I also try and walk through examples and provide links to code as much as possible.

Based on my experience, there are two main kinds of changes most Gecko hackers might want to do:
(1) modify existing Gecko test configurations, and
(2) add new jobs that run in automation.
I'm going to explain (2) in some detail, because on top of that explaining (1) is a lot easier.

Overview of fundamentals

The taskcluster configuration lives in-tree in the taskcluster/ folder. The most interesting subfolders there are the ci/ folder, which contains job definitions in .yml files, and the taskgraph/ folder, which contains python scripts to do transforms. A quick summary of the process is that the job definitions are taken from the .yml files and run through a series of "transforms", finally producing a task definition. The task definitions may have dependencies on other task definitions; together this set is called the "task graph". That is then submitted to Taskcluster for execution. Conceptually you can think of the stuff in the .yml files as a higher-level definition, which gets "compiled" down to the final taskgraph (the "machine code") that Taskcluster actually understands and executes.

It's helpful to walk through a quick example of a transform. Consider the webrender-linux-release job definition. At the top of the kind.yml file, there are a number of transforms listed, and each of those gets applied to the job definition in turn. The first one is use_toolchains, the code for which you can find in use_toolchains.py. This transform takes the toolchains attributes of the job definition (in this example, these attributes) and figures out the corresponding toolchain jobs and artifacts (artifacts are files that are outputs of a task). The toolchain jobs are defined in taskcluster/ci/toolchains/, so we can see that the linux64-rust toolchain is here and the wrench-deps toolchain is here. The transform code discovers this, then adds dependencies for those tasks to webrender-linux-release, and populates the MOZ_TOOLCHAINS env var with links to the artifacts. Taskcluster ensures that tasks only get run after their dependencies are done, and MOZ_TOOLCHAINS is used by ./mach artifact toolchain to actually download and unpack those toolchains when the webrender-linux-release task runs.

Similar to this, there are lots of other transforms that live in taskcluster/taskgraph/transforms/ that do other transformations on the job definitions. Some of the job definitions in the .yml files look very different in terms of what attributes are defined, and go through many layers of transformation before they produce the final task definition. In a sense, each folder in taskcluster/ci is a domain-specific language, with the transforms eventually compiling them down to the same machine code.

One of the more important transforms is the job transform, which determines which wrapper script will be used to run your job's commands. This is usually specified in your job definition using the run.using attribute. Examples include mach or toolchain-script. These both get transformed (see transforms for mach and toolchain-script) into commands that get passed to run-task. Or you can just use run-task directly in your job, and specify the command you want to have run. In the end most jobs will boil down to run-task commands; the run-task wrapper script just does some basic abstraction over the host machine/VM and then runs the command.

Host environment and Docker

Taskcluster can manage many different underlying host systems - from Amazon Linux VMs, to dedicated physical macOS machines, and everything in between. I'm not going to go into the details of provisioning, but there's the notion of a "worker" which is useful to know about. This is the binary that runs on the host system, polls Taskcluster to find new tasks scheduled to be run on that kind of host system, and executes them in whatever sandboxing is available on that host system. For example, a docker-worker instance will start a specified docker image and execute the task commands in it. A generic-worker instance will instead create a dedicated work folder and run stuff in there, cleaning up afterwards.

If you're adding a new job (e.g. some sort of code analysis or whatever) most likely you should run on Linux using docker. This means you need to specify a docker image, which you can also do using the taskcluster configuration. Again I will use the webrender-linux-release job as an example. It specifies the webrender docker image to use, which is defined with an in-tree Dockerfile. The docker image is itself built using taskcluster with the job definition here (this job definition is an example of one that looks very different from other job definitions, because the job description is literally two lines and transforms do most of the work in compiling this into a full task definition).

Circling back to generic-worker, this is the default for jobs that need to run on Windows and macOS, because Docker doesn't run on either of those platforms as a target. An example is the webrender-macos-debug job, which specifies using: run-task on a t-osx-1010 worker type; this causes it to go through this transform and eventually run using the run-task wrapper under generic-worker on the macOS instance. For the most part you probably won't need to care about this, but it's something to be aware of if you're running jobs targeted at Windows or macOS.

Caching

As we've seen from previous sections, jobs can use docker images and toolchain artifacts that are produced by other tasks in the taskgraph. Of course, we don't want to rebuild these docker images and toolchains on every single push, as that would be quite expensive. Taskcluster provides a mechanism for caching and reusing artifacts from previous pushes. I'm not going to go into too much detail on this, but you can look at the cached_tasks transform if you're interested. I will, however, point to the %include comments in e.g. this Dockerfile and note that these include statements are special because they mark the docker image as dependent on the given file. So if the included file changes, the docker image will be rebuilt. For toolchain tasks you can specify additional dependencies on inputs using the resources attribute; this also triggers toolchain rebuilds if the dependent inputs change.

The other thing to keep in mind when adding a new job is that you want to avoid too much network traffic or redundant work. So if your job involves downloading or building stuff that usually doesn't change from one push to the next, you probably want to split up your job so that the mostly-static part is done by a toolchain or other job, and the result of that is cached and reused by your main job. This will reduce overall load and also improve the runtime of your per-push job.

Even if you don't need caching across pushes, you might want to refactor two jobs so that their shared work is extracted into a dependency, and the two dependent jobs then just do their unique postprocessing bits. This can be done by manually specifying the dependencies and pulling in artifacts from those dependencies using the fetches attribute. See here for an example. In this scenario taskcluster will again ensure the jobs run in the right order, and you can use artifacts from the dependencies, but no caching across pushes takes place.

Adding new jobs

So hopefully with the above sections you have a general idea of how taskcluster configuration works in mozilla-central CI. To add a new job, you probably want to find an existing job kind that fits what you want, and then add your job to that folder, possibly by copy/pasting an existing job with appropriate modifications. Or if you have a new type of job you want to run that's significantly different from existing ones, you can add a new kind (a new subfolder in taskcluster/ci and documented in kinds.rst). Either way, you'll want to ensure the transforms being used are appropriate and allow you to reuse the features (e.g. toolchain dependencies) that you need and that already exist.
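As a concrete sketch, a job added to a new kind might look roughly like the following. The kind, job name, worker type, docker image, and command here are hypothetical placeholders rather than a real job from mozilla-central, the kind-level loader/transforms boilerplate is omitted, and the exact attributes available depend on which transforms your kind lists:

```yaml
# taskcluster/ci/my-analysis/kind.yml (hypothetical example)
my-analysis-linux:
    description: Run my custom source analysis
    run-on-projects: [trunk]
    worker-type: t-linux-xlarge
    worker:
        docker-image: {in-tree: webrender}
        max-run-time: 3600
    run:
        using: run-task
        command: ./run-my-analysis.sh
    toolchains:
        - linux64-rust
```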

Gecko testing

The Gecko test jobs are defined in the taskcluster/ci/test/ folder. The entry point, as always, is the kind.yml file in that folder, which lists the transforms that get applied. The tests transform is one of the largest and most complex transforms. It does a variety of things (e.g. generating fission-enabled and fission-disabled tasks for jobs), but thankfully you probably won't need to fiddle with it too much, unless you find your test suite is behaving unexpectedly. Instead, you can mostly copy-paste in the other .yml files to enable test suites on particular platforms or adjust options. The test-platforms.yml file allows you to define "test platforms", which show up as new rows on TreeHerder and run sets of tests on a particular build. The sets of tests are defined in test-sets.yml, which in turn references the individual test jobs defined in the various other .yml files in that folder. Enabling a test suite on a platform is generally as easy as adding the test to the test set that you care about, and maybe tweaking some of the per-platform test attributes (e.g. number of chunks, or what trees to run on) to suit your new platform. I found the .yml files mostly self-explanatory, so I won't walk through any examples here.

What I will briefly mention is how the tests are actually run. This is not strictly part of taskcluster but is good to know anyway. The test tasks generally run using mozharness, which is a set of scripts in testing/mozharness/scripts and configuration files in testing/mozharness/configs. Mozharness is responsible for setting up the firefox environment and then delegates to the actual test harness. For example, running reftests on desktop linux would run testing/mozharness/scripts/desktop_unittest.py with the testing/mozharness/configs/unittests/linux_unittest.py config (which are indicated in the job description here). The config file, among other things, tells the mozharness script where the actual test harness entrypoint is (in this case, runreftest.py) and the mozharness script will invoke that test harness after doing some setup. There's many layers here (run-task, mozharness, test harness, etc.) with the number of layers varying across platforms (mozharness scripts for Android device/emulator are totally different than desktop), and I don't have as good a grasp on all this as I would like, but hopefully this is sufficient to point you in the right direction if you need to fiddle with tests at this level.

Debugging

As always, when modifying configs you might run into unexpected problems and need to debug. There are a few tools that are useful here. One is the ./mach taskgraph command, which can run different steps of the "decision" task and generate taskgraphs. When trying to debug task generation issues my go-to technique would be to download a parameters.yml file from an existing decision task on try or m-c (you can find it in the artifacts list for the decision task on TreeHerder), and then run ./mach taskgraph target-graph -p parameters.yml. This runs the taskgraph code and emits a list of tasks that would be scheduled given the taskcluster configuration in your local tree and the parameters provided. Likewise, ./mach taskcluster-build-image and ./mach taskcluster-load-image are useful for building and testing docker images for use with jobs. You can use these to e.g. run a docker image with your local Docker installation, and see what all you might need to install on it to make it ready to run your job.

Another useful debugging tool, once you start doing try pushes to test your new tasks, is the task/group inspector at tools.taskcluster.net. It is easily accessible from TreeHerder by clicking on a job and using the "Inspect task" link in the details pane (bottom left). TreeHerder provides some information, but the Taskcluster tools website provides a much richer view, including the final task description that was submitted to Taskcluster, its dependencies, its environment variables, and so on. While TreeHerder is useful for looking at overall push health, the Taskcluster web UI is better for debugging specific task problems, especially while you're in the process of standing up a new task.

In general the taskcluster scripts are pretty good at identifying errors and printing useful error messages. Schemas are enforced, so it's rare to run into silent failures because of typos and such.
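For a feel of what those schema-validated definitions look like, here is an illustrative excerpt in the style of the test kind .yml files under taskcluster/ (not copied verbatim from the tree; the field names only approximate the real schema):

```yaml
# Illustrative sketch of a test definition; the schema rejects a misspelled
# key (e.g. "treherder-symbol") at taskgraph generation time rather than
# silently ignoring it.
reftest:
    description: "Reftest run"
    suite: reftest
    treeherder-symbol: R(R)
    mozharness:
        script: desktop_unittest.py
        config:
            by-test-platform:
                linux64.*: [unittests/linux_unittest.py]
```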

Conclusion

There's a lot of power and flexibility afforded by Taskcluster, but with that comes a steep learning curve. Thankfully, once you understand the basic shape of how Taskcluster works, most of the configuration tends to be fairly intuitive, and reading the existing .yml files is a good way to discover the different features available. grep/searchfox in the taskcluster/ folder will help you out a lot. If you run into problems, there are always people willing to help - as of this writing, :tomprince is the best first point of contact for most of this, and he can redirect you if needed.

Categories: Mozilla-nl planet

IRL (podcast): The Internet's Carbon Footprint

Mozilla planet - Mon, 15/07/2019 - 09:05

Manoush Zomorodi explores the surprising environmental impact of the internet in this episode of IRL. While it's easy to think of the internet as living only on your screen, it is in fact powered by massive server farms, running around the clock, all over the world. What exactly is the internet's carbon footprint? And what can we do about it?

Music professor Kyle Devine considers the environmental costs of streaming music. Geophysicist and pop scientist Miles Traer takes his best shot at calculating the carbon footprint of the IRL podcast. Climate journalist Tatiana Schlossberg explores the environmental influence we don’t know we have and what the web’s got to do with it. Greenpeace’s Gary Cook explains which tech companies are committed to renewable energy — and which are not. Kris De Decker tries powering his website with a homebrew solar power system. And, Ecosia's Chief Tree Planting Officer Pieter Van Midwoud discusses how his company uses online search to plant trees.

IRL is an original podcast from Firefox. For more on the series go to irlpodcast.org

Love the internet, but also love the environment? Here are some ways you can reduce your energy consumption — or offset it — while online.

Learn more about Kyle Devine’s research on the environmental costs of music streaming.

For more from Tatiana Schlossberg, check out her book, Inconspicuous Consumption: The Environmental Impact You Don’t Know You Have.

Have a read through Greenpeace’s Click Clean Report that Gary Cook discusses in this IRL episode.

You can find solar-powered Low Tech Magazine here and, if the weather is bad, you can view the archive here.

As Pieter Van Midwoud notes, Ecosia uses the money it makes from your online searches to plant trees where they are needed most. Learn more about Ecosia, an alternative to Google Search.

Here’s more about Miles Traer, the geophysicist who calculated the carbon footprint of the IRL podcast.

And, if you’re interested in offsetting your personal carbon emissions overall, Carbonfund.org can help with that.

The sound of a data center in this episode is courtesy of artist Matt Parker. Download his music here.
