
Air Mozilla: Mozilla Weekly Project Meeting, 09 Oct 2017

Mozilla planet - Mon, 09/10/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet


Mozilla pilots Cliqz engine in Firefox to slurp user browsing data - ZDNet

News collected via Google - Mon, 09/10/2017 - 15:05


Mozilla pilots Cliqz engine in Firefox to slurp user browsing data
Mozilla has launched a pilot program using Cliqz technology to pull user browsing data in Firefox. Last week, Mountain View, CA-based Mozilla said the inclusion of the Cliqz plugin, bolt-on software which recommends links to news, weather, sport and ...
Related coverage:
  • Mozilla Firefox Announces End to Support for Windows XP and Vista (WebProNews)
  • Mozilla Experiments with Cliqz Plug-In for Firefox (HTML Goodies)
  • Mozilla Firefox is ending its support so time to upgrade (RS-Tech press release blog)
  • Windows Report

WINMAG #100: Mozilla Send - WINMAG Pro

News collected via Google - Mon, 09/10/2017 - 12:57


WINMAG #100: Mozilla Send
Mozilla has built an application that lets you send files of up to 1 GB. The tool automatically destroys the files after they have been downloaded. The application, called Send, works not only in Mozilla's own browser Firefox, but also on ...


Robert O'Callahan: Type Safety And Data Flow Integrity

Mozilla planet - Mon, 09/10/2017 - 00:30

We talk a lot about memory safety assuming everyone knows what it is, but I think that can be confusing and sell short the benefits of safety in modern programming languages. It's probably better to talk about "type safety". This can be formalized in various ways, but intuitively a language's type system proposes constraints on what is allowed to happen at run-time — constraints that programmers assume when reasoning about their programs; type-safe code actually obeys those constraints.

This includes classic memory safety features such as avoidance of buffer overflows: writing past the end of an array has effects on the data after the array that the type system does not allow for. But type safety also means, for example, that (in most languages) a field of an object cannot be read or written except through pointers/references created by explicit access to that field.

With this loose definition, type safety of a piece of code can be achieved in different ways. The compiler might enforce it, or you might prove the required properties mechanically or by hand, or you might just test it until you've fixed all the bugs.

One implication of this is that type-safe code provides data-flow integrity. A type system provides intuitive constraints on how data can flow from one part of the program to another. For example, if your code has private fields that the language only lets you access through a limited set of methods, then at run time it's true that all accesses to those fields are by those methods (or due to unsafe code).

Type-safe code also provides control-flow integrity, because any reasonable type system also suggests fine-grained constraints on control flow.

Data-flow integrity is very important. Most information-disclosure bugs (e.g. Heartbleed) violate data-flow integrity, but usually don't violate control-flow integrity. "Wild write" bugs are a very powerful primitive for attackers because they allow massive violation of data-flow integrity; most security-relevant decisions can be compromised if you can corrupt their inputs.

A lot of work has been done to enforce CFI for C/C++ using dynamic checks with reasonably low overhead. That's good and important work. But attackers will move to attacking DFI, and that's going to be a lot harder to solve for C/C++. For example the checking performed by ASAN is only a subset of what would be required to enforce the C++ type system, and ASAN's overhead is already too high. You would never choose C/C++ for performance reasons if you had to run under ASAN. (I guess you could reduce ASAN's overhead if you dropped all the support for debugging, but it would still be too high.)

Note 1: people often say "even type safe programs still have correctness bugs, so you're just solving one class of bugs which is not a big deal" (or, "... so you should just use C and prove everything correct"). This underestimates the power of type safety with a reasonably rich type system. Having fine-grained CFI and DFI, and generally being able to trust the assumptions the type system suggests to you, are essential for sound reasoning about programs. Then you can leverage the type system to build abstractions that let you check more properties; e.g. you can enforce separation between trusted and untrusted data by giving untrusted user input different types and access methods to trusted data. The more of your code is type-safe, the stronger your confidence in those properties.

Note 2: C/C++ could be considered "type safe" just because the specification says any program executing undefined behavior gets no behavioral constraints whatsoever. However, in practice, programmers reasoning about C/C++ code must (and do) assume the constraint "no undefined behavior occurs"; type-safe C/C++ code must ensure this.

Note 3: the presence of unsafe code within a hardware-enforced protection domain can undermine the properties of type-safe code within the same domain, but minimizing the amount of such unsafe code is still worthwhile, because it reduces your attack surface.


Daniel Pocock: A step change in managing your calendar, without social media

Mozilla planet - Sun, 08/10/2017 - 19:36

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites, like Agenda du Libre and GriCal, that aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
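
The deadline reminders could be plain iCalendar to-dos. As a rough illustration (format per RFC 5545; the event name, UID and deadline below are invented, not part of any proposal), a Python sketch that emits a minimal to-do for a talk-submission deadline:

```python
def cfp_todo(uid, summary, due_yyyymmdd):
    # Build a minimal VCALENDAR holding one VTODO (RFC 5545).
    # The spec requires CRLF line endings.
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//event-deadlines//EN",
        "BEGIN:VTODO",
        "UID:" + uid,
        "DTSTAMP:20171008T000000Z",
        "DUE;VALUE=DATE:" + due_yyyymmdd,
        "SUMMARY:" + summary,
        "END:VTODO",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"

ics = cfp_todo("minidebconf-cfp@example.org",
               "Submit talk proposal: MiniDebConf", "20171101")
```

A harvester could emit one VTODO per deadline and serve the result as a single calendar you subscribe to.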

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina: the pizzas are huge, but that didn't stop them disappearing before we finished the photos.


Giorgos Logiotatidis: Automating Podcast generation from SoundCloud

Mozilla planet - Sun, 08/10/2017 - 18:12

There's this popular daily FM radio show in Greece which posts its shows on SoundCloud after broadcasting them. It's a good (albeit not great; plain HTML5 audio would be fine) way to listen to the show on demand if you're on desktop. The website is not mobile friendly and the whole embedded SoundCloud experience is sub-optimal, let alone that you cannot just add the feed to your favorite podcast player to enjoy it.

There's an RSS feed on iTunes but it's manually updated and inevitably lags a day or two behind, depending on the availability of the maintainer.

I decided to fix the problem myself and since this turned out to be a solution involving a bunch of interesting technologies I thought to write a blog post about it. If you only care about the podcast you can find it here.

Step 1: Extracting content from SoundCloud

The episodes are embedded in the official website but are hidden in SoundCloud. Probably there's a hidden attribute you can set on SoundCloud media. That explains why my first attempt to download the episodes using SoundScrape failed, with the latter complaining that it couldn't find any videos.

Then I started examining SoundCloud's JS and the JSON responses sent when you click the play button, with the ultimate goal of writing a SoundCloud downloader. The service follows a typical pattern: authenticate, then get a unique auto-expiring link to S3. This can be automated, but it's not fun to do.

While taking a break from parsing JSON responses, it occurred to me that youtube-dl, despite its very specific name, supports other websites too, actually hundreds of them. Run youtube-dl against a URL with embedded SoundCloud audio and youtube-dl will find and download the best version of the audio file, including the cover thumbnail!

All I need now is a simple Python script to extract all URLs with embedded SoundCloud audio and feed them to youtube-dl as a list using the --batch-file argument.
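
Such a script can be sketched as follows (the embed markup and helper names are my assumptions, not the script from the post): extract every SoundCloud URL referenced by the show's archive page, then write them to a file for youtube-dl's --batch-file:

```python
import re
import urllib.request

SOUNDCLOUD_RE = re.compile(r'https?://(?:api\.)?soundcloud\.com/[^"\'\s&]+')

def extract_soundcloud_urls(html):
    # Collect every SoundCloud URL on the page, keeping order
    # and dropping duplicates.
    seen, urls = set(), []
    for url in SOUNDCLOUD_RE.findall(html):
        if url not in seen:
            seen.add(url)
            urls.append(url)
    return urls

def write_batch_file(page_url, path="episodes.txt"):
    # Fetch the archive page and hand the URLs to youtube-dl via:
    #   youtube-dl --batch-file episodes.txt
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    with open(path, "w") as f:
        f.write("\n".join(extract_soundcloud_urls(html)) + "\n")
```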

Step 2: Generate the Podcast RSS

With all the mp3 files for the show downloaded, the next step is to generate the podcast RSS. FeedGen is a simple pythonic library which builds RSS feeds, including extensions for podcasts and iTunes attributes.
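
FeedGen takes care of the details; to make the shape of the output concrete, here is a stdlib-only sketch (titles and URLs invented) of the essential structure a podcast feed needs, namely items carrying enclosure tags that podcast players download:

```python
from xml.sax.saxutils import escape

def podcast_rss(title, link, episodes):
    # episodes: list of (title, mp3_url, size_bytes) tuples.
    # FeedGen also emits the iTunes extensions; this sketch keeps
    # only the parts every podcast player needs: channel metadata
    # plus one enclosure per episode.
    items = []
    for ep_title, url, size in episodes:
        items.append(
            "<item><title>%s</title>"
            '<enclosure url="%s" length="%d" type="audio/mpeg"/>'
            "</item>" % (escape(ep_title), escape(url), size)
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        "<title>%s</title><link>%s</link>%s"
        "</channel></rss>" % (escape(title), escape(link), "".join(items))
    )
```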

Step 3: Serve the Podcast RSS

I serve all my personal websites using Dokku running on my VPS. I used a Debian-based Docker image and installed Python 2 and the Python libraries needed for the feed generation. I also installed nginx-light to serve the content, both the RSS and the audio files.

I originally used the genRSS project to generate the RSS, but it complained about the Unicode characters in the mp3 filenames when run from the Docker image. I fixed this by adding en_US.UTF-8 to the supported locales and running locale-gen on image build:

RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
    locale-gen
ENV LC_ALL en_US.UTF-8

The docker image default command runs nginx with a minimal nginx.conf.

Dokku takes care of everything else, including getting certificates from LetsEncrypt.

Step 4: Update the Feed

From Monday to Friday, cron runs a command to update the feed every 5 minutes, from the moment the show ends up to an hour after. The show producers are very consistent in uploading the show on time, so that seems to just work. To be on the safe side I added another run two hours after the show ends.

The cron runs on the host, using dokku run. The podcast and the audio files are stored in a Docker volume and therefore both the web serving process and the cron job can access this persistent storage at the same time.

Youtube-dl is smart enough not to re-download content that already exists, so running the command multiple times does not hammer the servers.

Step 5: Monitoring

For an automation to be perfect it must be monitored. As with all my websites, I set up a NewRelic Synthetics monitor which checks that the feed is served and that its content appears valid, by looking for the "pubDate" text.
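
The validity check itself can be tiny. A sketch of the same heuristic in Python (the post only mentions looking for "pubDate"; the "<rss" check is my addition):

```python
def feed_looks_valid(body):
    # A correctly served feed is RSS and carries at least one
    # pubDate; an nginx error page would fail both tests.
    return "<rss" in body and "pubDate" in body
```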

To monitor the cron job, I cURL a provided URL at the very end of the bash script that coordinates the fetching and building of the feed. Make sure to set -e in your bash scripts so they exit after the first failed command; without -e, cURL will always be called even if a step fails.

I actually use those two tools so much that I maintain two related projects, NeReS and Babis.

Fun fact: it's the second time I've built a podcast for this show. The first one was around 2008.


Robert O'Callahan: Thoughts On Microsoft's Time-Travel Debugger

Mozilla planet - Sat, 07/10/2017 - 14:45

I'm excited that Microsoft's TTD is finally available to the public. Congratulations to the team! The video is well worth watching. I haven't used TTD myself yet since I don't have a Windows system at hand, but I've talked to Mozilla developers who've tried it on Firefox.

The most important and obvious difference between TTD and rr is that TTD is for Windows and rr is for Linux (though a few crazy people have had success debugging Windows applications in Wine under rr).

TTD supports recording of multiple threads in parallel, while rr is limited to a single core. On the other hand, per-thread recording overhead seems to be much higher in TTD than in rr. It's hard to make a direct comparison, but a simple "start Firefox, display, shut down" test run on similar hardware takes about 250 seconds under TTD and 26 seconds under rr. This is not surprising given TTD relies on pervasive binary instrumentation and rr was designed not to. This means recording extremely parallel workloads might be faster under TTD, but for many workloads rr recording will be faster. Starting up a large application really stresses binary translation frameworks, so it's a bit of a worst-case scenario for TTD — though a common one for developers. TTD's multicore recording might be better at reproducing certain kinds of concurrency bugs, though rr's chaos mode helps mitigate that problem — and lower recording overhead means you can churn through test iterations faster.

Therefore for Firefox-like workloads, on Linux, I still think rr's recording approach is superior. Note that when the technology behind TTD was first developed the hardware and OS features needed to support an rr-like approach did not exist.

TTD's ability to attach to arbitrary processes and start recording sounds great and would mitigate some of the slow-recording problem. This would be nice to have with rr, but hard to implement. (Currently we require reserving a couple of pages at specific addresses that might not be available when attaching to an arbitrary process.)

Some of the performance overhead of TTD comes from it copying all loaded libraries into the trace file, to ensure traces are portable across machines. rr doesn't do that by default; instead you have to run rr pack to make traces self-contained. I still like our approach, especially in scenarios where you repeatedly re-record a testcase until it fails.

The video mentions that TTD supports shared memory and async I/O and suggests rr doesn't. It can be confusing, but to clarify: rr supports shared memory as long as you record all the processes that are using the shared memory; for example Firefox and Chromium communicate with subprocesses using shared memory and work fine under rr. Async I/O is pretty rare in Linux; where it has come up so far (V4L2) we have been able to handle it.

Supporting unlimited data breakpoints is a nice touch. I assume that's done using their binary instrumentation.

TTD's replay looks fast in the demo videos but they mention that it can be slower than live debugging. They have an offline index build step, though it's not clear to me yet what exactly those indexes contain. It would be interesting to compare TTD and rr replay speed, especially for reverse execution.

The TTD trace querying tools look cool. A lot more can be done in this area.

rr+gdb supports running application functions at debug time (e.g. to dump data structures), while TTD does not. This feature is very important to some rr users, so it might be worthwhile for the TTD people to look at.


Mozilla Open Innovation Team: Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge…

Mozilla planet - Fri, 06/10/2017 - 22:29
Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge “Medieval Fantasy”

Mozilla’s mission is to ensure the Internet is a global public resource, open and accessible to all, which is great for the innovators, creators and builders on the web. Virtual Reality is set to change the future of web interaction and the ability for anyone to access and enjoy Virtual Reality experiences is critical for its further development. This is why Mozilla set out to bring virtual reality to Firefox and other web browsers, using A-Frame as a web framework for building interactive VR experiences. Originally from Mozilla, A-Frame was developed to be an easy but powerful way to develop VR content. As an independent open source project, A-Frame has grown to be one of the largest and most welcoming VR communities, making it easy for anyone to get involved with virtual reality.

To invite more developers and content creators to play with WebVR and A-Frame, Mozilla is excited to be partnering with Sketchfab for the Real-time Design Challenge. And what better playground could you imagine than going back to medieval times?

Credit: Kevin Pauly (Sketchfab)

We call for artists and designers to create open assets for use in A-Frame: castles, medieval towns, knights, spears, horses… and dragons (of course dragons!!)

By providing these assets, we will be giving game builders and world builders a set of 3D models that they can plug into their scenes to create a whole new world. Over time we aim to create an A-Frame ecosystem that is vibrant, shows the potential of WebVR and attracts both creators and users.

The winners of this challenge will receive prizes that will further enhance their experience in WebVR, including a VR laptop, an Oculus headset, a Wacom Intuos Pro Tablet or 12 months of Sketchfab pro.

How to participate

To enter this contest, create a scene in the described visual style of Kevin Pauly’s work and theme. Build as many reusable components for it as you can. For example: if you create a castle scene, provide blocks for walls, floors, doors etc. You can start your own topic in the Medieval Fantasy contest forum to document your work in progress.

Please find out more details, also on the technical requirements, on the Sketchfab blog.

The submission deadline is November 1st (23:59 New York time — EST)

We can’t wait to see what you come up with!

Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge… was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Firefox Frontier: Why is my computer so slow? Your browser needs a tune-up.

Mozilla planet - Fri, 06/10/2017 - 19:17

Nobody wants to go slow on the internet. (After all, it’s supposed to be a highway.) This quick fix-it-list will have you feeling the wind in your hair in no … Read more

The post Why is my computer so slow? Your browser needs a tune-up. appeared first on The Firefox Frontier.


QMO: Firefox 57 Beta 8 Testday, October 13th

Mozilla planet - Fri, 06/10/2017 - 15:52

Hello Mozillians,

We are happy to let you know that Friday, October 13th, we are organizing Firefox 57 Beta 8 Testday. We’ll be focusing our testing on the following new features: Activity Stream, Photon Structure and Photon Onboarding Tour Notifications & Tour Overlay 57.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!


Alessio Placitelli: Recording Telemetry scalars from add-ons

Mozilla planet - Fri, 06/10/2017 - 15:33
The Go Faster initiative is important as it enables us to ship code faster, using special add-ons, without being strictly tied to the Firefox train schedule. As Georg Fritzsche pointed out in his article, we have two options for instrumenting these add-ons: having probe definitions ride the trains (waiting a few weeks!) or implementing and … →

Will Kahn-Greene: Socorro signature generation overhaul and command line interface

Mozilla planet - Fri, 06/10/2017 - 15:00

This quarter I worked on creating a command line interface for signature generation and in doing that extracted it from the processor into a standalone-ish module.

The end result of this work is that:

  1. anyone making changes to signature generation can test the changes out on their local machine using a Socorro local development environment
  2. I can trivially test incoming signature generation changes--this both saves me time and gives me a much higher confidence of correctness without having to merge the code and test it in our -stage environment [1]
  3. we can research and experiment with changes to the signature generation algorithm and how that affects existing crash signatures
  4. it's a step closer to being usable by other groups

This blog post talks about that work briefly and then talks about some of the things I've been able to do with it.

[1] I can't overstate how awesome this is.

Read more… (19 mins to read)


Anne van Kesteren: MIME type interoperability

Mozilla planet - Fri, 06/10/2017 - 14:19

In order to figure out data: URL processing requirements I have been studying MIME types (also known as media types) lately. I thought I would share some examples that yield different results across user agents, mostly to demonstrate that even simple things are far from interoperable:

  • text/html;charset =gbk
  • text/html;charset='gbk'
  • text/html;charset="gbk"x
  • text/html(;charset=gbk
  • text/html;charset=gbk(
  • text/html;charset="gbk
  • text/html;charset=gbk"

These are the relatively simple issues to deal with, though it would have been nice if they had been sorted by now. The MIME type parsing issue also looks at parsing for the Content-Type header, which is even messier, with different requirements for its request and response variants.
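
To make the divergence concrete, here is a toy parameter parser in Python (my own sketch, not any browser's algorithm). Decisions such as whether to trim whitespace around the parameter name, whether single quotes count as quoting, and whether a closing double quote is required determine what each input above yields:

```python
def naive_charset(mime_type):
    # Split parameters on ";", find charset, trim whitespace and
    # strip any surrounding quote characters.
    for param in mime_type.split(";")[1:]:
        name, _, value = param.partition("=")
        if name.strip().lower() == "charset":
            return value.strip().strip("\"'")
    return None

# This parser happily returns "gbk" for all three of these;
# real user agents disagree with it, and with each other.
for example in ("text/html;charset =gbk",
                "text/html;charset='gbk'",
                'text/html;charset="gbk'):
    print(naive_charset(example))
```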


Robert O'Callahan: Microsoft Using Chromium On Android Is Bad For The Web

Mozilla planet - Fri, 06/10/2017 - 12:08

Microsoft is releasing "Edge for Android" and it uses Chromium. That is bad for the Web.

It's bad because engine diversity is really essential for the open Web. Having some users, even a relatively small number, using the Edge engine on Android would have been a good step. Going with Chromium increases Web developer expectations that all browsers on Android are — or even should be — Chromium. The less thoughtful sort of developer (i.e., pretty much everyone) will say "Microsoft takes this path, so why doesn't Mozilla too, so we can have the instant gratification of compatibility thanks to a single engine?" The slow accumulation of unfixable bugs due to de facto standardization will not register until the platform has thoroughly rotted; the only escape being alternative single-vendor platforms where developers are even more beholden to the vendor.

Sure, it would have been quite a lot of work to port Edge to Android, but Microsoft has the resources, and porting a browser engine isn't a research problem. If Microsoft would rather save resources than promote their own browser engine, perhaps they'll be switching to Chromium on Windows next. Of course that would be even worse for the Web, but it's not hard to believe Microsoft has stopped caring about that, to the extent they ever did.

(Of course Edge uses Webkit on iOS, and that's also bad, but it's Apple's ongoing decision to force browsers to use the least secure engine, so nothing new there.)


Cameron Kaiser: Various and sundry: OverbiteWX is coming, TenFourFox FPR4 progress, get your Talos orders in and Microsoft's new browser has no clothes

Mozilla planet - Fri, 06/10/2017 - 05:16
This blog post is coming to you from a midway build of TenFourFox FPR4, now with more AltiVec string acceleration, less browser chrome fat, some layout performance updates and upgraded Brotli, OTS and WOFF2 support (current to what's in mozilla-central). Next up is getting some more kinks out of CSS Grid support, and hopefully a beta will be ready in a couple weeks for you to play with.

Meanwhile, for those of you using the Gopher enabler add-on OverbiteFF on Firefox, its successor is on the way for the Firefox self-inflicted add-on apocalypse: OverbiteWX. OverbiteWX requires Firefox 56 or higher and implements an internal protocol handler that redirects gopher:// URLs typed in the Firefox omnibox or clicked on to the Floodgap Public Gopher Proxy. The reason I've decided to create a new one instead of uploading a "WebExtensions-compatible" version is because, frankly, right now it's impossible. Because there is still no TCP socket API in WebExtensions, there is presently no way to implement a Gopher handler except via a web proxy, and this would be unexpected behaviour to an OverbiteFF user expecting a direct connection (which implemented a true nsIChannel to make the protocol once again a first class citizen in the browser). Since this means Gopher URLs you access are now being sent through an external service, albeit a benign one I run, I think you at least should opt in to that by affirmatively getting the new extension rather than being silently "upgraded" to a new version with (despite my best efforts) rather less functionality.

The extension is designed to be forward compatible so that in the near future you can select from your choice of proxies, and eventually, once Someone(tm) writes the API, true socket access directly to the Gopher server of your choice. It won't be as nice as OverbiteFF was, but given that WebExtensions' first and most important goal is to reduce what add-on authors can do to the browser, it may be as good as we get. A prototype is available from the Floodgap Gopher server, which, if you care about Gopher, you already can access (please note that this URL is temporary). Assuming no issues, a more fully-fledged version with a bit more window dressing should be available in AMO hopefully sometime next week.

TenFourFox users, never fear; OverbiteFF remains compatible. I've also been approached about a Pale Moon version and I'm looking into it.

For those of you following my previous posts on the Raptor Talos II, the next-generation POWER9 workstation with a fully-open-source stack from the firmware to the operating system and no x86 anywhere, you'll recall that orders are scheduled for fulfillment starting in Q4 2017. And we're in Q4. Even though I think it's a stellar package given what you get, it hasn't gotten any cheaper, so if you've got your money together or you've at least got a little headroom on the credit card it's time to fish or cut bait. Raptor may still take orders after this batch starts shipping, but at best you'll have a long wait for their next production run (if there is one), and at worst you might not get to order at all. Let Raptor know there is a lasting and willing market for an alternative architecture you fully control. This machine really is the best successor to the Power Mac. When mine arrives you'll see it first.

Last but not least, Microsoft is announcing their Edge browser for iOS and Android. "Cool," sez I, owner of a Pixel XL, "another choice of layout engines on Android" (I use Android Firefox, natch); I was rather looking forward to seeing the desktop Edge layout engine running on non-Microsoft phones. Well, no, it's just a shell over Blink and Chromium. Remember a few years ago when I said Blink would eat the Web? Through attrition and now, arguably, collusion, that's exactly what's happening.


Robert O'Callahan: Building On Rock, Not Sand

Mozilla planet - Fri, 06/10/2017 - 05:01

This quote is telling:

Billions of devices run dnsmasq, and it had been through multiple security audits before now. Simon had done the best job possible, I think. He got beat. No human and no amount of budget would have found these problems before now, and now we face the worldwide costs, yet again, of something ubiquitous now, vulnerable.

Some of this is quite accurate. Human beings can't write safe C code. Bug-finding tools and security audits catch some problems but miss a lot of others. But on the other hand, this message and its followup betray mistaken assumptions. There are languages running on commodity hardware that provide much better security properties than C. In particular, all three remote code execution vulnerabilities would have been prevented by Rust, Go or even Java. Those languages would have also made the other bugs much more unlikely. Contrary to the quote, given a finite "amount of budget", dnsmasq could have been Rewritten In Rust and these problems avoided.

I understand that for legacy code like dnsmasq, even that amount of budget might not be available. My sincere hope is that people will at least stop choosing C for new projects. At this point, doing so is professional negligence.

What about C++? In my circle I seldom see enthusiasm for C, yet there is still great enthusiasm for C++, which inherits C's core security weaknesses. Are the C++ projects of today going to be the vulnerability-ridden legacy codebases of tomorrow? (In some cases, e.g. browsers, they already are...) C++ proponents seem to believe that C++ libraries and analysis tools, including efforts such as C++ Core Guidelines: Lifetimes, plus mitigations such as control-flow integrity, will be "good enough". Personally, I'm pessimistic. C++ is a fantastically complex language and that complexity is growing steadily. Much more effort is going into increasing its complexity than addressing safety issues. It's now nearly two years since the Lifetimes document had any sort of update, and at CppCon 2017 just one of 99 talks focused on improving C++ safety.

Those of us building code to last owe it to the world to build on rock, not sand. C is sand. C++ is better, but it's far from a solid foundation.


Marcia Knous: Firefox Nightly Session at Grace Hopper

Mozilla planet - Fri, 06/10/2017 - 02:29
Kate Glazko and I were fortunate to be able to present a session on Firefox Nightly at this year's Grace Hopper event.

My first impression was how massive an event it was! Just watching everyone stream into the venue for the keynote was magnificent. Legions of attendees from different companies were easily recognizable by their coordinated shirts. Whether it was Amazon's lime green or Facebook's blue, it was great to see (and almost like a parade!)

I thought our presentation went really well. While we had originally conceived it as a workshop, we decided to opt for a presentation followed by a few exercises instead. Part of the reasoning behind the decision was we simply did not have enough moderators to cover the session. The room held 180 people - I estimate we had about 80 attendees present at the session.

We got some really good questions during the Q&A, even one about Thunderbird. Attendees were interested in a wide range of subjects, including privacy practices, how we monitor failing tests, and information and details about Project Quantum. One attendee was interested in how she could get the Developer tools in Nightly.

I hope we succeeded in getting more people downloading and using Nightly and 57 beta. At least one student approached me after the event and wants to contribute - that is what makes these types of events so great!
Categorieën: Mozilla-nl planet

Tarek Ziadé: Autosizing web services

Mozilla planet - fr, 06/10/2017 - 00:00

Molotov, the load testing tool I've developed, now comes with an autosizing feature. When the --sizing option is used, Molotov slowly ramps up the number of workers per process and stops once there are too many failures per minute.

The default tolerance for failure is 5%, but this can be tweaked with the --sizing-tolerance option.

By default, Molotov ramps up to 500 workers over 5 minutes, but you can set your own values with --workers and --ramp-up if you want to autosize at a different pace.
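To make the ramp-up and failure-tolerance behavior concrete, here is a small illustrative sketch. The function names and the linear ramp are my own simplification for this post, not Molotov's actual internals:

```python
def workers_at(minute, max_workers=500, ramp_up_minutes=5):
    """Linear ramp-up: how many workers are active at a given minute."""
    if minute >= ramp_up_minutes:
        return max_workers
    return int(max_workers * (minute + 1) / ramp_up_minutes)


def should_stop(ok, failed, tolerance=0.05):
    """Stop the sizing run once failures exceed the tolerated fraction."""
    total = ok + failed
    return total > 0 and failed / total > tolerance
```

With the defaults, the run adds roughly 100 workers per minute and halts as soon as more than 5% of the requests in a minute fail, which is what --sizing-tolerance controls.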

See all the options at

This load testing technique is useful for determining the limiting resource for a given application: RAM, CPU, I/O or network.

Running Molotov against a single node that way can help decide the best combination of RAM, CPU, disk and bandwidth per node for deploying a project. On AWS, that means helping choose the size of the VM.

To perform this test you need to deploy the app on a dedicated node. Since most of our web services projects at Mozilla are now available as Docker images, it becomes easy to automate that deployment when we want to test the service.

I have created a small script on top of Molotov that does exactly that, using Amazon SSM (Systems Manager). See

Amazon SSM

SSM is a client-server tool that simplifies working with EC2 nodes. For instance, instead of writing a low-level script using Paramiko that drives EC2 instances through SSH, you can send batch commands through SSM to any number of EC2 instances, and get back the results asynchronously.

SSM integrates with S3 so you can get back your commands results as artifacts once they are finished.

Building a client around SSM is quite easy with Boto3. The only tricky part is waiting for the results to be ready.

This is my SSM client:
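The embedded gist may not render here, so below is a minimal sketch of such a client. It uses Boto3's real send_command / get_command_invocation calls, but the function itself is illustrative: the ssm client object is passed in so the polling logic (the tricky waiting part mentioned above) can be exercised with a stub.

```python
import time


def run_command(ssm, instance_id, commands, poll_interval=2.0, timeout=600):
    """Send shell commands to an EC2 instance via SSM and wait for the result.

    ssm is a Boto3 SSM client (or a test stub exposing the same two calls).
    """
    resp = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": commands},
    )
    command_id = resp["Command"]["CommandId"]

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Poll until the command reaches a terminal state. (Against real
        # AWS there can be a brief window right after send_command where
        # the invocation is not visible yet, which a production client
        # should also handle.)
        inv = ssm.get_command_invocation(
            CommandId=command_id, InstanceId=instance_id
        )
        if inv["Status"] in ("Success", "Failed", "Cancelled", "TimedOut"):
            return inv
        time.sleep(poll_interval)
    raise TimeoutError("SSM command %s did not finish" % command_id)
```

With a real client this would be called as `run_command(boto3.client("ssm"), instance_id, ["docker ps"])`.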

Deploying and running

Based on this SSM client, my script is doing the following operations on AWS:

  • Deploy (or reuse) an EC2 Instance that has an SSM agent and a Docker agent running
  • Run the Docker container of the service on that EC2 instance
  • Run a Docker container that runs Glances (more on this later)

Once the EC2 instance has the service up and running, it's ready to be used via Molotov.

The script takes a GitHub repo and runs it using moloslave. Once the test is over, metrics are grabbed via SSM and the results are presented in a fancy HTML5 page where you can find out what the bottleneck of your service is.

Example with Kinto

Kinto is a Python service that provides a REST-ish API to read and write schemaless JSON documents. Running a load test on it using Molotov is pretty straightforward: the test script adds data, browses it and verifies that the Kinto service returns things correctly. And Kinto has a Docker image published on Docker Hub.

I've run the sizing script using that image on a t2.micro instance. Here are the results:

You can see that the memory is growing throughout the test, because the Docker image uses a memory database and the test keeps on adding data -- that is also why the I/O is sticking to 0.

If you double-click on the CPU metrics, you can see that the CPU reaches almost 100% at the end of the test, before things start to break.

So, for a memory backend, the limiting factor for Kinto is the CPU, which makes sense. If we had had a bottleneck on I/O, that would have been an indication that something was wrong.

Another interesting test would be to run it against a Postgres RDS deployment instead of a memory database.

Collecting Metrics with Glances

The metrics are collected on the EC2 box using Glances ( which runs in its own Docker container and can measure the other Docker containers running on the same host. See

In other words, you can follow the resource usage per docker container, and in our case that's useful to track the container that runs the actual service.

My Glances Docker container uses this image: which runs the tool and spits out the metrics in a CSV file I can collect via SSM once the test is over.

Visualizing results

I could have sent the metrics to an InfluxDB or Grafana system, but I wanted to create a simple static page that could work locally and be passed around as a test artifact.

That's where Plotly ( comes in handy. This tool can turn a CSV file produced by Glances into a nice-looking HTML5 page where you can toggle between metrics and do other nice stuff.

I have used Pandas/NumPy to process the data, which is probably overkill given the number of lines processed, but their APIs are a natural fit for working with Plotly.

See the small class I've built here:
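The linked class doesn't render here, but as a rough stdlib-only illustration of that kind of processing (the timestamp and metric column names below are assumptions, not the exact Glances export format), the transformation from CSV to plottable series looks like:

```python
import csv
import io


def metric_series(csv_text, metrics=("cpu_total", "mem_percent")):
    """Group selected CSV columns into {metric: [values...]} plus timestamps.

    Column names are illustrative; adjust them to the actual Glances export.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    series = {m: [] for m in metrics}
    timestamps = []
    for row in reader:
        timestamps.append(row["timestamp"])
        for m in metrics:
            series[m].append(float(row[m]))
    return timestamps, series
```

Each resulting series maps naturally onto one Plotly scatter trace (timestamps on the x axis, values on the y axis), which is what makes the Pandas/Plotly combination convenient for the real version.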


The new Molotov sizing feature is pretty handy as long as you can automate the deployment of isolated nodes for the service you want to test -- and that's quite easy with Docker and AWS.

Autosizing can give you a hint on how an application behaves under stress and help you decide how you want to initially deploy it.

In an ideal world, each one of our services has a Molotov test already, and running an autosizing test can be done with minimal work.

In a super ideal world, everything I've described is part of the continuous deployment process :)

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Oct. 5, 2017

Mozilla planet - to, 05/10/2017 - 18:00

Reps Weekly Meeting Oct. 5, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet