
Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 4 months 4 days ago

Daniel Pocock: A step change in managing your calendar, without social media

Sun, 08/10/2017 - 19:36

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events related to your interests in foreign destinations, to coincide with other travel plans?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal which aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
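As a rough sketch of that idea, such a deadline entry could be generated as an iCalendar VTODO with the Python icalendar library (the event name and date below are invented placeholders, not real data):

    from datetime import datetime
    from icalendar import Calendar, Todo

    cal = Calendar()
    cal.add('prodid', '-//event-deadlines//example//')
    cal.add('version', '2.0')

    # A hypothetical deadline harvested from the events database.
    todo = Todo()
    todo.add('summary', 'Submit talk proposal: ExampleConf 2018')
    todo.add('due', datetime(2018, 1, 15, 23, 59))
    todo.add('description', 'CFP deadline for an event matching your interests')
    cal.add_component(todo)

    with open('event-deadlines.ics', 'wb') as f:
        f.write(cal.to_ical())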

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo, hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina. The pizzas are huge, but that didn't stop them from disappearing before we finished the photos.


Giorgos Logiotatidis: Automating Podcast generation from SoundCloud

Sun, 08/10/2017 - 18:12

There's this popular daily FM radio show in Greece which posts its shows on SoundCloud after broadcasting them. It's a good (albeit not great; plain HTML5 audio would be fine) way to listen to the show on demand if you're on desktop. The website is not mobile friendly and the whole embedded SoundCloud experience is sub-optimal. And of course you cannot just add it to your favorite podcast player to enjoy it there.

There's an RSS feed on iTunes but it's manually updated and inevitably lags a day or two behind, depending on the availability of the maintainer.

I decided to fix the problem myself, and since the solution turned out to involve a bunch of interesting technologies, I thought I'd write a blog post about it. If you only care about the podcast you can find it here.

Step 1: Extracting content from SoundCloud

The episodes are embedded in the official website but are hidden on SoundCloud itself; probably there's a hidden attribute you can set on SoundCloud media. That explains why my first attempt to download the episodes using SoundScrape failed, with the latter complaining that it couldn't find any videos.

Then I started examining SoundCloud's JS and the JSON responses sent when you click the play button, with the ultimate goal of writing a SoundCloud downloader. The service follows a typical flow of authenticating and then getting a unique, auto-expiring link to S3, which can be automated but isn't fun to do.

While taking a break from parsing JSON responses, it occurred to me that youtube-dl, despite its very specific name, supports other websites too, hundreds of them actually. Run youtube-dl against a URL with embedded SoundCloud audio and it will find and download the best version of the audio file, including the cover thumbnail!

All I need now is a simple Python script to extract all URLs with embedded SoundCloud audio and feed them to youtube-dl as a list using the --batch-file argument.
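Roughly, the script could look like the sketch below. The archive URL and the shape of the embedded URLs are assumptions, so the regular expression would need adjusting for the real markup:

    import re
    import subprocess
    import requests

    ARCHIVE_URL = 'https://radio.example.gr/show/archive'  # hypothetical episodes page

    html = requests.get(ARCHIVE_URL).text
    # Collect every embedded SoundCloud URL found in the page source.
    urls = sorted(set(re.findall(r'https://soundcloud\.com/[\w/-]+', html)))

    with open('episodes.txt', 'w') as batch:
        batch.write('\n'.join(urls))

    # youtube-dl fetches the best audio version (and cover art) for every URL in the list.
    subprocess.call(['youtube-dl', '--batch-file', 'episodes.txt'])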

Step 2: Generate the Podcast RSS

With all the MP3 files for the show downloaded, the next step is to generate the podcast RSS. FeedGen is a simple, Pythonic library which builds RSS feeds, including extensions for podcasts and iTunes attributes.
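The feed-building step then boils down to something like this sketch (titles, URLs and file layout are placeholders, not the actual setup):

    import glob
    import os
    from feedgen.feed import FeedGenerator

    BASE_URL = 'https://podcast.example.org/'  # placeholder hosting URL

    fg = FeedGenerator()
    fg.load_extension('podcast')               # enables the iTunes/podcast tags
    fg.title('Example Radio Show')
    fg.link(href=BASE_URL, rel='alternate')
    fg.description('Unofficial podcast feed generated from the SoundCloud uploads')

    for path in sorted(glob.glob('episodes/*.mp3')):
        name = os.path.basename(path)
        fe = fg.add_entry()
        fe.id(BASE_URL + name)
        fe.title(name.replace('.mp3', ''))
        fe.enclosure(BASE_URL + name, str(os.path.getsize(path)), 'audio/mpeg')

    fg.rss_file('feed.xml')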

Step 3: Serve the Podcast RSS

I serve all my personal websites using Dokku running on my VPS. I used a Debian-based Docker image and installed Python 2 and the Python libraries needed for the feed generation. I also installed nginx-light to serve the content, both the RSS and the audio files.

I originally used the genRSS project to generate the RSS, which complained about the Unicode characters in the MP3 filenames when run from the Docker image. I fixed this by adding en_US.UTF-8 to the supported locales and running locale-gen at image build time.

    RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
        locale-gen
    ENV LC_ALL en_US.UTF-8

The docker image default command runs nginx with a minimal nginx.conf.

Dokku takes care of everything else, including getting certificates from LetsEncrypt.

Step 4: Update the Feed

Cron runs a command to update the feed on weekdays (Mon-Fri), every 5 minutes from the moment the show ends until an hour after it. The show producers are very consistent in uploading the show on time, so that seems to just work. To be on the safe side I added another run two hours after the show ends.

The cron runs on the host, using dokku run. The podcast and the audio files are stored in a Docker volume and therefore both the web serving process and the cron job can access this persistent storage at the same time.

Youtube-dl is smart enough to not re-download content which exists, so running the command multiple times does not hammer the servers.

Step 5: Monitoring

For an automation to be perfect it must be monitored. As with all my websites, I set up a New Relic Synthetics monitor, which checks that the feed is being served and that its content appears valid by looking for the "pubDate" text.

To monitor the cron job, I cURL a healthchecks.io-provided URL at the very end of the bash script that co-ordinates the fetching and building of the feed. Make sure to set -e in your bash scripts so they exit after the first failed command; without -e, cURL will be called even if a step fails.
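The wrapper script therefore has roughly this shape (the step scripts and the healthchecks.io check URL are placeholders):

    #!/bin/bash
    # Exit at the first failing step so a broken run never reports success.
    set -e

    /app/download_episodes.py   # fetch new episodes via youtube-dl
    /app/build_feed.py          # regenerate the podcast RSS

    # Only reached when every step above succeeded.
    curl -fsS --retry 3 https://hc-ping.com/your-check-uuid > /dev/null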

I actually use those two tools so much that I maintain two related projects, NeReS and Babis.

Fun fact: it's the second time I've built a podcast for this show. The first one was around 2008.


Robert O'Callahan: Thoughts On Microsoft's Time-Travel Debugger

Sat, 07/10/2017 - 14:45

I'm excited that Microsoft's TTD is finally available to the public. Congratulations to the team! The video is well worth watching. I haven't used TTD myself yet since I don't have a Windows system at hand, but I've talked to Mozilla developers who've tried it on Firefox.

The most important and obvious difference between TTD and rr is that TTD is for Windows and rr is for Linux (though a few crazy people have had success debugging Windows applications in Wine under rr).

TTD supports recording of multiple threads in parallel, while rr is limited to a single core. On the other hand, per-thread recording overhead seems to be much higher in TTD than in rr. It's hard to make a direct comparison, but a simple "start Firefox, display mozilla.org, shut down" test run on similar hardware takes about 250 seconds under TTD and 26 seconds under rr. This is not surprising given TTD relies on pervasive binary instrumentation and rr was designed not to. This means recording extremely parallel workloads might be faster under TTD, but for many workloads rr recording will be faster. Starting up a large application really stresses binary translation frameworks, so it's a bit of a worst-case scenario for TTD — though a common one for developers. TTD's multicore recording might be better at reproducing certain kinds of concurrency bugs, though rr's chaos mode helps mitigate that problem — and lower recording overhead means you can churn through test iterations faster.

Therefore for Firefox-like workloads, on Linux, I still think rr's recording approach is superior. Note that when the technology behind TTD was first developed the hardware and OS features needed to support an rr-like approach did not exist.

TTD's ability to attach to arbitrary processes and start recording sounds great and would mitigate some of the slow-recording problem. This would be nice to have with rr, but hard to implement. (Currently we require reserving a couple of pages at specific addresses that might not be available when attaching to an arbitrary process.)

Some of the performance overhead of TTD comes from it copying all loaded libraries into the trace file, to ensure traces are portable across machines. rr doesn't do that by default; instead you have to run rr pack to make traces self-contained. I still like our approach, especially in scenarios where you repeatedly re-record a testcase until it fails.

The video mentions that TTD supports shared memory and async I/O and suggests rr doesn't. It can be confusing, but to clarify: rr supports shared memory as long as you record all the processes that are using the shared memory; for example Firefox and Chromium communicate with subprocesses using shared memory and work fine under rr. Async I/O is pretty rare in Linux; where it has come up so far (V4L2) we have been able to handle it.

Supporting unlimited data breakpoints is a nice touch. I assume that's done using their binary instrumentation.

TTD's replay looks fast in the demo videos but they mention that it can be slower than live debugging. They have an offline index build step, though it's not clear to me yet what exactly those indexes contain. It would be interesting to compare TTD and rr replay speed, especially for reverse execution.

The TTD trace querying tools look cool. A lot more can be done in this area.

rr+gdb supports running application functions at debug time (e.g. to dump data structures), while TTD does not. This feature is very important to some rr users, so it might be worthwhile for the TTD people to look at.


Mozilla Open Innovation Team: Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge…

Fri, 06/10/2017 - 22:29
Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge “Medieval Fantasy”

Mozilla’s mission is to ensure the Internet is a global public resource, open and accessible to all, which is great for the innovators, creators and builders on the web. Virtual Reality is set to change the future of web interaction and the ability for anyone to access and enjoy Virtual Reality experiences is critical for its further development. This is why Mozilla set out to bring virtual reality to Firefox and other web browsers, using A-Frame as a web framework for building interactive VR experiences. Originally from Mozilla, A-Frame was developed to be an easy but powerful way to develop VR content. As an independent open source project, A-Frame has grown to be one of the largest and most welcoming VR communities, making it easy for anyone to get involved with virtual reality.

To invite more developers and content creators to play with WebVR and A-Frame, Mozilla is excited to be partnering with Sketchfab for the Real-time Design Challenge. And what better playground could you imagine than going back to medieval times?

Credit: Kevin Pauly (Sketchfab)

We call for artists and designers to create open assets for use in A-Frame: castles, medieval towns, knights, spears, horses… and dragons (of course dragons!!)

By providing these assets, we will be giving game builders and world builders a set of 3D models that they can plug into their scenes to create whole new worlds. Over time we aim to create an A-Frame ecosystem that is vibrant, shows the potential of WebVR and attracts both creators and users.


The winners of this challenge will receive prizes that will further enhance their experience in WebVR, including a VR laptop, an Oculus headset, a Wacom Intuos Pro Tablet or 12 months of Sketchfab pro.

How to participate

To enter this contest, create a scene in the described visual style of Kevin Pauly’s work and theme. Build as many reusable components for it as you can. For example: if you create a castle scene, provide blocks for walls, floors, doors etc. You can start your own topic in the Medieval Fantasy contest forum to document your work in progress.

Please find more details, including the technical requirements, on the Sketchfab blog.

The submission deadline is November 1st (23:59 New York time — EST)

We can’t wait to see what you come up with!

Building WebVR Worlds Together: Mozilla and Sketchfab Launching Real-Time VR Design Challenge… was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Firefox Frontier: Why is my computer so slow? Your browser needs a tune-up.

Fri, 06/10/2017 - 19:17

Nobody wants to go slow on the internet. (After all, it’s supposed to be a highway.) This quick fix-it-list will have you feeling the wind in your hair in no … Read more

The post Why is my computer so slow? Your browser needs a tune-up. appeared first on The Firefox Frontier.


QMO: Firefox 57 Beta 8 Testday, October 13th

Fri, 06/10/2017 - 15:52

Hello Mozillians,

We are happy to let you know that on Friday, October 13th, we are organizing the Firefox 57 Beta 8 Testday. We'll be focusing our testing on the following new features: Activity Stream, Photon Structure and Photon Onboarding Tour Notifications & Tour Overlay 57.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!


Alessio Placitelli: Recording Telemetry scalars from add-ons

Fri, 06/10/2017 - 15:33
The Go Faster initiative is important as it enables us to ship code faster, using special add-ons, without being strictly tied to the Firefox train schedule. As Georg Fritzsche pointed out in his article, we have two options for instrumenting these add-ons: having probe definitions ride the trains (waiting a few weeks!) or implementing and … →

Will Kahn-Greene: Socorro signature generation overhaul and command line interface

Fri, 06/10/2017 - 15:00
Summary

This quarter I worked on creating a command line interface for signature generation and in doing that extracted it from the processor into a standalone-ish module.

The end result of this work is that:

  1. anyone making changes to signature generation can test the changes out on their local machine using a Socorro local development environment
  2. I can trivially test incoming signature generation changes--this both saves me time and gives me a much higher confidence of correctness without having to merge the code and test it in our -stage environment [1]
  3. we can research and experiment with changes to the signature generation algorithm and how that affects existing crash signatures
  4. it's a step closer to being usable by other groups

This blog post talks about that work briefly and then talks about some of the things I've been able to do with it.

[1] I can't overstate how awesome this is.

Read more… (19 mins to read)


Anne van Kesteren: MIME type interoperability

Fri, 06/10/2017 - 14:19

In order to figure out data: URL processing requirements I have been studying MIME types (also known as media types) lately. I thought I would share some examples that yield different results across user agents, mostly to demonstrate that even simple things are far from interoperable:

  • text/html;charset =gbk
  • text/html;charset='gbk'
  • text/html;charset="gbk"x
  • text/html(;charset=gbk
  • text/html;charset=gbk(
  • text/html;charset="gbk
  • text/html;charset=gbk"

These are the relatively simple issues to deal with, though it would have been nice if they had been sorted by now. The MIME type parsing issue also looks at parsing for the Content-Type header, which is even messier, with different requirements for its request and response variants.


Robert O'Callahan: Microsoft Using Chromium On Android Is Bad For The Web

Fri, 06/10/2017 - 12:08

Microsoft is releasing "Edge for Android" and it uses Chromium. That is bad for the Web.

It's bad because engine diversity is really essential for the open Web. Having some users, even a relatively small number, using the Edge engine on Android would have been a good step. Going with Chromium increases Web developer expectations that all browsers on Android are — or even should be — Chromium. The less thoughtful sort of developer (i.e., pretty much everyone) will say "Microsoft takes this path, so why doesn't Mozilla too, so we can have the instant gratification of compatibility thanks to a single engine?" The slow accumulation of unfixable bugs due to de facto standardization will not register until the platform has thoroughly rotted; the only escape being alternative single-vendor platforms where developers are even more beholden to the vendor.

Sure, it would have been quite a lot of work to port Edge to Android, but Microsoft has the resources, and porting a browser engine isn't a research problem. If Microsoft would rather save resources than promote their own browser engine, perhaps they'll be switching to Chromium on Windows next. Of course that would be even worse for the Web, but it's not hard to believe Microsoft has stopped caring about that, to the extent they ever did.

(Of course Edge uses Webkit on iOS, and that's also bad, but it's Apple's ongoing decision to force browsers to use the least secure engine, so nothing new there.)


Cameron Kaiser: Various and sundry: OverbiteWX is coming, TenFourFox FPR4 progress, get your Talos orders in and Microsoft's new browser has no clothes

Fri, 06/10/2017 - 05:16
This blog post is coming to you from a midway build of TenFourFox FPR4, now with more AltiVec string acceleration, less browser chrome fat, some layout performance updates and upgraded Brotli, OTS and WOFF2 support (current to what's in mozilla-central). Next up is getting some more kinks out of CSS Grid support, and hopefully a beta will be ready in a couple weeks for you to play with.

Meanwhile, for those of you using the Gopher enabler add-on OverbiteFF on Firefox, its successor is on the way for Firefox's self-inflicted add-on apocalypse: OverbiteWX. OverbiteWX requires Firefox 56 or higher and implements an internal protocol handler that redirects gopher:// URLs typed in the Firefox omnibox, or clicked on, to the Floodgap Public Gopher Proxy. The reason I've decided to create a new one instead of uploading a "WebExtensions-compatible" version is that, frankly, right now it's impossible. Because there is still no TCP socket API in WebExtensions, there is presently no way to implement a Gopher handler except via a web proxy, and this would be unexpected behaviour for a user of OverbiteFF (which implemented a true nsIChannel to make the protocol once again a first-class citizen in the browser) expecting a direct connection. Since this means Gopher URLs you access are now being sent through an external service, albeit a benign one I run, I think you should at least opt in to that by affirmatively getting the new extension rather than being silently "upgraded" to a new version with (despite my best efforts) rather less functionality.

The extension is designed to be forward compatible so that in the near future you can select from your choice of proxies, and eventually, once Someone(tm) writes the API, true socket access directly to the Gopher server of your choice. It won't be as nice as OverbiteFF was, but given that WebExtensions' first and most important goal is to reduce what add-on authors can do to the browser, it may be as good as we get. A prototype is available from the Floodgap Gopher server, which, if you care about Gopher, you already can access (please note that this URL is temporary). Assuming no issues, a more fully-fledged version with a bit more window dressing should be available in AMO hopefully sometime next week.

TenFourFox users, never fear; OverbiteFF remains compatible. I've also been approached about a Pale Moon version and I'm looking into it.

For those of you following my previous posts on the Raptor Talos II, the next-generation POWER9 workstation with a fully-open-source stack from the firmware to the operating system and no x86 anywhere, you'll recall that orders are scheduled for fulfillment starting in Q4 2017. And we're in Q4. Even though I think it's a stellar package given what you get, it hasn't gotten any cheaper, so if you've got your money together or you've at least got a little headroom on the credit card it's time to fish or cut bait. Raptor may still take orders after this batch starts shipping, but at best you'll have a long wait for their next production run (if there is one), and at worst you might not get to order at all. Let Raptor know there is a lasting and willing market for an alternative architecture you fully control. This machine really is the best successor to the Power Mac. When mine arrives you'll see it first.

Last but not least, Microsoft is announcing their Edge browser for iOS and Android. "Cool," sez I, owner of a Pixel XL, "another choice of layout engines on Android" (I use Android Firefox, natch); I was rather looking forward to seeing the desktop Edge layout engine running on non-Microsoft phones. Well, no, it's just a shell over Blink and Chromium. Remember a few years ago when I said Blink would eat the Web? Through attrition and now, arguably, collusion, that's exactly what's happening.


Robert O'Callahan: Building On Rock, Not Sand

Fri, 06/10/2017 - 05:01

This quote is telling:

Billions of devices run dnsmasq, and it had been through multiple security audits before now. Simon had done the best job possible, I think. He got beat. No human and no amount of budget would have found these problems before now, and now we face the worldwide costs, yet again, of something ubiquitous now, vulnerable.

Some of this is quite accurate. Human beings can't write safe C code. Bug-finding tools and security audits catch some problems but miss a lot of others. But on the other hand, this message and its followup betray mistaken assumptions. There are languages running on commodity hardware that provide much better security properties than C. In particular, all three remote code execution vulnerabilities would have been prevented by Rust, Go or even Java. Those languages would have also made the other bugs much more unlikely. Contrary to the quote, given a finite "amount of budget", dnsmasq could have been Rewritten In Rust and these problems avoided.

I understand that for legacy code like dnsmasq, even that amount of budget might not be available. My sincere hope is that people will at least stop choosing C for new projects. At this point, doing so is professional negligence.

What about C++? In my circle I seldom see enthusiasm for C, yet there is still great enthusiasm for C++, which inherits C's core security weaknesses. Are the C++ projects of today going to be the vulnerability-ridden legacy codebases of tomorrow? (In some cases, e.g. browsers, they already are...) C++ proponents seem to believe that C++ libraries and analysis tools, including efforts such as C++ Core Guidelines: Lifetimes, plus mitigations such as control-flow integrity, will be "good enough". Personally, I'm pessimistic. C++ is a fantastically complex language and that complexity is growing steadily. Much more effort is going into increasing its complexity than addressing safety issues. It's now nearly two years since the Lifetimes document had any sort of update, and at CppCon 2017 just one of 99 talks focused on improving C++ safety.

Those of us building code to last owe it to the world to build on rock, not sand. C is sand. C++ is better, but it's far from a solid foundation.


Marcia Knous: Firefox Nightly Session at Grace Hopper

Fri, 06/10/2017 - 02:29
Kate Glazko and I were fortunate to be able to present a session on Firefox Nightly at this year's Grace Hopper event.

My first impression was how massive an event it was! Just watching everyone stream into the venue for the keynote was magnificent. Legions of attendees from different companies were easily recognizable by their coordinated shirts. Whether it was Amazon's lime green or Facebook's blue, it was great to see (and almost like a parade!).

I thought our presentation went really well. While we had originally conceived it as a workshop, we decided to opt for a presentation followed by a few exercises instead. Part of the reasoning behind the decision was that we simply did not have enough moderators to cover the session. The room held 180 people - I estimate we had about 80 attendees present at the session.

We got some really good questions during the Q&A, even one about Thunderbird. Attendees were interested in a wide range of subjects, including privacy practices, how we monitor failing tests, and details about Project Quantum. One attendee was interested in how she could get the Developer Tools in Nightly.

I hope we succeeded in getting more people downloading and using Nightly and 57 beta. At least one student approached me after the event and wants to contribute - that is what makes these types of events so great!

Tarek Ziadé: Autosizing web services

Fri, 06/10/2017 - 00:00

Molotov, the load testing tool I've developed, now comes with an autosizing feature. When the --sizing option is used, Molotov will slowly ramp up the number of workers per process and will stop once there are too many failures per minute.

The default tolerance for failure is 5%, but this can be tweaked with the --sizing-tolerance option.

By default, Molotov will use 500 workers that are ramped up over 5 minutes, but you can set your own values with --workers and --ramp-up if you want to autosize at a different pace.

See all the options at http://molotov.readthedocs.io/en/stable/cli

This load testing technique is useful for determining the limiting resource for a given application: RAM, CPU, I/O or network.

Running Molotov against a single node that way can help decide the best combination of RAM, CPU, disk and bandwidth per node for deploying a project. In AWS that would mean helping to choose the size of the VM.

To perform this test you need to deploy the app on a dedicated node. Since most of our web services projects at Mozilla are now available as Docker images, it becomes easy to automate that deployment when we want to test the service.

I have created a small script on top of Molotov that does exactly that, using Amazon SSM (Systems Manager). See http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

Amazon SSM

SSM is a client-server tool that simplifies working with EC2 nodes. For instance, instead of writing a low-level script using Paramiko that drives EC2 instances through SSH, you can send batch commands through SSM to any number of EC2 instances, and get back the results asynchronously.

SSM integrates with S3 so you can get back your commands results as artifacts once they are finished.

Building a client around SSM is quite easy with Boto3. The only tricky part is waiting for the results to be ready.

This is my SSM client: https://github.com/tarekziade/sizer/blob/master/sizer/ssm.py
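The core send-and-wait pattern with Boto3 looks roughly like this (the instance id and command are placeholders; the client linked above does more, such as collecting results via S3):

    import time
    import boto3

    ssm = boto3.client('ssm')

    # Send a shell command to one EC2 instance through SSM.
    resp = ssm.send_command(
        InstanceIds=['i-0123456789abcdef0'],   # placeholder instance id
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['docker ps']},
    )
    command_id = resp['Command']['CommandId']

    # The tricky part: poll until the invocation reaches a final state.
    while True:
        time.sleep(2)
        inv = ssm.get_command_invocation(CommandId=command_id,
                                         InstanceId='i-0123456789abcdef0')
        if inv['Status'] not in ('Pending', 'InProgress'):
            break

    print(inv['Status'], inv['StandardOutputContent'])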

Deploying and running

Based on this SSM client, my script is doing the following operations on AWS:

  • Deploy (or reuse) an EC2 Instance that has an SSM agent and a Docker agent running
  • Run the Docker container of the service on that EC2 instance
  • Run a Docker container that runs Glances (more on this later)

Once the EC2 instance has the service up and running, it's ready to be used via Molotov.

The script takes a GitHub repo and runs it, using moloslave (http://molotov.readthedocs.io/en/stable/slave). Once the test is over, metrics are grabbed via SSM and the results are presented in a fancy HTML5 page where you can find out what the bottleneck of your service is.

Example with Kinto

Kinto is a Python service that provides a REST-ish API to read and write schemaless JSON documents. Running a load test on it using Molotov is pretty straightforward: the test script adds data, browses it and verifies that the Kinto service returns things correctly. And Kinto has a Docker image published on Docker Hub.
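A Molotov scenario for that kind of test looks roughly like the sketch below (the endpoint and record shape are placeholders rather than the actual test script, and authentication is left out):

    from molotov import scenario

    SERVER = 'http://localhost:8888/v1'   # placeholder Kinto endpoint

    @scenario(weight=100)
    async def write_then_read(session):
        url = SERVER + '/buckets/default/collections/notes/records'
        # Add a record, then read the collection back and check the response.
        async with session.post(url, json={'data': {'note': 'hello'}}) as resp:
            assert resp.status in (200, 201)
        async with session.get(url) as resp:
            body = await resp.json()
            assert 'data' in body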

I've run the sizing script using that image on a t2.micro instance. Here are the results: https://ziade.org/sizer_tested.html

You can see that the memory is growing throughout the test, because the Docker image uses a memory database and the test keeps on adding data -- that is also why the I/O is sticking to 0.

If you double-click on the CPU metrics, you can see that the CPU reaches almost 100% at the end of the test before things start to break.

So, for a memory backend, the limiting factor for Kinto is the CPU, which makes sense. If we had had a bottleneck on I/O, that would have been an indication that something was wrong.

Another interesting test would be to run it against a Postgres RDS deployment instead of a memory database.

Collecting Metrics with Glances

The metrics are collected on the EC2 box using Glances (http://glances.readthedocs.io/), which runs in its own Docker container and has the ability to measure the other Docker containers running on the same host. See http://glances.readthedocs.io/en/stable/aoa/docker.html?highlight=docker

In other words, you can follow the resource usage per docker container, and in our case that's useful to track the container that runs the actual service.

My Glances docker container uses this image: https://github.com/tarekziade/sizer/blob/master/Dockerfile which runs the tool and spits out the metrics in a CSV file I can collect via SSM once the test is over.

Vizualizing results

I could have sent the metrics to an InfluxDB or Grafana system, but I wanted to create a simple static page that could work locally and be passed around as a test artifact.

That's where Plotly (https://plot.ly/) comes in handy. This tool can turn a CSV file produced by Glances into a nice looking HTML5 page where you can toggle between metrics and do other nice stuff.

I have used Pandas/NumPy to process the data, which is probably overkill given the number of lines processed, but their APIs are a natural fit for working with Plotly.

See the small class I've built here: https://github.com/tarekziade/sizer/blob/master/sizer/chart.py
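The Pandas-plus-Plotly step boils down to something like this (the column names are guesses at what the Glances CSV contains):

    import pandas as pd
    import plotly.graph_objs as go
    import plotly.offline as py

    df = pd.read_csv('glances.csv')   # metrics exported by Glances during the test

    # One trace per metric; the real CSV has many more columns than these two.
    traces = [go.Scatter(x=df['timestamp'], y=df[col], name=col)
              for col in ('cpu_percent', 'memory_percent')]

    # Writes a standalone HTML page that can be passed around as a test artifact.
    py.plot(go.Figure(data=traces), filename='report.html', auto_open=False)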

Conclusion

The new Molotov sizing feature is pretty handy as long as you can automate the deployment of isolated nodes for the service you want to test -- and that's quite easy with Docker and AWS.

Autosizing can give you a hint on how an application behaves under stress and help you decide how you want to initially deploy it.

In an ideal world, each one of our services has a Molotov test already, and running an autosizing test can be done with minimal work.

In a super ideal world, everything I've described is part of the continuous deployment process :)


Air Mozilla: Reps Weekly Meeting Oct. 5, 2017

Thu, 05/10/2017 - 18:00

Reps Weekly Meeting Oct. 5, 2017 - This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Daniel Stenberg: The life of a curl security bug

Thu, 05/10/2017 - 14:59
The report

Usually, security problems in the curl project come to us out of the blue. Someone has found a bug they suspect may have a security impact and they tell us about it on the curl-security@haxx.se email address. Mails sent to this address reach a private mailing list with the curl security team members as the only subscribers.

An important first step is that we respond to the sender, acknowledging the report. Often we also include a few follow-up questions at once. It is important to us to keep the original reporter in the loop and included in all subsequent discussions about this issue – unless they prefer to opt out.

If we find the issue ourselves, we act pretty much the same way.

In the most obvious and well-reported cases there is no room for doubt or hesitation about what the bugs are and what their impact is, but very often the reports lead to discussions.

The assessment

Is it a bug in the first place, is it perhaps even documented or just plain bad use?

If it is a bug, is this a security problem that can be abused or somehow put users in some sort of risk?

Most issues we get reported as security issues are also in the end treated as such, as we tend to err on the safe side.

The time plan

Unless the issue is critical, we prefer to schedule a fix and announcement of the issue in association with the pending next release, and as we do releases every 8 weeks like clockwork, that’s never very far away.

We communicate the suggested schedule to the reporter to make sure we agree. If a sooner release is preferred, we work out a schedule for an extra release. In the past we've done occasional faster security releases when the issue had already been made public, to shorten the time window during which users could be harmed by the problem.

We really really do not want a problem to persist longer than until the next release.

The fix

The curl security team and the reporter work on fixing the issue. Ideally in part by the reporter making sure that they can’t reproduce it anymore and we add a test case or two.

We keep the fix undisclosed for the time being. It is not committed to the public git repository but kept in a private branch. We usually put it on a private URL so that we can link to it when we ask for a CVE, see below.

All security issues should make us ask ourselves – what did we do wrong that made us not discover this sooner? And ideally we should introduce processes, tests and checks to make sure we detect other similar mistakes now and in the future.

Typically we only generate a single patch from git master and offer that as the final solution. In the curl project we don't maintain multiple branches. Distros and vendors who ship older or even multiple curl versions backport the patch to their systems by themselves. Sometimes we get backported patches back to offer users as well, but those are exceptions to the rule.

The advisory

In parallel to working on the fix, we write up a “security advisory” about the problem. It is a detailed description about the problem, what impact it may have if triggered or abused and if we know of any exploits of it.

What conditions need to be met for the bug to trigger, what version range is affected, and what remedies can be applied as a work-around if the patch is not applied, etc.

We work out the advisory in cooperation with the reporter so that we get the description and the credits right.

The advisory also always contains a time line that clearly describes when we got to know about the problem etc.

The CVE

Once we have an advisory and a patch, none of which needs to be their final versions, we can proceed and ask for a CVE.

Depending on where in the release cycle we are, we might have to hold off at this point. For all bugs that aren’t proprietary-operating-system specific, we pre-notify and ask for a CVE on the distros@openwall mailing list. This mailing list prohibits an embargo longer than 14 days, so we cannot ask for a CVE from them longer than 2 weeks in advance before our release.

The idea here is that the embargo time gives the distributions time and opportunity to prepare updates of their packages so they can be pretty much in sync with our release and reduce the time window during which their users are at risk. Of course, not all operating system vendors manage to actually ship a curl update on two weeks' notice, and at least one major commercial vendor regularly informs me that this is too short a time frame for them.

For flaws that don’t affect the free operating systems at all, we ask MITRE directly for CVEs.

The last 48 hours

When there is roughly 48 hours left until the coming release and security announcement, we merge the private security fix branch into master and push it. That immediately makes the fix public and those who are alert can then take advantage of this knowledge – potentially for malicious purposes. The security advisory itself is however not made public until release day.

We use these 48 hours to get the fix tested on more systems to verify that it is not doing any major breakage. The weakest part of our security procedure is that the fix has been worked out in secret so it has not had the chance to get widely built and tested, so that is performed now.

The release

We upload the new release. We send out the release announcement email, update the web site and make the advisory for the issue public. We send out the security advisory alert on the proper email lists.

Bug Bounty?

Unfortunately we don’t have any bug bounties on our own in the curl project. We simply have no money for that. We actually don’t have money at all for anything.

Hackerone offers bounties for curl related issues. If you have reported a critical issue you can request one from them after it has been fixed in curl.

 


Emily Dunham: Saying Ping

Thu, 05/10/2017 - 09:00
Saying Ping

There’s an idiom on IRC, and to a lesser extent other more modern communication media, where people indicate interest in performing a real-time conversation with someone by saying “ping” to them. This effectively translates to “I would like to converse with you as soon as you are available”.

The traditional response to “ping” is to reply with “pong”. This means “I am presently available to converse with you”.

If the person who pinged is not available at the time that the ping’s recipient replies, what happens? Well, as soon as they see the pong, they re-ping (either by saying “ping” or sometimes “re-ping” if they are impersonating a sufficiently complex system to hold some state).

This attempt at communication, like "phone tag", can continue indefinitely in its default state.

It is an inefficient use of both time and mental overhead, since each missed “ping” leaves the recipient with a vague curiosity or concern: “I wonder what the person who pinged wanted to talk to me about...”. Additionally, even if both parties manage to arrange synchronous communication at some point in the future, there’s the very real risk that the initiator may forget why they originally pinged at all.

There is an extremely simple solution to the inefficiency of waiting until both parties are online, which is to stick a little metadata about your question onto the ping. "Ping, could you look at issue #xyz?" "Ping, can we chat about your opinions on power efficiency sometime?" And yet there appears to be a decent correlation between people I regard as knowing more than I do about IRC etiquette, and people who issue pings without attaching any context to them.

If you do this, and happen to read this, could you please explain why to me sometime?


Karl Dubost: A Web Compatibility Issue? Maybe Not.

Thu, 05/10/2017 - 07:40

There is an ongoing Firefox sprint (This looks like a URI that will die in the future. Sniff.) trying to identify issues in the browsers and report them on Webcompat.

CSS in JS

We got a flood of new issues. Unfortunately a lot of them were invalid and were not part of what we qualify as Web compatibility issues. This puts a lot of strain on our team (usually Adam, Eric and myself do the triage, but for this occasion everyone had to join in) and it is probably frustrating for the reporters to see their bugs closed as invalid. So in the spirit of "How do we make it better for everyone?", here are a couple of guidelines for people running events related to a Webcompat sprint.

What is a Webcompat issue?

Or more exactly what is not a Webcompat issue.

Our work focuses on identifying differences between browsers when accessing a Web site. That said, not all differences are Web compatibility issues.

Before starting a Webcompat sprint (organizers)
  • Fully understand this document. If you are organizing an event and have questions unanswered on this page, please ask.
  • Make sure to explain to participants that quality of the reports is the goal.
  • Make sure that participants have created a fresh profile.
    • This fresh profile should not contain any add-ons, except, if needed, the reporter extension. Developer Edition and Nightly already have the Report Site Issue button in the "•••" menu.
    • This fresh profile can be set up so that it resets itself at each restart.
  • Encourage participants to write detailed steps for reproducing the issue. One idea is to work as a team: one person finds an issue and fills in the form without submitting it, then a teammate on another computer tries to reproduce the issue from only the form instructions. If the teammate can't understand the given instructions and/or can't see where the issue is, then it's probably an incomplete report.
During the Webcompat sprint (participants)

Testing in Multiple Browsers

The most important thing of all: you need to test in at least two browsers. The more, the merrier. Being sure that it's actually not breaking elsewhere is a good basis for a webcompat issue. Sometimes it's better to ask someone else who is using another browser and compare the results.

Responsive Mode and Devices

Responsive mode on desktop browsers is a quick way to test if the website is ready for mobile devices. That said, these responsive modes have their own limitations: they are not simulators, just emulators. You might want to double-check on a real device before reporting.

Slow Network

If the network is slow, it is usually slow in every browser; that does not necessarily make a Web compatibility issue. Performance issues are very hard to decipher, because they depend on many external parameters.

  1. Try to reproduce in another browser. If it's blazing fast in another browser, there might be something interesting.
  2. Try to reload and see if the second load is faster (it should be if the website has been correctly configured).

Most of the time, this is not a Webcompat issue.

Flash plug-in

Some sites do not work, or only half work, without a Flash plugin. We can't really do anything about it; it might not be a good business decision, but they chose it, and it creates basically the same issue for all browsers. If you are unsatisfied with it, try to find their feedback or contact page and send them an email.

This is not a Webcompat issue.

Java Applets

Java applets were used a lot in the 90s to provide application contexts in Web pages when HTML was not enough. They have mostly disappeared from Web pages. If you see a page dependent on Java, there is no need to report it.

This is not a Webcompat issue.

Tracking Protection List

Firefox and some other browsers have mechanisms to block trackers. If you are accessing a website with strict tracking protection active, the site is likely to break in some fashion. Sites have a tendency to track all your actions, and some of their scripts fail when the tracker is not working.

This is not a Webcompat issue.

Ads Blockers/Script Blockers

This is a variation on the Tracking Protection list. Any add-on you install has the potential to disrupt the normal operation of a website, and it's even more acute when the add-on blocks ads or scripts. ADB, uBlock and NoScript are the most common reasons for a website failing. Sometimes you will get a completely blank site, or just the text with no layout at all.

This is not a Webcompat issue.

Desktop site instead of mobile site

When testing on mobile, make sure to test on a real device. Receiving a desktop site on a mobile device is frequent: not all websites have a specific mobile version, or a layout which adjusts itself to the screen. That said, there are specific cases which are worth reporting.

  1. Receiving a desktop site on a mobile device while, on the same device, Chrome receives the mobile version. Very often this is the result of user agent sniffing. You can report it.
  2. Receiving a desktop site that is only partially visible while Chrome seems to adjust the site to the current screen. This is likely a duplicate of the lack of virtual viewport on Firefox Android. You can report it.
Different mobile versions

Sometimes websites send different versions to different browsers, for example a text-only version to one browser and a very graphical one to another. It's probably the result of user agent sniffing. You can report it.

Fluid layout

Some websites have fluid or responsive layouts. These layouts adjust more or less well to small screens. When they don't adjust, you might, for example, see a navigation bar with items folding onto a second line. These issues are difficult to identify; your best lead is to test in another browser. If you get the same behavior in Chrome, Firefox, Edge and Safari on mobile, then it's not a webcompat issue.

This is not a Webcompat issue.

Login/Password required / Credit card required

This one is quite hard and heart-wrenching. Many sites require a login and password to be able to interact with them: think social networks, email, school networks, bank accounts, etc. For a very common site, we might be able to reproduce the issue because some of us have an account on the same site, but on many occasions we do not. For example, say you want to report a webcompat issue for a private page on your bank's site. It's unlikely that we will be able to do anything without being able to access the actual page: we are not clients of this site, and we do not have the credit card used to buy the things you bought.

  1. If you are anonymous, do not report it. There's nothing we can do about it.
  2. If you report it as a GitHub user on webcompat.com, be ready to follow up. It's very likely we will not be able to access the page ourselves, but we might be able to guide you in analyzing the issue and finding its cause.

It might be a Webcompat issue, but it might not be worth reporting it.

Browser features (Reader View, Tabs, etc.)

Browsers all have specific features accessible through what we call the chrome (not to be confused with the Chrome browser). These are basically things which are part of the browser UI and assist in using the browser. The best way to report an issue you noticed is to open a bug report directly on the browser vendor's bug tracker.

This is not a Webcompat issue.

SSL errors

Some sites have very poor security practices or outdated certificates. So when the browser blocks you from accessing a website with a message saying "An error occurred during a connection to …", it is likely an issue with their SSL handling. You can test that yourself by entering the domain name into a validator. Another common reason is the HTTPS Everywhere add-on (and the like), which tries to force a redirect to https for all websites. Some sites do not have an https version; the site will then fail under https but work with http. Feel free to contact the site with a link to the SSL validation results.

This is mostly not a Webcompat issue.

Private domain names / Proxies / Routers UI (not on the public internet)

Web UIs are used for many types of applications. Many of them are accessible only in the context of a company network, a local home network, etc.

  1. If the site is not accessible through the public internet and you are anonymous, do not report it. We will not be able to do anything about it.
  2. On the other hand, if you are a github user and you are ready to help us with followup questions and diagnosis, it might be worth reporting.
After the Webcompat sprint (organizers, participants)

It's not about quantity, it's about quality. We try to improve the Web. A larger number of issues will not necessarily help us fix a browser or a website faster. On the other hand, being able to follow up on the issues when there are unknown details or additional questions from the people triaging and diagnosing is invaluable. Reporting many issues without being able to answer questions afterward, because you have no time or are not interested, can waste many people's time.

You might want to hold additional meetings specifically for following up on these issues. It's a great opportunity to learn how to debug and to understand what is happening in a Web page.

Otsukare!


Mozilla Marketing Engineering & Ops Blog: Kuma Report, September 2017

Thu, 05/10/2017 - 02:00

Here’s what happened in September in Kuma, the engine of MDN Web Docs:

  • Ran Maintenance Mode Tests in AWS
  • Updated Article Styling
  • Continued Conversion to Browser Compat Data
  • Shipped Tweaks and Fixes

Here’s the plan for October:

  • Move MDN to AWS
  • Improve Performance of the Interactive Editor
Done in September

Ran Maintenance Mode Tests in AWS

Back in March 2017, we added Maintenance Mode to Kuma, which allows the site content to be available when we can’t write to the database. This mode got its first workout this month, as we put MDN into Maintenance Mode in SCL3, and then sent an increasing percentage of public traffic to an MDN deployment in AWS.

We ran 3 tests in September. In the first, we just tried Maintenance Mode with production traffic in SCL3. In the second test we sent 5% of traffic to AWS, and in the third test we ramped it up to 15%, then 50%, and finally 100%. The most recent test, on October 3, included New Relic monitoring, which gave us useful data and pretty charts.

Web Transactions Time shows how the average request is handled by the different services. For the SCL3 side, you can see a steady improvement in transaction time from 125 to 75 ms, as more traffic is handled by AWS.

SCL3 transaction time

On the AWS side, the response time grows from 40 to 90 ms, as the DNS configuration sends 100% of traffic to the new cluster.

AWS transaction time

The Web Transaction Percentiles chart shows useful statistics beyond the average. For example, 99% of requests complete within 375 ms, and the median is at 50 ms.

SCL3 transaction percent

On the AWS side, 99% of requests complete within 350 ms (slightly better), and the median is at 100 ms (slightly worse).

AWS transaction percent

Finally, Throughput measures the requests handled per minute. SCL3 continued handling over 500 requests per minute during the test. This may be due to clients using old DNS records, or because KumaScript continues making requests to render out-of-date pages.

SCL3 throughput

AWS ramped up to over 2000 requests per minute during the test, easily handling the load of a US afternoon.

AWS throughput

We consider this a successful test. Our AWS environment can easily handle regular, read-only MDN traffic, with capacity to spare. We don’t expect MDN users to notice much of a difference when we make the change.

Updated Article Styling

We’re working on the next phase of redesigning MDN. We’re looking at ways to present MDN articles, to make them easier to read, to scan quickly, and to emphasize the most useful information. We’re testing some ideas with users, and some of the adjustments showed up on the site this month.

For example, MDN documents a lot of code in prose, such as HTML element and attribute names. In PR 4400, Stephanie Hobson added a highlight background to make these stand out.

Before PR 4400, a fixed-width font was used to display literals:

Before 4400 no highlight

After PR 4400, the literals stand out with a light grey background:

After 4400 highlight

There’s a lot that goes into making text on the web readable (see Stephanie’s slides from her talk at #a11yTOConf for some suggestions). One of the things we can do with the default style is to try to make lines about 50-75 characters wide. On the other hand, code examples don’t wrap well, and we want to make them stand out. We’re experimenting with style changes for line length with beta testers, using some of the ideas from blog.mozilla.org. For example, PR 4402 expands the sample output, making the examples stand out from the rest of the page.

Before PR 4402, the examples shared the text’s narrow width:

Before 4402 narrow

After PR 4402, the example is as wide as the code samples, and the buttons restyled:

After 4402 narrow

We’ll test more adjustments with beta testers and in individual user tests. Some of these we’ll ship immediately, and others will inform the article redesign.

Continued Conversion to Browser Compat Data

The Browser Compat Data (BCD) project now includes all the HTML and JavaScript compatibility data from MDN. 1,500 MDN pages now generate their compatibility tables from this data. Only 4,500 more to go!

The BCD project was the most active MDN project in September. There were 159 commits over 90 pull requests. These PRs came from 18 different contributors, bringing the total to 50 contributors. There are over 58,000 additional lines in the project. 13 of these PRs are from Daniel D. Beck, who is joining the MDN team as a contractor.

This progress was made possible by Florian Scholz, Jean-Yves Perrier, and wbamberg, who quickly and accurately reviewed the PRs, working out issues and getting them merged. Florian has also started a weekly release of the npm package, and we’re up to mdn-browser-compat-data 0.0.8.

Shipped Tweaks and Fixes

There were many PRs merged in September:

Here are some of the highlights:

Planned for October

Work will continue to migrate to Browser Compat Data, and to fix issues with the redesign and the new interactive examples.

Move MDN to AWS

This week, we’ll complete our functional testing of MDN, making sure that page editing and other read/write tests are working, and that the rarely used features continue to work.

On Tuesday October 10, we’ll put SCL3 in Maintenance Mode again, move the database, and come back with MDN in AWS.

We’ve done a lot of preparation, but we expect something to break, so we’re planning on fixing AWS-related bugs in October. The AWS move will also allow us to improve our deployment processes, helping us ship features faster. If things go smoothly, we have plenty of other work lined up, such as style improvements, SEO-related tweaks, updating to Django 1.11, and getting KumaScript UI strings into Pontoon.

Improve Performance of the Interactive Editor

We’re continuing the beta test for the interactive editor. The feedback has been overwhelming positive, but we’re not happy with the page speed impact. We’ll continue work in October to improve performance. In the meantime, contractor Mark Boas is preparing examples for the launch, such as 26 examples for JavaScript expressions and operators (PR 286).


Air Mozilla: Bugzilla Project Meeting, 04 Oct 2017

Wed, 04/10/2017 - 22:00

Bugzilla Project Meeting - The Bugzilla Project Developers meeting.

