
Allison Naaktgeboren: Applying Privacy Series: Introduction

Mozilla planet - Tue, 14/10/2014 - 05:38

Introduction

In January, I laid out information in a presentation & blog post for a discussion about applying Mozilla’s privacy principles in practice to engineering. Several fellow engineers wanted to see it applied in a concrete example, complaining that the material presented was too abstract to be actionable. This is a fictional series of conversations around the concrete development of a fictional mobile app feature. Designing and building software is a process of evolving and refining ideas, and this example is designed to help engineers understand that actionable privacy and data safety concerns can and should be a part of the development process.

Disclaimer

The example is fictional. Any resemblance to any real or imagined feature, product, service, or person is purely accidental. Some technical statements are included to flesh out the fictional dialogues; they are assumed to apply only to this fictional feature of a fictional mobile application. The architecture might not be production-quality. Don’t get too hung up on it; it’s a fictional teaching example.

Thank You!

    Before I begin, a big thank you to Stacy Martin, Alina Hua, Dietrich Ayala, Matt Brubeck, Mark Finkle, Joe Stevenson, and Sheeri Cabral for their input on this series of posts.

The Cast of Characters

so fictional they don’t even get real names

  1. Engineer
  2. Engineering Manager
  3. Service Operations Engineer
  4. Database Administrator (DBA)
  5. Project Manager
  6. Product Manager
  7. Privacy Officer, Legal’s Privacy Auditor, Privacy & Security (there are many names & different positions here)
  8. UX Designer

Fictional Problem Setup

Imagine that the EU provides a free service to all residents that will translate English text to one of the EU’s supported languages. The service requires the target language and the device id. It is, however, rather slow.

For the purposes of this fictional example, the device id is a hard-coded number on each computer, tablet, or phone. It is globally unique and unchangeable, and so highly identifiable.
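To make the data flow concrete, here is a rough sketch of what a call to such a service might look like. Everything in it (the URL, field names, and response handling) is invented for this fictional example; the point is simply that the globally unique device id travels with every translation request.

// Hypothetical client call to the fictional EU translation service.
// The endpoint and payload shape are invented for this example.
function translate(text, targetLanguage, deviceId) {
  return fetch('https://translate.example.eu/api/translate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: text,               // the English text to translate
      target: targetLanguage,   // e.g. 'de' or 'fr'
      deviceId: deviceId        // globally unique and unchangeable: the privacy concern
    })
  }).then(function(response) { return response.json(); });
}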

A mobile application team wants to use this service to offer in-page translation to EU residents using their mobile app. For non-English readers, the ability to read the app’s content in their own language is a highly desired feature.

After some prototyping & investigation, they determine that the very slow speed of the translation service adversely affects usability. They’d still like to use it, so they decide to evolve the feature. They’d also like to translate open content while the device is offline so the translated content comes up quicker when the user reopens the app.

Every New Feature Starts Somewhere

Engineer sees an announcement in the tech press about the EU’s new service and its noble goal of overcoming language barriers on the web for its citizens. She sends an email to her team’s public mailing list: “wouldn’t it be cool to apply this to our content for users, instead of them having to copy/paste blocks of text into an edit box? We have access to those values on the phone already.”

Engineering Team, Engineering Manager & Product Manager on the thread are enthusiastic about the idea.  Engineering Manager assigns Engineer to make it happen.

 

She schedules the initial meeting to figure out what the heck that actually means and nail down a specification.


Robert O'Callahan: Back In New Zealand

Mozilla planet - Tue, 14/10/2014 - 04:37

I just finished a three-week stint in North America, mostly a family holiday but some work too. Some highlights:

  • Visited friends in Vancouver. Did the Grouse Grind in just over an hour. Lovely mountain.
  • Work week in Toronto. Felt productive. Ran barefoot from downtown to Humber River and back a couple of times. Lovely.
  • Rendezvoused with my family in New York. Spent a day in White Plains where we used to live, and at Trinity Presbyterian Church where we used to be members. Good sermon on the subject of "do not worry", and very interesting autobiographical talk by a Jewish Christian. Great time.
  • Visited the 9/11 Museum. Very good, though perhaps a shade overstressing the gravity of 3000 lives lost. One wonders what kind of memorial there will be if a nuke kills 100x that many.
  • Our favourite restaurant in Chinatown, Singapore Cafe, is gone :-(.
  • Had some great Persian food :-).
  • The amazingness of New York is still amazing.
  • Train to Boston. Gave a talk about rr at MIT, hosted by my former supervisor. Celebrated 20-year anniversary of me starting as his first (equal) grad student. Had my family watch Dad at work.
  • Spent time with wonderful friends.
  • Flew to Pittsburgh. More wonderful friends. Showed up at our old church with no prior warning to anyone. Enjoyed reactions. God continues to do great things there.
  • La Feria and Fuel-and-Fuddle still great. Still like Pittsburgh a lot.
  • Flew to San Francisco. Late arrival due to flight being rerouted through Dallas, but did not catch Ebola.
  • Saw numerous seals and dolphins from the Golden Gate Bridge.
  • Showed my family a real Mozilla office.
  • Two days in Mountain View for Gecko planning meetings. Hilarious dinner incident. Failed to win at Settlers.
  • Took family to Big Basin Redwoods State Park; saw pelicans, deer, a dead snake, a banana slug, and a bobcat.
  • Ever since we made liquid nitrogen ice cream for my bachelor party, I've thought it would make a great franchise; Smitten delivers.
  • Kiwi friends in town for Salesforce conference; took them to Land's End for a walk. Saw a coyote.
  • Watched Fleet Week Blue Angels display from Twin Peaks. Excellent.
  • Played disc golf; absolutely hopeless.
  • Went to church at Home of Christ #5 with friends. Excellent sermon about the necessity of the cross.
  • Flew home on Air NZ's new 777. Upgraded entertainment system is great; more stuff than you could ever hope to watch.

Movie picoreviews:

    Edge Of Tomorrow: Groundhog Day meets Starship Troopers. Not as good as Groundhog Day but pretty good.

    X-Men: Days Of Future Past: OK.

    Godzilla: OK if your expectations are set appropriately.

    Dawn Of The Planet Of The Apes: watched without sound, which more or less worked. OK.

    Amazing Spider-Man 2: Bad.

    Se7en: Good.


Gregory Szorc: Deterministic and Minimal Docker Images

Mozilla planet - Mon, 13/10/2014 - 18:50

Docker is a really nifty tool. It vastly lowers the barrier to distributing and executing applications. It forces people to think about building server side code as a collection of discrete applications and services. When it was released, I instantly realized its potential, including for uses it wasn't primarily intended for, such as applications in automated build and test environments.

Over the months, Docker's feature set has grown and many of its shortcomings have been addressed. It's more usable than ever. Most of my early complaints and concerns have been addressed or are actively being addressed.

But one supposedly solved part of Docker still bothers me: image creation.

One of the properties that gets people excited about Docker is the ability to ship execution environments around as data. Simply produce an image once, transfer it to a central server, pull it down from anywhere, and execute. That's pretty damn elegant. I dare say Docker has solved the image distribution problem. (Ignore for a minute that the implementation detail of how images map to filesystems still has a few quirks to work out. But they'll solve that.)

The ease with which Docker manages images is brilliant. I, like many, was overcome with joy and marvelled at how amazing it was. But as I started producing more and more images, my initial excitement turned to frustration.

The thing that bothers me most about images is that the de facto and recommended method for producing them is not deterministic and does not result in minimal images. I strongly believe that the current recommended and applied approach is far from optimal and has too many drawbacks. Let me explain.

If you look at the Dockerfiles from the official Docker library (examples: Node, MySQL), you notice something in common: they tend to use apt-get update as one of their first steps. For those not familiar with Apt, that command will synchronize the package repository indexes with a remote server. In other words, depending on when you run the command, different versions of packages will be pulled down and the result of image creation will differ. The same thing happens when you clone a Git repository. Depending on when you run the command - when you create the image - you may get different output. If you create an image from scratch today, it could have a different version of say Python than it did the day before. This can be a big deal, especially if you are trying to use Docker to accurately reproduce environments.

This non-determinism of building Docker images really bothers me. It seems to run counter to Docker's goal of facilitating reliable environments for running applications. Sure, one person can produce an image once, upload it to a Docker Registry server, and have others pull it. But there are applications where independent production of the same base image is important.

One area is the security arena. There are many people who are justifiably paranoid about running binaries produced by others, and pre-built Docker images set off all kinds of alarms. So, these people would rather build an image from source, from a Dockerfile, than pull binaries. Except then they build the image from a Dockerfile and the application doesn't run because of an incompatibility with a new version of some random package whose version wasn't pinned. Of course, you've probably lost numerous hours tracking down this obscure cause. How frustrating! Determinism and verifiability as part of Docker image creation help solve this problem.

Deterministic image building is also important for disaster recovery. What happens if your Docker Registry and all hosts with copies of its images go down? If you go to build the images from scratch again, what guarantee do you have that things will behave the same? Without determinism, you are taking a risk that things will be different and your images won't work as intended. That's scary. (Yes, Docker is no different here from existing tools that attempt to solve this problem.)

What if your open source product relies on a proprietary component that can't be legally distributed? So much for Docker image distribution. The best you can do is provide a base image and instructions for completing the process. But if that doesn't work deterministically, your users now have varying Docker images, again undermining Docker's goal of increasing consistency.

My other main concern about Docker images is that they tend to be large, both in size and in scope. Many Docker images use a full Linux install as their base. A lot of people start with a base e.g. Ubuntu or Debian install, apt-get install the required packages, do some extra configuration, and call it a day. Simple and straightforward, yes. But this practice makes me more than a bit uneasy.

One of the themes surrounding Docker is minimalism. Containers are lighter than VMs; just ship your containers around; deploy dozens or hundreds of containers simultaneously; compose your applications of many, smaller containers instead of larger, monolithic ones. I get it and am totally on board. So why are Docker images built on top of the bloaty excess of a full operating system (modulo the kernel)? Do I really need a package manager in my Docker image? Do I need a compiler or header files so I can e.g. build binary Python extensions? No, I don't, thank you.

As a security-minded person, I want my Docker images to consist of only the files they need, especially binary files. By leaving out non-critical elements from your image and your run-time environment, you are reducing the surface area to attack. If your application doesn't need a shell, don't include a shell and don't leave yourself potentially vulnerable to shellshock. I want the attacker who inevitably breaks out of my application into the outer container to get nothing, not something that looks like an operating system and has access to tools like curl and wget that could potentially be used to craft a more advanced attack (which might even be able to exploit a kernel vulnerability to break out of the container). Of course, you can and should pursue additional security protections in addition to attack surface reduction to secure your execution environment. Defense in depth. But that doesn't give Docker images a free pass on being bloated.

Another reason I want smaller containers is... because they are smaller. People tend to have relatively slow upload bandwidth. Pushing Docker images that can be hundreds of megabytes clogs my tubes. However, I'll gladly push 10, 20, or even 50 megabytes of only the necessary data. When you factor in that Docker image creation isn't deterministic, you also realize that different people are producing different versions of images from the same Dockerfiles and that you have to spend extra bandwidth transferring the different versions around. This bites me all the time when I'm creating new images and am experimenting with the creation steps. I tend to bypass the fake caching mechanism (fake because the output isn't deterministic) and this really results in data explosion.

I understand why Docker images are neither deterministic nor minimal: making them so is a hard problem. I think Docker was right to prioritize solving distribution (it opens up many new possibilities). But I really wish some effort could be put into making images deterministic (and thus verifiable) and more minimal. I think it would make Docker an even more appealing platform, especially for the security conscious. (As an aside, I would absolutely love if we could ship a verifiable Firefox build, for example.)

These are hard problems. But they are solvable. Here's how I would do it.

First, let's tackle deterministic image creation. Despite computers and software being ideally deterministic, building software tends not to be, so deterministic image creation is a hard problem. Even tools like Puppet and Chef, which claim to solve aspects of this problem, don't do a very good job with determinism. Read my post on The Importance of Time on Machine Provisioning for more on the topic.

But there are solutions. NixOS and the Nix package manager have the potential to be used as the basis of a deterministic image building platform. The high-level overview of Nix is that the inputs and contents of a package determine the package ID. If you know how Git or Mercurial get their commit SHA-1s, it's pretty much the same concept. In theory, two people on different machines start with the same environment and bootstrap the exact same packages, all from source. Gitian is a similar solution, although I prefer Nix's content-based approach and how it goes about managing packages and environments. Nix feels so right as a base for deterministically building software. Anyway, yes, fully verifiable build environments are turtles all the way down (I recommend reading Tor's overview of the problem and their approach). However, Nix's approach addresses many of the turtles and silences most of the critics.

I would absolutely love it if more and more Docker images were the result of a deterministic build process like Nix. Perhaps you could define the full set of packages (with versions) that would be used. Let's call this the package manifest. You would then PGP sign and distribute your manifest. You could then have Nix step through all the dependencies, compiling everything from source. If PGP verification fails, compilation output changes, or extra files are needed, the build aborts or issues a warning. I have a feeling the security-minded community would go crazy over this. I know I would.
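To make the content-based idea concrete, here's a tiny illustrative sketch (JavaScript, and emphatically not Nix's actual derivation format) of deriving a package identifier from its declared inputs: same inputs, same ID, on any machine; change any input and the ID changes.

// Illustrative only: a package "ID" derived from its inputs, in the spirit
// of Nix derivations or Git/Mercurial commit SHA-1s.
var crypto = require('crypto');

function packageId(pkg) {
  // Serialize the inputs in a stable order so the hash is reproducible.
  var canonical = JSON.stringify({
    name: pkg.name,
    version: pkg.version,
    sourceHash: pkg.sourceHash,
    dependencies: pkg.dependencies.slice().sort()
  });
  return crypto.createHash('sha256').update(canonical).digest('hex');
}

// Anyone building from the same manifest computes the same ID...
packageId({
  name: 'python',
  version: '2.7.8',
  sourceHash: 'abc123',                          // hypothetical source tarball hash
  dependencies: ['openssl-1.0.1j', 'zlib-1.2.8'] // hypothetical pinned deps
});
// ...and bumping any dependency (or the source) yields a different ID.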

OK, so now you can use Nix to produce packages (and thus images) (more) deterministically. How do you make them minimal? Well, instead of just packaging the entire environment, I'd employ tools like makejail. The purpose of makejail is to create minimal chroot jail environments. These are very similar to Docker/LXC containers. In fact, you can often take a tarball of a chroot directory tree and convert it into a Docker container! With makejail, you define a configuration file saying among other things what binaries to run inside the jail. makejail will trace file I/O of that binary and copy over accessed files. The result is an execution environment that (hopefully) contains only what you need. Then, create an archive of that environment and pipe it into docker build to create a minimal Docker image.

In summary, Nix provides you with a reliable and verifiable build environment. Tools like makejail pare down the produced packages into something minimal, which you then turn into your Docker image. Regular people can still pull binary images, but they are much smaller and more in tune with Docker's principles of minimalism. The paranoid among us can produce the same bits from source (after verifying the inputs look credible and waiting through a few hours of compiling). Or, perhaps the individual files in the image could be signed and thus verified via trust somehow? The company deploying Docker can have peace of mind that disaster scenarios resulting in Docker image loss should not result in total loss of the image (just rebuild it exactly as it was before).

You'll note that my proposed solution does not involve Dockerfiles as they exist today. I just don't think Dockerfile's design of stackable layers of commands is the right model, at least for people who care about determinism and minimalism. You really want a recipe that knows how to create a set of relevant files and some metadata like what ports to expose, what command to run on container start, etc and turn that into your Docker image. I suppose you could accomplish this all inside Dockerfiles. But that's a pretty radical departure from how Dockerfiles work today. I'm not sure the two solutions are compatible. Something to think about.

I'm pretty sure of what it would take to add deterministic and verifiable building of minimal and more secure Docker images. And, if someone solved this problem, it could be applicable outside of Docker (again, Docker images are essentially chroot environments plus metadata). As I was putting the finishing touches on this article, I discovered nix-docker. It looks very promising! I hope the Docker community latches on to these ideas and makes deterministic, verifiable, and minimal images the default, not the exception.


Mozilla Release Management Team: Firefox 33 rc1 to rc2

Mozilla planet - Mon, 13/10/2014 - 12:37

An important last-minute change forced us to generate a build 2 of Firefox 33. We took this opportunity to back out an OMTC-related regression and to take two startup fixes for Fennec.

  • 9 changesets
  • 19 files changed
  • 426 insertions
  • 60 deletions

Extension / Occurrences:
  • cpp: 10
  • java: 5
  • h: 3
  • build: 1

Module / Occurrences:
  • security: 12
  • mobile: 5
  • widget: 1
  • gfx: 1

List of changesets:

  • Ryan VanderMeulen: Backed out changeset 9bf2a5b5162d (Bug 1044975) - 1dd4fb21d976
  • Ryan VanderMeulen: Backed out changeset d89ec5b69c01 (Bug 1076825) - 1233c159ab6d
  • Ryan VanderMeulen: Backed out changeset bbc35ec2c90e (Bug 1061214) - 6b3eed217425
  • Jon Coppeard: Bug 1061214. r=terrence, a=sledru - a485602f5cb1
  • Ryan VanderMeulen: Backed out changeset e8360a0c7d74 (Bug 1074378) - 7683a98b0400
  • Richard Newman: Bug 1077645 - Be paranoid when parsing external intent extras. r=snorp, a=sylvestre - 628f8f6c6f72
  • Richard Newman: Bug 1079876 - Handle unexpected exceptions when reading external extras. r=mfinkle, a=sylvestre - 96bcea5ee703
  • David Keeler: Bug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=sledru - 4c62d5e8d5fc
  • David Keeler: Bug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=sledru - fe4f4c9342b1


Mozilla replaces passwords with login via email - Security.nl

News gathered via Google - Mon, 13/10/2014 - 10:23

Security.nl
Mozilla is currently experimenting with a method whereby users can log in no longer with a password, but via email and eventually also SMS. The idea comes from Mozilla Webmaker, the research department of the ...


Daniel Glazman: Happy birthday Disruptive Innovations!

Mozilla planet - Mon, 13/10/2014 - 09:58

Eleven years ago, I was driving as fast as I could to leave two administrative files, one in Saint-Quentin en Yvelines, one in Versailles. At 2pm, the registration of Disruptive Innovations as a LLC was done and the company could start operating in the open. Eleven years ago, holy cow :-) What a ride, so many fun years, so many clients and projects, so much code. Current state of mind? Still disruptive and still innovating. Code and chutzpah !!!

Categorieën: Mozilla-nl planet

Zen Mobile to launch Mozilla Firefox based mobile by the end of October - JBG News

News gathered via Google - Mon, 13/10/2014 - 05:02

While Google and Apple continue to battle over smartphone dominance with their Android and iOS platforms, Mozilla has quietly made a market for itself in the lower-end spectrum of the market. Their latest offering, the Firefox OS, is being adapted by ...

Daniel Stenberg: What a removed search from Google looks like

Mozilla planet - Sun, 12/10/2014 - 13:56

Back in the days when I participated in the starting of the Subversion project, I found the mailing list archive we had really dysfunctional and hard to use, so I set up a separate archive for the benefit of everyone who wanted an alternative way to find Subversion related posts.

This archive is still alive and it recently surpassed 370,000 archived emails, all related to Subversion, for seven different mailing lists.

Today I received a notice from Google (shown in its entirety below) that one of the mails received in 2009 is now apparently removed from a search using a name – if done within the European Union at least. It is hard to take this seriously when you look at the page in question, and as there aren’t very many names involved on that page, the possibilities for which name it is are limited. As there are several different mail archives for Subversion mails, I can only assume that the alternative search results have also been removed.

This is the first removal I’ve got for any of the sites and contents I host.

Notice of removal from Google Search

Hello,

Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.

These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.

Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.

If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can’t guarantee responses to submissions to that form.

The following URLs have been affected by this action:

http://svn.haxx.se/users/archive-2009-08/0808.shtml

Regards,

The Google Team


Christian Heilmann: Evangelism conundrum: Don’t mention the product

Mozilla planet - Sun, 12/10/2014 - 12:59

Being a public figure for a company is tough. It is not only about what you do wrong or right – although this is a big part. It is also about fighting conditioning and bad experiences of the people you are trying to reach. Many a time you will be accused of doing something badly because of people’s preconceptions. Inside and outside the company.

The outside view: oh god, just another sales pitch!

One of these conditionings is the painful memory of the boring sales pitch we all had to endure sooner or later in our lives. We are at an event we went through a lot of hassle to get tickets for. And then we get a presenter on stage who is “excited” about a product. It is also obvious that he or she never used the product in earnest. Or it is a product that you could not care less about and yet here is an hour of it shoved in your face.

Many a time these are “paid for” speaking slots. Conferences offer companies a chance to go on stage in exchange for sponsorship. These companies don’t send their best speakers, but those who are most experienced in delivering “the cool sales pitch”: a pitch the marketing department worked hard on so that it doesn’t look like an obvious advertisement. In most cases these turn out worse than a straight-up sales pitch, which would at least have been honest.

I think my favourite nonsense moment is “the timelapse excitement”. That is when a presenter is “excited” about a new feature of a product and has been using it “for weeks now with all my friends”, all while the feature is not yet available. It is sadly often just too obvious that you are being fed a make-believe usefulness of the product.

This is why when you go on stage and you show a product people will almost immediately switch into “oh god, here comes the sale” mode. And they complain about this on Twitter as soon as you mention a product for the first time.

This is unfair to the presenter. Of course he or she would speak about the products they are most familiar with. It should be obvious when the person knows about it or just tries to sell it, but it is easier to be snarky instead of waiting for that.

The inside view: why don’t you promote our product more?

From your company you get pressure to talk more about your products. You are also asked to show proof that what you did on stage made a difference and got people excited. Often this is showing the Twitter time line during your talk which is when a snarky comment can be disastrous.

Many people in the company will see evangelists as “sales people” and “show men”. Your job is to get people excited about the products they create. It is a job filled with fancy hotels, a great flight status and a general rockstar life. They either don’t understand what you do or they just don’t respect you as an equal. After all, you don’t spend a lot of time coding and working on the product. You only need to present the work of others. Simple, isn’t it? Our lives can look fancy to the outside and jealousy runs deep.

This can lead to a terrible gap. You end up as a promoter of a product and you lack the necessary knowledge to talk about it confidently on stage. You’re seen as a sales guy by the audience and taken for granted by your peers. And it can be not at all your fault, as your attempts to reach out to people in the company for information don’t yield any answers. Often people are “too busy” to tell you about a new feature and it is up to you to find it, as “the documentation is in the bug reports”.

Often your peers like to point out how great other companies are at presenting their products. And that whilst dismissing or not even looking at what you do. That’s because it is important for them to check what the competition does. It is less exciting to see how your own products “are being sold”.

How to escape this conundrum?

Frustration is the worst thing you can experience as an evangelist.

Your job is to get people excited and talking to one another. To get company information out to the world and to get feedback from the outside world to your peers. This is a kind of translator role, but if you look deep inside and shine a hard light on it, you are also selling things.

Bruce Lawson covered that in his talk about how he presents. You are a sales person. What you do, though, is sell excitement and knowledge, not a packaged product. You bring the angle people did not expect. You bring the inside knowledge that the packaging of the product doesn’t talk about. You show the insider channels to get more information and talk to the people who work on the product. That can only work when these people are also open to this, and when they understand that any delay in feedback is not only a disappointment for the person who asked the question; it also diminishes your trustworthiness and your reputation, and without those you are dead on stage.

In essence, do not mention the product without context. Don’t show the overview slides and the numbers the press and marketing team uses. Show how the product solves issues, show how the product fits into a workflow. Show your product in comparison with competitive products, praising the benefits of either.

And grow a thick skin. Our jobs are tiring, they are busy and it is damn hard to keep up a normal social life when you are on the road. Each sting from your peers hurts, each “oh crap, now the sales pitch starts” can frustrate you. You’re a person who hates sales pitches and tries very hard to be different. Being thrown in the same group feels terribly hurtful.

It is up to you whether you let that get you down. You could instead concentrate on the good: revel in the excitement you see in people’s faces when you show them a trick they didn’t know, and in seeing people grow in their careers when they repeat what they learned from you to their bosses.

If you aren’t excited about the product, stop talking about it. Instead work with the product team to make it exciting first. Or move on. There are many great products out there.


Rob Hawkes: Leaving Pusher to work on ViziCities full time

Mozilla planet - Sun, 12/10/2014 - 02:00

On the 7th of November I'll be leaving my day job heading up developer relations at Pusher. Why? To devote all my time and effort toward ensuring ViziCities gets the chance it very much deserves. I'm looking to fund the next 6–12 months of development and, if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'm no startup guru (I often feel like I'm making this up as I go), all I know is that I have a vision for ViziCities and, as a result of a year talking with governments and organisations, I'm beyond confident that there's demand for what ViziCities offers.

Want to chat? Send me an email at rob@vizicities.com. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Leaving your day job. Are you crazy?

Probably. I certainly don't do things by halves and I definitely thrive under immense pressure with the distinct possibility of failure. I've learnt that life isn't fulfilling for me unless I'm taking a risk with something unknown. I'm obsessed with learning something new, whether in programming, business or something else entirely. The process of learning and experimentation is my lifeblood, the end result of that is a bonus.

I think quitting your day job without having the funding in place to secure the next 6 to 12 months counts as immense pressure, some may even call it stupid. To me it wasn't even a choice; I knew I had to work on ViziCities so my time at Pusher had to end, simple. I'm sure I'll work the rest out.

Let me be clear. I thoroughly enjoyed my time at Pusher, they are the nicest bunch of people and I'm going to miss them dearly. My favourite thing about working at Pusher was being around the team every single day. Their support and advice around my decision with ViziCities has really helped over the past few weeks. I wish them all the best for the future.

As for my future, I'm absolutely terrified about it. That's a good thing, it keeps me focused and sharp.

So what's the plan with ViziCities?

Over the past 18 months ViziCities has evolved from a disparate set of exciting experiments into a concise and deliberate offering that solves real problems for people. What has been learnt most over that time is that visualising cities in 3D isn't what makes ViziCities so special (though it's really pretty), rather it's the problems it can solve and the ways it can help governments, organisations and citizens. That's where ViziCities will make its mark.

After numerous discussions with government departments and large organisations worldwide it's clear that not only can ViziCities solve their problems, it's also financially viable as a long-term business. The beauty of what ViziCities offers is that people will always need tools to help turn geographic data into actionable results and insight. Nothing else provides this in the same way ViziCities can, both as a result of the approach but also as a result of the people working on it.

ViziCities now needs your help. I need your help. For this to happen it needs funding, and not necessarily that much to start with. There are multiple viable business models and avenues to explore, all of which are flexible and complementary, none of which compromise the open-source heart.

I'm looking to fund the next 6–12 months of development, and if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'll be writing about the quest for funding in much more detail.

You can help ViziCities succeed

This is the part where you can help. I can't magic funds out of nowhere, though I'm trying my best. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Want to chat? Send me an email at rob@vizicities.com.


James Long: Transducers.js Round 2 with Benchmarks

Mozilla planet - Sun, 12/10/2014 - 02:00

A few weeks ago I released my transducers library and explained the algorithm behind it. It's a wonderfully simple technique for high-performance transformations like map and filter and was created by the Clojure community (mostly Rich Hickey, I think).

Over the past week I've been hard at work polishing and benchmarking it. Today I published version 0.2.0 with a new API and completely refactored internals that make it easy to use and get performance that beats other popular utility libraries.

A Few Benchmarks

Benchmarking is hard, but I think it's worthwhile to post a few benchmarks that back up these claims. All of these were run on the latest version of node (0.10.32). First I wanted to prove how transducers devastate other libraries for large arrays. The test performs two maps and two filters. Here is the transducer code:

into([], compose(
  map(function(x) { return x + 10; }),
  map(function(x) { return x * 2; }),
  filter(function(x) { return x % 5 === 0; }),
  filter(function(x) { return x % 2 === 0; })
), arr);

The same transformations were implemented in lodash and underscore, and benchmarked with an arr of various sizes. The graph below shows the time it took to run versus the size of arr, which starts at 500 and goes up to around 500,000. Here's the full benchmark (it outputs Hz so the y-axis is 1/Hz).

Once the array reaches around 90,000 elements, transducers completely blow the competition away. This should be obvious; we never need to allocate anything between transformations, while underscore and lodash always have to allocate an intermediate array.

Laziness would not help here, since we are eagerly evaluating the whole array.

Small Arrays

While it's not as dramatic, even with arrays as small as 1000 you will see performance wins. Here are the same benchmarks, run with sizes of 1000 and 10,000:

_.map/filter (1000) x 22,302 ops/sec ±0.90% (100 runs sampled)
u.map/filter (1000) x 21,290 ops/sec ±0.65% (96 runs sampled)
t.map/filter+transduce (1000) x 26,638 ops/sec ±0.77% (98 runs sampled)
_.map/filter (10000) x 2,277 ops/sec ±0.49% (101 runs sampled)
u.map/filter (10000) x 2,155 ops/sec ±0.77% (99 runs sampled)
t.map/filter+transduce (10000) x 2,832 ops/sec ±0.44% (99 runs sampled)

Take

If you use the take operation to only take, say, 10 items, transducers will only send 10 items through the transformation pipeline. Obviously if I ran benchmarks we would also blow away lodash and underscore here, because they do not lazily optimize for take (they transform the whole array first and then run take).

Laziness does buy you this short-circuiting behavior, but we get it without explicitly being lazy.
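For illustration, here's a minimal sketch using the library's compose, map, take, and toArray helpers as documented below (the array contents are arbitrary): only the first ten items ever flow through the pipeline, no matter how large the input is.

var bigArray = [];
for (var i = 0; i < 1000000; i++) {
  bigArray.push(i);
}

// Only 10 items are mapped; the remaining ~999,990 are never touched.
toArray(bigArray, compose(
  map(function(x) { return x * 2; }),
  take(10)
));
// -> [ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 ]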

immutable-js

The immutable-js library is a fantastic collection of immutable data structures. They implement lazy transformations, so you get a lot of perf wins with that. Even so, there is a cost to the laziness machinery. I implemented the same map->map->filter->filter transformation above in another benchmark which compares it with their transformations. Here is the output with arr sizes of 1000 and 100,000:

Immutable map/filter (1000) x 6,414 ops/sec ±0.95% (99 runs sampled)
transducer map/filter (1000) x 7,119 ops/sec ±1.58% (96 runs sampled)
Immutable map/filter (100000) x 67.77 ops/sec ±0.95% (72 runs sampled)
transducer map/filter (100000) x 79.23 ops/sec ±0.47% (69 runs sampled)

This kind of perf win isn't a huge deal, and their transformations perform well. But we can apply this to any data structure. Did you notice how easy it was to use our library with immutable-js? View the full benchmark here.

Transducers.js Refactored

I just pushed v0.2.0 to npm with all the new APIs and performance improvements. Read more in the new docs.

You may have noticed that Cognitect, where Rich Hickey and other core maintainers of Clojure(Script) work, released their own JavaScript transducers library on Friday. I was a little bummed because I had just spent a lot of time refactoring mine, but I think I offer a few improvements. Internally, we basically converged on the exact same technique for implementing transducers, so you should find the same performance characteristics above with their library.

All of the following features are things you can find in my library transducers.js.

My library now offers several integration points for using transducers:

  • seq takes a collection and a transformer and returns a collection of the same type. If you pass it an array, you will get back an array. An iterator will give you back an iterator. For example:
// Filter an array
seq([1, 2, 3], filter(x => x > 1));
// -> [ 2, 3 ]

// Map an object
seq({ foo: 1, bar: 2 }, map(kv => [kv[0], kv[1] + 1]));
// -> { foo: 2, bar: 3 }

// Lazily transform an iterable
function* nums() {
  var i = 1;
  while(true) {
    yield i++;
  }
}
var iter = seq(nums(), compose(map(x => x * 2), filter(x => x > 4)));
iter.next().value; // -> 6
iter.next().value; // -> 8
iter.next().value; // -> 10
  • toArray, toObject, and toIter will take any iterable type and force them into the type that you requested. Each of these can optionally take a transform as the second argument.
// Make an array from an object
toArray({ foo: 1, bar: 2 });
// -> [ [ 'foo', 1 ], [ 'bar', 2 ] ]

// Make an array from an iterable
toArray(nums(), take(3));
// -> [ 1, 2, 3 ]

That's a very quick overview, and you can read more about these in the docs.

Collections as Arguments

All the transformations in transducers.js optionally take a collection as the first argument, so the familiar pattern of map(coll, function(x) { return x + 1; }) still works fine. This is an extremely common use case so this will be very helpful if you are transitioning from another library. You can also pass a context as the third argument to specify what this should be bound to.
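For example, a quick sketch of the collection-first style just described (the context object is only an illustrative value, assuming the third argument follows the collection-first form above):

// Collection as the first argument, like other utility libraries
map([1, 2, 3], function(x) { return x + 1; });
// -> [ 2, 3, 4 ]

// Optional third argument binds `this` inside the callback
var ctx = { offset: 10 };
map([1, 2, 3], function(x) { return x + this.offset; }, ctx);
// -> [ 11, 12, 13 ]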

Read more about the various ways to use transformations.

Laziness

Transducers remove the requirement of being lazy to optimize for things like take(10). However, it can still be useful to "bind" a collection to a set of transformations and pass it around, without actually evaluating the transformations. It's also useful if you want to apply transformations to a custom data type, get an iterator back, and rebuild another custom data type from it (there is still no intermediate array).

Whenever you apply transformations to an iterator it does so lazily. It's easy to convert array transformations into a lazy operation, just use the utility function iterator to grab an iterator of the array instead:

seq(iterator([1, 2, 3]),
    compose(
      map(x => x + 1),
      filter(x => x % 2 === 0)))
// -> <Iterator>

Our transformations are completely blind to the fact that they may or may not be applied lazily.

The transformer Protocol

Lastly, transducers.js supports a new protocol that I call the transformer protocol. If a custom data structure implements this, not only can we iterate over it in functions like seq, but we can also build up a new instance. That means seq won't return an iterator, but it will return an actual instance.

For example, here's how you would implement it in Immutable.Vector:

var t = require('./transducers');
var Immutable = require('immutable'); // the immutable-js package

Immutable.Vector.prototype[t.protocols.transformer] = {
  init: function() {
    return Immutable.Vector().asMutable();
  },
  result: function(vec) {
    return vec.asImmutable();
  },
  step: function(vec, x) {
    return vec.push(x);
  }
};

If you implement the transformer protocol, now your data structure will work with all of the builtin functions. You can just use seq like normal and you get back an immutable vector!

t.seq(Immutable.Vector(1, 2, 3, 4, 5),
      t.compose(
        t.map(function(x) { return x + 10; }),
        t.map(function(x) { return x * 2; }),
        t.filter(function(x) { return x % 5 === 0; }),
        t.filter(function(x) { return x % 2 === 0; })));
// -> Vector [ 30 ]

I hope you give transducers a try, they are really fun! And unlike Cognitect's project, mine is happy to receive pull requests. :)


Brett Gaylor: From Mozilla to new making

Mozilla planet - Sat, 11/10/2014 - 19:00

Yesterday was my last day as an employee of the Mozilla Foundation. I’m leaving my position as VP, Webmaker to create an interactive web series about privacy and the economy of the web.

I’ve had the privilege of being a “crazy Mofo” for nearly five years. Starting in early 2010, I worked with David Humphrey and researchers at the Center for Development of Open Technology to create Popcorn.js. Having just completed “Rip!”, I was really interested in mashups - and Popcorn was a mashup of open web technology questions (how can we make video as elemental an element of the web as images or links?) and formal questions about documentary (what would a “web native” documentary look like? what can video do on the web that it can’t do on TV?). That mashup is one of the most exciting creative projects I’ve ever been involved with, and led to a wonderful amount of unexpected innovation and opportunity. An award-winning 3D documentary by a pioneer of web documentaries, the technological basis of a cohort of innovative (and fun) startups, and a kick ass video creation tool that was part of the DNA of Webmaker.org - which this year reached 200,000 users and facilitated the learning experience of over 127,200 learners face to face at our annual Maker Party.

Thinking about video and the web, and making things that aim to get the best of both mediums, is what brought me to Mozilla - and it’s what’s taking me to my next adventure.

I’m joining my friends at Upian in Paris (remotely, natch) to direct a multi-part web series around privacy, surveillance and the economy of the web. The project is called Do Not Track and it’s supported by the National Film Board of Canada, Arte, Bayerischer Rundfunk (BR), the Tribeca Film Institute and the Centre National du Cinéma. I’m thrilled by the creative challenge and humbled by the company I’ll be keeping - I’ve wanted to work with Upian since their seminal web documentary “Gaza/Sderot” and have been excited to watch from the sidelines as they’ve made Prison Valley, Alma, MIT’s Moments of Innovation project, and the impressive amount of work they do for clients in France and around the world. These are some crazy mofos, and they know how to ship.

Fake it Till You Make it

Nobody knows what they’re doing. I can’t stress this enough.

— God (@TheTweetOfGod) September 20, 2014

Mozilla gave me a wonderful gift: to innovate on the web, to dream big, without asking permission to do so. To in fact internalize innovation as a personal responsibility. To hammer into me every day the belief that for the web to remain a public resource, the creativity of everyone needs to be brought to the effort. That those of us in positions of privilege have a responsibility to wake up every day trying to improve the network. It’s a calling that tends to attract really bright people, and it can elicit strong feelings of impostor syndrome for a clueless filmmaker. The gift Mozilla gave me is to witness first hand that even the most brilliant people, or especially the most brilliant people, are making it up every single day. That’s why the web remains as much an inspiration to me today as when I first touched it as a teenager. Even though smart people criticize Silicon Valley’s hypercapitalism, or while governments are breeding cynicism and mistrust by using the network for surveillance, I still believe the web remains the best place to invent your future.

I’m very excited, and naturally a bit scared, to be making something new again. Prepare yourself - I’m going to make shit up. I’ll need your help.

Working With

source

“Where some people choose software projects in order to solve problems, I have taken to choosing projects that allow me to work with various people. I have given up the comfort of being an expert , and replaced it with a desire to be alongside my friends, or those with whom I would like to be friends, no matter where I find them. My history among this crowd begins with friendships, many of which continue to this day.

This way of working, where collegiality subsumes technology or tools, is central to my personal and professional work. Even looking back over the past two years, most of the work I’ve done is influenced by a deep desire to work with rather than on. ” - On Working With Instead of On

David Humphrey, who wrote that, is who I want to be when I grow up. I will miss daily interactions with him, and many others who know who they are, very much. "In the context of working with, technology once again becomes the craft I both teach and am taught, it is what we share with one another, the occasion for our time together, the introduction, but not the reason, for our friendship.”

Thank you, Mozilla, for a wonderful introduction. Till the next thing we make!


Mozilla WebDev Community: Webdev Extravaganza – October 2014

Mozilla planet - Fri, 10/10/2014 - 16:06

Once a month, web developers from across Mozilla don our VR headsets and connect to our private Minecraft server to work together building giant idols of ourselves for the hordes of cows and pigs we raise to worship as gods. While we build, we talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Phonebook now Launches Dialer App

lonnen shared the exciting news that the Mozilla internal phonebook now launches the dialer app on your phone when you click phone numbers on a mobile device. He also warned that anyone who has a change they want to make to the phonebook app should let him know before he forgets all that he had to learn to get this change out.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

django-browserid 0.11 is out

I (Osmose) chimed in to share the news that a new version of django-browserid is out. This version brings local assertion verification, support for offline development, support for Django 1.7, and other small fixes. The release is backwards-compatible with 0.10.1, and users on older versions can use the upgrade guide to get up-to-date. You can check out the release notes for more information.

mozUITour Helper Library for Triggering In-Chrome Tours

agibson shared a wrapper around the mozUITour API, which was used in the Australis marketing pages on mozilla.org to trigger highlights for new features within the Firefox user interface from JavaScript running in the web page. More sites are being added to the whitelist, and more features are being added to the API to open up new opportunities for in-chrome tours.

Parsimonious 0.6 (and 0.6.1) is Out!

ErikRose let us know that a new version of Parsimonious is out. Parsimonious is a parsing library written in pure Python, based on formal Parsing Expression Grammars (PEGs). You write a specification for the language you want to parse in a notation similar to EBNF, and Parsimonious does the rest.

The latest version includes support for custom rules, which let you hook in custom Python code for handling cases that are awkward or impossible to describe using PEGs. It also includes a @rule decorator and some convenience methods on the NodeVisitor class that simplify the common case of single-visitor grammars.

contribute.json Wants More Prettyness

peterbe stopped by to show off the design changes on the contribute.json website. There’s more work to be done; if you’re interested in helping out with contribute.json, let him know!

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name: Cory Price
IRC Nick: ckprice
Role: Web Production Engineer
Project: Various

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Leeroy was Broken for a Bit

lonnen wanted to let people know that Leeroy, a service that triggers Jenkins test runs for projects on Github pull requests, was broken for a bit due to accidental deletion of the VM that was running the app. But it’s fixed now! Probably.

Webdev Module Updates

lonnen also shared some updates that have happened to the Mozilla Websites modules in the Mozilla Module System:

Static Caching and the State of Persona

peterbe raised a question about the cache timeouts on static assets loaded from Persona by implementing sites. In response, I gave a quick overview of the current state of Persona:

  • Along with callahad, djc has been named as co-maintainer, and the two are currently focusing on simplifying the codebase in order to make contribution easier.
  • A commitment to run the servers for Persona for a minimum period of time is currently working its way through approval, in order to help ease fears that the Persona service will just disappear.
  • Mozilla still has a paid operations employee who manages the Persona service and makes sure it is up and available. Persona is still accepting pull requests and will review, merge, and deploy them when they come in. Don’t be shy, contribute!

The answer to peterbe’s original question was “make a pull request and they’ll merge and push!”.

Graphviz graphs in Sphinx

ErikRose shared sphinx.ext.graphviz, which allows you to write Graphviz code in your documentation and have visual graphs be generated from the code. DXR uses it to render flowcharts illustrating the structure of a DXR plugin.

Turns out that building giant statues out of TNT was a bad idea. On the bright side, we won’t be running out of pork or beef any time soon.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

 


Carsten Book: Mozilla Plugincheck – Its a Community Thing

Mozilla planet - Fri, 10/10/2014 - 12:57

Hi,

A lot of people are using Mozilla’s Plugincheck page to make sure all their plugins, like Adobe Flash, are up to date.

Schalk Neethling has created a great blog post about Plugincheck here.

So if you are interested in contributing to Plugincheck, check out Schalk’s blog post!

Thanks!

 

– Tomcat


Carsten Book: The past, Current and future

Mozilla planet - Fri, 10/10/2014 - 12:34

- The past -

I’ve been a member of the A-Team (Automation and Tools Team) for about a year now, and I’m also a full-time sheriff.

It was the end of a lot of changes, personal and job-wise. I moved from the Alps to the Munich area, to the city of Freising (and NO, I was not at the Oktoberfest ;)), and started working as a full-time sheriff after my QA/Partner Build and Releng duties.

It’s awesome to be part of the sheriff team, and it was also awesome to get so much help from the team, like from Ed, Wes and Ryan, to get started.

At one point I took over the sheriff duty for my European timezone, and it was quite challenging to be responsible for all the code trees, with backouts etc. and, later, also checkin-neededs :) What I really like as a sheriff is the work across the different divisions at Mozilla, and it’s exciting to work as a Mozilla sheriff too :)

One of my main contributions, besides getting started, was helping to create some how-to articles at https://wiki.mozilla.org/Sheriffing/How:To . I hope these will also help others get involved in sheriffing.

- Current -

We just switched over to using Treeherder as our new tool for sheriffing.

It’s quite new and it still feels that way (as with everything new, like a new car, there are questions like “how do I do the things I used to do in my old car?”), and there are still some bugs and things we can improve, but we will get there. It’s also an ideal time to get involved in sheriffing, for example by hammering on Treeherder.

and that leads into ….:)

- The Future is You! -

Like every open source project, Mozilla heavily depends on community members like you, and besides all the other areas at Mozilla there is even the opportunity to work as a community sheriff. So let us know if you want to be involved as a community sheriff. You are always welcome. You can find us in the #ateam channel on irc.mozilla.org.

For myself, besides my other tasks I’m planning to work more on community building, like blogging about sheriffing and taking more part in the open source meetings in Munich.

– Tomcat


Mozilla Reps Community: Council Elections – Campaign and candidates

Mozilla planet - Fri, 10/10/2014 - 11:59

We’re excited to announce that we have 7 candidates for the fourth cycle
of the Council elections, scheduled for October 18th. The Council has
carefully reviewed the candidates and agrees that they are all
extremely strong candidates to represent the Mozilla Reps program and
the interests of Reps.

The candidates are:

Now, it is up to Reps to elect the candidates to fill the four available seats for a 12-month term.

As detailed in the wiki, we are now entering the “campaign” phase of this election cycle. This means that for the next 7 days, candidates will all have an opportunity to communicate their agenda, plans, achievements as a Rep/Mentor, and personal strengths to the Mozilla Reps voting body. They are encouraged to use their personal Mozilla Reps profile page, their personal website/blog, a Mozilla wiki page, or any other channel that they see fit to post information regarding their candidacy.

To guide them in this effort, the Council has prepared 6 questions that each candidate is asked to answer. We had originally wanted to have candidates go through mozmoderator, but due to lack of time we will do this next election cycle. The questions are the following:

  • What are the top three issues that you would want the Council to address were you to join the Council?
  • What is in your view the Mozilla Reps program’s biggest strength and weakness?
  • Identify something that is currently not working well in the Mozilla Reps program and which you think could be easy to fix?
  • What past achievement as a Rep or Mentor are you most proud of?
  • What are the specific qualities and skills that you have that you think will help you be an effective Council member?
  • As a Mentor, what do you do to try to encourage your inactive Mentees to be active again?

In the spirit of innovation and to help bring a human face to the election process, the Council would like to add a new element to the campaign: video. This video is optional, but we strongly encourage candidates to create one.

That’s it for now. As always, if you have any questions, please don’t hesitate to ask the Council at

reps-council at mozilla dot com

We’ll be giving regular election updates throughout these next two weeks, so stay tuned!

And remember, campaigning ends and voting starts on October 18th!

Comments on discourse.


Daniel Stenberg: internal timers and timeouts of libcurl

Mozilla planet - vr, 10/10/2014 - 08:29

Bear with me. It is time to take a deep dive into the libcurl internals and see how it handles timeouts and timers. This is meant as useful information to libcurl users, but even more as insights for people who’d like to fiddle with libcurl internals and work on its source code and architecture.

socket activity or timeout

Everything internally in libcurl is using the multi, asynchronous, interface. We avoid blocking calls as far as we can. This means that libcurl always either waits for activity on a socket/file descriptor or for the time to come to do something. If there’s no socket activity and no timeout, there’s nothing to do and it just returns back out.

It is important to remember here that the API for libcurl doesn’t force the user to call it again within or at the specific time and it also allows users to call it again “too soon” if they like. Some users will even busy-loop like crazy and keep hammering the API like a machine-gun and we must deal with that. So, the timeouts are mostly to be considered advisory.
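For application authors, here is a minimal sketch of what that looks like from the outside, using the public multi API (these are real libcurl calls; the URL and the 1000 ms wait cap are arbitrary placeholders, and error checking is omitted for brevity):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;
      CURLM *multi;
      int still_running = 1;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      easy = curl_easy_init();
      multi = curl_multi_init();

      /* placeholder URL, just to have a transfer to drive */
      curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
      curl_multi_add_handle(multi, easy);

      while(still_running) {
        int numfds = 0;

        /* wait for socket activity, but never longer than 1000 ms */
        curl_multi_wait(multi, NULL, 0, 1000, &numfds);

        /* libcurl's timeouts are advisory, so calling this "too soon"
           (or a bit late) is fine */
        curl_multi_perform(multi, &still_running);
      }

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }

The loop may well call curl_multi_perform() earlier than any timeout demands; as described above, libcurl copes with that, which is exactly why its internal timeouts can stay advisory.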

many timeouts

A single transfer can have multiple timeouts: for example, one maximum time for the entire transfer, one for the connection phase, and perhaps even more timers that handle things like speed caps (which keep libcurl from transferring data faster than a set limit) or that detect transfer speeds below a certain threshold within a given time period.

A single transfer is done with a single easy handle, which keeps all of its timeouts in a sorted list. This allows libcurl to return a single time left until the nearest timeout expires, without having to bother with the remainder of the timeouts (yet).

Curl_expire()

… is the internal function to set a timeout to expire a certain number of milliseconds into the future. It adds a timeout entry to the list of timeouts. Expiring a timeout just means that it’ll signal the application to call libcurl again. Internally we don’t have any identifiers for the timeouts; they’re just a time in the future we ask to be called again at. If the code needs that specific time to really have passed before doing something, the code needs to make sure the time has elapsed.
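As an illustration of the idea only (this is not libcurl’s actual internal code or data layout; the names and types below are made up for the sketch), a per-handle sorted timeout list with an expire-style helper could look roughly like this:

    /* Illustration only -- not libcurl's real internal types or code.
       Each handle keeps its timeouts sorted by absolute expiry time,
       soonest first. */
    #include <stdlib.h>

    struct timeout_entry {
      long long expires_ms;           /* absolute expiry time */
      struct timeout_entry *next;
    };

    struct handle {
      struct timeout_entry *timeouts; /* sorted list, soonest first */
    };

    /* "call me again no later than 'milli' ms from now" */
    static void expire(struct handle *h, long long now_ms, long milli)
    {
      struct timeout_entry *e = malloc(sizeof(*e));
      struct timeout_entry **pp = &h->timeouts;

      if(!e)
        return;
      e->expires_ms = now_ms + milli;

      /* insert in sorted order so the head is always the nearest timeout */
      while(*pp && (*pp)->expires_ms <= e->expires_ms)
        pp = &(*pp)->next;
      e->next = *pp;
      *pp = e;
    }

    /* the single "time left" value is just the head entry minus now,
       or -1 when there is no timeout at all */
    static long long time_left_ms(const struct handle *h, long long now_ms)
    {
      return h->timeouts ? h->timeouts->expires_ms - now_ms : -1;
    }

Keeping the list sorted means that “time until the nearest timeout” is just a peek at the head entry.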

Curl_expire_latest()

A newcomer to the timeout team. I figured out we need this function for the case where we are in a state where we need to be called no later than a certain specific future time. It will not add a new timeout entry to the timeout list if there is already a timeout that expires earlier than the specified time limit.

This function is useful for example when there’s a state in libcurl that varies over time but has no specific time limit to check for, such as transfer speed limits. If Curl_expire() were used in this situation instead of Curl_expire_latest(), it would mean adding a new timeout entry every time, and for the busy-loop API usage cases it could mean adding an excessive number of timeout entries. (And there was a scary bug reported that got “tens of thousands of entries”, which motivated this function to get added.)
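Continuing the same made-up sketch (again, not libcurl’s real code), an expire-latest-style helper only has to look at the head of the sorted list:

    /* Still the same illustrative sketch from above. "Make sure we get
       called no later than now + milli": if an existing entry already
       fires earlier than that, it is good enough and nothing is added,
       which is what keeps busy-loop callers from piling up entries. */
    static void expire_latest(struct handle *h, long long now_ms, long milli)
    {
      long long limit_ms = now_ms + milli;

      if(h->timeouts && h->timeouts->expires_ms <= limit_ms)
        return;                  /* an earlier timeout already covers us */

      expire(h, now_ms, milli);  /* otherwise add a normal entry */
    }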

timeout removals

We don’t remove timeouts from the list until they expire. For example, if we have a condition that is timing-dependent, we set a timeout with Curl_expire() and we know we should be called again at the end of that time.

If we wouldn’t add the timeout and there’s no socket activity on the socket then we may not be called again – ever.

When an internal state transitions into something else and we therefore don’t need a previously set timeout anymore, we have no handle or identifier for the timeout, so it cannot be removed. It will instead lead to us getting called again when the timeout triggers, even though we didn’t really need it any longer. As the API allows this anyway, the logic already handles it, and getting called an extra time is usually very cheap and is not considered a problem worth addressing.

Timeouts are removed automatically from the list of timers when they expire. Timeouts whose time has already passed are removed from the list, and the timers that follow then move to the front of the queue and are used to calculate how long the single reported timeout should be next.

The only internal API we have for removing timeouts removes all of them; it is used when cleaning up a handle.

many easy handles

I’ve mentioned above how each easy handle treats its timeouts. With the multi interface, we can have any number of easy handles added to a single multi handle. This means one list of timeouts for each easy handle.

To handle many thousands of easy handles added to the same multi handle, all with their own timeouts (as each easy handle only shows its closest timeout), it builds a splay tree of easy handles sorted on the timeout time. It is a splay tree rather than a sorted list to allow really fast insertions and removals.

As soon as a timeout expires from one of the easy handles and it moves to the next timeout in its list, it means removing one node (easy handle) from the splay tree and inserting it again with the new timeout timer.
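As a rough sketch of that flow (libcurl really does use a splay tree for speed; the illustration below substitutes a plain sorted list of made-up node types just to show the remove, update-key, re-insert steps):

    /* Illustration only. libcurl keys a splay tree on each easy handle's
       nearest timeout; this sketch uses a plain sorted list just to show
       the re-keying flow described above. */
    struct multi_node {
      void *easy;                /* stand-in for the easy handle */
      long long key_ms;          /* that handle's nearest timeout */
      struct multi_node *next;
    };

    static struct multi_node *rekey(struct multi_node *head,
                                    struct multi_node *node,
                                    long long new_key)
    {
      struct multi_node **pp = &head;

      /* unlink the node from wherever it currently sits */
      while(*pp && *pp != node)
        pp = &(*pp)->next;
      if(*pp)
        *pp = node->next;

      /* update its key and insert it again in sorted position */
      node->key_ms = new_key;
      pp = &head;
      while(*pp && (*pp)->key_ms <= new_key)
        pp = &(*pp)->next;
      node->next = *pp;
      *pp = node;

      return head;
    }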


Raniere Silva: Mathml October Meeting

Mozilla planet - vr, 10/10/2014 - 05:00

This is a report about the Mozilla MathML October Meeting (see the announcement). The topics of the meeting can be found in this PAD (local copy of the PAD). The meeting took place entirely on appear.in, and because of that we don’t have a log.

The next meeting will be on November 14th (note that November 14th is a Friday). Some countries will move to winter time and others to summer time, so we will change the time and announce it later on mozilla.dev.tech.mathml. Please add topics to the PAD.

Read more...


Erik Vold: What is the Jetpack/Add-on SDK?

Mozilla planet - vr, 10/10/2014 - 02:00

There are many opinions on this, and I think I’ve heard them all, but no one has worked on this project for as long as I have, so I’d like to write what I think the Jetpack/Add-on SDK is.

Originally the Jetpack prototype was developed as a means to make add-on development easier for web developers. I say this because it was both the impression I got and one of the bullet points Aza Raskin listed in an email he sent asking me to be a project ambassador. This was very appealing to me at the time because I had no idea how to write add-ons back then. The prototype, however, provided chrome access from the beginning, which is basically the ability to do almost anything that you want with the browser and the system it runs on. So to my mind the Jetpack prototype was an on-ramp to add-on and Firefox development, also because it did not have the same power that add-ons had; it had only a subset of their abilities.

When Jetpack graduated from being a prototype it was renamed to the Add-on SDK, and it included the seeds of something that was lacking in add-on development: sharable modules. These modules could be written using the new tech at the time, CommonJS, which is now widely used and commonplace. The reason for this, as I understood it, was both to make add-on development easier and to make reviewing add-ons easier (because each version of a module would only need to be reviewed once). When I started writing old-school add-ons I quickly saw the value of the former, and later, when I became an AMO reviewer, the deep value of the latter also quickly became apparent.

In order to make module development decentralized it was important to provide chrome access to those modules that need it; otherwise all of the SDK features would have to be developed and approved in-house by staffers, as is done with Google Chrome, which would not only hamper creativity but also defeat the purpose of having a module system. This is our advantage over Google Chrome, not our weakness.

To summarize I feel that the Jetpack/Add-on SDK is this:

  1. An on-ramp to extension and Firefox development for web devs, with a shallow learning curve.
  2. A means for sharing code/modules, which reduces review time.
  3. A quicker way to develop add-ons than previous methods, because there is less to learn (see a chrome.manifest or bootstrap.js file if you have doubts).
  4. A means for testing both add-ons and the browser itself (possibly the easiest way to write tests for add-ons and Firefox when used in combination with point 2).
  5. A more reliable way to write extensions than previous methods: because the platform code changes so much, the module system (point 2) can provide an abstraction layer such that developers can blissfully ignore platform changes, which reinforces point 3.

Ben Kero: September ’14 Mercurial Code Sprint

Mozilla planet - do, 09/10/2014 - 23:15

A week ago I was fortunate enough to attend the latest code sprint of the Mercurial project. This was my second sprint with this project, and I took away quite a bit from the meeting. Around 20 people attended the sprint, which took the form of a large group, with smaller groups splitting off intermittently to discuss particular topics. I had seen a few of the attendees before at a previous sprint I attended.

Joining me at the sprint were two of my colleagues Gregory Szorc (gps) and Mike Hommey (glandium). They took part in some of the serious discussions about core bugfixes and features that will help Mozilla scale its use of Mercurial. Impressively, glandium had only been working on the project for mere weeks, but was able to make serious contributions to the bundle2 format (an upcoming feature of Mercurial). Specifically, we talked to Mercurial developers about some of the difficulties and bugs we’ve encountered with Mozilla’s “try” repository due to the “tens of thousands of heads” and the events that cause a serving request to spin forever.

By trade I’m a sysadmin/DevOps person, but I also do have a coder hat that I don from time to time. Still though, the sprint was full of serious coders who seemingly worked on Mercurial full-time. There were attendees who had big named employers, some of whom would probably prefer that I didn’t reveal their identities here.

Unfortunately, due to my lack of familiarity with a lot of the deep-down internals I was unable to contribute to some of the discussions. It was primarily a learning experience for me, both about the process by which direction-driving decisions are made for the project (mpm’s BDFL status) and about all of the considerations that go into choosing a particular method to implement an idea.

That’s not to say I was entirely useless. My knowledge of systems and package management meant I was able to collaborate with another developer (kiilerix) to improve the Docker package building support, including preliminary work for building (un)official Debian packages for the first time.

I also learned about some infrequently used features of and tips about Mercurial. For example, folks who come from a background of using git often complain about Mercurial’s lack of interactive rebase functionality. The “histedit” extension provides this feature. Much like many other features of Mercurial, this is technically “in core”, but not enabled by default. Adding a line such as “histedit =” in the “[extensions]” section of your “hgrc” file enables it. It allows all the expected picking, folding, dropping, editing, or modifying of commit messages.
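For concreteness, that change amounts to this small hgrc snippet; the revision passed to “hg histedit” afterwards is a placeholder for whichever ancestor you want to start rewriting from:

    [extensions]
    histedit =

Running “hg histedit <rev>” then opens the interactive list of commits (roughly, from that revision up to your working copy parent) to pick, fold, drop, or edit.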

Changeset evolution is another feature that’s been coming for a long time. It enables developers to safely modify history and propagate those changes to any down/upstream clones. It’s still disabled by default, but is available as an extension. Gregory Szorc, a colleague of mine, has written about it before. If you’re curious you can read more about it here.

One of the features I’m most looking forward to is sparse checkouts. Imagine, à la Perforce, being able to check out only a subtree or subtrees of a repository using ‘--include subdir1/’ and ‘--exclude subdir2/’ arguments during cloning/updating. This is what sparse checkouts will allow. Additionally, functionality is being planned to enable saved ‘profiles’ of subdirs for different uses. For instance, specifying the ‘--enable-profile mobile’ argument will allow a saved list of included and excluded items. This seems like a really powerful way of building lightweight build profiles for each different type of build we do. Unfortunately, to be properly implemented it is waiting on some other code to be developed, such as sharded manifests.

One last thing I’d like to tell you about is an upcoming free software project for Mercurial hosting named Kallithea. It was born from the liberated code of the RhodeCode project. It is still in its infancy (version 0.1 as of the writing of this post), but has some attractive features for viewing repositories, such as visualizations of changelog graphs, diffs, code reviews, a built-in editor, LDAP support, and even a JSON-RPC API for issue tracker integration.

All in all, I feel attending was a valuable experience that benefited both the Mercurial project and myself. I was able to lend some of my knowledge about building packages and my familiarity with operating large-scale hgweb serving, and was able to learn a lot about the internals of Mercurial and understand that even the deep core code of the project isn’t very scary.

I’m very thankful that I was able to attend, and I look forward to attending the next sprint in the coming year.

