Mozilla Nederland
The Dutch Mozilla community

Eric Shepherd: The Sheppy Report: September 19, 2014

Mozilla planet - Mon, 22/09/2014 - 18:16

I’ve been working on getting a first usable version of my new server-side sample server project (which remains unnamed as yet — let me know if you have ideas) up and running. The goals of this project are to let MDN host samples that require a server-side component (for example, demonstrations of XMLHttpRequest or WebSockets), and to provide a place for samples that need to do things we don’t allow in an <iframe> on MDN itself. This work is going really well and I think I’ll have something to show off in the next few days.

What I did this week
  • Caught up on bugmail and other messages that came in while I was out after my hospital stay.
  • Played with JSMESS a bit to see extreme uses of some of our new technologies in action.
  • Did some copy-editing work.
  • Wrote up a document for my own reference about the manifest format and architecture for the sample server.
  • Got most of the code for processing the manifests for the sample server modules and running their startup scripts written. It doesn’t quite work yet, but I’m close.
  • Filed a bug about implementing a drawing tool within MDN for creating diagrams and the like in-site. Pointed to as an example of a possible way to do it. Also created a developer project page for this proposal.
  • Exchanged email with Piotr about the editor revamp. It’s making good progress.
Wrap up

I’m really, really excited about the sample server work. With this up and running (hopefully soon), we’ll be able to create examples for technologies we were never able to properly demonstrate in the past. It’s been a long time coming. It’s also been a fun, fun project!


Categorieën: Mozilla-nl planet

Curtis Koenig: The Curtisk report: 2014-09-21

Mozilla planet - Mon, 22/09/2014 - 17:40

People want to know what I do, so I’m going to give this a shot: each Monday I will make a post about the stuff I did in the previous week.

Idea shamelessly stolen from Eric Shepherd.

What I did this week
  • MWoS: SeaSponge Project Proposal (Review)
  • Crusty Bugs data digging
  • security review (move along)
  • Firefox OS Sec discussion
  • sec triage process massaging
  • Firefox OS Security coordination
  • Vendor site review
    • testing plan for vendor site testing
    • testing coordination with team and vendor
  • CBT Training survey
  • security scan of [redacted]
Meetings attended this week
  • Weekly Project Meeting
  • Web Bounty Triage
  • SecAutomation
  • Cloud Services Security Team
  • MWoS team Project meeting
  • Vendor testing call
  • Web Bug Triage
  • Security Open Mic
  • Grow Mozilla / Community Building
  • Computer Science Teachers Association (guest speaker)


Christian Heilmann: Notes on my closing keynote of From the Front 2014

Mozilla planet - Mon, 22/09/2014 - 16:32

These are some notes about my closing keynote at From the Front in Bologna, Italy last week. The overall theme of the event was “Temple of the DOM”, thus I kept it Indiana Jones themed (one could say shoehorned, but I wasn’t alone in this).

from the front 2014 speakers

The slides are available on Slideshare.

Rubbing the Sankara Stones the wrong way – From the Front 2014 from Christian Heilmann

In Indiana Jones and the Temple of Doom the Sankara Stones are very powerful stones that can bring prosperity or destroy people, depending on how they are used. When you bring the stones together they light up and in general all is very mystic and amazing. It gives the movie an adventure angle that cannot be explained and allows us to suspend our disbelief and see Indy as capable of more than a normal human being.

A tangent: Blowing people’s mind is pretty easy. All you need to do is take a known concept and then make an assumption from it. For example, when you see Luigi from Super Mario Brothers and immediately recognise him, there is quite a large chance you have an older sibling. You were always the one who had to wait till your sibling failed in the game so it was your turn to play with “green Mario”. Also, if Luigi and Mario are the Mario brothers then Mario’s name is Mario Mario. Ponder this.

The holy trinity of web development

On the web we also have magical stones that we can put together and create good or evil. These are the standardised technologies of the web: HTML, CSS and JavaScript. These are available in every browser, understood by them without any extra compilation step, very well documented and easy to learn (but harder to master).

Back in 1999, Jeffrey Zeldman taught us all not to write tag-soup any longer and use the technologies of the web to build intelligent solutions that use them to their strengths. These are commonly referred to as the separation of concerns:

  • Structure (HTML and added extra-value semantics like Microformats)
  • Presentation (CSS, Images)
  • Behaviour (JavaScript)

Back then this was a very necessary stake in the ground, explaining that web development is not a random WYSIWYG result but something with a lot of planning and organisation behind it. The separation of concerns played the different technologies to their strengths and also meant that one or two could fail without everything going wrong.

This also paved the way for the progressive enhancement idea that all you really need is a proper HTML document and the rest gets added when and if needed or – in the case of JavaScript – once it has been tested to be available and applicable.

The problems started when people with different agendas skewed the concept of the separation of concerns:

  • HTML and semantic markup enthusiasts advocated far too loudly for very clean markup, validation and adding things like Microformats. For engineers just trying to get something to show up in a browser this has always been confusing, as the tangible benefits of it are, well, not tangible. Browsers are very forgiving and will fix HTML for you, and when no browser interface surfaces the data in Microformats, why add them? Of course, I disagree and have stated very often that semantic, clean markup is the good grammar of the web – you don’t need it, but it makes you much easier to understand and shows that you know what you are doing. But that doesn’t really matter. Fact is that we continuously try to make people understand something we hold dear without giving them tangible benefits.
  • JavaScript enthusiasts, on the other hand, create far too much with JavaScript. This is a matter of control. You know JavaScript, you are happy seeing parts of an app or a page as objects and you want to instantiate them, inherit from them and re-use them. You don’t want to write much code but feel that generating it is the most clever way of using technology. Many JS enthusiasts also keep citing that browser differences are a real issue and that in JS they have the chance to test and fix problems browsers have. The fallacy here, of course, is that by doing that they also made the current and future browser issues their own.
  • CSS enthusiasts started to shoot against JavaScript as a tool when CSS became more powerful. Are animations and transitions behaviour or presentation? Should they be done in CSS, or in JS where there is much more granular control? What about generated content – where does that fall? We can create whole drawings from one DIV element, but should we?

All of this, together with lots and lots of libraries promising to solve all kinds of cross-browser issues, led to the massively obese web we see today. An average web site size of almost 2MB would have blown our minds in the past, but these days it seems the right thing to do if you want to be professional and use the tools professionals use. Starting with a vanilla HTML file feels like a hack – using a build script to start from a boilerplate seems to be the intelligent, full-stack development thing to do.

Best practice reminders, repeated

This is nothing new, of course.

Back in 2004, I wrote a self training course on Unobtrusive JavaScript trying to make people understand the need for separation of behaviour and look and feel. In 2005 I questioned the validity of three layers of separation as I worked on CMS and framework driven web products where I did not have full control over the markup but had to deal with the result of .NET 1.0 renderers.

Web technologies have always been a struggle for people to grasp and understand. JavaScript is very powerful whilst being a very loosely architected language compared to C or Java. The ability to use inline styling and scripting has always tempted people to write everything in one document rather than separating it out into several, which would allow for caching and re-use. That way we kept creating bloated, hard-to-maintain documents, or over-used scripts and style sheets we don’t control or understand.

It seems the epic struggle about what technology to use for what is far from over and we still argue until we are blue in the face if an animation should be done in CSS or in JavaScript and whether static HTML or deferred loading and creation of markup using template engines is the fastest way to go.

So what can we do to stop this endless debate?

The web has moved on a lot since Zeldman laid down the law and I think it is time to move on with it. We have to understand that not everything is readily enhanceable. We also have standard definitions that just seem odd and could very much have been better with our input. But we, the people who know and love the web, were too busy fighting smaller fights and complaining about things we should have taken for granted a while ago:

  • There will always be marketing materials or commercial training programs that get everything wrong we stand for. Mentioning them or trying to debunk them will just get more people to look at them. Yes, I do consider W3Schools part of this. We make these obsolete and unnecessary by creating better resources, not by telling people about their dangers.
  • Browsers will always get things wrong and no, there will not be an amazing future where all browsers are ever-green and users upgrade all the time.
  • Materials by standards bodies like this “Standards for Web Applications on Mobile: current state and roadmap” will always be verbose and seem academic in their wording. That’s what a standard is. There cannot be wiggle room, which is why it sounds far more complex than we think it is.
  • There will always be people who use a certain technology for things we consider inappropriate. A great example I saw lately was a Mandelbrot fractal renderer written in Sass that creates a span for each pixel and needs 5 minutes to compile.

A fault tolerant web? Think again

One of the great things about the web of old was that it was fault tolerant: if something breaks, you can provide a fallback or the browser ignores it. There were no broken interfaces.

This changed when multimedia became a larger part of HTML5. Of course, you can use a fallback image for a CANVAS element (and you should as these get shown as thumbnails on Facebook for example) but it isn’t the same thing as you don’t add a CANVAS for the fun of it but as an interactive part of the page.
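A canvas fallback of the kind described here might look like this (the file names are illustrative):

```html
<!-- Anything inside <canvas> renders only when the element is not
     supported; a static image also gives scrapers and thumbnailers
     (Facebook et al.) something to show. -->
<canvas id="demo" width="400" height="300">
  <img src="demo-preview.jpg" alt="Static preview of the interactive canvas demo">
</canvas>
```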

The plain fallback case does not quite cut it any longer.

Take a simple example of an image in the page:

<img src="meh.jpg" alt="cute kitten photo">

This is cool. If the image cannot be loaded or rendered, the browser shows the alternative text provided in the alt attribute (no, it is not a tag). In most browsers these days, this is just a text display. You even have full control in JavaScript: you can detect that the image wasn’t loaded and provide a different fallback:

var img = document.querySelector('img');
img.addEventListener('error', function(ev) {
  if (this.naturalWidth === 0 && this.naturalHeight === 0) {
    console.log('Image ' + this.src + ' not loaded');
  }
}, false);

With video, it is slightly different. Take the following example:

<video controls>
  <source src="dynamicsearch.mp4" type="video/mp4">
  <a href="dynamicsearch.mp4">
    <img src="dynamicsearch.jpg" alt="Dynamic app search in Firefox OS">
  </a>
  <p>Click image to play a video demo of dynamic app search</p>
</video>

If the browser is not capable of supporting HTML5 video, we get a fallback image (again, great for indexing by Facebook and others). However, such browsers are not that likely to be in use any longer. The more interesting question is what happens when the browser cannot play the video because the codec is not supported. What end users get now is a grey box with the grace of a Java applet that failed to load.

How do you find out that video playback failed? You’d expect an error handler on the video to do it, right? Well, not according to the specs, which ask for an error handler on the last source element in the video element. That means that if you want the alternative content in the video element to show up when the video cannot be played, you need the following code:

var v = document.querySelector('video'),
    sources = v.querySelectorAll('source'),
    lastsource = sources[sources.length - 1];
lastsource.addEventListener('error', function(ev) {
  var d = document.createElement('div');
  d.innerHTML = v.innerHTML;
  v.parentNode.replaceChild(d, v);
}, false);

Codec detection is incredibly flaky and hard, as it happens at the OS and hardware level and is not fully in the control of the browser. That’s probably also the reason why the canPlayType() method of a video element (which is meant to tell you if a video format is supported) returns “maybe”, “probably” or an empty string. A coy method, that one.
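A sketch of how those three coy answers might be put to use to pick the best of several sources (the ranking helper and the MIME strings are my own illustration, not part of the spec):

```javascript
// Rank canPlayType()'s answers: "probably" beats "maybe" beats "".
function playability(answer) {
  var ranks = { probably: 2, maybe: 1, '': 0 };
  return ranks[answer] || 0;
}

// Given a video element and candidate sources, return the one the
// browser claims to support best, or null if none look playable.
function pickSource(video, candidates) {
  var best = null;
  var bestScore = 0;
  candidates.forEach(function (c) {
    var score = playability(video.canPlayType(c.type));
    if (score > bestScore) {
      best = c;
      bestScore = score;
    }
  });
  return best;
}
```

Even a “probably” is no guarantee the file will actually play, which is why the error handler on the last source element above is still needed.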

It is the web, deal with it!

We could get very annoyed with this, or we can just deal with it. In my 18 years of web development I learned to take things like that in stride and I am actually happy about the quirky issues of the web. It makes it a constantly changing and interesting environment to be in.

I really think Mattias Petter Johansson of Spotify nailed it when he answered a question on Quora about why JavaScript is the only language in the browser:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

This is also why we should stop trying to make people love the web no matter what and force our ideas down their throats.

Longevity? Meh!

One of the main things we keep harping on about is the lovely longevity of the web. Whether it is Microsoft’s first web page still working in browsers now after 20 years or the web being the only platform with backwards compatibility and forward enhancement – we love to point out that we are in for the long game.

Sadly, this argument means nothing to developers who currently work in the mobile app space where being first out of the door is the most important part and people know that two months down the line nobody is going to be excited about your game any more. This is not sustainable, and reminds me of other fast-moving technologies that came and went. So let’s not waste our time trying to convince people who already subscribed to an idea of creating consumable software with a very short shelf-life.

I put it this way:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Let’s analyse our own behaviour

A lot of the bloat and repetitiveness of the web seems to me to stem from three mistakes we make:

  • we optimise prematurely
  • we tend to strive for generic solutions to very specific problems
  • we build stop-gap solutions to use upcoming technology before it is ready, and become dependent on them

A glimpse at the state of the componentised web seems to validate this. Web Components are amazingly necessary for the future of apps on the web platform, but they aren’t ready yet. Many of these frameworks give me great solutions right now and the effort I have to put in to learn them will make it hard for me to ever switch away from them. We’ve been there before: just try to find a junior JavaScript developer that knows the DOM instead of using jQuery for everything.

The cool new thing now is static HTML pages: they run fast, don’t take many resources and are very portable. Except that we already have 298 different generators to choose from if we want to create them. Or we could just write static HTML if all we have is a few pages. But where’s the fun in that?

Fredrik Noren had a great article about this lately called On Generalisation and put it quite succinctly:

Generalization is, in fact, prediction. We look at some things we have and we predict that any current and following entities in that group will look and behave sufficiently similar in the future to what we have now. We predict that we can write something that will cater to all, or most, of its future needs. And that makes perfect sense, if you just disregard one simple fact of life: humans are awful at predicting the future!

So let’s stop trying to build for an assumed problematic future that probably will never come and instead be thankful for what we have right now.

Such amazing times we live in

If you play with the web these days and you take your “everything is broken, I must fix it!” hat off, it is amazing how much fun you can have. The other day I wrote a quick app that allows you to drag and drop images into your browser and get a zip of thumbnails back. All without a server in between, all working offline, and written on a plane without a web connection, using only the developer tools built into the browser these days.

We have an amazing amount of new events, sensors and data to play with. For example, reading out the ambient light around a laptop is a simple event handler:

window.addEventListener('devicelight', function(e) {
  var lv = e.value;
  // lv is the light level in lux
});

You can use this to switch from a dark on light to a light on dark display. Or you could detect a 0 and know that the end user is currently covering their camera with their hands and provide a very simple hand gesture interface that way. This sensor is always on and you don’t need to have the camera enabled. How cool is that?
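A minimal sketch of that theme-switching idea (the lux thresholds here are my own guesses, not values from any spec):

```javascript
// Map an ambient light reading in lux to a display theme.
function themeForLux(lux) {
  if (lux === 0) return 'covered'; // sensor covered: treat as a gesture
  if (lux < 50) return 'dark';     // dim room: light-on-dark display
  return 'light';                  // bright room: dark-on-light display
}

// Wire it up in a browser; guarded so the snippet also loads elsewhere.
if (typeof window !== 'undefined' && window.addEventListener) {
  window.addEventListener('devicelight', function (e) {
    document.body.dataset.theme = themeForLux(e.value);
  });
}
```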

Are there other sensors or features in devices you’d like to have? Please ask on the feedback form about Open Web Apps and you can be part of the next iteration of web interaction.

Developer tools across browsers moved on beyond the view-source functionality and all of them now offer timelines, step-by-step debugging, network information and even device or screen emulation and visual editors for colours and animations. Most also offer some sort of in-built editor and remote debugging of devices. If you miss something there, here is another channel to tell the makers about that.

It is a big, fragmented world out there

The next big boom of the web is not in the Western world, on laptops and desktops connected with massively fast lines. We live in a mobile world and the predictability of what device our end users will have is gone. Surveys of Android usage showed 18,796 different devices already in use, and both Mozilla’s and Google’s reach into emerging markets with under-$100 devices means the light-weight web is going to be a massive thing for all of us. This is why we need to re-think our ways.

First of all, offline first should be our mantra. There is no steady connection we can rely on. Alex Feyerke has a great talk about this.

Secondly, we need to ensure that our solutions run smoothly on very low end devices. For this, there are a few tricks and developer tools give us great insight into where we waste memory and framerate. Angelina Fabbro has a great talk about that.

In general, the web is and stays an amazingly useful resource, now more than ever. Tools like GitHub, JSFiddle, JSBin, CodePen and many others allow us to build things together and be in constant communication. Together.js (built into JSFiddle as the ‘collaboration’ button) allows us to code together, with text or voice chat, while seeing each other’s cursors. This is an incredible opportunity to make coding more human and help one another whilst we develop, instead of telling each other how we should develop.

Let’s use the web to work on things together. Don’t wait to build the perfect solution. Share it early, take on advice and pull requests and together we can build much better products.


Soledad Penades: JSConf.EU 2014

Mozilla planet - Mon, 22/09/2014 - 16:31

I accidentally ended up attending JSConf.EU 2014; it wasn’t my initial intent, but someone from Mozilla who was going to be at the Hacker Lounge couldn’t make it for personal reasons, and he asked me to join in, so I did!

Turns out I'll be in @jsconfeu after all! Look for me at the @mozilla hacker lounge and ask all the questions!

— ǝlosɹǝdns (@supersole) September 13, 2014

I hung around the lounge for a while every day, but at times it was so full of people that I just went downstairs and talked hacks & business while having coffee, or simply attended some of the talks instead. The following are notes from the talks I attended and from random conversations on the Hallway and Hacker Lounge tracks ;)

We have @mozilla stickers at the lounge @jsconfeu … And they're going away fast!

— ǝlosɹǝdns (@supersole) September 13, 2014

Parallel JavaScript by Jaswanth Sreeram

After having heard about it during the “Future JS” session at the Extensible Web Summit, this one seemed most exciting to me! Data crunching in JS via “invisible” translation to OpenCL? Yay! Power save of 8x, speed increases of 6x thanks to the power of the GPU! Also it is already available in Firefox Nightly.

The browser can compile your PJS code to run in your FAST and very parallel GPU-up to 6x faster, 8x energy reduction

— ǝlosɹǝdns (@supersole) September 13, 2014

I got so excited that I started building a test on the same day to try and trigger the parallel code path, but the performance was 2x slower than traditional sequential code. I spoke to Jaswanth in one of the breaks and explained my issue to him; he said that the code needs to be complex enough for the “paralleliser” to kick in, and there is a certain amount of work involved in determining this, so that might be the reason why performance is so bad.

Parallel JavaScript API looking cool! DATACRUNCHING IN JS! Available in @FirefoxNightly already @jsconfeu

— ǝlosɹǝdns (@supersole) September 13, 2014

Still, existing PJS examples seem a bit too contrived to explain or demonstrate to people why it is so cool in a nutshell, so I would be interested in finding the right function that triggers parallelism yet is not overly complex—things with matrices just go over the heads of people who are not used to this kind of data manipulation, and the rest of the example just “does not compute” in their minds.

What Harry Potter can teach us about JavaScript by Sara Robinson

I went to this one because the title seemed intriguing. Basically, if people like something they will talk about it on the Internet, and also: regionalisms and variations to better target the market are important.

This rang a tiny little bell for me as it sounded a bit like the work we’re doing at Moz by working closely with communities where Firefox OS is launching–each launch is different as the features are specific to each market.

Bookwise, I am not overly convinced by adaptations that try to convert the work so that it conveys something it did not initially convey. E.g. in France there was a strong push for highlighting the teaching/learning/school concept, so the book title was translated into something like “Harry Potter and the Wizard’s SCHOOL”. I’m totally OK with good translations that have to change a character name so that it still sounds funny, but trying to change the meaning of the book is off-limits for me–I think the metaphor with JS didn’t quite work here.

We’re struggling to keep up (a brief story of browser security features) by Frederik Braun

I was expecting more scare and more in-depth tech from this one! Frederik, step it up! (Disclaimer: Frederik works at Mozilla so we’re colleagues and hence the friendly complaint).

Keeping secrets with JavaScript: an introduction to the WebCrypto API by Tim Taubert

I also went to this talk by another fellow German Mozillian (it seems the Berlin office has a thing for security and privacy… which makes total sense). It was a good introduction to how all the pieces fit together. After the talk there were some discussions in the “hallway track” about whether or not every developer should know cryptography, and to what extent. I have mixed feelings: it is easy enough to mess it up and render it useless (while still thinking you’re safe, even if you’re not), so maybe we need better libraries/tooling that make it easy not to mess it up. Or maybe we need easier crypto. I definitely think anyone handling data should know about cryptography. If you’re a purely front-end person and only doing things such as CSS… well, maybe you can go a long way without knowing your SHAs from your MD5s…

Monster Audio-Visual demos in a TCP packet by Matthieu Henry ‘p01′

I went to this one expecting a whole bunch of demoscene tricks, and I ended up coming back from the forest of dropped jaws and “OMG it’s just 4K” utterances, which was fun anyway! It’s always entertaining to see people’s minds being blown, although I expected a bit of new material from p01 too.

Usefulness of Uselessness by Brad Bouse

I saw Brad at CascadiaJS in Vancouver last year and he was entertaining, but maybe I wasn’t in the right mood. This talk, in contrast, was way more focused on a simple message: do something useless. Do more useless stuff. Useless stuff is actually useful.

So now I’m giving myself free rein to do more useless stuff. Not that I wasn’t already, but now it is CONSCIOUSLY USELESS and just because.

The meaning of words by Stephan Seidt

Speaking about usefulness… I don’t know if this is because it was the last talk and I was developing a massive headache, but I found it a bit of a gimmick. Maybe if I had watched it at another time I’d have found it more impressive, but it didn’t quite work for me. Other people clapped, so I guess it did work for them.

Javascript for Everybody by Marcy Sutton

This one was a really moving talk on how we should not break accessibility with JavaScript. It’s not just about ARIA roles in mark-up, it’s also about the things we create live, and about patching our frameworks of choice so they help less experienced developers be accessible by default, thus improving the ecosystem.

After the talk I was left with this persistent sensation that I wasn’t doing the right thing in my code, which prompted me to review it and file bugs. Uuuurgh (and yaaaay, thanks for calling us out).

This is bigger than us: Building a future for Open Source by Lena Reinhard

Lena made a very compelling talk about why you should analyse your project and get worried if it is not diverse, because it won’t survive for long, as monocultures are fragile and prone to disappear.

Communities start diverse by default, but each incident makes the community less diverse, as people abstain from participating ever again. If you care about your community, you need to ensure it keeps being diverse.

Important message from @ffffux: "diversity is the default. If it's not diverse, it's broken" @jsconfeu

— ǝlosɹǝdns (@supersole) September 14, 2014

A note: diversity is not only about “having women”, but about having people who are representative of your population. It is also not only about representing the developers who use your code but the USERS who use what those developers build–and this is way more important than we usually deem it to be, as the ratio tends to be 1 developer per 400 users.

Yet another demonstration of team Hoodie‘s high human standards :-)

(I’m also very excited that I got to meet and speak to a few of them during the conf, but sadly not the doge in their twitter account avatar–although it would have been weird to have him speak, but who knows what offline can enable?)

Server-less applications powered by Web Components by Sébastien Cevey

Sébastien had been at the Web Components session at the Extensible Web Summit, but he didn’t share as much as he did during this talk.

First he asked the audience how many people had heard about Web Components before; I’d estimate about 40% of people raised their hands. Then he asked them how many had actually used web components and I’d say the number of raised hands was just 5% of the audience.

Basically they had a series of status dashboards rendered with “horrible PHP” and other horrors of legacy code, and they didn’t want this mashup of front-end/back-end code because it was unmaintainable. So they set out to rewrite the whole thing with Web Components, and so they did.

In the process they came up with a bit of metalanguage to connect the whole thing together, and some metamagic too, and finally they managed to have the whole thing running on the front-end. With just one big caveat: you have to be logged in to The Guardian’s VPN to access the dashboards, because the auth seemed to be taking place client-side, and heh heh.

I was looking at the diagram of the web components they were using and the whole message-passing chart, and maybe it was because it was a bunch of information all of a sudden, but I had the same experience of metamagic overdose I get with these all-declarative approaches to web components: some elements send messages to other elements by detecting them in the same document. For example, the modules that needed config would try to detect a config element in the tree and use it. Maybe I didn’t understand it correctly, but this seemed akin to a global variable :-/

I still need to get my thoughts in order re: the all-declarative web components pattern, but I think one major reason it does not work for me is that the DOM is a hierarchical structure. When people tuck several elements into it without any hierarchical relation between them, yet things happen magically and the elements interact with each other with hardly any way to know, I feel something’s not quite right there.

Another interesting take-away was that they were able to include other modular components into their component. For example they used a google chart element, and Paper components. I guess there is another minor unmentioned caveat here, and it is that it worked because they used Polymer to build their components, so the 2-way data binding worked seamlessly :-P

Using the web for music production and for live performances by Jan Monschke

I had seen Jan’s earlier talk at Scotland JS, which was similar to this one but less cool. This time he convinced his brother and a friend to connect to his online collaborative audio workstation so we could see them playing live via WebRTC, and then he arranged the tracks they had recorded remotely from different points in the country. It was way more engaging and spectacular!

Then he also made a demo with an iPad and a home-made web audio app for live performances, which was really cool–you don’t need to program native code in order to build audiovisual apps! It is super awesome, come to think of it!

He still hasn’t fixed the things that I found “not OK” (as discussed in my Scotland JS post) but he is aware now of them! So maybe we might collaborate on The Definitive Collaborative Editor!

The Linguistic Relativity of Programming Languages by Jenna Zeigen

I didn’t catch this one in its entirety, but I got a few take-aways:

To all language snobs:

  • stop criticising
  • let other people use whatever language they’re comfortable with
  • languages that do not evolve will become obsolete

and a fantastic motto:

let’s keep JavaScript weird.

I think this was Jenna’s first talk. I want to see more!

Abusing phones to make the internet of things by Jan Jongboom

Jan works at Telenor and contributes heavily to Firefox OS, just like his coworker Sergi, whom I had been following on the internets for a while and met at the Mozilla Summit last year. I thought I’d finally meet Jan when I went to Amsterdam for GOTO, but it wasn’t meant to be. I then happened to find him in the Hacker Lounge, and he gave me a preview of the talk he was going to give later on, which promised to be super exciting. And it was!

Jan was a very entertaining speaker, and delighted the audience with both technical prowess and loads of jokes, including his own take on Firefox OS competition—Jan OS:

OK @FirefoxOSFeed here's the competition – JanOS @jsconfeu

— ǝlosɹǝdns (@supersole) September 14, 2014

Basically he took away the UI layer in Firefox OS and got full root access to do whatever he pleases with the phones, which, when the screen is not used, have extremely long-lasting battery life (on the scale of weeks). They are effectively integrated autonomous computers that come in cheaper than a Raspberry Pi and similar boards. He showed some practical examples of “Things” for the Internet of Things, such as a custom GPS tracker that reported, via push notifications, where one of his very easy-to-get-lost friends was, or a cheap wireless contactless doorbell (using the proximity sensor) that can play a custom sound through Bluetooth speakers.

This reminds me that I wanted to ask some of the people in QA if they had any spare old phone they’re not testing with any more so I could rip its guts apart, but maybe now it’s not such a good idea–I would need to come up with a project first!

GIFs vs Web Components by Glen Maddern

I finally got to watch Glen’s talk! He also gave it at CascadiaJS, but I was hiding in my room preparing for mine, so I couldn’t join in the GIF celebration.

The most important takeaway is: GIF with a hard G.

As it should be.

And then, in no less serious terms:

GIFs are important.

Do not impose your conventions, or the conventions of your framework of choice, onto other potential users: just talk HTML/DOM/JS. He initially built the <x-gif> component with Polymer, then got requests to port it to Angular, to React, to every imaginable framework, but adoption wasn’t really catching up. Translating to each framework meant he had to learn each framework and its mannerisms, a really long and unproductive process, until he realised that it’s better if your component is generic and not tied to any framework.

Finally, lesson learnt: Polymer !== Web Components.

Know your /’s by Lindsay Eyink

This was a good closing talk, and I’m so grateful that it wasn’t overloaded with FEELINGS but with pragmatism and common sense, and a call to have more common sense out there, especially in planning departments that try to foster artificial constructs such as the “Silicon Roundabout” and the like.

Take away: don’t try to copy Silicon Valley. Be your own city, with your own idiosyncrasies and differences. That’s what makes your environment unique and attracts people from other places, not a bad copy of Silicon Valley with expensive gas and rent.

All in all, yet another great edition! Here’s to many more years!

Categorieën: Mozilla-nl planet

Mozilla WebDev Community: Beer and Tell – September 2014

Mozilla planet - mo, 22/09/2014 - 16:30

September’s Beer and Tell has come and gone.

A practical lesson in the ephemeral nature of networks interrupted the live feed and the recording, but fear not! A wiki page archives the meeting structure and this very post will lay plain the private ambitions of the Webdev cabal.

Mike Cooper: GMR.js

Mythmon is a Civilization V enthusiast, but multiplayer games are difficult — games can last a dozen hours or more. The somewhat archaic play-by-mail format removes the simultaneous, continuous time commitment, and the Giant Multiplayer Robot service abstracts away the other hassles of coordinating turns and save game files.

GMR provides a client for Windows only, so Mythmon created GMR.js to provide equivalent functionality cross-platform with Node.js. It presents an interactive command-line UI, enabling participation from a Steam Box and other non-Windows platforms.

Bramwelt: pewpew

Trevor Bramwell, summer intern for the Web Engineering team, presented a homebrew clone of Space Invaders he calls pewpew. He built it using PhaserJS as an exercise to better understand prototypal inheritance. You can follow along as he develops it by playing the live demo on gh-pages.

Cvan: honeyishrunktheurl

Chris Van shared two new takes on the classic URL shortener. The first is written in Go, with configuration stored in JSON on the server; it was an exercise in learning Go. The second is an HTML page that handles the redirect on the client side.

He intends to put them into production on a side project, but hasn’t found a suitable domain name.

Cvan: honeyishrunktheurl

Chris Van held the stage for a second demo. He showed how the CSS order property can be used to cheaply rearrange DOM nodes without destroying and re-rendering new ones. An accompanying blog post delves into the details. The post is worth a read, since it covers some limitations of the technique that came up in discussion during the demo.
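The core of the trick is that flexbox’s CSS `order` property changes only the *painted* order, by sorting items on their `order` value, while the DOM nodes themselves stay untouched. A tiny sketch of what the browser effectively computes (item names invented for illustration):

```javascript
// Three flex items in DOM order, each with a CSS `order` value,
// e.g. .item-a { order: 2 } .item-b { order: 0 } .item-c { order: 1 }
const items = [
  { id: 'a', order: 2 },
  { id: 'b', order: 0 },
  { id: 'c', order: 1 },
];

// What the browser paints: a stable sort on `order`, DOM order intact,
// so no nodes are destroyed or re-created to rearrange the layout.
const visualOrder = [...items].sort((x, y) => x.order - y.order).map(i => i.id);
console.log(visualOrder); // ['b', 'c', 'a']
```

Because only the paint order changes, things tied to DOM order (tab order, screen readers, `querySelector` results) are unaffected, which is both the cheapness and one of the caveats of the technique.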

Lonnen: Alonzo, pt II

Last time he joined us, Lonnen was showing off a Scheme interpreter he was writing in Haskell called Alonzo. This month Alonzo had a number of new features, including variable assignment, functions, closures, and IO. Next he’ll pursue building a standard library and adding a test suite.

If you’re interested in attending the next Beer and Tell, sign up for the mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel Stenberg: week #3

Mozilla planet - mo, 22/09/2014 - 14:40

I won’t keep posting every video update here, but I mostly wanted to mention that I’ve kept posting a weekly video over at youtube basically explaining what’s going on right now within my dearest projects. Mostly curl and some Firefox stuff.

This week: libcurl server cert verification API got a bashing at SEC-T, is HTTP for UDP a good idea? How about adding HTTP cache support to libcurl? HTTP/2 is getting deployed as we speak. Interesting curl bug when used by XBMC. The patch series for Firefox bug 939318 is improving slowly – will it ever land?

Erik Vold: Add-on Directionless

Mozilla planet - mo, 22/09/2014 - 02:00

At the moment there is no one “in charge” at Mozilla with awareness of the add-on community’s plight. It’s sadly true that Mozilla has been divesting from add-on support for a while now. To be fair, I think the initial divestments were good ones; the Add-on Builder website was a money pit, for example. The priority has shifted to web apps: the old team is working mostly on the Marketplace (which is no longer for add-ons, and is now just for apps), and the Add-on SDK team is now mostly working on Firefox DevTools projects.

At the moment we have only a few staffers working on the SDK, and none of us has the authority to make decisions or end debates. There is a tech lead for the SDK, but that is not a position with the authority to make directional decisions, or to decide how staffers prioritize their work and spend their time; each person’s manager is in charge of that, and our managers are DevTools and Marketplace people.

Either we all agree on a direction or we don’t.

Chris McAvoy: Hand Crafted Open Badges Display

Mozilla planet - snein, 21/09/2014 - 22:35

Earning an Open Badge is easy; there are plenty of places that offer them, with more issuers signing up every day. Once you’ve earned an open badge, you can push it to your backpack, but what if you want to include the badge on your blog, or your artisanal hand crafted web page?

You could download the baked open badge and host it on your site. You could tell people it’s a baked badge, but using that information isn’t super easy. Last year, Mike Larsson had a great idea to build a JS library that would discover open badges on a page, and make them dynamic so that a visitor to the page would know what they were, not just a simple graphic, but a full-blown recognition for a skill or achievement.

Since his original prototype, the process of baking a badge has changed, plus Atul Varma built a library to allow baking and unbaking in the browser. This summer, Joe Curlee and I took all these pieces, prototypes and ideas and pulled them together into a single JS library you can include in a page to make the open badges on that page more dynamic.

There’s a demo of the library in action on Curlee’s Github. It shows a baked badge on the page; when you click the unbake button, it takes the baked information from the image and makes the badge dynamic and clickable. We added the button to make it clear what was happening on the page, but in a normal scenario, you’d just let the library do its thing and transform the badges on the page automatically. You can grab the source for the library on Github, or download the compiled / minified library directly.

There’s lots more we can do with the library; I’ll be writing more about it soon.

John O'Duinn: San Francisco Car Culture: Unusual Jaguar XK8 paint job

Mozilla planet - snein, 21/09/2014 - 22:27

Found this earlier this month while on the way to work. The color scheme really threw me off, so at first I couldn’t even tell it was a Jaguar. I remain speechless.

Mark Surman: You did it! (maker party)

Mozilla planet - snein, 21/09/2014 - 14:41

This past week marked the end of Maker Party 2014. The results are well beyond what we expected and what we did last year — 2,513 learning events in 86 countries. If you were one of the 5,000+ teachers, librarians, parents, Hivers, localizers, designers, engineers and marketing ninjas who contributed to Webmaker over the past few months, I want to say: Thank you! You did it! You really did it!


What did you do? You taught over 125,000 people how to make things on the web — which is the point of the program and an important end in itself. At the same time, you worked tirelessly to build out and expand Webmaker in meaningful ways. Some examples:

  • Mozilla India organized over 250 learning events in the past two months, showing the kind of scale and impact you can get with a well-organized corps of volunteers.
  • Countries including Iran, New Zealand, and Sweden held their first ever Maker Party, adding to the idea that Webmaker is a truly global effort.
  • Tools and curriculum focused on mobile were added into the Webmaker suite — AppMaker was launched in June and was well received in Maker Parties around the world.
  • Over 300 partner orgs, including major library and after-school networks, participated, bringing even more skilled teachers and mentors into our community.
  • New and innovative ways to teach the web in a very low touch manner rolled out, including a Firefox snippet that let you hack our home page x-ray goggles style.
  • Webmaker teamed up with Mozilla’s policy team, with a sub-campaign for Net Neutrality teach-ins plus a related reddit AMA.

It’s important to say: these things add up to something. Something big. They add up to a better Webmaker — more curriculum, better tools, a larger network of contributors. These things are assets that we can build on as we move forward. And you made them.

You did one other thing this summer that I really want to call out — you demonstrated what the Mozilla community can be when it is at its best. So many of you took leadership and organized the people around you to do all the things I just listed above. I saw that online and as I traveled to meet with local communities this summer. And, as you did this, so many of you also reached out and mentored others new to this work. You did exactly what Mozilla needs to do more of: you demonstrated the kind of commitment, discipline and thoughtfulness that is needed to both grow and have impact at the same time. As I wrote in July, I believe we need to simultaneously drive hard on both depth and scale if we want Webmaker to work. You showed that this is possible.

Celebrating at MozFest East Africa

So, if you were one of the 5000+ people who contributed to Webmaker during Maker Party: pat yourself on the back. You did something great! Also, consider: what do you want to do next? Webmaker doesn’t stop at the end of Maker Party. We’re planning a fall campaign with key partners and networks. We’re also moving quickly to expand our program for mentors and leaders, including thinking through ideas like Webmaker Clubs. These are all things that we need your help with as we build on the great work of the past few months.

Filed under: education, mozilla, webmakers

Arky: Noto Fonts Update

Mozilla planet - snein, 21/09/2014 - 09:42

The Google Internationalization team released a new update of the Noto Fonts this week. The update brings numerous new features and enhancements. Please read the project release notes for the full list of changes.

You can preview the fonts and download them at the Noto Fonts website.

Google Noto project logo
Testing fonts on Firefox OS device

It is very simple to test the Noto fonts on a Firefox OS device. Just copy the font files into the /system/fonts folder and reboot the device. Don't forget to back up the existing fonts on the device first.

I am writing this blog post in Bangkok, so I am going to use the Thai Noto fonts in these instructions. Connect your Firefox OS device to the computer with a USB cable. Make sure to turn on developer settings to enable debugging via USB.

# Back up the existing Thai font
$ adb pull /system/fonts/DroidSansThai.ttf

# Remount the /system partition as read-write
$ adb remount /system

# Remove the font on the device
$ adb shell rm /system/fonts/DroidSansThai.ttf

# Unzip the previously downloaded Thai font package
$ unzip

# Push the new font to the Firefox OS device
$ adb push NotoSansThai-Regular.ttf /system/fonts

# Reboot the phone. Test your localization by selecting your language
# in the Language settings menu, or by navigating to a local-language webpage in the Browser app.
$ adb reboot
Wait, all I see is Tofu?

If you see square blocks (lovingly referred to as Tofu) instead of characters, that means the font file for your language is missing. Please double-check the steps; if everything fails, restore the previous copy of your font file.

What is font Tofu, Firefox OS screenshot

Happy Hacking!

Mozilla Release Management Team: Firefox 33 beta4 to beta5

Mozilla planet - sn, 20/09/2014 - 16:43

  • 36 changesets
  • 81 files changed
  • 990 insertions
  • 1572 deletions

Extensions (occurrences): cpp: 21, js: 10, java: 10, h: 6, in: 3, html: 3, cc: 3, xml: 2, mozbuild: 2, ini: 2, txt: 1, nsi: 1, mn: 1, list: 1, jsm: 1, css: 1

Modules (occurrences): mobile: 16, netwerk: 13, layout: 7, media: 6, browser: 6, gfx: 5, toolkit: 4, security: 3, dom: 3, js: 2, widget: 1, extensions: 1, caps: 1

List of changesets:

  • Sylvestre Ledru: Post Beta 4: disable EARLY_BETA_OR_EARLIER a=me - abf1c1e6b222
  • Aaron Klotz: Bug 937306 - Improvements to WinUtils::WaitForMessage. r=jimm, a=sylvestre - 60aecc9d11ab
  • Richard Newman: Bug 1065523 - Part 1: locale picker screen displays short locale display name, not capitalized region-decorated name. r=nalexander, a=sledru - cea1db6ec4ac
  • Jason Duell: Bug 966713 - Intermittent test_cookies_read.js times out. r=mcmanus, a=test-only - bd8bbb683257
  • Brian Hackett: Bug 1061600 - Fix PropertyWriteNeedsTypeBarrier. r=jandem, a=abillings - 025117f71163
  • Bobby Holley: Bug 1066718 - Get sIOService before invoking ReadPrefs. r=bz, a=sledru - 262de5944a01
  • Richard Newman: Bug 1045087 - Remove Product Announcements integration points from Fennec. r=mfinkle, a=sledru - c0ba357c4c89
  • Richard Newman: Bug 1045085 - Remove main Product Announcements code. r=mcomella, a=lmandel - d5ed7dd8f996
  • Oscar Patino: Bug 1064882 - Receive RTCP SR's on recvonly streams for A/V sync. r=jesup, a=sledru - e99eaafdbda1
  • Matt Woodrow: Bug 1044129 - Don't crash if ContainerLayer temporary surface allocation fails. r=jrmuizel, a=sledru - 11e34dc2f591
  • Eric Faust: Bug 1033873 - "Differential Testing: Different output message involving __proto__". r=jandem, a=sledru - 2dbe6d8a5c30
  • Mo Zanaty: Bug 1054624 - Fix high-packet-loss problems with H.264 WebRTC calls. r=jesup, a=lmandel - 75eddbd6dc80
  • Michal Novotny: Bug 1056919 - Crash in memcpy | mozilla::net::CacheFileChunk::OnDataRead(mozilla::net::CacheFileHandle*, char*, tag_nsresult). r=honzab, a=sledru - 62d020eff891
  • Stephen Pohl: Bug 1065509 - Bump maximum download size from 35 MB to 70 MB in stub installer. r=rstrong, a=lmandel - e85a6d689148
  • Cameron McCormack: Bug 1041512 - Mark intrinsic widths dirty on a style change even if the frame hasn't had its first reflow yet. r=dbaron, a=abillings - dafe68644b45
  • Andrea Marchesini: Bug 1064481 - URLSearchParams should encode % values correcty. r=ehsan, a=lmandel - f44f06112715
  • Jonathan Watt: Bug 1067998 - Fix OOM crash in gfxAlphaBoxBlur::Init on large blur surface. r=Bas, a=sylvestre - 023a362fab21
  • Matt Woodrow: Bug 1037226 - Don't crash when surface allocation fails in BasicCompositor. r=Bas, a=sledru - 9dd2e1834651
  • Margaret Leibovic: Bug 996753 - Telemetry probes for settings pages. r=liuche, a=sledru - 8d7b3bfaf3ab
  • Margaret Leibovic: Bug 996753 - Telemetry probes for changing settings and hitting back. r=liuche, a=sledru - 3504f727e58c
  • Margaret Leibovic: Bug 1063128 - Make sure all preferences have keys. r=liuche, a=sledru - e981cc82a3e5
  • Margaret Leibovic: Bug 1058813 - Add telemetry probe for clicking sync preference. r=liuche, a=sledru - 340bddec5bf5
  • Honza Bambas: Bug 1065478 - POSTs are coming from offline application cache. r=jduell, a=sledru - 6c39ccb686a5
  • Honza Bambas: Bug 1066726 - Concurrent HTTP cache read and write issues. r=michal, r=jduell, a=sledru - 8a1cffa4c130
  • David Keeler: Bug 1066190 - Ensure that pinning checks are done for otherwise overridable errors. r=mmc, a=sledru - 1e3320340bd2
  • Wes Johnston: Bug 1063896 - Loop over all url list, not just ones with metadata. r=lucasr, a=sledru - 792d0824a8f0
  • Blair McBride: Bug 1039028 - Show license info for OpenH264 plugin. r=irving, a=sledru - 01411f43df67
  • Drew Willcoxon: Bug 1066794 - Make the search suggestions popup on about:home/about:newtab more consistent with the main search bar's popup. r=MattN, a=sledru - 44cc9f25426d
  • Drew Willcoxon: Bug 1060888 - Autocomplete drop down list item should not be copied to the search fields when mouse over the list item. r=MattN, a=sledru - 6975bbd6c73a
  • Richard Newman: Bug 1057247 - Increase favicon refetch time to four hours. r=mfinkle, a=sledru - 515fa121e700
  • Mats Palmgren: Bug 1067088 - Use aBorderArea when not skipping any sides (e.g. ::first-letter), not the joined border area. r=roc, a=sledru - af1dbe183e3d
  • Ryan VanderMeulen: Backed out changeset af1dbe183e3d (Bug 1067088) for bustage. - f5ba94d7170d
  • Honza Bambas: Bug 1000338 - nsICacheEntry.lastModified not properly implemented. r=michal, a=sledru - b88069789828
  • Benjamin Smedberg: Bug 1063052 - In case a user ends up with unpacked chrome, on update use omni.ja again by removing chrome.manifest. r=rstrong, r=glandium, sr=dbaron, a=lmandel - 2dce6525ddfe
  • Mats Palmgren: Bug 1067088 - Use aBorderArea when not skipping any sides (e.g. ::first-letter), not the joined border area. r=roc a=sledru - 9f2dc7a2df34
  • Nick Alexander: Bug 996753 - Workaround for Fx33 not having AppConstants.Versions. r=rnewman, a=bustage - 7cd3ae0255ec

Mozilla Firefox: Version 32.0.2 fixes browser crashes - Softonic DE

News collected via Google - sn, 20/09/2014 - 03:08

Mozilla Firefox: Version 32.0.2 fixes browser crashes
Softonic DE
We have summarized the detailed look into saved passwords. Along with Firefox 32, Mozilla has made the new beta version of the browser available; the company plans the official release of Firefox 33 for the ...


Kevin Ngo: Building the Marketplace Feed

Mozilla planet - sn, 20/09/2014 - 02:00
New front page of the Firefox Marketplace.

The "Feed" is the new feature I spent the last three months grinding out for the Firefox Marketplace. The Feed transforms the Firefox OS app store into an engaging and customized app-discovery experience by presenting fresh, user-tailored content on every visit. The concept was invented by Liu Liu, a Mozilla design intern I briefly hung out with last year. It quickly gained traction: it was featured on Engadget, presented at the Mozilla Summit, and shown off on prototypes at Mobile World Congress. With more traction came more pressure to ship. We built that ship, and it sailed on time.

Planning Phase

The whole concept had a large scope, so we broke it into four versions. For the first version, we focused on getting initial content into the Feed. We planned to build a curation tool for our editorial team to control the content of the Feed, with the ability to tailor different content for different countries. Users would then be able to see fresh and new content on the homepage every day, thus increasing engagement.

Curation Tools

The final product for the Curation Tool, used by the editorial team to feature content and tailor different content for different countries.

Some time was spent on the feature requirements and specifications as well as the visual design. We mainly had three engineers: myself, Chuck, and Davor. It was a slow start. Chuck started early on the technical architecture, building out some APIs and documentation. A few months later, I started work on the curation tool. My initial work was actually done off-the-grid in an RV in the middle of Alaska.

But then the project to optimize the app for the $25 FirefoxOS smartphone took over. Once the air cleared, we all gathered for a work week hosted at Davor's place in Fort Myers, FL (since I was already in FL for a vacation).

Building Phase

In early June, we had a solid start during the work week, with each of us working on a separate component: Chuck on the backend API, Davor on the Feed visuals, and me on the curation tools. The face-to-face time, versus remote working, was nice: I could ask any question that came to mind about unclear requirements, quickly ask for an API endpoint to be whipped up, or check on how the visuals were looking.

After the foundations were in place, I transitioned to working on all parts of the feature from ElasticSearch, the API, the JS powering the curation tools and the Feed, to the CSS for the newer layout of the site. It was a fun grind for a couple of months just writing and refactoring tons of code, getting dirty with ElasticSearch, optimizing the backend.

The launch date was set for late August. A couple of weeks out, the feature was there, but there were some bugs to iron out and tons of polish to be done. The most sketchy part was having no design mocks for desktop-sized screens so I had to improvise. The last week and a half was a sprint. I split my day up into 6 hours, a break to play some tournament poker at the poker club, and then 2 hours at night. Throw in a late Friday, and we made it to the finish line.

Backend Bits

The Feed needed an early focus on scalability. We should be able to add more and more factors (such as where users are from, what device they're using, apps they previously installed, content they "loved") that tailor the Feed toward each user. ElasticSearch lets us easily dump a bunch of stuff into it, and we can do a quick query weighing in all of those factors. We cache the results behind a CDN, throw in a couple of layers of client-side caching, and we got a stew going.
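The client-side caching layer can be sketched as a toy TTL cache in front of an upstream fetch (all names here are invented for illustration, not Marketplace code; the TTL stands in for what the CDN does at its own layer):

```javascript
// Wrap any fetch function with a time-based cache: repeated requests for the
// same key within ttlMs are served locally instead of going upstream.
function cachedFetcher(fetchFn, ttlMs) {
  const cache = new Map();
  return (key, now = Date.now()) => {
    const hit = cache.get(key);
    if (hit && now - hit.at < ttlMs) return hit.value; // cache hit
    const value = fetchFn(key);                        // miss: go upstream
    cache.set(key, { value, at: now });
    return value;
  };
}

// Count upstream calls to show the cache absorbing repeat requests.
let upstreamCalls = 0;
const getFeed = cachedFetcher((key) => {
  upstreamCalls++;
  return `feed:${key}`;
}, 60000); // 60s TTL

getFeed('us', 0);    // miss: fetches upstream
getFeed('us', 1000); // within TTL: served from cache
console.log(upstreamCalls); // 1
```

Stacking a couple of these layers (in-memory, then localStorage, then the CDN) means most visits never reach the ElasticSearch backend at all.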

Figuring out how to relate data between different indices was a difficult decision. I had several options in managing relations, but chose to manually denormalize the data and manage the relations myself. This allowed for flexibility and fine-tuned optimizations without having to wrestle with ElasticSearch under-the-hood. We had three indices to relate:

  • Apps
  • Feed Elements - an individual piece of the Feed such as a featured app or a collection of apps that can contain accompanying descriptions or images. Many-to-many relationship with Apps.
  • Feed Items - a wrapper around Feed Elements containing metadata (region, category, carrier) that helps determine to whom the Feed Element should be displayed to. Many-to-many relationship with Feed Elements.

This meant three ES queries overall (and zero database queries). I used the new ElasticSearch Python DSL to help construct the queries: one (weighing all of the factors about a user) to fetch Feed Items, one to fetch all of the Feed Elements to attach to the Feed Items, and one to fetch all of the Apps to attach to the Feed Elements. We then hand all of the data to our Django REST Framework serializers to serialize the final response.
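The three-query chain might be sketched like this, with each ElasticSearch query built as the plain JSON the search API accepts (field names and sample data are guesses for illustration, not the real Marketplace schema; the post used the Python DSL, this sketch uses plain objects):

```javascript
// Query 1: fetch Feed Items, filtering/weighing on user factors.
const feedItemsQuery = {
  query: {
    bool: {
      filter: [
        { term: { region: 'us' } },
        { term: { carrier: 'carrier-x' } },
      ],
    },
  },
};

// Pretend results of query 1: Feed Items pointing at Feed Element ids.
const feedItems = [{ id: 1, elementId: 10 }, { id: 2, elementId: 11 }];

// Query 2: fetch all referenced Feed Elements in one round trip.
const elementsQuery = {
  query: { ids: { values: feedItems.map((i) => i.elementId) } },
};

// Pretend results of query 2: elements pointing at app ids (many-to-many).
const elements = [{ id: 10, appIds: [100] }, { id: 11, appIds: [100, 101] }];

// Query 3: fetch every app referenced by any element, deduplicated.
const appIds = [...new Set(elements.flatMap((e) => e.appIds))];
const appsQuery = { query: { ids: { values: appIds } } };

console.log(appsQuery.query.ids.values); // [100, 101]
```

With the relations denormalized by hand like this, each hop is a single bulk lookup rather than a join, which is what keeps the whole page at three search queries and no database hits.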

A complication was having to filter out apps, such as when an app is not public, banned in the user's region, or not supported on the user's device. With time constraints, I wrote the filtering code in the view. But after the launch, I was able to consolidate our project's app filtering code into a single query factory, use that, and tweak our serializers to handle filtered apps.

Frontend Bits

With the Feed being made of visual blocks and components, encapsulated template macros kept things clean. These macros could be reused by our curation tools to display on-the-fly previews of the Feed for content curators. We used Isotope.js to arrange the layout of the visual blocks.

Our CSS eventually turned messy, with everything namespaced under the .feed CSS class, falling into the trap of OOP-style CSS. A good style guide to follow in the future is @fat's CSS style guide for Medium.

And since a lot of the code is shared between the frontend of the Firefox Marketplace and the curation tools, I'm currently trying to get our reusable frontend assets managed under Bower.


The Feed is only going to grow in features. It's a breath of fresh air compared to what used to be a never-changing front page on the app store. I've seen new apps I never even knew we had, and some are actually fun, like Astro Alpaca. Hope everyone likes it!

Yunier José Sosa Vázquez: Mozilla launches sponsored “Tiles” in Firefox

Mozilla planet - fr, 19/09/2014 - 22:12

Mozilla has just launched sponsored tiles in Firefox Nightly. The tiles are the thumbnails with links to web pages that appear when you open a new tab in Firefox. They come in three types:

  • Directory tiles: shown to new Firefox users to suggest sites of interest. They are built from the tiles and sites most popular among Firefox users, and are gradually replaced by history tiles based on the sites the user visits most.
  • Enhanced tiles: for users with existing (history) tiles on their new tab page, the preview image is now replaced by a higher-quality one obtained from the site or from a partner. The pages shown in these tiles are those recorded in the user's history.
  • Sponsored tiles: any tile featuring a site that has a commercial agreement with Mozilla; these are labeled Sponsored.

If you are worried about privacy, don't be: only tile information from the new tab page is collected, in order to offer more interesting sites to new Firefox users and to improve recommendations for existing users. All of this information is aggregated and includes no way to identify the user; only the data needed to ensure that the tiles deliver value to our users and commercial partners is collected.

Where does the shared data go?

The data is transmitted directly to Mozilla and stored on Mozilla's servers. For all tile types, Mozilla shares numbers with partners, such as the number of impressions, clicks, pins, and hides of the delivered content.

How do I turn it off?

You can turn it off by clicking the gear icon in the top-right corner of a new tab page and selecting Classic to show the non-enhanced tiles, or Blank mode, which disables the feature completely.

Source: Mozilla Hispano

Ben Hearsum: New update server has been rolled out to Firefox/Thunderbird Beta users

Mozilla planet - fr, 19/09/2014 - 16:31

Yesterday marked a big milestone for the Balrog project when we made it live for Firefox and Thunderbird Beta users. Those with a good long-term memory may recall that we switched Nightly and Aurora users over almost a year ago. Since then, we've been working on and off to get Balrog ready to serve Beta updates, which are quite a bit more complex than our Nightly ones. Earlier this week we finally got the last blocker closed, and we flipped it live yesterday morning, Pacific time. We have significantly (~10x) more Beta users than Nightly+Aurora, so it's no surprise that we immediately saw a spike in traffic and load, but our systems stood up to it well. If you're into this sort of thing, here are some graphs with spikey lines:
The load average on 1 (of 4) backend nodes:

The rate of requests to 1 backend node (requests/second):

Database operations (operations/second):

And network traffic to the database (MB/sec):

Despite hitting a few new edge cases (mostly around better error handling), the deployment went very smoothly – it took less than 15 minutes to be confident that everything was working fine.

While Nick and I are the primary developers of Balrog, we couldn’t have gotten to this point without the help of many others. Big thanks to Chris and Sheeri for making the IT infrastructure so solid, to Anthony, Tracy, and Henrik for all the testing they did, and to Rail, Massimo, Chris, and Aki for the patches and reviews they contributed to Balrog itself. With this big milestone accomplished we’re significantly closer to Balrog being ready for Release and ESR users, and retiring the old AUS2/3 servers.

Soledad Penades: Extensible Web Summit Berlin 2014: my lightning talk on Web Components

Mozilla planet - fr, 19/09/2014 - 13:36

I was invited to join and give a lightning talk at the Extensible Web Summit held in Berlin last week, as part of the whole series of events.

The structure of the event consisted of a series of introductory lightning talks to “set the tone”, followed by a sort of unconference where people would suggest topics to talk about and then build a timetable collaboratively.

My lightning talk

The topic for my talk was… Web Components. Which was quite interesting, because lately I have been working with (and fighting against) various implementations of them at various levels of completeness, so I definitely had some things to add!

I didn’t want people to get distracted by slides (including myself) so I didn’t have any. Exciting! Also challenging.

These are the notes I more or less followed for my minitalk:

When I speak to the average developer they often cannot see any reason to use Web Components

The question I’m asked 99% of the time is “why, when I can do the same with divs? with jQuery even? what is the point?”

And that’s probably because they are SO hard to understand
  • The specs (all four of them) are really confusing and dense. Four specs you need to understand to totally grasp how everything works together.
  • Explainer articles too often drink the Kool Aid, so readers are like: fine, this seems amazing, but what can it do for me and why should I use any of this in my projects?
  • Libraries/frameworks built on top of Web Components hide too much of their complexity by adding more complexity, further confusing matters (perhaps they are trying to do too many things at the same time?). Often, people cannot even distinguish between Polymer and Web Components: are they the same? which one does what? do I need Polymer? or only some parts?
  • Are we supposed to use Web Components for visual AND non visual components? Where do you draw the line? How can you explain to people that they let you write your own HTML elements, and next thing you do is use an invisible tag that has no visual output but performs some ~~~encapsulated magic~~~?
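To ground the discussion, here is a minimal sketch of what “writing your own HTML element” looked like with the v0 `registerElement` API that browsers were shipping around this time (it has since been replaced by `customElements.define`). The element name and behavior are invented for illustration:

```html
<x-greeting></x-greeting>
<script>
  // Hypothetical element; the v0 Custom Elements API circa 2014.
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    // A "visual" component: it renders output when created.
    // A non-visual component would do its work here with no output,
    // which is exactly the conceptual oddity described above.
    this.textContent = 'Hello from a custom element!';
  };
  document.registerElement('x-greeting', { prototype: proto });
</script>
```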
And if they are supposed to be a new encapsulation method, they don’t play nicely with established workflows; they are highly disruptive for both humans and computers:
  • It’s really hard to parse in our minds what a component’s dependencies (described in an HTML import) and all of their dependencies (described in possibly multiple nested HTML imports) are. Taking over a component-based project can easily get horrible.
  • HTML imports totally break existing CSS/JS compression/linting chains and workflows.
  • Yes, there is Vulcanize, a tool from Polymer that amalgamates all the imports into a couple of files, but it still doesn’t feel quite there: we get an HTML and a CSS file that still need a polyfill to be loaded.
  • We need this polyfill for using HTML Imports, and we will need it for a while, and it doesn’t work with file:/// URLs because it makes a hidden XMLHttpRequest that no one expects. In contrast, we don’t need one for loading JS and CSS locally.
  • HTML Imports generate a need for tools that parse the imports and identify the dependencies and essentially… pretend to be a browser? Doesn’t this smell like duplication of effort? And why two dependency-loading systems (ES6 modules and HTML Imports)?
There’s also a problem with hiding too much complexity and encapsulating too much:
  • Users of “third party” Web Components might not be aware of the “hell” they are conjuring in the DOM when said components are “heavyweight” but also encapsulated, so it’s hard to figure out what is going on.
  • It might also make components hard to extend: you might have a widget that almost does all you need except for one thing, but it’s all encapsulated and ooh you can’t hook on any of the things it does so you have to rewrite it all.
  • Perhaps we need to discuss more about use cases and patterns for writing modular components and extending them.
It’s hard to make some things degrade nicely, or even just work at all, when the specs are not fully implemented in a platform, especially the CSS side of the spec:
  • For example, the Shadow DOM selectors are not implemented in Firefox OS yet, so a component that uses Shadow DOM needs double styling selectors and some weird tricks to work in Gaia (Firefox OS’ UI layer) and platforms that do have support for Shadow DOM
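A rough sketch of what “double styling selectors” means in practice — the element and class names here are hypothetical, and `::shadow` is the 2014-era combinator that Blink supported (it has since been removed from the platform):

```css
/* Native Shadow DOM path: styles nodes inside the shadow tree */
x-tabs::shadow .tab-label { color: #333; }

/* Polyfill fallback: without native Shadow DOM the same nodes end up
   in the light DOM, so a plain descendant selector must be
   maintained in parallel with the one above */
x-tabs .tab-label { color: #333; }
```

Keeping both rule sets in sync by hand is exactly the kind of “weird trick” that makes cross-platform components painful.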
And not directly related to Web Components, but in relation to spec work and disruptive browser support for new features:
  • Spec and browser people live in a different bubble where you can totally rewrite things from one day to the other. Throw everything away! Change the rules! No backwards compatibility? No problem!
  • But we need to be considerate with “normal” developers.
  • We need to understand that most of the people cannot afford to totally change their chain or workflows, or they just do not understand what we are getting at (reasons above)
  • Then if they try to understand, they go to mailing lists and they see those fights and all the politics and… they step back, or disappear for good. It’s just not a healthy environment. I am subscribed to several API lists and I only read them when I’m on a plane so I can’t go and immediately reply.
  • If the W3C, or any other standardisation organisation wants to attract “normal” developers to get more diverse inputs, they/we should start by being respectful to everyone. Don’t try to show everyone how superclever you are. Don’t be a jerk. Don’t scare people away, because then only the loud ones stay, and the quieter shy people, or people who have more urgent matters to attend (such as, you know, having a working business website even if it’s not using the latest and greatest API) will just leave.
  • So I want to remind everyone that we need to be considerate of each other’s problems and needs. We need to make an effort to speak other people’s language, and especially technical people need to do that. Confusing or opaque specs only lead to errors and misinterpretations.
  • We all want to make the web better, but we need to work on this together!

With thanks to all the people whose brain I’ve been picking lately on the subject of Web Components: Angelina, Wilson, Francisco, Les, Potch, Fred and Christian.



Daniel Stenberg: Using APIs without reading docs

Mozilla planet - to, 18/09/2014 - 23:27

This morning, my debug session was interrupted for a brief moment when two friends independently of each other pinged me to inform me about a talk at the current SEC-T conference going on here in Stockholm right now. It was yet again time to bring up the good old fun called libcurl API bashing. Again from the angle that users who don’t read the API docs might end up using it wrong.

I managed to watch the talk on the live YouTube feed, but it isn’t a stable URL/video so I can’t link to it here now; I will update this post with a link as soon as I have one!

The specific libcurl topic at hand once again mostly had the CURLOPT_SSL_VERIFYHOST option in focus, which is basically the same argument that was thrown at us two years ago when libcurl was said to be dangerous. It is not a boolean. It is an option that takes (or took) three different values, where 2 is the secure level and 0 is disabled.

SEC-T on curl API

(This picture is a screengrab from the live stream on YouTube; I don’t have any link to a stored version of it yet. Click it for slightly higher resolution.)

Speaker Meredith L. Patterson actually spoke for quite a long time about curl and its options to verify server certificates. While I will agree that she has a few good points, it was still riddled with errors and I think she deliberately phrased things in a manner to make the talk good and snappy rather than to be factually correct and trying to understand why things are like they are.

The VERIFYHOST option apparently sounds as if it takes a boolean, but it doesn’t. She says verifying a certificate has to be a Yes/No question, so obviously it is a boolean. First, let’s be really technical: the libcurl options that take numerical values always accept a ‘long’ and all documentation specifies which values you can pass in. None of them are boolean, not by actual type in the C language and not described like that in the man pages. There are, however, language bindings running on top of libcurl that may use booleans for the values that take 0 or 1, but there’s no guarantee we won’t add more values to numerical options in the future.

I wrote down a few quotes from her that I’d like to address.

“In order for it to do anything useful, the value actually has to be set to two”

I get it, she wants a fun presentation that makes the audience listen and grin cheerfully. But this is highly inaccurate. libcurl has it set to verify by default. An application doesn’t have to set it to anything. The only reason to set this value is if you’re not happy with checking the cert unconditionally, and then you’ve already wandered off the secure route.

“All it does when set to two is to check that the common name in the cert matches the host name in the URL. That’s literally all it does.”

No, it’s not. It “only” verifies the host name curl connects to against the name hints in the server cert, yes, but that’s a lot more than just the common name field.

“there’s been 10 versions and they haven’t fixed this yet [...] the docs still say they’re gonna fix this eventually [...] I wanna know when eventually is”

Qualified BS and ignorance of details. Let’s see the actual code first: it rejects the value 1 with an error and thus leaves the internal default of 2 in place. Alas, code that sets 1 or 2 gets the same effect == a verified certificate. Why is this a problem?

Then, she says she really wants to know when “eventually” is. (The docs say “Future versions will…”) So if she was so curious, you’d think she would’ve tried to ask us? We’re an accessible bunch, on mailing lists, on IRC and on Twitter. No, she didn’t ask.

But perhaps most importantly: did she really consider why it returns an error for 1? Since libcurl silently accepted 1 as a value for something like 10 years, there are a lot of old installations “out there” in the wild, and by returning an error for 1 we try to make applications notice and adjust. By silently accepting 1 without errors, there would be no notice and people would keep using 1 in new applications as well, and thus when running such a newly written application with an older libcurl you’d be back to having the security problem again. So, we have the error there to improve the situation.
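To make the reasoning above concrete, here is a small self-contained sketch — not libcurl’s actual source, and the names are invented — modeling the described behavior: the option takes a long, the value 1 is rejected with an error (so old code notices), and the secure default of 2 stays in effect:

```c
/* Sketch only: models the behavior described above, not libcurl code. */
#define SETOPT_OK     0
#define SETOPT_BADVAL 1

struct conn_settings {
  long verifyhost; /* starts at the secure default, 2 */
};

int set_verifyhost(struct conn_settings *s, long value)
{
  if (value == 1) {
    /* 1 was silently accepted for years; rejecting it loudly makes
     * applications notice, while the default of 2 stays in effect. */
    return SETOPT_BADVAL;
  }
  s->verifyhost = value; /* 0 disables verification, 2 enables it */
  return SETOPT_OK;
}
```

Note that a silent “treat 1 as 2” policy would pass every test an author runs today, yet re-open the hole the moment the same code runs against an older libcurl — which is exactly the trade-off the error is meant to surface.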

“a peer is someone like you [...] a host is a server”

I’ve been a networking guy for 20+ years and I’m not used to people having a hard time understanding these terms. While perhaps there are rookies out in the world who don’t immediately understand some terms in the curl option names, should we really be criticized for that? I find that a hilarious critique. Also, these names were picked 13 years ago and we have them around for compatibility and API stability.

“why would you ever want to …”

Welcome to the real world. Why would an application author ever want to set these options to something other than just full check and no check? Because people and software development make up a large world with many different desires and use case scenarios, and curl is more widely used and abused than many people consider. Lots of people have wanted something else than just a Yes/No to server cert verification. In fact, I’ve had many users ask for even more switches and fine-grained ways to fiddle with verification. Yes/No is a layman’s simplified view of certificate verification.

SEC-T curl slide

(This picture is the slide from the above picture, just zoomed and straightened out a bit.)

API age, stability and organic growth

We started working on libcurl in spring 1999, we added the CURLOPT_SSL_VERIFYPEER option in October 2000 and we added CURLOPT_SSL_VERIFYHOST in August 2001. All that quite a long time ago.

Then add thousands of hours, hundreds of hackers, thousands of applications, a user count that probably surpasses one billion users by now. Then also add the fact that option names are sticky in the way we write docs, examples pop up all over the internet and everyone who’s close to the project learns them by name and spirit and we quite simply grow attached to them and the way they work. Changing the name of an option is really painful and causes a lot of confusion.

I’ve instead tried to more and more emphasize the functionality in the docs, to stress what the options do and how to do server cert verifications with curl the safe way.

I can’t force users to read docs. I can’t forbid users to blindly assume something, and I’m not in control of, nor do I want to affect, the large population of third-party bindings that exist for use on top of libcurl, catering for every imaginable programming language – and some of them may of course have documentation problems of their own, and what not.

Would I change some of the APIs and names for options we have in libcurl if I would redo them today? Yes I would.

So what do we do about it?

I think this is the only really interesting question to take from all this. Everyone wants stable APIs. Everyone wants sensible, easy-to-understand APIs, and as we can see they should also basically be possible to figure out without reading any documentation. And yet the API has to be powerful and flexible enough to be really useful for all those different applications.

At this point, where we have the options that we do, and when you’ve done your mud slinging and the finger of blame is firmly pointed at us: how exactly do you suggest we move forward to fix these claimed problems?

Taking it personally

Before anyone tells me not to take it personally: curl is my biggest hobby and a project I’ve spent many years and thousands of hours on. Of course I take it personally, otherwise I would’ve stopped working on the project a long time ago. This is personal to me. I give it my loving care and personal energy, and then someone comes here and throws ill-founded and badly researched criticism at me. I think critics of open source projects should learn to discuss the matters with the projects as their primary course of action, instead of using them to make their conference presentations more feisty.


Luke Wagner: asm.js on

Mozilla planet - to, 18/09/2014 - 22:03

I was excited to see that asm.js has been added to as “Under Consideration”. Since asm.js isn’t a JS language extension like, say, Generators, what this means is that Microsoft is currently considering adding optimizations to Chakra for the asm.js subset of JS. (As explained in my previous post, explicitly recognizing asm.js allows an engine to do a lot of exciting things.)
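A minimal (hypothetical — module and function names invented) example of what the asm.js subset looks like: a `"use asm"` prologue plus `|0` coercions give the engine enough type information to compile ahead of time, while the code remains ordinary JavaScript:

```javascript
// A tiny asm.js-style module. The "use asm" prologue and the |0
// coercions annotate everything as 32-bit integer arithmetic.
function AddModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;          // parameter declared as int
    y = y | 0;
    return (x + y) | 0; // result coerced back to int
  }
  return { add: add };
}

// Because asm.js is a strict subset of JS, the module also runs as
// plain JavaScript in engines without dedicated asm.js support:
var mod = AddModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

An engine that explicitly recognizes the subset can validate the module up front and skip straight to optimized machine code, which is the kind of optimization the “Under Consideration” status refers to.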

Going forward, we are hopeful that, after consideration, Microsoft will switch to “Under Development” and we are quite happy to collaborate with them and any other JS engine vendors on the future evolution of asm.js.

On a more general note, it’s exciting to see that there have been across-the-board improvements on asm.js workloads in the last 6 months. You can see this on or by loading up the Dead Trigger 2 demo in Firefox, Chrome or (beta) Safari. Furthermore, with the recent release of iOS 8, WebGL is now shipping in all modern browsers. The future of gaming and high-performance applications on the web is looking good!


Kartikaya Gupta: Maker Party shout-out

Mozilla planet - to, 18/09/2014 - 17:48

I've blogged before about the power of web scale; about how important it is to ensure that everybody can use the web and to keep it as level a playing field as possible. That's why I love hearing about announcements like this one: 127K Makers, 2513 Events, 86 Countries, and One Party That Just Won't Quit. Getting more people all around the world to learn about how the web works and keeping that playing field level is one of the reasons I love working at Mozilla. Even though I'm not directly involved in Maker Party, it's great to see projects like this having such a huge impact!
