Mozilla Nederland: the Dutch Mozilla community

Julien Vehent: SSL/TLS for the Pragmatic

Mozilla planet - Thu, 20/11/2014 - 06:26

Tonight I had the pleasure to present "SSL/TLS for the Pragmatic" to the fine folks of Bucks County Devops. It was a fun evening, and I want to thank the organizers, Mike Smalley & Ben Krein, for the invitation.

It was a great opportunity to summarize 18 months of work at Mozilla on building the Server Side TLS Guidelines. From the feedback I received tonight, and on several other occasions, I think we've achieved the goal of building a document that is useful to operations people, and that makes TLS just a little easier to understand.

We are not, however, anywhere near done with the process of teaching TLS to the Internet. The stats speak for themselves, with 70% of sites still supporting SSLv3, 86% enabling RC4, and about 32% still not preferring PFS over RSA handshakes. But things are getting better every day, and ongoing efforts may bring safe defaults to Linux servers as soon as Fedora 21. We live in exciting times!

The slides from my talk are below, and on github as well. I hope you enjoy them. Feel free to share your comments at julien[at]linuxwall.info.

Categories: Mozilla-nl planet

Mozilla swaps Google for Yahoo - NOS

News collected via Google - Thu, 20/11/2014 - 03:11

NU.nl

Mozilla swaps Google for Yahoo
NOS
Google has had its day as Mozilla's search engine. For Firefox users in the United States, Yahoo will soon be the default search engine. Firefox is Mozilla's browser. Mozilla has signed a five-year contract with Yahoo. Mozilla and Google ...
Firefox to use Yahoo as its default search engine (NUtech)

all 2 news articles »
Categories: Mozilla-nl planet

Giorgio Maone: s/http(:\/\/(?:noscript|flashgot|hackademix)\.net)/https\1/

Mozilla planet - Thu, 20/11/2014 - 00:16

I’m glad to announce that noscript.net, flashgot.net and hackademix.net have finally been switched to full, permanent TLS with HSTS.
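For anyone wondering what "permanent TLS with HSTS" means mechanically: the server sends a Strict-Transport-Security header over HTTPS, after which browsers refuse to talk plain HTTP to the site until max-age expires. A hypothetical helper (the function name is mine, not code from these sites) that builds such a header:

```javascript
// Illustrative sketch: build the HSTS response header that makes a
// browser pin the site to HTTPS for max-age seconds.
function hstsHeader({ maxAge = 31536000, includeSubDomains = true } = {}) {
  let value = `max-age=${maxAge}`;
  if (includeSubDomains) value += '; includeSubDomains';
  return { 'Strict-Transport-Security': value };
}

console.log(hstsHeader());
// → { 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains' }
```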

Please do expect a smörgåsbord of bugs and bunny funny stuff :)

Categories: Mozilla-nl planet

Andreas Gal: Yahoo and Mozilla Form Strategic Partnership

Mozilla planet - Wed, 19/11/2014 - 22:56

SUNNYVALE, Calif. and MOUNTAIN VIEW, Calif., Wednesday, November 19, 2014 – Yahoo Inc. (NASDAQ: YHOO) and Mozilla Corporation today announced a strategic five-year partnership that makes Yahoo the default search experience for Firefox in the United States on mobile and desktop. The agreement also provides a framework for exploring future product integrations and distribution opportunities to other markets.

The deal represents the most significant partnership for Yahoo in five years. As part of this partnership, Yahoo will introduce an enhanced search experience for U.S. Firefox users which is scheduled to launch in December 2014. It features a clean, modern and immersive design that reflects input from the Mozilla team.

“We’re thrilled to partner with Mozilla. Mozilla is an inspirational industry leader who puts users first and focuses on building forward-leaning, compelling experiences. We’re so proud that they’ve chosen us as their long-term partner in search, and I can’t wait to see what innovations we build together,” said Marissa Mayer, Yahoo CEO. “At Yahoo, we believe deeply in search – it’s an area of investment, opportunity and growth for us. This partnership helps to expand our reach in search and also gives us an opportunity to work closely with Mozilla to find ways to innovate more broadly in search, communications, and digital content.”

“Search is a core part of the online experience for everyone, with Firefox users alone searching the Web more than 100 billion times per year globally,” said Chris Beard, Mozilla CEO. “Our new search strategy doubles down on our commitment to make Firefox a browser for everyone, with more choice and opportunity for innovation. We are excited to partner with Yahoo to bring a new, re-imagined Yahoo search experience to Firefox users in the U.S. featuring the best of the Web, and to explore new innovative search and content experiences together.”

To learn more about this, please visit the Yahoo Corporate Tumblr and the Mozilla blog.

About Yahoo

Yahoo is focused on making the world’s daily habits inspiring and entertaining. By creating highly personalized experiences for our users, we keep people connected to what matters most to them, across devices and around the world. In turn, we create value for advertisers by connecting them with the audiences that build their businesses. Yahoo is headquartered in Sunnyvale, California, and has offices located throughout the Americas, Asia Pacific (APAC) and the Europe, Middle East and Africa (EMEA) regions. For more information, visit the pressroom (pressroom.yahoo.net) or the Company’s blog (yahoo.tumblr.com).

About Mozilla

Mozilla has been a pioneer and advocate for the Web for more than a decade. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to discover, experience and connect to the Web on computers, tablets and mobile phones. For more information please visit https://www.mozilla.com/press

Yahoo is a registered trademark of Yahoo! Inc. All other names are trademarks and/or registered trademarks of their respective owners.


Filed under: Mozilla
Categories: Mozilla-nl planet

Monty Montgomery: Daala Demo 6: Perceptual Vector Quantization (by J.M. Valin)

Mozilla planet - Wed, 19/11/2014 - 21:12

Jean-Marc has finished the sixth Daala demo page, this one about PVQ, the foundation of our encoding scheme in both Daala and Opus.

(I suppose this also means we've finally settled on what the acronym 'PVQ' stands for: Perceptual Vector Quantization. It's based on, and expanded from, an older technique called Pyramid Vector Quantization, and we'd kept using 'PVQ' for years even though our encoding space was actually spherical. I'd suggested we call it 'Pspherical Vector Quantization' with a silent P so that we could keep the acronym, and that name appears in some of my slide decks. Don't get confused, it's all the same thing!)

Categories: Mozilla-nl planet

David Dahl: Encryptr: ‘zero knowledge’ essential information storage

Mozilla planet - Wed, 19/11/2014 - 20:30

Encryptr is one of the first “in production” applications built on top of Crypton. Encryptr can store short pieces of text like passwords, credit card numbers and other random pieces of information privately, in the cloud. Since it uses Crypton, all data that is saved to the server is encrypted first, making even a server compromise an exercise in futility for the attacker.

A key feature is that you can run Encryptr on your phone as well as your desktop and all data is available in each place immediately. Have a look:


The author of Encryptr, my colleague Tommy @therealdevgeeks, has recently blogged about building Encryptr. I hope you give it a try and send him feedback through the Github project page.


Categories: Mozilla-nl planet

New Search Strategy for Firefox: Promoting Choice & Innovation

Mozilla Blog - Wed, 19/11/2014 - 19:16
Ten years ago, we built Firefox to keep the Internet in each of our hands — to create choice and put people in control of their lives online. Our focus has been on building products that drive the competition, energy … Continue reading
Categories: Mozilla-nl planet

Christian Heilmann: Simple things: styling ordered lists

Mozilla planet - Wed, 19/11/2014 - 19:09

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

It is pretty easy to get an ordered list into a document. All you have to do is add an OL element with LI child elements:

<ol>
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

But what if you want to style the text differently from the numbers? What if you don’t like that they end with a full stop? The generated numbers of the OL are part of that dark magic browsers do for us (something we’re working on dragging into the sunlight with Shadow DOM).

To make those more styleable, in the past you had to add another element as a styling hook:

<ol class="oldschool">
  <li><span>Collect underpants</span></li>
  <li><span>???</span></li>
  <li><span>Profit</span></li>
</ol>

.oldschool li {
  color: green;
}
.oldschool span {
  color: lime;
}

Which is kind of a terrible hack and doesn’t quite scale as you may never know who edits your list. With newer browsers we have a better way of doing that using CSS counters. The browser support is ridiculously good, so there should be no excuse for us not to use them:

counters

Using counter, you keep the HTML structure:

<ol class="counter">
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

You then reset the counter for each of the lists with this class:

.counter {
  counter-reset: list;
}

This means each list will start at 1 and not carry on through the document tree. You then get rid of the list style and style the list item the way you want to. In this case we give it a colour and give it position: relative. This allows us to position other, new content in there and contain it to the list item:

.counter li {
  list-style: none;
  position: relative;
  color: lime;
}

Once you’ve hidden the normal numbering with list-style: none;, you can create your own numbers using counter and generated CSS content:

.counter li::before {
  counter-increment: list;
  content: counter(list) '.';
  position: absolute;
  top: 0px;
  left: -1.2em;
  color: green;
}

If you wanted to remove the full stop, all you need to do is remove it in the CSS. You now have full styling control over these numbers; for example, you can animate them, slightly moving from one colour to another and zooming out a bit:

demo animation of the effect

.animated li::before {
  transition: 0.5s;
  color: green;
}
.animated li:hover::before {
  color: white;
  transform: scale(1.5);
}

Counters allow for a lot of different types of numbering. You can, for example, add a leading zero by using counter(list, decimal-leading-zero), use Roman numerals with counter(list, lower-roman) or even Greek ones with counter(list, lower-greek).

If you want to see all of that in action, check out this Fiddle:

Pretty simple, and quite powerful. Here are some more places to read up on this:

Categories: Mozilla-nl planet

Jonathan Watt: Converting Mozilla's SVG implementation to Moz2D - part 2

Mozilla planet - Wed, 19/11/2014 - 17:22

This is part 2 of a pair of posts describing my work to convert Mozilla's SVG implementation to directly use Moz2D. Part 1 provided some background information and details about the process. This post will discuss the performance benefits of the conversion of the SVG code and future work.

Benefits

For the most part the performance improvements from the conversion to Moz2D were gradual; as code was incrementally converted, little by little gfxContext overhead was avoided. On doing an audit of our open SVG performance bugs it seems that painting performance is no longer one of the reasons that we perform poorly, except for when we use cairo-backed DrawTargets (Linux, Windows XP and other Windows versions with blacklisted drivers), and with the exception of one bug that needs further investigation. (See below for the issues that still do cause SVG performance problems.)

Besides the incremental improvements, there have been a couple of interesting perf bumps that are worth mentioning.

The biggest perf bump by far came when I converted the code that does the actual filling and stroking of SVG geometry to directly use a DrawTarget. The time taken to render this map then dropped from about 15s to 1-2s on my Mac. On the same machine Chrome Canary shows a blank window for about 5s, and then takes a further 20s to render. Now, to be honest, this improvement will be down to something pathological that has been removed rather than being down to avoiding Thebes overhead. (I haven't got to the bottom of exactly what that was yet.) The DrawTarget object being drawn to is ultimately the same object, and Thebes overhead isn't likely to be more than a few percent of any time spent in this code. Nevertheless, it's still a welcome win.

Another perf bump that came from the Moz2D conversion was that it enabled us to cache path objects. When using Thebes, paths are built up using gfxContext API calls and the consumer never gets to touch the resulting path. This prevents the consumer from keeping hold of the path and reusing it in future. This can be a disadvantage when the path is reused frequently, especially when D2D is being used where path creation is relatively expensive. Converting to Moz2D has allowed the SVG code to hold on to the path objects that it creates and reuse them. (For example, in addition to their obvious use during rasterization, paths might be reused for bounds calculations (think invalidation areas, objectBoundingBox content, getBBox() calls) and hit-testing.) Caching paths made us noticeably more responsive on this cool data visualization (temporarily mirrored here while the site is down) when mousing over the table rows, and gave us a +25% boost on this NYT article, for example.

For those of you that are interested in Talos, I did take a look at the SVG test data, but the unfortunately frequent up-and-down of unrelated regressions and wins makes it impossible to use that to show any overall impact of Moz2D conversion on the Talos tests. (Since the beginning of the year the times on Windows have improved slightly while on Mac they have regressed slightly.) The incremental nature of most of the work also unfortunately meant that the impact of individual patches couldn't usually be distinguished from the noise in Talos' results. One notable exception was the change to make SVG geometry use a path object directly which resulted in an improvement in the region of 6% for the svg_opacity suite on Windows 7 and 8.

Other than the performance benefits, several parts of the SVG implementation that were pretty messy and hard to get into and debug have become a lot more approachable. This has already allowed me to fix various SVG bugs that would otherwise have taken a lot longer to work out, and I hope it makes the code easier to approach for devs who aren't so familiar with it.

One final note on performance for any of you that will do your own testing to compare builds: note that the enabling of e10s and tiled layers has caused significant changes in performance characteristics. You might want to turn those off.

Future SVG work

As I noted above there are still SVG performance issues unrelated to graphics speed. There are three sources of significant SVG performance issues that can make Mozilla perform poorly on SVG relative to other implementations. There is our lack of hardware acceleration of SVG filters; there's the issue of display list overhead dwarfing painting on SVGs that contain huge numbers of elements (display lists being an implementation detail, and one that gave us very large wins in many other cases); and there are a whole bunch of "strange" bugs that I expect are related to our layers infrastructure that are causing us to over invalidate (and thus do work painting when we shouldn't need to).

Currently these three issues are not on a schedule, but as other, higher-priority Mozilla work gets ticked off I expect we'll add them.

Future Moz2D work

The performance benefits from the Moz2D conversion on the SVG code do seem to have been positive enough that I expect that we will continue converting the rest of layout in the future. As usual, it will all depend on relative priorities though.

One thing that we should do is audit all the code that creates DrawTargets to check for backend type compatibility. Mixing hardware and software backed DrawTargets when we don't need to can cause us to unwittingly be taking big performance hits due to readback from and/or upload to the GPU. I fixed several instances of mismatch that I happened to notice during the conversion work, and in one case accidentally introduced one which fortunately was caught because it caused a 10-25% regression in a specific Talos test. We know that we still have outstanding bugs on this (such as bug 944571) and I'm sure there are a bunch of cases that we're unaware of.

I mentioned above that painting performance is still a significant issue on machines that fall back to using cairo backed DrawTargets. I believe that the Graphics team's plan to solve this is to finish the Skia backend for Moz2D and use that on the platforms that don't support D2D.

There are a few things that need to be added to Moz2D before we can completely get rid of gfxContext. The main thing we're missing is a push-group API on DrawTarget. This is the main reason that gfxContext actually wraps a stack of DrawTargets, which has all sorts of irritating fallout. Most annoyingly, it makes it hazardous to set clip paths or transforms directly on DrawTargets that may be accessed via a wrapping gfxContext before the DrawTarget's clip stack and transform have been restored, and it's why I had to continue passing gfxContexts to a lot of code that now only paints directly via the DrawTarget.

The only Moz2D design decision that I've found myself to be somewhat unhappy with is the decision to make patterns relative to user-space. This is what most other hardware accelerated libraries do, but I don't think it's a good fit for 2D browser rendering. Typically crisp rendering is very important to web content, so we render patterns assuming a specific user-space to device-space transform and device space pixel alignment. To maintain crisp rendering we have to make sure that patterns are used with the device-space transform that they were created for, and having to do this manually can be irksome. Anyway, it's a small detail, but something I'll be discussing with the Graphics guys when I see them face-to-face in a couple of weeks.

Modulo the two issues above (and all the changes that I and others had made to it over the last year) I've found the Moz2D API to be a pleasure to work with and I feel the SVG code is better performing and a lot cleaner for converting to it. Well done Graphics team!

Categories: Mozilla-nl planet

Jonathan Watt: Converting Mozilla's SVG implementation to Moz2D - part 1

Mozilla planet - Wed, 19/11/2014 - 17:18

One of my main work items this year was the conversion of the graphics portions of Mozilla's SVG implementation to directly use Moz2D APIs instead of using the old gfxContext/gfxASurface Thebes APIs. This pair of posts will provide some information on that work. This post will give some background and information on the conversion process, while part 2 will provide some discussion about the benefits of the work and what steps we might want to carry out next.

For background on why Mozilla is building Moz2D (formerly called Azure) and how it can improve Mozilla's performance see some of the earlier posts by Joe, Bas and Robert.

Early Moz2D development

When Moz2D was first being put together it was initially developed and tested as an alternative rendering backend for Mozilla's implementation of HTML <canvas>. Canvas was chosen as the initial testbed because its drawing is largely self-contained, it requires a relatively small number of features from any rendering backend, and because we knew from profiling that it was being particularly impacted by Thebes/cairo overhead.

As Moz2D started to become more stable, Thebes' gfxContext class was extended to allow it to wrap a Moz2D DrawTarget (prior to that it was backed only by an instance of a Thebes gfxASurface subclass, in turn backed by a cairo_surface_t). This might seem a bit strange since, after all, Moz2D is supposed to replace Thebes, not be wrapped by it adding yet another layer of abstraction and overhead. However, it was an important step to allow the Graphics team to start testing Moz2D on Mozilla's more complicated, non-canvas, rendering scenarios. It allowed many classes of Moz2D bugs and missing Moz2D features to be worked on/out before beginning a larger effort to convert the masses of non-canvas rendering code to Moz2D.

In order to switch any of the large number of instances of gfxContext to be backed by a DrawTarget, any code that might encounter that gfxContext and try to get a gfxASurface from it had to be updated to handle DrawTargets too. For example, lots of forks in the code had to be added to BasicLayerManager, and gfxFont required a new GlyphBufferAzure class to be written. As this work progressed some instances of Thebes gfxContexts were permanently flipped to being backed by a Moz2D DrawTarget, helping keep working Moz2D code paths from regressing.

SVG, the next Guinea pig

Towards the end of 2013 it was felt that Moz2D was sufficiently ready to start thinking about converting Mozilla's layout code to use Moz2D directly and eliminate its use of gfxContext API. (The layout code being the code that decides where and how most things are placed on the screen, and by far the biggest consumer of the graphics code.) Before committing a lot of engineering time and resources to a large scale conversion, Jet wanted to convert a specific part of the layout code to ensure that Moz2D could meet its needs and determine what performance benefits it could provide to layout. The SVG code was chosen for this purpose since it was considered to be the most complicated to convert (if Moz2D could work for SVG, it could work for the rest of layout).

Stage 1 - Converting all gfxContexts to wrap a DrawTarget

After drawing up a rough list of the work to convert the SVG code to Moz2D I got stuck in. The initial plan was to add code paths to the SVG code to check for and extract DrawTargets from gfxContexts that were passed in (if the gfxContext was backed by one) and operate directly on the DrawTarget in that case. (At some future point the Thebes forks could then be removed.) It soon became apparent that these forks were often not how we would want the code to be structured on completion of Moz2D conversion though. To leverage Moz2D more effectively I frequently found myself wanting to refactor the code quite substantially, and in ways that were not compatible with the existing Thebes code paths. Rather than spending months writing suboptimal Moz2D code paths only to have to rewrite things again when we got rid of the Thebes paths I decided to save time in the long run and first make sure that any gfxContexts that were passed into SVG code would be wrapping a DrawTarget. That way maintaining Thebes forks would be unnecessary.

It wasn't trivial to determine which gfxContexts might end up being passed to SVG code. The complexity of the code paths and the virtually limitless permutations in which Web content can be combined meant that I only identified about a dozen gfxContexts that could not end up in SVG code. As a result I ended up working to convert all gfxContexts in the Mozilla code. (The small amount of additional work to convert the instances that couldn't end up in SVG code allowed us to reduce a whole bunch of code complexity (and remove a lot of then dead code) and simplified things for other devs working with Thebes/Moz2D.)

Ensuring that all the gfxContexts that might be passed to SVG code would be backed by a DrawTarget turned out to be quite a task. I started this work when relatively few gfxContexts had been converted to wrap a DrawTarget, so unsurprisingly things were a bit rough. I tripped over several Moz2D bugs at this point. Mostly, though, the headaches were caused by the amount of code that assumed gfxContexts wrapped a gfxASurface/cairo_surface_t/platform library object and could provide it, possibly getting or then passing those objects from/to seemingly far corners of the Mozilla code. Particularly challenging was converting the image code, where the sources and destinations of gfxASurfaces turned out to be particularly far-reaching, requiring the code to be converted incrementally in 34 separate bugs. Doing this without temporary performance regressions was tricky.

Besides preparing the ground for the SVG conversion, this work resulted in a decent number of performance improvements in its own right.

Stage 2 - Converting the SVG code to Moz2D

Converting the SVG code to Moz2D was a lot more than a simple case of switching calls from one graphics API to another. The stateful context provided by a retained mode API like Thebes or cairo allows consumer code to set context state (for example, fill pattern, or anti-alias mode) in points of the code that can seem far removed from other code that takes an action (for example, filling a path) that relies on that state having been set. The SVG code made use of this a lot since in many cases (for example, when passing things through for callbacks) it simplified the code to only pass a context rather than a context and some state to set.

This wouldn't have been all that bad if it wasn't for another fundamental difference between Thebes/cairo and Moz2D -- in Moz2D, paths and patterns are relative to user-space, whereas in Thebes/cairo they are relative to device-space. Whereas with Thebes we could set a path/pattern and then change the transform before drawing (perhaps, say, to apply a clip in a different space) and the position of the path/pattern would be unaffected, with Moz2D such a transform change would change (and thus break) the rendering. This, incidentally, was why the SVG code was expected to be the hardest area to switch to Moz2D. Partly for historic reasons, and partly because some of the features that SVG supports lend themselves to it, the SVG code did a lot of setting state, changing transforms, setting some more state and then drawing. Often the complexity of the code made it difficult to figure out which code could be setting relevant state before a transform change, requiring more involved refactoring. On the plus side, sorting this out has made parts of the code significantly easier to understand, and has been something I've wanted to find the time to do for years.

Benefits and next steps

To continue reading about the performance benefits of the conversion of the SVG code and some possible next steps continue to part 2.

Categories: Mozilla-nl planet

Giorgio Maone: Avast, you’re kidd… killing me - said NoScript >:(

Mozilla planet - Wed, 19/11/2014 - 14:20

If NoScript keeps disappearing from your Firefox, Avast! Antivirus is likely the culprit.
It’s gone Berserk and mass-deleting add-ons without a warning.
I’m currently receiving tons of reports by confused and angry users.
If the antivirus is dead (as I’ve been preaching for 7 years), it looks like it’s not dead enough yet.

Categories: Mozilla-nl planet

Christian Heilmann: What I am looking for in a guest writer on this blog

Mozilla planet - Wed, 19/11/2014 - 11:58

Simple: go try guest writing someplace else. This is my personal blog and if I am interested in something, I come to you and do it interview style in order to point to your work or showcase something amazingly cool that you have done.

anteater-sound-of-music

Please, please, please with cherry on top, stop sending me emails like this one:

Hi,

I’m {NAME}, a freelance writer/education consultant. I found “Christian Heilmann” on a Google search and thought I would contact you to see if you would like to work with me. I own a website on Job Application Service that I’m currently promoting for myself. I thought we could benefit each other somehow? If you are interested, I’d be happy to write a very high-quality article for your site and get a couple permanent links from it? While your website is benefiting from my high-quality article, I’m getting links from your site, making this proposition mutually beneficial.
Shall I write an article that matches your niche and send it across for your review or do you need me to write on a particular topic that interests you and your readers, I’m open to any topic, thoughts please?
If this does not interest you, I am sorry to have bothered you. Have a good day! If this does great I hope we can build a long-term business relationship together! If you wish to have a chat on the phone please let me know your phone number and when a good time to call is :) If you’d like, I can share samples with you.
Regards,
{FIRSTNAME}

I am very happy you know how to enter a name in Google and find the blog of that person. That’s a good start. Nobody got hurt, you didn’t overdo it with the research or spend too much effort before asking for my phone number and pointing out just how much you would get out of this “mutually beneficial relationship”. Seriously, I would love to be a fly on the wall when you try dating.

I’ve worked hard on this blog, that’s why it has some success or is at least found. Go work on yours yourself. That’s how it should be. A blog is you. Just like this one is mine.

Categories: Mozilla-nl planet

Gervase Markham: BMO show_bug Load Times 2x Faster Since January

Mozilla planet - Wed, 19/11/2014 - 11:42

The load time for viewing bugs on bugzilla.mozilla.org has got 2x faster since January. See this tweet for graphical evidence.

If you are looking for a direction in which to send your bouquets, glob is your man.

Categories: Mozilla-nl planet

David Rajchenbach Teller: The Future of Promise

Mozilla planet - Wed, 19/11/2014 - 11:36
If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking. At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise (as well as a little code using an older, long deprecated, implementation of Promise). This is somewhat confusing, but the good news is that we are working hard at making it simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our own implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons. In parallel, Promise was specified, submitted for standardisation, implemented in Firefox, and finally standardised. This second implementation is what we call DOM Promise. This implementation is starting to be used in many places on the web. Having two implementations of Promise with the same feature set doesn’t make sense. Fortunately, Promise.jsm was designed to match the API of Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical. Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two aspects:
  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.
In both topics, we are hard at work bringing DOM Promise to feature parity with Promise.jsm and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.
API differences
  • Promise.jsm offers an additional function Promise.defer, which didn’t make it to standardization.
This function may easily be written on top of DOM Promise, so this is not a hard blocker. We are going to add this function to a module `PromiseUtils.jsm`.
  • Also, there is a bug in DOM Promise that gives it slightly unexpected behavior in a few edge cases. This should not hit developers who use DOM Promise as expected, but it might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
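For illustration, here is a minimal sketch of how a `defer` helper can be written on top of standard (DOM) Promise. The `{promise, resolve, reject}` shape mirrors `Promise.defer()` from Promise.jsm; this is not the actual PromiseUtils.jsm source, just plain JavaScript that runs anywhere Promise is available:

```javascript
// Minimal sketch of the deferred pattern on top of standard Promise.
// The returned object exposes the promise plus its resolve/reject
// functions, matching the shape of Promise.defer() from Promise.jsm.
function defer() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

// Usage: resolve the promise from outside the executor.
const deferred = defer();
deferred.promise.then(value => console.log("resolved with", value));
deferred.resolve(42);
```

The usual caveat applies: the explicit executor form (`new Promise(...)`) is generally preferred, and `defer` is mainly useful when resolution has to happen far away from where the promise is created.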

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that defers all implementations to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.
We will inform everyone once DOM Promise becomes the right choice for everything. If your code doesn’t use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
Categorieën: Mozilla-nl planet

Doug Belshaw: Native apps, the open web, and web literacy

Mozilla planet - wo, 19/11/2014 - 10:07

In a recent blog post, John Gruber argues that native apps are part of the web. This was in response to a WSJ article in which Christopher Mims stated his belief that the web is dying; apps are killing it. In this post, I want to explore the relationship between native apps and web literacy. This is important as we work towards a new version of Mozilla’s Web Literacy Map. It’s something that I explored preliminarily in a post earlier this year entitled What exactly is ‘the mobile web’? (and what does it mean for web literacy?). This, again, was in response to Gruber.

Native app

This blog focuses on new literacies, so I’ll not be diving too much into technical specifications, etc. I’m defining web literacy in the same way as we do with the Web Literacy Map v1.1: ‘the skills and competencies required to read, write and participate on the web’. If the main question we’re considering is ‘are native apps part of the web?’ then the follow-up question is ‘what does this mean for web literacy?’

Defining our terms

First of all, let’s make sure we’re clear about what we’re talking about here. It’s worth saying right away that 'app’ is almost always used as a shorthand for 'mobile app’. These apps are usually divided into three categories:

  1. Native app
  2. Hybrid app
  3. Web app

From this list, it’s probably easiest to describe a web app:

A web application or web app is any software that runs in a web browser. It is created in a browser-supported programming language (such as the combination of JavaScript, HTML and CSS) and relies on a web browser to render the application. (Wikipedia)

It’s trickier to define a native app, but the essence can be seen most concretely through Apple’s ecosystem, which includes iOS and the App Store. Developers use a specific programming language and work within constraints set by the owner of the ecosystem. By doing so, native apps get privileged access to all of the features of the mobile device.

A hybrid app is a native app that serves as a 'shell’ or 'wrapper’ for a web app. This is usually done for the sake of convenience and, in some cases, speed.

The boundary between a native app and a web app used to be much clearer and more distinct. However, the lines are increasingly blurred. For example:

  • APK files (i.e. native apps) can be downloaded from the web and installed on Android devices.
  • Developments as part of Firefox OS mean that web technologies can securely access low-level functionality of mobile devices (e.g. camera, GPS, accelerometer).
  • The specifications for HTML5 and CSS3 allow beautiful and engaging web apps to be used offline.
Web literacy and native apps

As a result of all this, it’s probably easier these days to differentiate between a native app and a web app by talking about ecosystems and silos. Understanding it this way, a native app is one that is built specifically using the technologies and is subject to the constraints of a particular ecosystem. So a developer creating an app for Apple’s App Store would have to go through a different process and use a different programming language than if they were creating one for Google’s Play Store. And so on.

Does this mean that we need to talk of a separate 'literacy’ for each ecosystem? Should we define 'Google literacy’ as the skills and competencies required to read, write and participate in Google’s ecosystem? I don’t think so. While there may be variations in the way things are done within the different ecosystems, these procedural elements do not constitute 'literacy’.

What we’re aiming for with the Web Literacy Map is a holistic overview of the skills and competencies people require when using the web. I think at this juncture we’ve got a couple of options. The first would be to define 'the web’ more loosely to really mean 'the internet’.

This is John Gruber’s preferred option. He thinks we should focus less on web browsers (i.e. HTML) and more on the connections (i.e. HTTP). For example, in a 2010 talk he pointed out a difference between 'web apps’ and 'browser apps’. His argument rested on a technical point, which he illustrated with an example. When a user scrolls through their timeline using the Twitter app for iPhone, they’re not using a web browser, but they are using HTTP technologies. This, said Gruber, means that ecosystems such as Apple’s and the web are not in opposition to one another.

While this is technically correct, it’s a red herring. HTML does matter because the important thing here is the open web. Check out Gruber’s sleight of hand in this closing paragraph:

Arguments about “open” and “closed” often devolve into unresolvable cross-talk where the two sides have different definitions of what open and closed really mean. But the weird thing about a truly open platform is that its openness allows closed things to be built on top of it. In broad strokes, that’s why GNU/GPL software isn’t “open” in the way that BSD software is (and why Richard Stallman outright rejects the term “open source”). If you expand your view of “the web” from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.

I think Gruber needs to read up on enclosure and the Commons. To use a 16th-century English agricultural metaphor, the important thing isn’t that the grass is growing in the field, it’s that it’s been fenced off and people are excluded.

A way forward

A second approach is to double-down on what makes the web different and unique. Mozilla’s mission is to promote openness, innovation & opportunity on the web and the Web Literacy Map is a platform for doing this. Even if we don’t tie development of the Web Literacy Map explicitly to the Mozilla manifesto it’s still a key consideration. Therefore, when we’re talking about 'web literacy’ it’s probably more accurate to define it as 'the skills and competencies required to read, write and participate on the open web’.

What do we mean by the 'open web’? While Tantek Çelik approaches it from a technical standpoint, I like Brad Neuberg’s (2008) focus on the open web as a series of philosophies:

Decentralization - Rather than controlled by one entity or centralized, the web is decentralized – anyone can create a web site or web service. Browsers can work with millions of entities, rather than tying into one location. It’s not the Google or Microsoft Web, but rather simply the web, an open system that anyone can plug into and create information at the end-points.
Transparency - An Open Web should have transparency at all levels. This includes being able to view the source of web pages; having human-readable network identifiers, such as URLs; and having clear network entry points, such as HTTP and REST exposes.
Hackability - It should be easy to lash together and script the different portions of this web. MySpace, for example, allows users to embed components from all over the web; Google’s AdSense, another example, allows ads to be integrated onto arbitrary web pages. What would you like to hack together, using the web as a base?
Openness - Whether the protocols used are de facto or de-jure, they should either be documented with open specifications or open code. Any entity should be able to implement these standards or use this code to hook into the system, without penalty of patents, copyright of standards, etc.
From Gift Economies to Free Markets - The Open Web should support extreme gift economies, such as open source and Wikis, all the way to traditional free market entities, such as Amazon.com and Google. I call this Freedom of Social Forms; the tent is big enough to support many forms of social and economic organization, including ones we haven’t imagined yet.
Third-Party Integration - At all layers of the system third-parties should be able to hook into the system, whether creating web browsers, web servers, web services, etc.
Third-Party Innovation - Parties should be able to innovate and create without asking the powers-that-be for permission.
Civil Society and Discourse - An open web promotes both many-to-many and one-to-many communication, allowing for millions of conversations by millions of people, across a range of conversation modalities.
Two-Way Communication - An Open Web should allow anyone to assume three different roles: Readers, Writers, and Code Hackers. Readers read content, Writers write content, and Code Hackers hack new network services that empower the first two roles.
End-User Usability and Integration - One of the original insights of the web was to bind all of this together with an easy to use web browser that was integrated for ease of use, despite the highly decentralized nature of the web. The Open Web should continue to empower the mainstream rather than the tech elite with easy to use next generation browsers that are highly usable and integrated despite having an open infrastructure. Open should not mean hard to use. Why can’t we have the design brilliance of Steve Jobs coupled with the geek openness of Steve Wozniak? Making them an either/or is a false dichotomy.

Conclusion

The Web Literacy Map describes the skills and competencies required to read, write and participate on the open web. But it’s also prescriptive. It’s a way to develop an open attitude towards the world:

Open is a willingness to share, not only resources, but processes, ideas, thoughts, ways of thinking and operating. Open means working in spaces and places that are transparent and allow others to see what you are doing and how you are doing it, giving rise to opportunities for people who could help you to connect with you, jump in and offer that help. And where you can reciprocate and do the same.

Native apps can militate against the kind of reciprocity required for an open web. In many ways, it’s the 21st century enclosure of the commons. I believe that web literacy, as defined and promoted through the Web Literacy Map, should not consider native apps part of the open web. Such apps may be built on top of web technologies, they may link to the open web, but native apps are something qualitatively different. Those who want to explore what reading, writing and participating means in closed ecosystems have other vocabularies – provided by media literacy, information literacy, and digital literacy – with which to do so.

Comments? Questions? Direct them here: doug@mozillafoundation.org or discuss this post in the #TeachTheWeb discussion forum

Categorieën: Mozilla-nl planet

Wladimir Palant: "Unloading" frame scripts in restartless extensions

Mozilla planet - wo, 19/11/2014 - 08:51

The big news is: e10s is coming to desktop Firefox after all, and it was even enabled in the nightly builds already. And while most of the time add-ons continue working without any changes, this doesn’t always work correctly. Plus, using the compatibility shims faking a single-process environment might not be the most efficient approach. So reason enough for add-on authors to look into the dreaded and underdocumented message manager and start working with frame scripts again.

I tried porting a simple add-on to this API. The good news: the API hasn’t changed since Firefox 17, so the changes will be backwards-compatible. And the bad news? Well, there are several.

  • Bug 1051238 means that frame scripts are cached — so when a restartless add-on updates, the old frame script code will still be used. You can work around that by randomizing the URL of your frame script (e.g. add "?" + Math.random() to it).
  • Bug 673569 means that all frame scripts run in the same shared scope prior to Firefox 29, so you should make sure there are no conflicting global variables. This can be worked around by wrapping your frame script in an anonymous function.
  • Duplicating the same script for each tab (originally there was only a single instance of that code) makes me wonder about the memory usage here. Sadly, I don’t see a way to figure that out. I assume that about:memory shows frame scripts under the outOfProcessTabChildGlobal entry. But due to the shared scope there is no way to see individual frame scripts there.
  • Finally, you cannot unload frame scripts if your restartless extension is uninstalled or disabled. messageManager.removeDelayedFrameScript() will merely make sure that the frame script won’t be injected into any new tabs. But what about tabs that are already open?

Interestingly, it seems that Mark Finkle was the only one to ask himself that question so far. The solution is: if you cannot unload the frame script, you should at least make sure it doesn’t have any effect. So when the extension unloads it should send a "myaddon@example.com:disable" message to the frame scripts and the frame scripts should stop doing anything.

So far so good. But isn’t there a race condition? Consider the following scenario:

  • An update is triggered for a restartless extension.
  • The old version is disabled and broadcasts a “disable” message to the frame scripts.
  • The new version is installed and starts its frame scripts.
  • The “disable” message arrives and disables all frame scripts (including the ones belonging to the new extension version).

The feedback I got from Dave Townsend says that this race condition doesn’t actually happen and that loadFrameScript and broadcastAsyncMessage are guaranteed to affect frame scripts in the order called. It would be nice to see this documented somewhere; until then it is an implementation detail that cannot be relied on. The work-around I found: since the frame script URL is randomized anyway (due to bug 1051238), I can send it along with the “disable” message:

messageManager.broadcastAsyncMessage("myaddon@example.com:disable", frameScriptURL);

The frame script then processes the message only if the URL matches its own URL:

addMessageListener("myaddon@example.com:disable", function(message) {
  if (message.data == Components.stack.filename) {
    // ...
  }
});
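As an aside, the shared-scope work-around mentioned earlier (wrapping the frame script in an anonymous function, for bug 673569) can be sketched in plain JavaScript; the same wrapping applies verbatim inside a frame script:

```javascript
// Work-around for the shared frame-script scope (bug 673569, before
// Firefox 29): wrap everything in an immediately-invoked function
// expression so top-level variables stay private and cannot clash
// with other add-ons' frame scripts.
(function() {
  var localState = { clicks: 0 };  // private to this frame script
  // ... message listeners and the rest of the frame-script logic go here ...
})();
// Outside the wrapper, `localState` is undefined, so two frame scripts
// using the same variable name can no longer clobber each other.
```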
Categorieën: Mozilla-nl planet

Meeting Notes: Thunderbird: 2014-11-18

Thunderbird - wo, 19/11/2014 - 05:00
Thunderbird meeting notes 2014-11-18

Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

(partial list)
rkent
florian
wsmwk
jcranmer
mmelin
aceman
theone
clokep
roland

Action items from last meetings
  • wsmwk: Get the Thunderbird 38 bugzilla flag created
    • not heard from standard8
Critical Issues
  • Several critical bugs keeping us from moving from 24 to 31. Please leave these here until they’re confirmed fixed.
    • Frequent POP3 crasher bug 902158 On current aurora and beta?
      • we won’t have data/insight till next week, assuming the relevant builds are built. crash-stats for nightly builds is not useful – direct user testing of the fixed build is required
    • Self-signed certs are having difficulty bug 1036338 SOLVED! REOPENED according to the bug?
    • Auto-complete bugs? bug 1045753 waiting for esr approval; bug 1043310 waiting for review, still
    • Auto-complete improvements (bug 970456, bug 1091675, bug 118624) – some of those could go into esr31
    • bug 1045958 TB 31 jp does not display folder pane with OS X

Why are we throttled? Because 1) waiting for TB 31.3, 2) still hoping for bug 1045958, and 3) need the auto-complete bug approved. Dec 1/2 is now release date.

Upcoming Round Table wsmwk
  • got Penelope (pre-OSE eudora) removed from AMO
  • shepherding bug 1045958 TB 31 jp does not display folder pane with OS X
  • secured potential release drivers
  • “Get Involved” is broken for Thunderbird. TB isn’t offered at https://www.mozilla.org/en-US/contribute/signup/, and it’s unclear who in Thunderbird gets notified. In contact with Larissa Shapiro.

jcranmer
  • Looking into eliminating the comm-central build system bug 1099430
  • Trying to see if I can get some whines set up to listen for potential TB compat changes (e.g., checking the addon-compat keyword)
  • We have telemetry on Nightly for the first time since TB 28!
  • Irving is working through reviews of bug 998191 \o/
clokep
  • Google Talk does not work on comm-central due to disabling of RC4 SSL ciphers, see bug 1092701
    • Some of the security guys have contacted Googlers, apparently an internal ticket was opened with the XMPP team
  • Filed a bug (with a patch) to have firefoxtree work with comm-central, this will automatically add a tag (called “comm”) to the tip of comm-central bug 1100692 and is useful if you’re playing with bookmarks instead of mq
  • Still haven’t fully been able to get Additional Chat Protocols to compile…(Windows still failing)
  • WebRTC work is waiting for reviews from Florian
mkmelin
  • lots of reviews, queue almost empty, yay!
  • finally landed bug 970456
  • bug 1074125 fixed, plan to handle some of the m-c encoding removals next
  • bug 1074793 fixed, for tb we need to set a pref for it to take effect (bug 1095893 awaiting review)
aceman
  • revived effort on 4GB+ folders together with rkent
  • landed a rewrite of the attachment reminder engine: bug 938829 (NOT for ESR). Please mark regressions against that bug. First one is bug 1099866.
Support team
  • [roland] working on Thunderbird profile article because a new volunteer contributor from the SUMO buddy program rewrote it! will review changes!
Action Items
  • wsmwk: Thunderbird start page for anniversary, with localizations
  • wsmwk: Get Involved, get the “Thunderbird path” operating
Retrieved from “https://wiki.mozilla.org/index.php?title=Thunderbird/StatusMeetings/2014-11-18&oldid=1034573”

Categorieën: Mozilla-nl planet

Raniere Silva: MathML November Meeting

Mozilla planet - wo, 19/11/2014 - 03:00
MathML November Meeting

Note

Sorry for the delay in writing this.

This is a report about the Mozilla MathML November IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on January 7th at 8pm UTC (check the time at your location here). Please add topics in the PAD.

Note

Yes. Our December meeting was cancelled. =(

Leia mais...

Categorieën: Mozilla-nl planet

Christie Koehler: Happy 10th Birthday, MozillaWiki!

Mozilla planet - wo, 19/11/2014 - 02:16

Last Monday, Firefox turned 10 years old. Thunderbird turns 10 on 7 December.

This week we celebrate another birthday: MozillaWiki turns 10 on Wednesday, 18 November!

I’m immensely proud of our wiki, its ten year history, and of all the work Mozillians do to make MozillaWiki a hub of collaboration and a living memory for the Mozilla Project.

To show our appreciation for your efforts over the last decade, the MozillaWiki team has created a 10th Birthday badge.

MozillaWiki 10th Birthday Badge

All you need to do to join in the celebration and claim the badge is log in to MozillaWiki. Once you’ve done that, you’ll see a link to claim the badge at the top of the page. Don’t have a MozillaWiki account? No worries! Create one during this Birthday celebration and you can claim the badge too.

A bit of MozillaWiki history

Before I talk about all the good work we’ve done, and what we have planned for the remainder of this year and beyond, let’s take a quick stroll through the last 10 years. Thank you Internet Archive for hosting these snapshots of the wiki!

July 2004

The earliest snapshot I could find of the domain wiki.mozilla.org was from July 2004. It looks like we were hosting separate wiki installations, which may or may not have been Mediawiki.

wiki.mozilla.org July 2004

wiki.mozilla.org/GeckoDev August 2004

November-December 2004

According to WikiApiary, the current installation of MozillaWiki was created on 18 November 2004. The closest snapshot to this date in the Internet Archive is 11 December 2004:

MozillaWiki December 2004

April 2005

By April 2005, the wiki had been upgraded, had a new theme (Cavendish), and had started using Apache rewrite rules to make the URLs pretty (e.g. no index.php).

Mozilla Wiki, April 2005

August 2008

Three years later, in April 2008, we were still rockin’ the Cavendish theme and the Main Page had some more content, including links to the weekly project call that continues to this day.

MozillaWiki August 2008

December 2010

We started tracking releases in December 2007 (see version). Here’s what the Releases page looked like in December 2010.

MozillaWiki December 2010 Releases page

May 2011

In May 2011, after 6 years of service, Cavendish was retired as the default skin and replaced with GMO.

MozillaWiki May 2011 – New GMO skin

July 2012

A year later, July 2012, MozillaWiki looked much the same.

MozillaWiki July 2012

July 2013

By July 2013, the Main Page was edited to include a few recent changes, but otherwise looked very similar.

MozillaWiki July 2013

August 2014

By August 2014, the revitalization of the MozillaWiki was in full swing and we were preparing for a major update to both the skin (GMO to Vector) as well as the underlying software (Mediawiki 1.19 to 1.23). We also had made significant changes to the content of the Main Page based on results of our recent user survey.

MozillaWiki August 2014

November 2014

Here’s what the wiki looks like today, 17 November, the day before its birthday. We’re running a slightly modified Vector skin and the Mediawiki 1.23.x branch.

MozillaWiki November 2014

MozillaWiki today

Pages, visitors and accounts

As of 16 November, MozillaWiki has 115,912 pages, all public, and nearly 10k uploaded files. About 630 people per month, on average, log in and make contributions to the wiki. These include both staff and volunteers. Want to track these stats yourself? Visit Special:Statistics.

The number of daily visitors ranges from 9k-30k, with an average likely around 13-14k. Who are these visitors? According to our analytics software, we get visitors from all over the world, with the greatest concentration being from the US, Canada and UK.

The wiki has over 330,000 registered user accounts. I estimate that about 300k of these are inactive spam accounts, so the real number for user accounts is probably closer to 30,000.

What kind of content is hosted on MozillaWiki?

All kinds of project activity is coordinated and recorded on the wiki. This includes activity related to our products: Firefox, Firefox OS, WebMaker, etc. It also includes community activities such as Reps, Firefox Student Ambassadors, etc. Most project activities have some representation on MozillaWiki. People also use the wiki to track projects and goals on an individual level. In this regard, it served as a place for Mozillians’ profiles long before we had mozillians.org.

The MozillaWiki isn’t set up for localized content now, but this hasn’t stopped our localization communities from translating content. Every day a significant portion of account requests come from volunteers from regional communities and are often in a language other than English. In 2015, depending on resources available, we plan to significantly improve support for localized content on MozillaWiki.

2014 Accomplishments

This year we’ve made significant progress towards revitalizing MozillaWiki.

Accomplishments include:

  • Forming a team of dedicated volunteers to lead a revitalization effort.
  • Creating an About page for MozillaWiki that clarifies its scope and role in the project, including what is appropriate content and how to report issues.
  • Fixing years-old bugs that cause significant usability problems (table sorting, unavailability of Wikieditor, etc.).
  • Identifying a product owner for MozillaWiki and creating a Module for it, lead by a mix of staff and contributors.
  • Halting the creation of new spam and cleaning up significant amounts of spam content.
  • Upgrading Mediawiki from 1.19.x branch to 1.23.x branch AND changing the default theme without any significant downtime or disruptions to users.
  • Organizing a user survey and using those results to guide much of our roadmap, including the redesign of the Main Page and sidebar navigation.

Thank you everyone who has been a part of this work!

There’s still plenty to do, and many ways to contribute

We’ve made so much progress on the technical and infrastructure debt of MozillaWiki that we’re now ready to focus on improving content and collaboration mechanisms.

How can I help?

There are many ways you can help, and we have contribution opportunities for all kinds of skill levels and time commitments.

We’re working on documenting and organizing these contribution opportunities here: https://wiki.mozilla.org/MozillaWiki:Contribute so check that page often.

Join our mailing-list or community call

If you’d like to help us organize those opportunities, or have other ideas for improving the wiki, join one of our MozillaWiki Team communication channels or one of our community meetings. These meetings are held twice a month on Tuesday at 8:30 PST / 15:30 UTC. Our next meeting is 16 December. All who are interested in contributing to the wiki are welcome.

In the meantime, log in to MozillaWiki and celebrate its birthday with us by claiming the birthday badge!

Categorieën: Mozilla-nl planet

Christian Heilmann: We have a massive recruitment problem

Mozilla planet - ti, 18/11/2014 - 23:26

A few months ago, I flew over to see my parents for their 50th wedding anniversary. As some of you may know, I have a humble background. My dad was a coal miner and then factory worker and my mother has always been a home maker / housewife. I am the only one in my family that went to college and I skipped university as the thing to do was to make money in a job as soon as you are 18.

It was humbling and almost embarrassing to have conversations with my family. Half of them are either unemployed or worried about their jobs. The rest are unhappy in their jobs but see no way to change that as they need the security. Finding joy in family life and leisure time is more important than enjoying the work. A job is a job, you got to do what you got to do and all that.

 you gotta do what you gotta do

That’s why it feels surreal to come back into “our world” and get offers for jobs I don’t want. A lot of them. Some with ridiculous amounts of money offered and most with perks that would make my family blush and sense a trap.

Why are we not recruiter compatible and vice versa?

We’re lucky to be that sought after and yet it seems there is no happy symbiosis between us and recruiters. On the contrary, as soon as you even mention recruiting most of us techies start ranting.

I feel uneasy doing that. I feel like an arrogant ass and I feel that we should be more grateful about the opportunities we get. The relationship of recruiter and job seeker should be high-fives and unicorns. Instead there is a massive sense of dread: “Oh god, another job offer, how tiring”.

There are reasons for our dismay:

  • A lot of headhunters/recruiters work on a commission basis and are measured by how many contacts they had that day. This leads to a scattergun approach and you get offers that are not “you” at all.
  • Many recruiters seem to just look for keywords and then send the offer out to you. That’s why you get Java positions when you have a JavaScript background. Just like you would send car mechanic jobs to a carpet expert.
  • Others go for company names. A great example was this recruiter trying to hire someone’s dog as a Java/Python developer.
  • Many recruiting sites are very pushy to get you into their database to show potential hiring companies just how many job searchers they have. This leads to very old and outdated profiles and you get offers for jobs you’ve done years ago. Basically they don’t want to find you a job, they want you as an ad.
  • People write ridiculous job descriptions and send them to us. In the past I wrote what kind of people I try to hire, and once it went through HR and recruitment review, something completely ridiculous ended up online. You’ve seen those. Asking people for two degrees, but not older than 20. Seven years’ experience in a half-year-old technology, and similar confusing points.
  • There is probably nothing more intrusive to someone who feels at home online to be called by somebody. Recruiters, however, seem to see the “personal touch” as the most important thing.

On the other side of this issue, we are not innocent either:

  • Instead of telling people why we didn’t want their offer, we just ignore them. There is no learning on either side.
  • We love our own tools and are not too interested in changing that. Every recruitment department I worked with needed a CV in a document format for filing and keeping. Instead of having one of those at hand we love to create online CVs and portfolios or point people to our GitHub account as “real people who would hire me find all they need there”. This is navel gazing and arrogant. If I want to go on a bus, I need a bus ticket. A macaroni picture with glitter on it saying “most amazing responsive bus ticket” will not get me on there. Have the tool for the task.
  • We don’t keep our presence up-to-date. If you’re not seeking, say so on LinkedIn. Have a template to send back to recruiters telling them “thanks, but no.”
  • We also shouldn’t create profiles of our dog on LinkedIn. This is a professional tool, if we don’t use these in a professional manner we shouldn’t be surprised that they go to the dogs.
  • Keep your skills up-to-date. If you never ever want to work with a certain product any longer, remove it from your online presence. That way keyword searchers don’t find you.

We need to communicate better

I feel there is a massive waste going on and an accumulation of frustration on both sides. We need to get better in helping another to make this the natural partnership it should be. I feel terrible hearing about friends not in our world who send out hundreds of applications and don’t get answers whilst we complain about people trying to offer us jobs. It feels almost unreal.

There are a few good ideas around and there is a start to clean this mess up. Joblint is a tool that comes to mind. It is an analysing tool that takes job descriptions and allows you to

“Test tech job specs for issues with sexism, culture, expectations, and recruiter fails”.

A lot of miscommunication could be avoided simply by using that.

Considering giving a helping hand

Maybe I should do something about this and use my time off to reach out and try to change something. I wonder if a workshop for recruiters about issues to avoid would be of interest? In any case, let’s try to be more understanding. Recruiters do their jobs the same way we do ours. By understanding their drives and goals, we can make both of our lives easier. By being arrogant and coming across as divas we shouldn’t be surprised if job descriptions start calling out for rockstars, ninjas, gurus and mavens.

Let’s highlight the great experiences we had, and share what worked. Maybe that could be the lever we need to crack this nut open.

Categorieën: Mozilla-nl planet

Pages