Mozilla Nederland - The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Jen Fong-Adwent: Broken Communication

Sun, 07/12/2014 - 16:00
Here are the typical flows of communication, from in-real-life to GPG.

Mozilla Reps Community: Rep of the month: November 2014

Sun, 07/12/2014 - 13:31

Again, it’s that time of the month when we show gratitude to the best of the Reps program.

Flore Allemandou is one of the longest-standing Mozillians around, and she has been a Mozilla Rep for a long time too. In recent months she took the lead of the WoMoz project, coordinating our activities in this area. She also organized our presence at the AdaCamp editions in Berlin and Bangalore.

At MozFest she was one of the most active Reps around, helping out by leading sessions ranging from community building and diversity to flashing around 80 Flame devices. As part of the Mobilizers in France, she organized several events in Lyon and Paris, helping with our Firefox OS launch there.

As if this wasn’t enough, she organized Mozilla’s presence at Open World Forum and Code of War. Locasprint was another event of the last months, together with some great photos for the Firefox 10 celebration.

Thank you Flore for your amazing work!

Don’t forget to congratulate her on Discourse!


Adrian Gaudebert: Socorro: the Super Search Fields guide

Sun, 07/12/2014 - 00:58

Socorro has a master list of fields, called the Super Search Fields, that controls several parts of the application: Super Search and its derivatives (Signature report, Your crash reports...), the available columns in report/list/, and the fields exposed in the public API. Fields contained in that list are known to the application and have a set of attributes that define the behavior of the app regarding each of those fields. An explanation of those attributes can be found in our documentation.
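
For context, an exposed field can be queried through the public Super Search API. Here is a quick sketch in Python using the requests library against the public crash-stats instance; the endpoint and parameters follow the public API as I understand it, so adjust them to your deployment:

import requests

# Query the public Super Search API for Firefox crashes and facet on
# the "platform" field (URL assumes the public crash-stats instance).
response = requests.get(
    "https://crash-stats.mozilla.com/api/SuperSearch/",
    params={"product": "Firefox", "_facets": "platform"},
)
data = response.json()
print(data["total"])
print(data["facets"]["platform"])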

In this guide, I will show you how to use the administration tool we built to manage that list.

You need to be a superuser to be able to use this administration tool.

Understanding the effects of this list

It is important to fully understand the effects of adding, removing or editing a field in this Super Search Fields tool.

A field needs to have a unique Name, and a unique combination of Namespace and Name in database. Those are the only mandatory values for a field. Thus, if a field does not define any other attribute and keeps the default values, it won't have any impact on the application -- it will merely be "known", that's all.

Now, here are the important attributes and their effects:

  • Is exposed - if this value is checked, the field will be accessible in Super Search as a filter.
  • Is returned - if this value is checked, the field will be accessible in Super Search as a facet / aggregation. It will also be available as a column in Super Search and report/list/, and it will be returned in the public API.
  • Permissions needed - permissions listed in this attribute will be required for a user to be able to use or see this field.
  • Storage mapping - this value will be used when creating the mapping to use in Elasticsearch. It changes the way the field is stored. You can use this value to define special rules for a field, for example if it needs a specific analyzer (see the sketch after this list). This is a sensitive attribute: if you don't know what to do with it, leave it empty and Elasticsearch will guess the best mapping for that field.
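
To illustrate that last attribute, here is a sketch of what a Storage mapping value could contain for a field that needs a specific analyzer. This assumes Elasticsearch 1.x mapping conventions, and the analyzer name is a made-up example:

# Hypothetical Storage mapping for an analyzed string field
# ("semicolon_keywords" is an invented analyzer name).
storage_mapping = {
    "type": "string",
    "analyzer": "semicolon_keywords",
}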

It is, as always, good practice to apply changes to the dev/staging environments before doing so in production. And to my Mozilla colleagues: this is mandatory! Please always apply any change to stage first, verify it works as you want (using Super Search, for example), then apply it to production and verify there.

Getting there

To get to the Super Search Fields admin tool, you first need to be logged in as a superuser. Once that's done, you will see a link to the administration in the bottom-right corner of the page.

Fig 1 - Admin link

Clicking that link will get you to the admin home page, where you will find a link to the Super Search Fields page.

Fig 2 - Admin home page

The Super Search Fields page lists all the currently known fields with their attributes.

Fig 3 - Super Search Fields page

Adding a new field

On the Super Search Fields page, click the Create a new field link in the top-right corner. That leads you to a form.

Fig 4 - New field button

Fill in all the inputs with the values you need. Note that Name is a unique identifier for this field, but also the name that will be displayed in Super Search. It doesn't have to be the same as Name in database. The current convention is to use the database name, but in lower case and with underscores. So, for example, if your field is named DOMIPCEnabled in the database, we would make the Name something like dom_ipc_enabled.

Use the documentation about our attributes to understand how to fill that form.

Fig 5 - Example data for the new field form

Clicking the Create button might take some time, especially if you filled in the Storage mapping attribute. If you did, the back-end will perform a few checks to verify that this change does not break Elasticsearch indexing. If you get redirected to the Super Search Fields page, the operation was successful. Otherwise, an error will be displayed and you will need to press the Back button of your browser and fix the form data.

Note that the cache is refreshed whenever you make a change to the list, so you can verify your changes right away by looking at the list.

Editing a field

Find the field you want to edit in the Super Search Fields list, and click the edit icon to the right of that field's row. That will lead you to a form much like the New field one, but prefilled with the field's current attribute values. Make the changes you need, and press the Update button. What applies to the New field form applies here as well (mapping checks, cache refreshing, etc.).

Fig 6 - The edit icon

Deleting a field

Find the field you want to delete in the Super Search Fields list, and click the delete icon to the right of that field's row. You will be prompted to confirm your intention. If you are sure about what you're doing, confirm and you will be done.

Fig 7 - The delete icon

The missing fields tool

We have a tool that looks at all the fields known to Elasticsearch (meaning that Elasticsearch has received at least one document containing that field) and all the fields known to the Super Search Fields, and shows a diff of the two. It is a good way to check that you did not forget some key fields that you could use in the app.

To access that list, click the See the list of missing fields link just above the Super Search Fields list.

Fig 8 - The missing fields link

The list of missing fields provides a direct link to create the field for each row. It will take you to the New field form with some prefilled values.

Fig 9 - Missing fields page

Conclusion

I think I have covered it all. If not, let me know and I'll adjust this guide. Same goes if you think some things are unclear or poorly explained.

If you find bugs in this Super Search Fields tool, please use Bugzilla to report them. And remember, Socorro is free / "libre" software, so you can also go ahead and fix the bugs yourself! :-)


Tarek Ziadé: 5 work week tips

Sat, 06/12/2014 - 08:05

Our Mozilla work week just ended with an amazing evening: we had a private Macklemore concert. Just check out Twitter with the #mozlandia tag and feel the vibe.

When they got on stage I must admit I did not know who Macklemore was. Yeah, sorry. I live in a Spotify-curated music world and I have no TV.

At some point they played a song that got me thinking: ooohhh yeah that song, ok.

Anyways.

During some conversations, a lot of folks told me that they were overwhelmed by the work week and had a very hard time keeping up with all the events. Some of them were very frustrated and felt like they were completely disconnected.

I went through this a lot in the past, but things have improved throughout the years. This blog post collects a few tips.

1. List the folks you want to meet

This one is a given. Before you arrive, make a list of the folks you want to meet and the topics you want to talk about with them.

Make that list short. 10, no more.

2. Do not code

This is the worst thing to do: diving into your laptop to code. It's easy to do, and time will fly by once you've started coding. People who don't know you well will be afraid of disturbing you.

Coding is not something to do during your work weeks. If you need a break from the crowd, see the next tip.

3. Listen to your body

A work week is intense for your body. By the end of the week you will look like a zombie and you will not be able to fully enjoy what's happening. If you are coming from a far place, the jet lag is going to make the problem worse. If you're a partygoer, that's not going to help either. Neither are all the food and drinks.

I've seen numerous folks get really sick on day 3 or 4 because they had intense days at the beginning of the event. It's hard not to burn out.

Some (young) folks do fine on this front. I know I am not one of them. What I did for the Portland work week was to skip everything on day 2 starting at 5pm: I ate a soup and went to sleep at 8pm. Skipping all the cookies and beers and goodies gives your body a bit of rest :)

That gave me the energy I needed for day 3.

4. Don't hang with your team all the time

You talk to those folks all the time. Meet other folks, check out other sessions, etc.

This is especially important if your native language is not English. I got trapped many times by this problem: just hanging out with a few French guys.

5. Walk away from meetings

Don't be shy about walking away from meetings that don't bring you any value. Walk out discreetly and politely. You are not in a meeting to read Hacker News on your laptop; you can do that at home.

People won't get offended in the context of a work week - unless this is a vital team meeting or something.

What are your tips?


Tarek Ziadé: DNS-Based soft releases

Sat, 06/12/2014 - 06:30

Firefox Hello is this cool WebRTC app we've landed in Firefox to let you video chat with friends. You should try it, it's amazing.

My team was in charge of the server side of this project - which consists of a few APIs that keep track of some session information like the list of the rooms and such things.

The project was not hard to scale since the real work is done in the background by TokBox - who provide all the firewall traversal infrastructure. If you are curious about why we need all those server-side bits for a peer-to-peer technology, this article is great for getting the whole picture: http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/

One thing we wanted to avoid was a huge peak of load on our servers on Firefox release day. While we've done a lot of load testing, there are so many interacting services that it's quite hard to be 100% confident. Potentially going from 0 to millions of users in a single day is... scary? :)

So right now only 10% of our user base sees the Hello button. You can bypass this by tweaking a few prefs, as explained in many places on the web.

This percentage will be gradually increased so our whole user base can use Hello.

How does it work?

When you start Firefox, a random number is generated. Then Firefox asks our service for another number. If the generated number is less than the number sent by the server, the Hello button is displayed. If it is greater, the button is hidden.

Adam Roach proposed setting up an HTTP endpoint on our server to send back the number, and after a team meeting I suggested using a DNS lookup instead.

The reason I wanted to use a DNS server was to rely on a system that's highly available and freaking fast. On the server side, all we had to do was add a new DNS entry and let Firefox do a DNS lookup - yeah, you can do DNS lookups in Javascript as long as you are within Gecko.

Due to a DNS limitation we had to move from a TXT record to an A record - which returns an IP address. But converting an IP address to an integer value is not a problem, so that worked out.
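
To make the flow concrete, here is a rough Python sketch of the client-side logic described above. The record name is made up for illustration, and the real implementation lives inside Firefox, not in Python:

import random
import socket
import struct

def soft_start_threshold(hostname):
    # Look up the A record; the dotted quad encodes a 32-bit integer.
    ip = socket.gethostbyname(hostname)
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def should_show_hello_button():
    # Hypothetical record name, for illustration only.
    threshold = soft_start_threshold("soft-start.hello.example.com")
    # Show the button only if the random draw falls below the
    # server-provided threshold.
    return random.getrandbits(32) < threshold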

See https://wiki.mozilla.org/Loop/Load_Handling#Service_Soft_Start for all the details.

Generalizing the idea

I think using DNS as a distributed database for simple values like this is an awesome idea. I am happy I thought of this one :)

Based on the same technique, you can also set up A/B testing based on the DNS server's ability to send back a different value depending on things like the user's location.

For example, we could activate a feature in Firefox only for people in Connecticut, or France or Europe.

We had a work week in Portland and we started to brainstorm on what such a service could look like, and whether it would be practical from a client-side point of view.

The general feedback I had so far on this is: Hell yeah we want this!

To be continued...


David Burns: Bugsy 0.4.0 - More search!

Sat, 06/12/2014 - 01:05

I have just released the latest version of Bugsy. It allows you to search bugs via change history fields and within a certain timeframe, so you can find things like bugs created within the last week, as shown below.

I have updated the documentation to get you started.

>>> bugs = bugzilla.search_for \
...     .keywords('intermittent-failure') \
...     .change_history_fields(["[Bug creation]"]) \
...     .timeframe("2014-12-01", "2014-12-05") \
...     .search()
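
If you want "bugs created within the last week" without hard-coding dates, the timeframe can be computed. A sketch, assuming timeframe() accepts ISO date strings as in the example above:

from datetime import date, timedelta

today = date.today()
week_ago = today - timedelta(days=7)

bugs = bugzilla.search_for \
    .keywords('intermittent-failure') \
    .change_history_fields(["[Bug creation]"]) \
    .timeframe(week_ago.isoformat(), today.isoformat()) \
    .search()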

You can see the Changelog for more details.

Please raise issues on GitHub


Mike Hommey: Using git to interact with mercurial repositories

Fri, 05/12/2014 - 20:45

I was planning to publish this later, but after talking about this project to a few people yesterday and seeing the amount of excitement in response, I took some time this morning to tie a few loose ends and publish this now. Mozillians, here comes the git revolution.

Let me start with a bit of history. I am an early git user. I’ve been using git almost since its first release. I like it. A lot. I’ve contributed dozens of patches to git.

I started using mercurial when I got commit access to Mozilla repositories, much later. I don’t enjoy using mercurial much.

There are many tools to make git talk to mercurial. Most are called git-remote-hg because they use the git remote helpers infrastructure. All of them rely on having a local mercurial clone. When dealing with repositories like mozilla-central, it means storing more than 1.5GB of data just to talk to mercurial, on top of the git database.

So a few years ago, I started to toy with the idea to make git talk to mercurial directly. I got as far as being able to do a full clone of mozilla-central back then, in a reasonable amount of time. But I left it at that because I needed to figure out how to efficiently store all the metadata required to handle incremental updates/pulling, and didn’t have enough incentive to go forward: working with mercurial was not painful enough.

Fast forward to the beginning of this year. The mozilla-central repository is now much bigger than it used to be, and mercurial handles it much less smoothly than it used to when Mozilla switched to using it. That was enough to get me started again, but not enough to dedicate enough time to it.

Fast forward to a few weeks ago. Gregory Szorc poked dev-platform to find out what kind of workflows people were using with git to work on Mozilla code, and I was really not satisfied with the answers. First, I was wondering why no one was mentioning the existing tools. So I picked one, and tried.

Cloning mozilla-central took 12 hours and left me with a ~10GB .git directory. Running git gc --aggressive for another 10 hours (my settings may have made gc take more time than it would have with the default configuration) brought it down to about 2.6GB, only 700MB of which is actual git data, the remainder being the associated mercurial repository. And as far as I understand it, the tool doesn't really support our use of mercurial repositories, especially try (but I could be wrong, I didn't really look too much).

That was the straw that broke the camel's back. So after a couple weeks hacking, I now have something that can clone mozilla-central within 30 minutes on my machine (network transfer excluded). The resulting .git directory is around 1.5GB with the default git config, without running git gc. If you tweak the compression level in your git config, cloning takes a bit longer, and the repo takes about 1.1GB. And you can subsequently pull from mozilla-central, as well as pull from other branches without having to clone them from scratch. Push support is not there yet because it's an early prototype, but I should be able to get that to work in the next couple weeks.

At this point, you may be wondering how you can use that thing. Here it comes:

$ git clone https://github.com/glandium/git-remote-hg
$ export PATH=$PATH:$(pwd)/git-remote-hg

Note it requires having the mercurial code available to python, because git-remote-hg uses the mercurial code to talk the mercurial wire protocol. Usually, having mercurial installed is enough.

You can now clone a mercurial repository:

$ git clone hg::http://hg.mozilla.org/mozilla-central

If, like me, you had a local mercurial clone, you can do the following instead:

$ git clone hg::/path/to/mozilla-central-clone
$ git remote set-url origin hg::http://hg.mozilla.org/mozilla-central

You can then use git fetch/pull like with git repositories:

$ git pull

Now, you can add other repositories:

$ git remote add inbound hg::http://hg.mozilla.org/integration/mozilla-inbound
$ git remote update inbound

There are a few caveats, like the fact that it currently creates new remote branches essentially any time you pull something. But it shouldn’t disrupt anything.

It should be noted that while the contents are identical to the gecko-dev git repositories (the git tree object SHA1s are identical, I checked), the commit SHA1s are different, for two reasons: gecko-dev also contains the CVS history, and hg-git, which is used to fill it, adds some mercurial metadata to commit messages that git-remote-hg doesn't add.

It is, however, possible to graft the CVS history from gecko-dev to a clone created with git-remote-hg. Assuming you have a remote for gecko-dev and fetched from it, you can do the following:

$ echo eabda6aae98d14c71d7e7b95a66896868ff9500b 3ec464b55782fb94dbbb9b5784aac141f3e3ac01 >> .git/info/grafts

Last note: please read the README file when you update your git clone of the git-remote-hg repository. As the prototype evolves, there might be things that you need to do to your existing clones, and it will be written there.


Mike Hommey: Using C++ templates to prevent some classes of buffer overflows

Fri, 05/12/2014 - 18:45

I recently found a small buffer overflow in Firefox’s SOCKS support, and came up with a nice-ish way, using some template magic, to turn it into a C++ compilation error when it may happen.

A simplified form of the problem looks like this:

class nsSOCKSSocketInfo {
public:
  nsSOCKSSocketInfo()
    : mData(new uint8_t[BUFFER_SIZE])
    , mDataLength(0)
  {}

  ~nsSOCKSSocketInfo() {
    delete[] mData;
  }

  void WriteUint8(uint8_t aValue) {
    mData[mDataLength++] = aValue;
  }

  void WriteV5AuthRequest() {
    mDataLength = 0;
    WriteUint8(0x05);
    WriteUint8(0x01);
    WriteUint8(0x00);
  }

private:
  uint8_t* mData;
  uint32_t mDataLength;

  static const size_t BUFFER_SIZE = 2;
};

Here, the problem is more or less obvious: the third WriteUint8() call in WriteV5AuthRequest() will write beyond the allocated buffer size. (The real buffer size was much larger than that, and it was a different method overflowing, but you get the point)

While growing the buffer size fixes the overflow, that doesn’t do much to prevent a similar overflow from happening again if the code changes. That got me thinking that there has to be a way to do some compile-time checking of this. The resulting solution, at its core, looks like this:

template <size_t Size>
class Buffer {
public:
  Buffer() : mBuf(nullptr), mLength(0) {}

  Buffer(uint8_t* aBuf, size_t aLength = 0)
    : mBuf(aBuf), mLength(aLength) {}

  Buffer<Size - 1> WriteUint8(uint8_t aValue) {
    static_assert(Size >= 1, "Cannot write that much");
    *mBuf = aValue;
    Buffer<Size - 1> result(mBuf + 1, mLength + 1);
    mBuf = nullptr;
    mLength = 0;
    return result;
  }

  size_t Written() { return mLength; }

private:
  uint8_t* mBuf;
  size_t mLength;
};

Then replacing WriteV5AuthRequest() with the following:

void WriteV5AuthRequest() {
  mDataLength = Buffer<BUFFER_SIZE>(mData)
    .WriteUint8(0x05)
    .WriteUint8(0x01)
    .WriteUint8(0x00)
    .Written();
}

So, how does this work? The Buffer class is templated by size. The first thing we do is to create an instance for the complete size of the buffer:

Buffer<BUFFER_SIZE>(mData)

Then call the WriteUint8 method on that instance, to write the first byte:

.WriteUint8(0x05)

The result of that call is a Buffer<BUFFER_SIZE - 1> (in our case, Buffer<1>) instance pointing to &mData[1] and recording that 1 byte has been written.
Then we call the WriteUint8 method on that result, to write the second byte:

.WriteUint8(0x01)

The result of that call is a Buffer<BUFFER_SIZE - 2> (in our case, Buffer<0>) instance pointing to &mData[2] and recording that 2 bytes have been written so far.
Then we call the WriteUint8 method on that new result, to write the third byte:

.WriteUint8(0x00)

But this time, the Size template parameter being 0, it doesn’t match the Size >= 1 static assertion, and the build fails.
If we modify BUFFER_SIZE to 3, then the instance we run that last WriteUint8 call on is a Buffer<1> and we don’t hit the static assertion.

Interestingly, this also makes the compiler emit more efficient code than the original version.

Check the full patch for more about the complete solution.


Adam Lofting: Fundraising testing update

Fri, 05/12/2014 - 17:01

I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.

We’ve been sprinting on this during the Mozilla all-hands work week in Portland, and it has been a lot of fun working face-to-face with the awesome team making this happen.

You can follow along with the campaign, and see how we’re doing, at fundraising.mozilla.org.

And of course, we’d be over the moon if you wanted to make a donation.


These amazing people are working hard to build the web the world needs.


Nick Fitzgerald: Naming `eval` Scripts with the `//# sourceURL` Directive

Fri, 05/12/2014 - 09:00

In Firefox 36, SpiderMonkey (Firefox's JavaScript engine) now supports the //# sourceURL=my-display-url.js directive. This allows developers to give a name to a script created by eval or new Function, and get better stack traces.

To demonstrate this, let's use a minimal version of John Resig's micro templater. The micro templater compiles template strings into JS source code that it passes to new Function, thus transforming the template string into a function.

function tmpl(str) {
  return new Function("obj",
    "var p=[],print=function(){p.push.apply(p,arguments);};" +

    // Introduce the data as local variables using with(){}
    "with(obj){p.push('" +

    // Convert the template into pure JavaScript
    str
      .replace(/[\r\t\n]/g, " ")
      .split("<%").join("\t")
      .replace(/((^|%>)[^\t]*)'/g, "$1\r")
      .replace(/\t=(.*?)%>/g, "',$1,'")
      .split("\t").join("');")
      .split("%>").join("p.push('")
      .split("\r").join("\\'") +
    "');}return p.join('');");
};

The details of how the template is converted into JavaScript source code aren't of import; what is important is that it dynamically creates new scripts via code evaluated in new Function.

We can define a new templater function:

var hello = tmpl("<h1>Hello, <%=name%></h1>");

And use it like this:

hello({ name: "World!" }); // "<h1>Hello, World!</h1>"

When we get an error, SpiderMonkey will generate a name for the evaled (or in this case, new Functioned) script based on the location where the call to eval (or new Function) occurred. For our concrete example, this is the generated name for the hello templater function's frame:

file:///Users/fitzgen/scratch/foo.js line 2 > Function

And here it is in the context of an error with the whole stack trace:

hello({ name: Object.create(null) });
// TypeError: can't convert Object to string
// Stack trace:
//   anonymous@file:///Users/fitzgen/scratch/foo.js line 2 > Function:1:107
//   @file:///Users/fitzgen/scratch/foo.js:28:3

Despite being a solid improvement over just "eval frame" or something of that sort, these stack traces can still be difficult to read. If there are many different templater functions, the value of the eval script's introduction location is further diminished. It is difficult to determine which of the many functions created by tmpl contains the thrown error: they all end up with the same name, because they were all created at the same location.

We can improve this situation with the //# sourceURL directive.

Consider this version of the tmpl function adapted to use the //# sourceURL directive:

function tmpl(name, str) {
  return new Function("obj",
    "var p=[],print=function(){p.push.apply(p,arguments);};" +

    // Introduce the data as local variables using with(){}
    "with(obj){p.push('" +

    // Convert the template into pure JavaScript
    str
      .replace(/[\r\t\n]/g, " ")
      .split("<%").join("\t")
      .replace(/((^|%>)[^\t]*)'/g, "$1\r")
      .replace(/\t=(.*?)%>/g, "',$1,'")
      .split("\t").join("');")
      .split("%>").join("p.push('")
      .split("\r").join("\\'") +
    "');}return p.join('');" +
    "//# sourceURL=" + name);
};

Note that the function takes a new parameter called name and appends //# sourceURL=<name> to the end of the generated JS code passed to new Function.

With this new version of tmpl, we create our templater function like this:

var hello = tmpl("hello.template", "<h1>Hello <%=name%></h1>");

Now SpiderMonkey will use the name given by the //# sourceURL directive, instead of using a name based on the introduction site of the script:

hello({ name: Object.create(null) });
// TypeError: can't convert Object to string
// Stack trace:
//   anonymous@hello.template:1:107
//   @file:///Users/fitzgen/scratch/foo.js:25:3

Giving the eval script a name makes it easier for us to debug errors originating from it, and we can give a different name to different scripts created at the same location.

The //# sourceURL directive is also particularly useful for dynamic code- and module-loaders, which fetch source text over the network and then eval it.

Additionally, in Firefox 37, evaled sources with a //# sourceURL directive will be debuggable and labeled with the name specified by the directive in the debugger.


Gregory Szorc: A Crazy Day

Fri, 05/12/2014 - 00:34

Today was one crazy day.

The build peers all sat down with Release Engineering and Axel Hecht to talk l10n packaging. Mike Hommey has a Q1 goal to fix l10n packaging. There is buy-in from Release Engineering on enabling him in his quest to slay dragons. This will make builds faster and will pay off a massive mountain of technical debt that plagues multiple groups at Mozilla.

The Firefox build system contributors sat down with a bunch of Rust people and we talked about what it would take to integrate Rust into the Firefox build system and the path towards shipping a Rust component in Firefox. It sounds like that is going to happen in 2015 (although we're not yet sure what component will be written in Rust). I consider it an achievement that the gathering of both groups didn't result in infinite rabbit holing about system architectures, toolchains, and the build people telling horror stories to wide-eyed Rust people about the crazy things we have to do to build and ship Firefox. Believe me, the temptation was there.

People interested in the build system all sat down and reflected on the state of the build system and where we want to go. We agreed to create a build mode optimized for non-Gecko developers that downloads pre-built binaries - avoiding ~10 minutes of C/C++ compile time for builds. Mark my words, this will be one of those changes that once deployed will cause people to say "I can't believe we went so long without this."

I joined Mark Côté and others to talk about priorities for MozReview. We'll be making major improvements to the UX and integrating static analysis into reviews. Push a patch for review and have machines do some of the work that humans are currently doing! We're tentatively planning a get-together in Toronto in January to sprint towards awesomeness.

I ended the day by giving a long and rambling presentation about version control, with emphasis on Mercurial. I can't help but feel that I talked way too much. There's just so much knowledge to cover. A few people told me afterwards they learned a lot. I'd appreciate feedback if you attended. Anyway, I got a few nods from people as I was saying some somewhat contentious things. It's always good to have validation that I'm not on an island when I say crazy things.

I hope to spend Friday chasing down loose ends from the week. This includes talking to some security gurus about another crazy idea of mine to integrate PGP into the code review and code landing workflow for Firefox. I'll share more details once I get a professional opinion on the security front.


Wladimir Palant: A systematic approach to MDN documentation?

Fri, 05/12/2014 - 00:23

Note: this blog post started as a rant about MDN, which sadly is way too often not very useful for add-on authors. I tried to reformulate it in a neutral way; the point definitely isn't to blame the people working hard on keeping that documentation up to date.

MDN has some great content. However, as far as extension development goes, maybe somewhat less content and more structure/quality would be beneficial. Yes, there are a few well-written overview articles. But quite frankly, I’ve seen them for the first time today — because most of the time I’ll get to MDN via a search engine. And if you take this route, there is a good chance to hit an article that pre-dates Firefox 4.0.

Don’t believe me? Try searching for a guide to write XPCOM components. You are bound to hit this book written more than a decade ago. Not only is it horribly outdated, it explains how one would create such a component in C++ — something that is completely impracticable for extensions in current Firefox versions.

The next search hit is only marginally better. The examples here have been somewhat updated to work in recent Firefox versions. It starts by sending people to an extension which is compatible with Firefox 3.0. It then explains the scary details of defining your own interface — yet most developers simply want to implement an existing interface. And as if the original document wasn’t already crappy enough, somebody also added instructions on integrating the whole thing into the Mozilla build system — all that before the actually relevant stuff of course. You might also end up in this troubleshooting guide which used to be pretty helpful — five years ago.

In fact, XPCOM Changes in Gecko 2.0 seems to be the only piece of documentation with a complete explanation, starting with the code of the component itself and ending with the way it is registered via chrome.manifest. Not that this document is comprehensible, being primarily a list of changes. And of course, that’s a place where nobody will come looking.

Another example: JavaScript Debugger API. The old API (ugly but well-documented) has been removed from Firefox, so taking a look at the new one is overdue. There is an easy-to-find JS Debugger API Guide, linking to a seemingly complete JS Debugger API Reference. Great, right? But it is incomplete and outdated. In particular, the most important piece of information for extensions is missing: how do I find the existing global objects?

And then there is a document called Debugger-API, confusingly placed under Firefox Developer Tools. It took me a while to realize that it isn't related to the debugger tool in Firefox; it's rather an autogenerated documentation of the JavaScript Debugger API. And this one seems to be complete and current! "Autogenerated" here means that somebody from the SpiderMonkey team didn't want to mess with the wiki syntax (cannot blame them) and put a bunch of Markdown files into the SpiderMonkey source tree instead; these were then imported into MDN.

This isn’t even about the long list of nice functionality that was implemented but never documented. Like the XPCOM iterator helpers introduced four years ago. Or a way to listen to all events. Or a way to apply user stylesheets to individual documents. Yes, the dev-doc-needed queue is very long and there are only so many people who contribute to MDN (besides, they would also need to understand the change).

So, how can this be solved? I see a few options:

  • Marking documents as deprecated or obsolete isn't sufficient; documents that aren't useful need huge and very visible warnings at the very least. But frankly, why still keep them around?
  • Mozilla has great technical writers but they clearly cannot keep up. That's not really surprising given how much Mozilla has grown. Maybe the developers who actually wrote the changes could help beyond setting the dev-doc-needed flag? Sure, not everybody is a great writer. However, even rough documentation is something that can be improved.
  • Some documents, like the ones on XPCOM components and JavaScript modules, can be checked automatically: a script can verify that the documented interface matches the current source code. If they don't match, the documents can be flagged as needing an update (maybe even visibly to MDN users and editors); a rough sketch of such a check follows this list.
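
For that last point, here is a very rough Python sketch of what such a check could do. The patterns are guesses for illustration, not a real MDN or mozilla-central tool:

import re

def interface_methods(idl_source):
    # Very rough: pull method names out of an XPIDL interface body.
    return set(re.findall(r'^\s*[\w\[\]<> ]+\s+(\w+)\s*\(', idl_source, re.M))

def documented_methods(doc_text):
    # Assume the documentation mentions methods as "name()".
    return set(re.findall(r'(\w+)\(\)', doc_text))

def undocumented(idl_source, doc_text):
    # Methods present in the source but missing from the docs.
    return interface_methods(idl_source) - documented_methods(doc_text)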

Anything else that can be done?


Doug Belshaw: [IDEA] Webmaker Clubs: three legs to the stool

Thu, 04/12/2014 - 20:33

Yesterday, during the Mozilla work week, some comments from my colleagues got me thinking about Webmaker Clubs using a new metaphor. I tested it out in a few conversations and it didn’t get shot down, so I’m recording it here to come back to later.

Three legs to the Webmaker Clubs stool?

I thought about there being ‘three legs to the stool’ for Webmaker Clubs (name TBC):

  • Web literacy
  • Facilitation
  • Pedagogy/ethos

The Web Literacy Map (currently v1.1 but soon v2.0) provides the basis for curriculum and learning pathways. We can build off this in a fairly straightforward way - and in fact Laura Hilliger has begun to do just that.

What I feel we need is some kind of 'map’ for the other two legs of the stool. What would this look like for facilitation skills? What about for pedagogy/ethos?

I was asked for clarity on pedagogy/ethos. It probably needs a better name, but all I mean here are the approaches to teaching and learning that work well in blended learning environments. What works well when you’re mentoring people both online and offline? Also, in terms of ethos, what do we mean by 'working open’?

If we went ahead with this approach we’d need the help of the community to build it – as we do with the Web Literacy Map. The great thing is that if we did it right, we could provide a handbook that works in most situations. It would need to be generic enough to be applicable everywhere, but specific enough to guide new mentors.

Comments? Questions? Email me: doug@mozillafoundation.org


Pomax: Blogging during MozAllHandWorkWeek

Thu, 04/12/2014 - 20:25

and I realised that I completely forgot to update gh-blog. Gah


Yunier José Sosa Vázquez: Mozilla will launch Firefox for iOS

Thu, 04/12/2014 - 16:28

With the launch of iOS 8 and the first steps toward Apple opening up to browsers that don't use WebKit as their rendering engine, and after having announced that it would not release a version of Firefox for that system, Mozilla has changed its stance and will come to iOS.


The main reason Mozilla decided not to enter iOS was that Apple imposed a series of restrictions on third-party browsers; for example, they could not be the default browser. Apple also prevented developers from including their own JavaScript engines, making their browsers slower than Safari. But with the arrival of the Nitro JavaScript engine in iOS 8, the situation has changed for the better and shows an opening from Apple on this subject.

The announcement was made by Jonathan Nightingale, VP of Firefox, and shared by Lukas Blakk, Firefox release manager, on her Twitter account during an internal event: “We need to be where our users are, so we're going to bring Firefox to iOS.”


That position was recently announced officially:

At Mozilla, we put the user first and want to provide an independent choice for them on any platform. We are in the early stages of experimenting with something that would let iOS users choose Firefox.

We are experimenting with a couple of different concepts, and when we have more, we will share it.

Now everyone who uses iOS will be able to enjoy the features Firefox offers, and they win by being able to choose among several browsers within the ecosystem.

Sources: MacRumors, Applesfera, Mozilla Press Center


Naoki Hirata: Question on being Open…

Thu, 04/12/2014 - 14:00

One of the things I am grateful for in working for Mozilla is the opportunity to learn.

Recently, through various channels, I’ve learned about values, trust and integrity. (Side note: I highly recommend the book I am currently reading: The Speed of Trust.)

Values are highly important to people and to a company’s culture. (Side note: I have also found that those who throw away the very value they are trying to protect at all costs will find themselves very miserable.)

Maybe I am dumb; I still don’t understand a few things, and I need help understanding them even after having worked 4 years at the company. These are the top two things that still cause me to wonder:

1) How does one stay open about protecting privacy/IP and how can we protect privacy/IP and still be open?  And without being miserable?

2) How do we stay open in a fast-moving environment where we’re constantly busy? The context: I had been talking to a few coworkers and happened to tell them something they weren’t aware of. The response was “oh, I wish I knew that sooner”. Blogs and emails can often be missed, and in some cases skipped entirely; miss a meeting and you won’t hear about it. Asking around and trying to find the source of truth is sometimes hard and doesn’t scale, etc. Everyone seems to have a different way of working... how do you get to the source of truth while keeping up with the workload of going faster?


