Planet Mozilla - http://planet.mozilla.org/
The Dutch Mozilla community (Mozilla Nederland)

Prashish Rajbhandari: For the love of Mozilla: #MozDrive

Mon, 28/07/2014 - 17:00

Hello Everyone,

I assume many of you are aware of the recent project that I've undertaken. It has already been a few weeks since I announced it, but lazy me had been procrastinating on announcing it here. I have also had the opportunity to test the whole idea on a recent volunteering trip to Squaw Valley (more on that later).

On 1st August 2014, I will be embarking on a journey across the lower 48 US States to spread the word and the love about Mozilla.

In 25 days, I will:

- Travel across the lower 48 States and share Mozilla's story, vision, and mission with the people I meet along the way.

- Engage in one-to-one interactions with locals and document their stories for an epic MozDrive video.

- Share my journey with the help of social media, as I go about making a difference and a positive impact on society.

Read more about the campaign here.

And please follow the entire journey and share the page (Facebook, Twitter) within your community or wherever you can in the social media space. The whole idea is to spread Mozilla love far and wide in the physical as well as the digital world.

The entire campaign is mostly sponsored by The Mozilla Foundation (really grateful to them), but I will be financing my own food and miscellaneous expenses during the journey. I personally wanted everyone to be able to become a part of this journey in some way. You can financially support the campaign – here!

I need your full support during the entire journey.

Thanks everyone!

 

Here are a few pics from my recent #MozDrive test at Wanderlust, Squaw Valley.

 

A little princess with a Firefox pin and a sticker. Smiles! #mozilla pic.twitter.com/WU2fdFiWbi

— MozDrive (@mozdrive) July 20, 2014

“We didn’t know that Mozilla was a non-profit company. Thanks for sharing. By the way, we love the Firefox Lounge.” pic.twitter.com/n5Q0rGH0a4

— MozDrive (@mozdrive) July 27, 2014

“I hope the wind moves you forward like a fox on fire” pic.twitter.com/vlO4wgtlSJ

— MozDrive (@mozdrive) July 20, 2014

See you on the other side!

‘Til then.



Just Browsing: Fastest Growing New Languages on Github are R, Rust and TypeScript (and Swift)

Mon, 28/07/2014 - 16:36

While researching TypeScript’s popularity I ran across a post by Adam Bard listing the most popular languages on Github (as of August 30, 2013). Adam used the Google BigQuery interface to mine Github’s repository statistics.

What really interested me was not absolute popularity but which languages are gaining adoption. So I decided to use the same approach to measure growth in language popularity, by comparing statistics for two different time periods. I used exactly the same query as Adam and ran it for the first half of 2013 (January 1st through June 30th) and then for the first half of 2014 (more details about the exact methodology at the end of this post).

Results

Based on this analysis, the ten fastest growing languages on Github in the past year are:

At the risk of jeopardizing my (non-existent) reputation as a programming language guru, I'll admit that several of these are unfamiliar to me. Eliminating languages with fewer than 1,000 repos to weed out the truly obscure ones yields this revised ranking:

We are assuming that growth in Github repository count serves as a proxy for increasing popularity, but it seems unlikely that Pascal, CSS and TeX are experiencing a sudden renaissance. Some proportion of this change is due to increasing use of Github itself, and it seems likely that this effect is more marked for older, more established languages that are only now moving onto Github. If we focus on languages that have started to attract attention more recently, the biggest winners over the past year appear to be R, Rust and TypeScript.

Random thoughts

What the hell is R?

The fastest growing newish language is one that was unfamiliar to me. According to Wikipedia, R is “a free software programming language and software environment for statistical computing and graphics.” Most of the developers around the office said they had heard of it but never used it. This is a great illustration of how specialized languages can gain traction without making much of an impact on the broader developer community.

Getting Rusty

Of the newer languages with C-like syntax, both Rust and Go are gaining adoption. Go has a head start, but a lot of the commentary I've seen suggests that Rust is a better language. This is supported by its impressive 220% annual growth rate on Github.

Building a better JavaScript

Two transpile-to-JavaScript languages made it onto the list: TypeScript and CoffeeScript. Since JavaScript is the only language that runs in the browser, a lot of developers are forced to use it. But that doesn’t mean we have to like it. While CoffeeScript is still ahead, TypeScript has the advantage of strong typing (something many developers feel passionate about) in addition to a prettier syntax. If it keeps up its 100% year-on-year growth, it may catch up soon.

Dys-functional

According to an old saw, everyone always talks about the weather but no one ever does anything about it. The same could be said about functional languages. Programming geeks love them and insist that they lead to better quality code. But they have yet to break into mainstream usage, and not a single functional language figures in our top-20 list (although R and Rust have some characteristics of functional languages).

Swift kick

The language with the highest growth of all didn’t even show up on the list because it had no repositories at all in the first half of 2013. Only a few months after it was publicly announced, Swift already had nearly 2000 repos. While it is unlikely to keep up its infinite annual growth rate for long, it is a safe bet that Swift is destined to be very popular indeed.

Methodology

The data for 2013 and 2014 was exported from BigQuery into two CSV files, which I then merged into a single consolidated file using Bash:

$ cat results-20140723-094327.csv | sort -t , -k 1,1 > results1.csv
$ cat results-20140723-094423.csv | sort -t , -k 1,1 > results2.csv
$ join -o '1.1,2.1,1.2,2.2' -a 1 -a 2 -t, results1.csv results2.csv | awk -F ',' '{ if ($1) printf $1; else printf $2; print "," $3 "," $4 }'

The first two commands sort the CSV files by language name (the options -t , and -k 1,1 are needed to ensure that only the language name and not the comma delimiter or subsequent text is used for sorting). The join command takes the sorted output and merges it into a single consolidated file with the format:

Language1,Language2,RepoCount1,RepoCount2

If the language is present in both datasets then Language1 and Language2 are identical. If it isn’t, then one of them is empty. Either way we really want to merge these into one field, which is what the awk command does. (A colleague suggested using sed -r 's/^([^,]*),\1?/\1/', but I decided that awk—or pretty much anything—is easier to read and understand.)

I then imported the entire dataset into Google Spreadsheet. The “2014 Projected” column is the 2013 value increased by the overall growth rate in Github repository count for the top 100 languages. This is used as a baseline to compare the actual 2014 figure and calculate the growth rate, since it is most interesting to measure how fast a language is gaining adoption relative to the growth of Github itself.
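For concreteness, here is a rough Python sketch of that growth-rate calculation. The file name, the three-column layout produced by the awk merge above, and the overall growth factor are illustrative assumptions, not the actual spreadsheet:

import csv

# Assumed overall growth factor of Github repo counts for the top 100
# languages; the real value was derived from the data, this one is made up.
OVERALL_GROWTH = 1.5

# merged.csv is assumed to hold: Language,RepoCount2013,RepoCount2014
with open('merged.csv') as f:
    for language, c2013, c2014 in csv.reader(f):
        c2013, c2014 = int(c2013 or 0), int(c2014 or 0)
        if c2013 == 0:
            continue  # brand-new languages (e.g. Swift) have no finite rate
        projected = c2013 * OVERALL_GROWTH      # the "2014 Projected" baseline
        growth = (c2014 / projected - 1) * 100  # % above or below the baseline
        print('%s: %+.0f%%' % (language, growth))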


Roberto A. Vitillo: Regression detection for Telemetry histograms.

Mon, 28/07/2014 - 16:22

tldr: An automatic regression detector system for Telemetry data has been deployed; the detected regressions can be seen in the dashboard.

Mozilla is collecting over 1,000 Telemetry probes which give rise to histograms, like the one in the figure below, that change slightly every day.

Average frame interval during any tab open/close animation (excluding tabstrip scroll).

Until recently, the only way to monitor those histograms was to sit down and literally stare at the screen until something interesting was spotted. Clearly there was a need for an automated system able to discern between noise and real regressions.

Noise is a major challenge, even more so than with Talos data, as Telemetry data is collected from a wide variety of computers, configurations and workloads. A reliable means of detecting regressions, improvements and changes in a measurement's distribution is fundamental, as erroneous alerts (false positives) tend to annoy people to the point that they just ignore any warning generated by the system.

I have looked at various methods to detect changes in histograms, like:

  • Correlation Coefficient
  • Chi-Square Test
  • Mann-Whitney Test
  • Kolmogorov-Smirnov test of the estimated densities
  • One Class Support Vector Machine
  • Bhattacharyya Distance

Only the Bhattacharyya distance proved satisfactory for our data. There are several reasons why each of the other methods fails with our dataset.

For instance, a one-class SVM wouldn't be a bad idea if some distributions didn't change dramatically over the course of time due to regressions and/or improvements in our code; in other words, how do you define what a distribution should look like? You could take the daily distributions of the past week as a training set, but that wouldn't be enough data to get anything meaningful out of an SVM. A Chi-Square test, on the other hand, is not always applicable as it doesn't allow cells with an expected count of 0. We could go on for quite a while, and there are ways to get around those issues, but the reader is probably more interested in the final solution. I evaluated how good those methods actually are at pinpointing some past known regressions, and the Bhattacharyya distance proved able to detect the kinds of pattern changes we are looking for, like distribution shifts or bin swaps, while minimizing the number of false positives.

Having a relevant distance metric is only part of the deal, since we still have to decide what to compare. Should we compare the distribution of today's build-id against the one from yesterday? Or the one from a week ago? It turns out that trying to mimic what a human would do yields a very accurate algorithm: if the variance of the distances between the histogram of the current build-id and the histograms of the past N build-ids is small enough, and the distance between the histograms of the current and the previous build-id is above a cutoff value K, a regression is reported. Furthermore, histograms that don't have enough data are filtered out, and the cutoff values are determined empirically from past known regressions.
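A minimal sketch of that rule, assuming numpy and raw bin counts (an illustration of the idea, not the actual detector code):

import numpy as np

def bhattacharyya_distance(p, q):
    # Normalize raw bin counts into probability distributions.
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient; 1 = identical
    return -np.log(bc)           # distance; 0 = identical

def is_regression(current, previous, past_n, var_threshold, cutoff_k):
    # Distances between the current build's histogram and the past N builds'.
    distances = [bhattacharyya_distance(current, h) for h in past_n]
    consistent = np.var(distances) < var_threshold
    # On top of that consistency, a large jump from the previous build
    # is reported as a regression.
    jumped = bhattacharyya_distance(current, previous) > cutoff_k
    return consistent and jumped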

I am pretty satisfied with the detected regressions so far. For instance, the system was able to correctly detect a regression caused by the OMTC patch that landed on the 20th of May, which caused a significant change in the average frame interval during tab open animation:


Average frame interval during tab open animation of about:newtab.

We will soon roll out a feature to allow histogram authors to be notified through e-mail when a histogram change occurs. In the meantime you can have a look at the detected regressions in the dashboard.



Hannah Kane: Maker Party Engagement: Week 2

Mon, 28/07/2014 - 15:48

Two weeks in!

Let’s check in on our four engagement strategies.

First, some overall stats:

  • Events: 862 (up nearly 60% from the 541 we had last week, and more than a third of the way towards our goal of 2400)
  • Hosts: 347 (up >50% from 217 last week)
  • Expected attendees: 46,885 (up >75% from 25,930 last week)
  • Cities: 216 (goal is 450)

Note: I’ll start doing trend lines on these numbers soon, so we can see the overall shape.

Are there other things we should be tracking? For example, we have a goal of 70,000 Makes created through new user accounts, but I’m not sure if we have a way to easily get those numbers.

  • Webmaker accounts: 91,998 (I’m assuming “Users” on this dash is the number of account holders)
  • Contributors: If I understand the contributors dashboard correctly, we’re at 4,615, with 241 new this week.
  • Traffic: here’s the last three weeks. You can see we’re maintaining about the same levels as last week.

——————————————————————–

Engagement Strategy #1: PARTNER OUTREACH

  • # of confirmed event partners: 205 (5 new this week)
  • # of confirmed promotional partners: 63 (2 new this week)

We saw press releases/blog posts from these partners:

We also started engaging Net Neutrality partners by inviting them to join our global teach-ins.

——————————————————————–

Engagement Strategy #2: ACTIVE MOZILLIANS

  • Science Lab Global Sprint happened this week—I don’t yet know the total # of people who participated
  • Lots of event uploads this week from the Hive networks.

——————————————————————–

Engagement Strategy #3: OWNED MEDIA

  • Snippet: The snippet has generated nearly 350M impressions, >710K clicks, and >40,000 email sign-ups to date. We’ve nearly finalized some additional animal-themed icons to help prevent snippet fatigue, and have started drafting a two-email drip series for people who’ve provided their emails via the snippet (see the relevant bug).
  • Mozilla.org: In the first few days since the new Maker Party banner went live we saw a significant drop in Webmaker account conversions (as compared to the previous Webmaker focused banner). One likely cause is that, in addition to changing the banner itself, we also changed the target destination from Webmaker to Maker Party. We’ve rolled back the banner and target destination to the previous version, and are discussing iteration ideas here.

Analysis: We’ve learned quite a bit about which snippets perform best. The real test will be how many email sign-ups we can convert to Webmaker account holders.

——————————————————————–

Engagement Strategy #4: EARNED MEDIA

Planting seeds:

  • Mark had an interview with Press Trust of India, India's premier news agency, which has the largest press outreach in Asia.
  • Brett had an interview with The Next Web

TV/Video:

English:

What are the results of earned media efforts?

Here’s traffic coming from searches for “webmaker” and “maker party.” No boost here yet.

—–

SOCIAL (not one of our key strategies):

#MakerParty trendline: You can see the spike we saw last week has tapered off.


See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd

Some highlights:

(Screenshots of highlighted #MakerParty tweets, July 21–24, 2014.)

 



Jennifer Boriss: Looking Ahead: Challenges for the Open Web

Mon, 28/07/2014 - 14:35
At the end of this week, I'm moving on after six amazing years at Mozilla. On August 25, I'll be joining Reddit – another global open source project – as their first […]

Benjamin Kerensa: Until Next Year CLS!

Mon, 28/07/2014 - 14:00

Community Leadership Summit 2014 Group Photo

This past week marked my second year helping out as a co-organizer of the Community Leadership Summit. This year's summit was especially important because we not only introduced a new Community Leadership Forum, but also launched CLSx events and continued to make changes to our overall event format.

As in previous years, the attendance was a great mix of community managers and leaders. I was really excited that an entire group of Mozillians attended this year. As usual, my most enjoyable conversations took place at the pre-CLS social and in the hallway track. I was excited to briefly chat with the Community Team from Lego and some folks from Adobe, and to learn how they are building community in their respective settings.

I'm always a big advocate for community building, so for me CLS is an event I try to make it to each and every year, because I think it is great to have an event for community managers and builders that isn't limited to any specific industry. It is a great opportunity to share best practices and learn from one another so that everyone mutually improves their own toolkit and techniques.

It was apparent to me that there were even more women this year than in previous years, which was really awesome to see, considering CLS is oftentimes heavily attended by men from the tech industry.

I really look forward to seeing the CLS community continue to grow, and to participating in and co-organizing next year's event, and possibly even kicking off a CLSxPortland.

A big thanks to the rest of the CLS team for helping make this free event a wonderful experience for all, and to this year's sponsors: O'Reilly, Citrix, Oracle, Linux Fund, Mozilla and Ubuntu!


Dave Huseby: How to Sanitize Thunderbird and Enigmail

Mon, 28/07/2014 - 14:00
How to sanitize encrypted email to not disclose Thunderbird or Enigmail.

Benjamin Kerensa: Mozilla at O’Reilly Open Source Convention

Mon, 28/07/2014 - 03:48

Mozilla OSCON 2014 Team

This past week marked my fourth year attending the O'Reilly Open Source Convention (OSCON), and my second year speaking there. One new thing this year: I co-led Mozilla's presence at the convention, from our booth to the social events and our social media campaign.

Like every previous year, OSCON 2014 didn't disappoint, and it was great to have Mozilla back at the convention after not having a presence for some years. This year our presence focused on promoting Firefox OS, Firefox Developer Tools and Firefox for Android.

While the metrics are not yet fully tallied, I think our presence was a great success. We heard from a lot of developers who are already using our developer tools, and from a lot who are not, many of whom we were able to educate about new features and why they should use our tools.


Alex shows an attendee Firefox Dev Tools

Attendees were very excited about Firefox OS, with a majority of those stopping by asking about the different layers of the platform, where they can get a device, and how they can make an app for the platform.

In addition to our booth, we also had members of the team, such as Emma Irwin, who helped support OSCON's Children's Day by hosting a Mozilla Webmaker event that was very popular with the kids and their parents. It really was great to see the future generation tinkering with Open Web technologies.

Finally, we had a social event on Wednesday evening that was so popular the Mozilla Portland office was packed till last call. During the social event, we had a local airbrush artist doing tattoos, with several attendees opting for a Firefox tattoo.

All in all, I think our presence last week was very positive, and even the early numbers look good. I want to give a big thanks to Stormy Peters, Christian Heilmann, Robyn Chau, Shezmeen Prasad, Dave Camp, Dietrich Ayala, Chris Maglione, William Reynolds, Emma Irwin, Majken Connor, Jim Blandy, and Alex Lakatos for helping make this event a success.


Kevin Ngo: Off-Camera Lighting with Two Strobes

Mon, 28/07/2014 - 02:00
Had to grab my dad's Canon SL1, my old nifty-fifty, and install proprietary Canon RAW support for this measly shot.

One muggy evening in a dimly-lit garage. The sun had expired, and everything began to lose its supplied illumination. I saw my dear friend, Chicken, staring at me from the couch. A shipment had come in a few weeks ago: a second radio receiver for a wireless camera flash. I had two strobes (or flashes), two receivers, a camera and a trigger, a model, and darkness. It was time for a brief experiment in lighting a subject with two off-camera strobes.

Off-camera lighting allows for great creativity. Unlike an on-camera flash, which obliterates any dimension in your images, an off-camera flash (or better yet, two) puts a paintbrush of light into your hands. Rather than relying on natural light, using strobes puts the whole scene under your control. It's more meticulous, but the results are beyond the reach of a natural-light snapshot.

I've read a bit of David Hobby's Strobist, the de-facto guide to making use of off-camera flashes. There are several ways to trigger an off-camera flash; I use radio triggers. The CowboyStudio NPT-04 is a ridiculously cheap, but nails, trigger. Place the trigger on the hotshoe, and attach the receivers to the flashes. Make sure your camera is on manual and your shutter speed is below the maximum flash sync speed. It's also nice to have a flash capable of manual operation (to set power).

A good workflow for getting a scene set up is to:

  • Start without any flashes. Lower camera exposure as much as possible while still keeping all detail and legibility in the image. We start with an underexposed image, and paint the light on.
  • Add one flash at a time, illuminating what you wish to illuminate.
  • Through trial-and-error (it gets faster over time), tweak camera exposure and flash power until the scene is lit as desired.
Example 1: The Dimly-Lit Garage

Normally, when I'm indoors, I like to bounce the flash off the ceiling for a nice even swath of light. But there's less dimension to it, and it's a one-trick technique. So let's first start without any flashes:

It was dark, so they de-strobed.

It's...dark. But no worries, a lot of the detail is still legible. We see the entire model (even the dark right hand), and details in the background. I'm shooting RAW so I still have a lot of dynamic range, and we'll layer in light in the next step. Let's add one strobe:

A bit out-of-focus since I was using a manual lens, but the light is there.

We now have a strobe on camera right. It's pointed 45-degrees towards the subject from camera right. This one isn't manual so I'm forced to use it at full power, but it is not too overpowering. Since it's only one flash from one side, we see a lot of dimension on the model with the shadows on camera left.

However, too much dimension can sometimes be unflattering. We see a lot of the model's wrinkles (she hadn't had her beauty sleep). We can flatten the lighting by adding yet another strobe:

A perfect white complexion.

And we have nice, even studio lighting. This strobe is set up 45 degrees camera left, pointed towards the subject. This setup of two strobes behind the camera, each pointed 45 degrees toward the subject, is fairly common for achieving even lighting. Here's a rough photo of the setup (one strobe on the left, one strobe within the shelves):

Lighting setup.

Example 2: Out At Dusk

I took it outside to the backyard. The sun was done, so no hope for golden hour, but as strobie-doos, we don't need no sun. We can make our own sun. I placed our model, Chicken, in front of the garden on a little wooden stool. Then one flash for a sidelight fill on camera left:

The ol' Chicken in the headlights.

With one flash, it doesn't quite look natural, though it makes for a cool scene. I imagine Chicken waiting at home for her husband on the porch, the headlights slowly rearing in towards her. Our model is now pretty well lit, but the background leaves something to be desired. We can creatively use our second flash to accent the background!

Manufacturing our own golden hour.

It looks like sunrise! I actually had to manually hold the flash pointing down 45 degrees on camera right towards the plants, while setting a 12-second timer on the manually-focused camera. It's times like these when a stand to hold the flash would come in handy. The second flash created a nice warm swath of light, overpowering the first flash to create a sense that the sun is on camera right.

From Here

Off-camera lighting gives complete dictatorship over the lighting of the scene. It can make night into day, or add an extremely dramatic punch. With no strobe or on-camera flash, you are forced to make do with what you have, with little choice. With multiple flashes and lighting modifiers (umbrellas, boxes), there are no limits.

Photos taken with Pentax K-30 w/ Pentax-M 50mm f1.7.


Florian Quèze: Firefox is awesome

Sun, 27/07/2014 - 22:15

"Hey, you know what? Firefox is awesome!" someone exclaimed at the other end of the coworking space a few weeks ago. This piqued my curiosity, and I moved closer to see what she was so excited about.

When I saw the feature she was delighted to find, it reminded me of a similar situation several years ago. In high school, I was trying to convince a friend to switch from IE to Mozilla. The arguments about respecting web standards didn't convince him. He tried Mozilla anyway to please me, and found one feature that excited him.
He had been trying to save some images from webpages, and for some reason it was difficult to do (possibly because of context menu hijacking, which was common at the time, or maybe because the images were displayed as a background, …). He had even written some Visual Basic code to parse the saved HTML source code, find the image URLs, and then download them, but the results weren't entirely satisfying.
Now with Mozilla, he could just right click, select "View Page Info", click on the "Media" tab, and find a list of all the images on the page. I remember how excited he looked for one second, until he clicked a background image in the list and the preview stayed blank; he then clicked the "Save as" button anyway and… nothing happened. It turned out that the "Save as" button was just producing an error in the Error Console. He then looked at me, very disappointed, and said that my Mozilla wasn't ready yet.
After that disappointment, I didn't insist much on him using Mozilla instead of IE (I think he did switch anyway a few weeks or months later).

A few months later, as I had time during summer vacations, I tried to create an add-on for the last thing I could do with IE but not Firefox: list the hostnames that the browser connects to when loading a page (the add-on, View Dependencies, is on AMO). I used this to maintain a hosts file that was blocking ads on the network's gateway.
Working on this add-on project caused me to look at the existing Page Info code to find ideas about how to look through the resources loaded by the page. While doing this, I stumbled on the problem that was causing background image previews to not be displayed. Exactly 10 years ago, I created a patch, created a bugzilla account (I had been lurking on bugzilla for a while already, but without creating an account as I didn't feel I should have one until I had something to contribute), and attached the patch to the existing bug about this background preview issue.
Two days later, the patch was reviewed (thanks db48x!), I addressed the review comment, attached a new patch, and it was checked-in.
I remember how excited I was to verify the next day that the bug was gone in the next nightly, and how I checked that the code in the new nightly was actually using my patch.

A couple months later, I fixed the "Save as" button too in time for Firefox 1.0.

Back to 2014. The reason why someone in my coworking space was finding Firefox so awesome is that "You can click "View Page Info", and then view all the images of the page and save them." Wow. I hadn't heard anybody talk to me about Page Info in years. I used it a lot several years ago, but not much these days. I do agree with her that Firefox is awesome, not really because it can save images (although that's a great feature other browsers don't have), but because anybody can make it better for their own use, and by doing so make it awesome for millions of other people, now and in the future. Like I did, ten years ago.


Raniere Silva: MathML July Meeting

Sun, 27/07/2014 - 05:00

Note

Sorry for the delay in writing this.

This is a report about the Mozilla MathML July IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

In the last 4 weeks the MathML team closed 4 bugs and worked on one other. These are only the ones tracked by Bugzilla.

The next meeting will be on July 14th at 8pm UTC (note that it will be at a different time from the last meeting; more information below). Please add topics to the PAD.

Read more...


Nikhil Marathe: ServiceWorkers in Firefox Update: July 26, 2014

Sun, 27/07/2014 - 03:42
(I will be on vacation July 27 to August 18 and unable to reply to any comments. Please see the end of the post for other ways to ask questions and raise issues.)
It's been over 2 months since my last post, so here is an update. But first, a link to the latest build (and this time it won't expire!). For instructions on enabling all the APIs, see the earlier post.

Download builds

Registration lifecycle

The patches related to ServiceWorker registration have landed in Nightly builds! unregister() still doesn't work in Nightly (but does in the build above), since Bug 1011268 is waiting on review. The state mechanism is not available, but the bug is easy to fix and I encourage interested Gecko contributors (existing and new) to give it a shot. Also, the ServiceWorker specification changed just a few days ago, so Firefox still has the older API with everything on ServiceWorkerContainer. This is another easy-to-fix bug.
Fetch

Ben Kelly has been hard at work implementing Headers, and some of them have landed in Nightly. Unfortunately that isn't of much use right now since the Request and Response objects are very primitive and do not handle Headers. We do have a spec-updated Fetch API, with Request, Response and fetch() primitives. What works and what doesn't?
  1. Request and Response objects are available, and the fetch event will hand your ServiceWorker a Request; you can return it a Response and this will work! Only the Response("string body") form is implemented. You can of course create an instance and set the status, but that's about it.
  2. fetch() does not work on ServiceWorkers! In documents, only the fetch(URL) form works.
  3. One of our interns, Catalin Badea, has taken over implementing Fetch while I'm on vacation, so I'm hoping to publish a more functional API once I'm back.
postMessage()/getServiced()

Catalin has done a great job of implementing these, and they are waiting for review. Unfortunately I was unable to integrate his patches into the build above, but he can probably post an updated build himself.
Push

Another of our interns, Tyler Smith, has implemented the new Push API! This is available for use on navigator.pushRegistrationManager, and your ServiceWorker will receive the Push notification.
Cache/FetchStore

Nothing available yet.
Persistence

Currently neither ServiceWorker registrations nor scripts are persisted or available offline. Andrea Marchesini is working on the former, and will be back from vacation next week to finish it off. Offline script caching is currently unassigned. It is fairly hairy, but we think we know how to do it. Progress on this should happen within the next few weeks.
Documentation

Chris Mills has started working on MDN pages about ServiceWorkers.
Contributing to ServiceWorkers

As you can see, while Firefox is not yet in a position to ship full-circle offline apps, we are making progress. There are several employees and two superb interns working on this. We are always looking for more contributors. Here are various things you can do:

The ServiceWorker specification is meant to solve your needs. Yes, it is hard to figure out what can be improved without actually trying it out, but I really encourage you to step in there, ask questions and file issues to improve the specification before it becomes immortal.

Improve Service Worker documentation on MDN. The ServiceWorker spec introduces several new concepts and APIs, and the better documented they are, the faster web developers can use them. Start here.

There are several Gecko implementation bugs, ordered here in approximately increasing difficulty:
  • 1040924 - Fix and re-enable the serviceworker tests on non-Windows.
  • 1043711 - Ensure ServiceWorkerManager::Register() can always
    extract a host from the URL.
  • 1041335 - Add mozilla::services Getter for
    nsIServiceWorkerManager.
  • 982728 - Implement ServiceWorkerGlobalScope update() and
    unregister().
  • 1041340 - ServiceWorkers: Implement [[HandleDocumentUnload]].
  • 1043004 - Update ServiceWorkerContainer API to spec.
  • 931243 - Sync XMLHttpRequest should be disabled on ServiceWorkers.
  • 1003991 - Disable https:// only load for ServiceWorkers when Developer Tools are open.
  • Full list
Don't hesitate to ask for help on the #content channel on irc.mozilla.org.

Chris McDonald: Negativity in Talks

Sat, 26/07/2014 - 18:18

I was at a meetup recently, and one of the organizers was giving a talk. They came across some PHP in the demo they were doing, and cracked a joke about how bad PHP is. The crowd laughed and cheered along with the joke. This isn't an isolated incident; it happens during talks and discussions all the time. That doesn't mean it is acceptable.

When I broke into the industry, my first gig was writing Perl, Java, and PHP. All of these languages have stigmas around them these days. Perl has its magic and the notion that only neckbeard sysadmins write it. Java gets the 'I just hit tab in my IDE and the code writes itself!' jabs and other comments on how ugly it is. PHP, possibly the most made-fun-of language, doesn't even get a reason most of the time. It is just 'lulz php is bad, right gaise?'

Imagine a developer who is just getting started. They are ultra proud of their first gig, which happens to be working on a Drupal site in PHP. They come to a user group for a different language they've read about and think sounds neat. They then hear speakers that people appear to respect making jokes about the job they are proud of, and the crowd joining in on this negativity. This is not inspiring to them; it just reinforces the impostor syndrome most of us felt as we started out in tech.

So what do we do about this? If you are a group organizer, you already have all the power you need to make the changes. Talk with your speakers when they volunteer or are asked to speak. Let them know you want to promote a positive environment regardless of background. Consider writing up guidelines for your speakers to agree to.

How about as just an attendee? The best bet is probably speaking to one of the organizers. Bring it to their attention that their speakers are alienating a portion of their audience with the language trash talking. Approach it as a problem to be fixed in the future, not as if they intended to insult.

Keep in mind I’m not opposed to direct comparison between languages. “I enjoy the lack of type inference because it makes the truth table much easier to understand than, for instance, PHP’s.” This isn’t insulting the whole language, it isn’t turning it into a joke. It is just illustrating a difference that the speaker values.

Much like other negativity in our community, this will take some time to fix. Keep in mind this isn't just about user group or conference talks; discussions around a table suffer from it as well. The first place to address this problem is within ourselves. We are all better than this pandering; we can build ourselves up without having to push others down. Let's go out and make our community much more positive.



Tarek Ziadé: ToxMail experiment

Sat, 26/07/2014 - 13:22

I am still looking for a good e-mail replacement that is more respectful of my privacy.

This will never happen with the existing e-mail system due to the way it works: when you send an e-mail to someone, even if you encrypt the body of your e-mail, the metadata will transit from server to server in the clear, and the final destination will store it.

Every PGP UX I have tried is terrible anyway. It's just too painful to get things right for someone who has no knowledge (and no desire to acquire any) of how things work.

What I am aiming for now is a separate system to send and receive mail with my close friends and my family. Something that my mother can use like regular e-mail, without any extra work.

I guess some kind of "darknet for e-mails", where there are no intermediate servers between my mailbox and my mom's mailbox, and no way for an eavesdropper to get the content.

Ideally:

  • end-to-end encryption
  • direct network link between my mom's mail server and me
  • based on existing protocols (SMTP/IMAP/POP3) so my mom can use Thunderbird or I can set her up a Zimbra server.
Project Tox

The Tox Project aims to replace Skype with a more secure instant messaging system. You can send text, voice and even video messages to your friends.

It's based on NaCl for the crypto bits, and in particular the crypto_box API, which provides high-level functions to generate public/private key pairs and encrypt/decrypt messages with them.
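To get a feel for crypto_box, here is a tiny sketch using PyNaCl, a Python binding to NaCl (illustrative only, not code from Tox or Toxmail):

from nacl.public import PrivateKey, Box

# Each party generates a keypair; public keys are exchanged out of band.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# A Box combines my private key with the peer's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"hello, mom")  # a random nonce is generated for us

receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"hello, mom"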

The other main feature of Tox is its Distributed Hash Table, which contains the list of nodes connected to the network along with their Tox Ids.

When you run a Tox-based application, you become part of the Tox network by registering to a few known public nodes.

To send a message to someone, you have to know their Tox Id and send an encrypted message using the crypto_box API and the keypair magic.

Tox was created as an instant messaging system, so it has features to add/remove/invite friends, create groups, etc., but its core capability is to let you reach another node given its id, and communicate with it. And that can be any kind of communication.

So e-mails could transit through Tox nodes.

Toxmail experiment

Toxmail is my little experiment to build a secure e-mail system on the top of Tox.

It's a daemon that registers with the Tox network and runs an SMTP service that converts outgoing e-mails to text messages sent through Tox. It also converts incoming text messages back into e-mails and stores them in a local Maildir.

Toxmail also runs a simple POP3 server, so it's actually a full stack that can be used through an e-mail client like Thunderbird.

You can just create a new account in Thunderbird, point it to the Toxmail SMTP and POP3 local services, and use it like any other e-mail account.

When you want to send someone an e-mail, you have to know their Tox Id, and use TOXID@tox as the recipient.

For example:

7F9C31FE850E97CEFD4C4591DF93FC757C7C12549DDD55F8EEAECC34FE76C029@tox

When the SMTP daemon sees this, it tries to send the e-mail to that Tox Id. What I am planning to do is add an automatic conversion of regular e-mail addresses using a lookup table the user can maintain: a list of contacts where each entry provides an e-mail address and a Tox Id.
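A minimal sketch of how that recipient resolution could look (hypothetical names and entries, not the actual Toxmail code):

# e-mail address -> Tox Id: the lookup table the user maintains
CONTACTS = {
    'mom@example.com': '7F9C31FE850E97CEFD4C4591DF93FC757C7C12549DDD55F8EEAECC34FE76C029',
}

def resolve_recipient(address):
    # Return the Tox Id an outgoing e-mail should be routed to, or None.
    if address.endswith('@tox'):
        return address[:-len('@tox')]   # explicit TOXID@tox form
    return CONTACTS.get(address)        # fall back to the contact table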

End-to-end encryption, no intermediates between the user and the recipient. Ya!

Caveats & Limitations

For Toxmail to work, it needs to be registered with the Tox network all the time.

This limitation can be partially solved by adding a retry feature to the SMTP daemon: if the recipient's node is offline, the mail is stored and sent again later.

But for the e-mail to go through, the two nodes have to be online at the same time at some point.

Maybe a good way to solve this would be to have Toxmail run on a Raspberry Pi plugged into the home internet box. That'd make sense, actually: run your own little mail server for all your family/friends conversations.

One major problem, though, is what to do with e-mails addressed both to recipients that are part of your Toxmail contact list and to recipients that are not using Toxmail. I guess the best thing to do is to fall back to regular routing in that case, and let the user know.

Anyway, lots of fun playing with this in my spare time.

The prototype is being built here, using Python and the PyTox binding:

https://github.com/tarekziade/toxmail

It has reached a state where you can actually send and receive e-mails :)

I'd love to have feedback on this little project.


Kevin Ngo: Poker Sess.28 - Building an App for Tournament Players

Sat, 26/07/2014 - 02:00
Re-learning how to fish.

Nothing like a win to get things back on track. I went back to my bread-and-butter, the Saturday freerolls at the Final Table. Through the dozen or so times I've played these freerolls, I've amassed an insane ROI. After three hours of play, we chopped four ways for $260.

Building a Personalized Poker Buddy App

I have been thinking about building a personalized mobile app to assist me in all things poker. I always try to look to my hobbies for inspiration for things to build and develop. With insomnia at the time of writing, I was reading Rework by 37signals to pass the time; it said to make use of fleeting inspiration as it comes, and this idea may click. The app will have two faces: a poker tracker on one side, and a handy tournament pocket tool on the other.

The poker tracker would track and graph earnings (and losings!) over time. Data is beautiful, and a solid green line slanting from the lower-left to the upper-right would impart some motivation. My blog (and an outdated JSON file) has been the only means I have for bookkeeping. I'd like to be able to easily input results on my phone immediately after a tournament (for those times I don't feel like blogging after a bust).

The pocket tool will act as a "pre-hand" reference during late-stage live tournaments, giving recommendations on what I should do next hand, factoring in several conditions and situations. It will be optimized to be usable before a hand, since phone use during a hand is illegal. The visuals will be obfuscated and surreptitious (maybe styled like Facebook...) so that neighboring players don't catch on. I'd input the blinds, antes, number of players, table dynamics, my stack size, and my position to determine the range of hands I can profitably open-shove.

Though it can also act as a post-hand reference, containing Harrington's pre-flop strategy charts and some hand-vs-hand race percentages.
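As a taste of the kind of arithmetic the tool would automate, here is a hypothetical sketch computing Harrington's M-ratio; the inputs are made up, and this isn't the app's actual logic:

def m_ratio(stack, small_blind, big_blind, ante, players):
    # How many orbits the stack survives without playing a hand.
    cost_per_orbit = small_blind + big_blind + ante * players
    return stack / cost_per_orbit

# e.g. 9-handed, blinds 400/800 with a 100 ante, 10,000 chips behind:
m = m_ratio(stack=10000, small_blind=400, big_blind=800, ante=100, players=9)
print(round(m, 1))  # ~4.8, deep in shove-or-fold territory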

I'd pay for this, and I bet other live tournament players would be interested as well. I have been in sort of an entrepreneurial mood lately, grabbing Rework and The $100 Startup. I have domain knowledge, a need for a side project, a niche, and something I could dogfood.

The Poker Bits

Won the morning freeroll, busted at the final table of the afternoon freeroll. The biggest mistake that stuck out was calling off a river bet with AQ on a KxxQx board against a maniac (who was obviously high). He raised UTG, I flatted in position with AQ. I called his half-pot cbet with a gutshot and an overcard, keeping his maniac image in mind. I improved on the turn and called a min-bet. Then I called a good-sized bet on the river. I played this way given his image and the pot odds, but I should have 3bet pre, folded the flop, raised the turn, or folded the river given the triple barrel.

I have been doing well in online tournaments as well: a first-place finish in a single-table tourney, and a couple of high finishes in 500-man multi-table tourneys. Though I have been doing terribly in cash games. It's weird; I used to play exclusively 6-max cash, but since I started playing full-ring tourneys, I haven't gotten reaccustomed to it. I prefer the flow of tourneys: there's a start and an end, players become increasingly aggressive, and the blinds make it feel like I'm always on the edge. Conversely, cash games are a boring grind.

Session Conclusions
  • Went Well: playing more conservative, decreasing cbet%, improving hand reading
  • Mistakes: should have folded a couple of one-pair hands to double/triple barrels, playing terribly at cash games, playing AQ badly preflop
  • Get Better At: understanding verbal poker tells from Elwood's new book
  • Profit: +$198

Jeff Walden: New mach build feature: build-complete notifications on Linux

Sat, 26/07/2014 - 00:19

Spurred on by gps‘s recent mach blogging (and a blogging dry spell to rectify), I thought it’d be worth noting a new mach feature I landed in mozilla-inbound yesterday: build-complete notifications on Linux.

On OS X, mach build spawns a desktop notification when a build completes. It’s handy when the terminal where the build’s running is out of view — often the case given how long builds take. I learned about this feature when stuck on a loaner Mac for a few months due to laptop issues, and I found the notification quite handy. When I returned to Linux, I wanted the same thing there. evilpie had already filed bug 981146 with a patch using DBus notifications, but he didn’t have time to finish it. So I picked it up and did the last 5% to land it. Woo notifications!

(Minor caveat: you won’t get a notification if your build completes in under five minutes. Five minutes is probably too long; some systems build fast enough that you’d never get a notification. gps thinks this should be shorter and ideally configurable. I’m not aware of an existing bug for this.)
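For the curious, the notification logic boils down to something like this (a hypothetical sketch using notify-send; the actual mach implementation goes through DBus):

import subprocess
import time

FIVE_MINUTES = 300  # the threshold mentioned above, in seconds

def build_with_notification(run_build):
    start = time.time()
    run_build()
    elapsed = time.time() - start
    # Short builds finish while you're still watching; only long ones notify.
    if elapsed >= FIVE_MINUTES:
        subprocess.call(['notify-send', 'Build complete',
                         'Finished in %.1f minutes' % (elapsed / 60)])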


Florian Quèze: Converting old Mac minis into CentOS Instantbird build slaves

Fri, 25/07/2014 - 23:24

A while ago, I received a few retired Mozilla minis. Today two of them started their new life as CentOS 6 build slaves for Instantbird, which means we now have Linux nightlies again! Our previous Linux build slave, running CentOS 5, was no longer able to build nightlies based on current mozilla-central code, and this is the reason why we haven't had Linux nightlies since March. We know it's been a long wait, but to help our dear Linux testers forgive us, we started offering 64-bit nightly builds!

For the curious, and for future reference, here are the steps I followed to install these two new build slaves:

Partition table

The Mac minis came with a GPT partition table and an hfs+ partition that we don't want. While the CentOS installer was able to detect them, the grub it installed there didn't work. The solution was to convert the GPT partition table to the older MBR format. To do this, boot into a modern Linux distribution (I used an Ubuntu 13.10 live DVD that I had around), install gdisk (sudo apt-get update && sudo apt-get install gdisk) and use it to edit the disk's partition table:

sudo gdisk /dev/sda
Press 'r' to start recovery/transformation, 'g' to convert from GPT to MBR, 'p' to see the resulting partition table, and finally 'w' to write the changes to disk (instructions initially from here).
Exit gdisk.
Now you can check the current partition table using gparted. At this point I deleted the hfs+ partition.

Installing CentOS

The version of CentOS needed to use the current Mozilla build tools is CentOS 6.2. We previously tried another (slightly newer) version, and we never got it to work.

Reboot on a CentOS 6.2 live CD (press the 'c' key at startup to force the Mac mini to look for a bootable CD).
Follow the instructions to install CentOS on the hard disk.
I customized the partition table a bit (50000MB for /, 2048MB of swap space, and the rest of the disk for /home).

The only non-obvious part of the CentOS install is that the boot loader needs to be installed on the MBR rather than on the partition where the system is installed. When the installer asks where grub should be installed, set it to /dev/sda (the default is /dev/sda2, and that won't boot). Of course I got this wrong in my first attempts.

Installing Mozilla build dependencies

First, install an editor that is usable to you. I typically use emacs, so: sudo yum install emacs

The Mozilla Linux build slaves use a specifically tweaked version of gcc so that the produced binaries have low runtime dependencies, but the compiler still has the build time feature set of gcc 4.7. If you want to use something as old as CentOS6.2 to build, you need this specific compiler.

The good thing is, there's a yum repository publicly available where all the customized mozilla packages are available. To install it, create a file named /etc/yum.repos.d/mozilla.repo and make it contain this:

[mozilla]
name=Mozilla
baseurl=http://puppetagain.pub.build.mozilla.org/data/repos/yum/releng/public/CentOS/6/x86_64/
enabled=1
gpgcheck=0

Adapt the baseurl to end with i386 or x86_64 depending on whether you are making a 32-bit or 64-bit slave.

After saving this file, you can check that it had the intended effect by running this command to list the packages from the mozilla repository: repoquery -q --repoid=mozilla -a

You want to install the version of gcc473 and the version of mozilla-python27 that appear in that list.

You also need several other build dependencies. MDN has a page listing them:

yum groupinstall 'Development Tools' 'Development Libraries' 'GNOME Software Development'
yum install mercurial autoconf213 glibc-static libstdc++-static yasm wireless-tools-devel mesa-libGL-devel alsa-lib-devel libXt-devel gstreamer-devel gstreamer-plugins-base-devel pulseaudio-libs-devel

Unfortunately, two dependencies were missing from that list (I've now fixed the page):
yum install gtk2-devel dbus-glib-devel

At this point, the machine should be ready to build Firefox.

Instantbird, because of libpurple, depends on a few more packages:
yum install avahi-glib-devel krb5-devel

And it will be useful to have ccache:
yum install ccache

Installing the buildbot slave

First, install the buildslave command, which unfortunately doesn't come as a yum package, so you need to install easy_install first:

yum install python-setuptools python-devel mpfr
easy_install buildbot-slave

python-devel and mpfr here are build-time dependencies of the buildbot-slave package, and not having them installed will cause compile errors while attempting to install buildbot-slave.

We are now ready to actually install the buildbot slave. First let's create a new user for buildbot:

adduser buildbot
su buildbot
cd /home/buildbot

Then the command to create the local slave is:

buildslave create-slave --umask=022 /home/buildbot/buildslave buildbot.instantbird.org:9989 linux-sN password

The buildbot slave will be significantly more useful if it starts automatically when the OS starts, so let's edit the crontab (crontab -e) to add this entry:
@reboot PATH=/usr/local/bin:/usr/bin:/bin /usr/bin/buildslave start /home/buildbot/buildslave

The reason why the PATH environment variable has to be set here is that the default path doesn't contain /usr/local/bin, but that's where the mozilla-python27 package installs python2.7 (which is required by mach during builds).

One step in the Instantbird builds configured on our buildbot uses hg clean --all, and this requires the purge mercurial extension to be enabled, so let's edit ~buildbot/.hgrc to look like this:
$ cat ~/.hgrc
[extensions]
purge =

Finally, ssh needs to be configured so that successful builds can be uploaded automatically. Copy and adapt ~buildbot/.ssh from an existing working build slave. The files that are needed are id_dsa (the ssh private key) and known_hosts (so that ssh doesn't prompt about the server's fingerprint the first time we upload something).

Here we go, working Instantbird linux build slaves! Figuring out all these details for our first CentOS6 slave took me a few evenings, but doing it again on the second slave was really easy.


Aki Sasaki: on leaving mozilla

Fri, 25/07/2014 - 21:26

Today's my last day at Mozilla. It wasn't an easy decision to move on; this is the best team I've been a part of in my career. And working at a company with such idealistic principles and the capacity to make a difference has been a privilege.

Looking back at the past five-and-three-quarter years:

  • I wrote mozharness, a versatile scripting harness. I strongly believe in its three core concepts: versatile locking config; full logging; modularity.



  • I helped FirefoxOS (b2g) ship, and it's making waves in the industry. Internally, the release processes are well on the path to maturing and stabilizing, and b2g is now riding the trains.

    • Merge day: Releng took over ownership of merge day, and b2g increased its complexity exponentially.

      Listening to @escapewindow explain merge day processes is like looking Cthulhu in the eyes. Sanity draining away rapidly

      — Laura Thomson (@lxt) April 29, 2014

      I don't think it's quite that bad :) I whittled it down from requiring someone's full mental capacity for three out of every six weeks, to several days of precisely following directions.

    • I rewrote vcs-sync to be more maintainable and robust, and to support gecko-dev and gecko-projects. Being able to support both mercurial and git across many hundreds of repos has become a core part of our development and automation, primarily because of b2g. The best thing you can say about a mission critical piece of infrastructure like this is that you can sleep through the night or take off for the weekend without worrying if it'll break. Or go on vacation for 3 1/2 weeks, far from civilization, without feeling guilty or worried.


  • I helped ship three mobile 1.0's. I learned a ton, and I don't think I could have gotten through it by myself; John and the team helped me through this immensely.

    • On mobile, we went from one or two builds on a branch to full tier-1 support: builds and tests on checkin across all of our integration-, release-, and project- branches. And mobile is riding the trains.

    • We Sim-shipped 5.0 on Firefox desktop and mobile off the same changeset. Firefox 6.0b2, and every release since then, was built off the same automation for desktop and mobile. Those were total team efforts.

    • I will be remembered for the mobile pedalboard. When we talked to other people in the industry, this was more on-device mobile test automation than they had ever seen or heard of; their solutions all revolved around manual QA.

      (full set)

    • And they are like effin bunnies; we later moved on to shoe rack bunnies, rackmounted bunnies, and now more and more emulator-driven bunnies in the cloud, each numbering in the hundreds or more. I've been hands off here for quite a while; the team has really improved things leaps and bounds over my crude initial attempts.


  • I brainstormed next-gen build infrastructure. I started blogging about this back in January 2009, based largely around my previous webapp+db design elsewhere, but I think my LWR posts in Dec 2013 had more of an impact. A lot of those ideas ended up in TaskCluster; mozharness scripts will contain the bulk of the client-side logic. We'll see how it all works when TaskCluster starts taking on a significant percentage of the current buildbot load :)

I will stay a Mozillian, and I'm looking forward to seeing where we can go from here!




Vaibhav Agrawal: Let's have more green trees

Fri, 25/07/2014 - 21:18

I have been working on making jobs ignore intermittent failures for mochitests (bug 1036325) on try servers, to prevent unnecessary oranges and to save the resources that go into retriggering those jobs on tbpl. I am glad to announce that this has been achieved for desktop mochitests (Linux, OS X and Windows). It doesn't work for Android/B2G mochitests yet, but they will be supported in the future. This post explains how it works in detail and is a bit lengthy, so bear with me.

Let's see the patch in action. Here is an example of an almost green try push:

Tbpl Push Log

 Note: one bc1 orange job is because of a leak (Bug 1036328)

In this push, the intermittents were suppressed, for example this log shows an intermittent on mochitest-4 job on linux :


Even though there was an intermittent failure for this job, the job remains green. We can determine whether a job produced an intermittent by inspecting the number of tests run for the job on tbpl, which will be much smaller than normal. For example, the above intermittent mochitest-4 job shows “mochitest-plain-chunked: 4696/0/23” as compared to the normal “mochitest-plain-chunked: 16465/0/1954”. Another way is to search the log of the particular job for “TEST-UNEXPECTED-FAIL”.

<algorithm>

The algorithm behind getting a green job even in the presence of an intermittent failure is this: we recognize the failing test and run it independently 10 times. If the test fails fewer than 3 times out of 10, it is marked as intermittent and we leave it. If it fails 3 or more times out of 10, there is a real problem in the test, and the job turns orange.

</algorithm>
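A toy sketch of that decision rule (illustrative only; the real logic lives in mochitest's runtests.py):

def classify(run_test, failing_test, runs=10, threshold=3):
    # Re-run the failing test in isolation to decide whether it is flaky.
    failures = sum(1 for _ in range(runs) if not run_test(failing_test))
    # Fewer than `threshold` failures: intermittent, and the job stays green.
    return 'intermittent' if failures < threshold else 'real failure'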

Next, to test the case of a “real” failure, I wrote a unit test and tried it out in this try push:


This job is orange and the log for this push is:


In this summary, a test fails more than three times, and hence we get a real failure. The important line in this summary is:

3086 INFO TEST-UNEXPECTED-FAIL | Bisection | Please ignore repeats and look for ‘Bleedthrough’ (if any) at the end of the failure list

This tells us that the bisection procedure has started and that we should look out for a future “Bleedthrough”, that is, the test causing the failure. And on the last line it prints the “real failure”:

TEST-UNEXPECTED-FAIL | testing/mochitest/tests/Harness_sanity/test_harness_post_bisection.html | Bleedthrough detected, this test is the root cause for many of the above failures

Aha! So we have found a permanently failing test, and it is probably due to some fault in the developer's patch. Thus, developers can now focus on the real problem rather than being lost in the intermittent failures.

This patch has landed on mozilla-inbound, and I am working on enabling it as an option on trychooser (more on that in the next blog post). However, if someone wants to try this out now (it works only for desktop mochitests), one can hack in just a single line:

options.bisectChunk = 'default'

such as in this diff inside runtests.py and test it out!

Hopefully, this will also take us a step closer to AutoLand (automatic landing of patches).

Other Bugs Solved for GSoC:

[1028226] – Clean up the code for manifest parsing
[1036374] – Adding a binary search algorithm for bisection of failing tests
[1035811] – Mochitest manifest warnings dumped at start of each robocop test

A big shout-out to my mentor (Joel Maher) and the other A-Team members for helping me in this endeavour!



Just Browsing: Taming Gruntfiles

Fri, 25/07/2014 - 15:53

Every software project needs plumbing.

If you write your code in JavaScript, chances are you're using Grunt. And if your project has been around long enough, chances are your Gruntfile is huge. Even though you write comments and indent properly, the configuration is starting to look unwieldy and is getting hard to navigate and maintain (see ngbp's Gruntfile for an example).

Enter load-grunt-config, a Grunt plugin that lets you break up your Gruntfile by task (or task group), allowing for a nice navigable list of small per-task Gruntfiles.

When used, your Grunt config file tree might look like this:

./
|_ Gruntfile.coffee
|_ grunt/
   |_ aliases.coffee
   |_ browserify.coffee
   |_ clean.coffee
   |_ copy.coffee
   |_ watch.coffee
   |_ test-group.coffee

watch.coffee, for example, might be:

module.exports = {
  sources:
    files: ['<%= buildDir %>/**/*.coffee', '<%= buildDir %>/**/*.js']
    tasks: ['test']
  html:
    files: ['<%= srcDir %>/**/*.html']
    tasks: ['copy:html', 'test']
  css:
    files: ['<%= srcDir %>/**/*.css']
    tasks: ['copy:css', 'test']
  img:
    files: ['<%= srcDir %>/img/**/*.*']
    tasks: ['copy:img', 'test']
}

and aliases.coffee:

module.exports = {
  default: [
    'clean'
    'browserify:libs'
    'browserify:dist'
  ]
  dev: [
    'clean'
    'connect'
    'browserify:libs'
    'browserify:dev'
    'mocha_phantomjs'
    'watch'
  ]
}

By default, load-grunt-config reads the task configurations from the grunt/ folder located on the same level as your Gruntfile. If there's an aliases.js|coffee|yml file in that directory, load-grunt-config will use it to load your task aliases (which is convenient, because one of the problems with long Gruntfiles is that the task aliases are hard to find).

Other files in the grunt/ directory define configurations for a single task (e.g. grunt-contrib-watch) or a group of tasks.

Another nice thing is that load-grunt-config takes care of loading plugins; it reads package.json and automatically calls loadNpmTasks for all the grunt plugins it finds.

To sum it up: for a bigger project, your Gruntfile can get messy. load-grunt-config helps combat that by introducing structure into the build configuration, making it more readable and maintainable.

Happy grunting!

