
Mozilla Nederland Logo
The Dutch Mozilla community

Air Mozilla: Bedrock: From Code to Production

Mozilla planet - fr, 31/03/2017 - 21:28

From Code to Production: A presentation on how changes to our flagship website (www.mozilla.org) are made, and how to request them so that they're as high-quality and quick-to-production as...

Categories: Mozilla-nl planet

Mozilla Addons Blog: Friend of Add-ons: Prasanth

Mozilla planet - fr, 31/03/2017 - 17:45

Please meet our newest Friend of Add-ons, Prasanth! Prasanth became a Mozillian in 2015 when he joined the Mozilla TamilNadu community and became a Firefox Student Ambassador. Over the last two years, he has contributed to a variety of projects at Mozilla with great enthusiasm. Last year, he organized a group of eleven participants to test featured add-ons for e10s compatibility.

In January, Prasanth became a member of the Add-ons Advisory Board, and has emerged as someone very adept at finding great add-ons to feature. “Prasanth has shown a true talent for identifying great add-ons,” comments Scott DeVaney, Editorial & Campaign Manager for the add-ons team.

In addition to organizing community events and contributing to the Advisory Board, Prasanth is also learning how to write scripts for testing automation and helping contributors participate in QA bugdays.

Of his experience as a contributor at Mozilla, Prasanth says,

“Contributing in an open source community like Mozilla gave me the opportunity to know many great contributors and get their help in developing my skills. It showed me a way to rediscover myself as a person who loves open source philosophy and practices.”

In his spare time, Prasanth enjoys hanging out with friends and watching serials like The Flash and Green Arrow.

Congratulations, Prasanth, and thank you for your contributions to the add-ons community!

Are you a contributor to the add-ons community or know of someone who should be recognized? Please be sure to add them to our Recognition Wiki!

The post Friend of Add-ons: Prasanth appeared first on Mozilla Add-ons Blog.

Categories: Mozilla-nl planet

Karl Dubost: [worklog] Edition 061. Twisted bugs.

Mozilla planet - fr, 31/03/2017 - 11:05
Sections: webcompat life, webcompat issues, webcompat.com dev, To read

Otsukare!

Categories: Mozilla-nl planet

The Mozilla Blog: U.S. Broadband Privacy Rules: We will Fight to Protect User Privacy

Mozilla planet - fr, 31/03/2017 - 01:08

In the U.S., Congress voted to overturn rules that the Federal Communications Commission (FCC) created to protect the privacy of broadband customers. Mozilla supported the creation and enactment of these rules because strong rules are necessary to promote transparency, respect user privacy and support user control.

The Federal Trade Commission has authority over the online industry in general, but these rules were crafted to create a clear policy framework for broadband services where the FTC’s policies don’t apply. They require internet service providers (ISPs) to notify us and get permission from us before any of our information would be collected or shared. ISPs know a lot about us, and this information (which includes your web browsing history) can potentially be shared with third-parties.

We take a stand where we see users lacking meaningful choice with their online privacy because of a lack of transparency and understanding – and, in the case of broadband services, often a lack of options for competitive services. These rules help empower users, and it’s unclear whether remaining laws and policies built around the FCC’s existing consumer privacy framework or other services will be helpful – or whether the current FCC will enforce them.

Now, this is in front of the President to sign or reject, although the White House has already said it “strongly supports” the move and will advise the President to sign. We hope that broadband privacy will be prioritized and protected by the U.S. government, but regardless, we are ready to fight along with you for the right to privacy online.

If these rules are overturned, we will need to be vigilant in monitoring broadband provider practices to demand transparency, and work closely with the FCC to demand accountability. Mozilla – and many other tech companies – strive to deliver better online privacy and security through our products as well as our policy and advocacy work, and that job is never done because technology and threats to online privacy and security are always evolving.

The post U.S. Broadband Privacy Rules: We will Fight to Protect User Privacy appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Firefox Nightly: Guest post: India uses Firefox Nightly – A Campaign especially for India

Mozilla planet - to, 30/03/2017 - 18:45

Biraj Karmakar

This is a guest post by Biraj Karmakar, who has been active promoting Mozilla and Mozilla software in India for over 7 years. Biraj is taking the initiative of organizing a series of workshops throughout the country to convince technical people (mozillians or not) who may be interested in getting involved in Mozilla to use Firefox Nightly.

Fellow mozillians, I am super excited to inform you that very soon we are going to launch a new campaign in India called “India uses Firefox Nightly”. The mission behind this campaign is to increase Firefox Nightly usage in India.

Why India?

As we all know, we have a great Mozilla community across India. We have a large number of dedicated students, developers and evangelists who are really passionate about Mozilla.

We have seen that very few people in India actually know about Firefox Nightly. So we have taken an initiative to run a pilot campaign for Firefox Nightly throughout India.

Firefox Nightly, as a pre-release version of Firefox targeting power-users and core Mozilla contributors, is a glimpse of what the future of Firefox will be for hundreds of millions of people. Having a healthy and strong technical community using and testing Nightly is a great way to easily get involved in Mozilla by providing a constant feedback loop to developers. Here you can test lots of pre-release features.

So it needs a little bit of general promotion, which will help bring in a good number of tech-savvy power users who may become new active community members.

A Few Key Points

  • Time frame: 2 months max
  • Hashtag: #INUsesFxNightly
  • Event duration: 3–5 hours
  • Total events: 15
  • Who will join us: We invite students, community members, developers, and open source evangelists to run this campaign.

Parts of the Campaign

Online activities:

Mozillians spread the message of this campaign around India through social media (Facebook, Twitter, Instagram), blogs, promotional snippets, email, mailing lists, website news items, etc.

Offline activities:

Here, any community member or open source enthusiast can host an event in their area or join a nearby event to help the organizers. The event can be held at a startup company, a school, a university, a community centre, a home, or a café.

Goals for this initiative

Impact:

  • 1000 Nightly Installed
  • 20 New Active Contributors

Strength:

  • 30 Mozillians run events (2 mozillians per event)
  • 500 Attendees

 

By the way, have you tried Firefox Nightly yet? Download it now!

 

More details will come soon. Stay tuned!

 

We need many core campaign volunteers who will help us to run this initiative smoothly. If you are interested in joining the campaign team, please let me know.

Have design skills? We need a logo for this campaign; please come and help us here.

Categories: Mozilla-nl planet

Daniel Stenberg: Yes C is unsafe, but…

Mozilla planet - to, 30/03/2017 - 10:04

I posted curl is C a few days ago and it raced on hacker news, reddit and elsewhere and got well over a thousand comments in those forums alone. The blog post has been read more than 130,000 times so far.

Addendum a few days later

Many commenters on my curl is C post took issue with my claim that most of our security flaws aren’t due to curl being written in C. In some of the threads, it turned into some sort of CVE-counting game.

I think that’s missing the point I was trying to make. Even if 75% of them happened due to us using C, that fact alone would still not be a strong enough reason for me to reconsider our language of choice (at this point in time). We use C for a whole range of reasons, as I tried to lay out there, in spite of the security challenges the language brings. We know C has tricky corners and we know we are likely to make more mistakes going forward.

curl is currently one of the most distributed and most widely used software components in the universe, be it open or proprietary, and there are easily way over three billion instances of it running in appliances, servers, computers and devices across the globe. Right now. In your phone. In your car. In your TV. In your computer. Etc.

If we then have had 40, 50 or even 60 security problems because of us using C, throughout our 19 years of history, it really isn’t a whole lot given the scale and time we’re talking about here.

Using another language would’ve caused at least some problems due to that language, plus I feel a need to underscore the fact that none of the memory-safe languages anyone would suggest we switch to have been around for 19 years. A portion of our security bugs were even created in our project before those alternatives you would suggest were available! Let alone as stable and functional alternatives.

This is of course no guarantee that there aren’t still more ugly things to discover, or that we won’t mess up royally in the future, but who will throw the first stone when it comes to that? We will continue to work hard on minimizing risks, detecting problems early by ourselves, and working closely together with everyone who reports suspected problems to us.

Number of problems as a measurement

The fact that we have 62 CVEs to date (and more will follow surely) is rather a proof that we work hard on fixing bugs, that we have an open process that deals with the problems in the most transparent way we can think of and that people are on their toes looking for these problems. You should not rate a project in any way purely based on the number of CVEs – you really need to investigate what lies behind the numbers if you want to understand and judge the situation.

Future

Let me clarify this too: I can very well imagine a future where we transition to another language or attempt various other things to enhance the project further – security-wise and more. I’m not really ruling anything out, as I usually only have very vague ideas of what the future might look like. I just don’t expect it to happen within the next few years.

These “you should switch language” remarks are strangely enough from the backseat drivers of the Internet. Those who can tell us with confidence how to run our project but who don’t actually show us any code.

Languages

What perhaps made me most sad in the aftermath of said previous post, is everyone who failed to hold more than one thought at a time in their heads. In my post I wrote 800 words on some of the reasoning behind us sticking to the language C in the curl project. I specifically did not say that I dislike certain other languages or that any of those alternative languages are bad or should be avoided. Please friends, I wrote about why curl uses C. There are many fine languages out there and you should all use them as much as you possibly can, and I will too – but not in the curl project (at the moment). So no, I don’t hate language XXXX. I didn’t say so, and I didn’t imply it either. Don’t put that label on me, thanks.

Categories: Mozilla-nl planet

Karl Dubost: Our (css) discussion is broken

Mozilla planet - to, 30/03/2017 - 03:39

I have seen clever, thoughtful and hardworking people trying to be right about both “CSS is not broken” and “CSS is broken”. Popcorn. I will not attempt to be right about anything in this post. I'm just sharing a feeling.

Let me steal the image from this provocative tweet/thread. You can also read the thread in these tweets.

CSS in JS

I guess that, looking at the image, I understood how and why the discussions would never reach any resolution. Just let me clarify a few things.

A Bit Of An Introduction

CSS means Cascading Style Sheets: “Cascading Style Sheets (CSS) is a simple mechanism for adding style (e.g., fonts, colors, spacing) to Web documents.” Words have meanings. The spec is 20 years old, but the idea came from a proposal on www-talk on October 10, 1994. Fast forward: there is an ongoing effort to formalize the CSS Object Model, aka CSSOM, which “defines APIs (including generic parsing and serialization rules) for Media Queries, Selectors, and of course CSS itself”.

Let's look at the very recent CSS Grid, currently in Candidate Recommendation phase and starting to be well deployed in browsers. Look carefully at the prose. The word DOM is cited only twice in an example. CSS is a language for describing styling rules.

The Controversy

What people are arguing about (rather than discussing or dialoguing about) is not CSS itself, but how to apply the style rules to a document.

  • Some developers prefer to use style elements and files, using the properties of the cascade and specificity (which are useful), aka CSS.
  • Some developers want to apply the style to each individual node in the DOM using JavaScript, to constrain (remove) the cascade and the specificity, because they consider them annoying for their use case.

devtools screenshot

I do not have a clear-cut opinion about it. I don't think CSS is broken. I think it is perfectly usable. But I also completely understand what the developers who want to use JavaScript to set style on elements are doing. It makes me uncomfortable the same way that Flash (circa 2000s) or frames (circa 1995) made me uncomfortable. It is something related to la rouille du Web (The Rusty Web): the perennity of what we create on the Web. I guess in this discussion there are sub-discussions about craft and the love of it, the Web's perennity, and the notion of Web industrialization.

There is one thing which rubs me the wrong way: when people talk about HTML and CSS with the “in JS” term attached. When we manipulate the DOM, create new nodes and associate style with them, we are not doing HTML and CSS anymore. We are basically modifying the DOM, which is a completely different layer. It's easy to see how the concerns are different. When we open a web site made with React, for example, the HTML semantics are often gone. You could use only div and span and it would be exactly the same.

To better express my feeling, let's rephrase this:

You could use only div and span and it would be exactly the same.

It would become:

pronoun verb verb adverb noun conjunction noun conjunction pronoun verb verb adverb determiner noun.

then we would apply some JavaScript to convey meaning on it.

So…

As I said, I don't think I'm adding anything useful to the debates. I'm not a big fan of doing everything through JavaScript apps, maybe because I'm old or maybe because I value time.

I also would be curious whether the advocates of applying styles to nodes in the DOM have tried the experiment of generating (programmatically) a hierarchical CSS from the DOM. Basically the salmon ladder in the river to go back to the source. I'm not talking about creating a snapshot with plenty of style attributes, but reverse engineering the cascade. At least just as an experiment, to understand the two paths and what we could learn from them. It would minimize the CSS selectors, take advantage of the cascade, avoid !important as much as possible, etc.

Otsukare!

Categories: Mozilla-nl planet

Nathan Froyd: on mutex performance and WTF::Lock

Mozilla planet - wo, 29/03/2017 - 17:25

One of the things I’ve been doing this quarter is removing Gecko’s dependence on NSPR locks.  Gecko’s (non-recursive) mutexes and condition variables now use platform-specific constructs, rather than indirecting through NSPR.  This change makes things smaller, especially on POSIX platforms, and uses no dynamic memory allocation, so there are fewer untested failure paths.  I haven’t rigorously benchmarked things yet, but I suspect various operations are faster, too.
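
To give a concrete idea of what “platform-specific constructs” can look like, here is a hypothetical sketch of a thin wrapper that picks a primitive at compile time (an illustration only, not Gecko's actual implementation):

/* Hypothetical sketch: a non-recursive mutex type chosen per platform,
   with no dynamic memory allocation (not Gecko's real code). */
#ifdef _WIN32
#include <windows.h>
typedef CRITICAL_SECTION PlatformMutex;
static void mutex_init(PlatformMutex *m)   { InitializeCriticalSection(m); }
static void mutex_lock(PlatformMutex *m)   { EnterCriticalSection(m); }
static void mutex_unlock(PlatformMutex *m) { LeaveCriticalSection(m); }
#else
#include <pthread.h>
typedef pthread_mutex_t PlatformMutex;
static void mutex_init(PlatformMutex *m)   { pthread_mutex_init(m, NULL); }
static void mutex_lock(PlatformMutex *m)   { pthread_mutex_lock(m); }
static void mutex_unlock(PlatformMutex *m) { pthread_mutex_unlock(m); }
#endif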

As I’ve done this, I’ve fielded questions about why we’re not using something like WTF::Lock or the Rust equivalent in parking_lot.  My response has always been some variant of the following: the benchmarks for the WTF::Lock blog post were conducted on OS X.  We have anecdotal evidence that mutex overhead can be quite high on OS X, and that changing locking strategies on OS X can be beneficial.  The blog post also says things like:

One advantage of OS mutexes is that they guarantee fairness: All threads waiting for a lock form a queue, and, when the lock is released, the thread at the head of the queue acquires it. It’s 100% deterministic. While this kind of behavior makes mutexes easier to reason about, it reduces throughput because it prevents a thread from reacquiring a mutex it just released.

This is certainly true for mutexes on OS X, as the measurements in the blog post show.  But fairness is not guaranteed for all OS mutexes; in fact, fairness isn’t even guaranteed in the pthreads standard (which OS X mutexes follow).  Fairness in OS X mutexes is an implementation detail.

These observations are not intended to denigrate the WTF::Lock work: the blog post and the work it describes are excellent.  But it’s not at all clear that the conclusions reached in that post necessarily carry over to other operating systems.

As a partial demonstration of the non-cross-platform applicability of some of the conclusions, I ported WebKit’s lock fairness benchmark to use raw pthreads constructs; the code is available on GitHub.  The benchmark sets up a number of threads that are all contending for a single lock.  The number of lock acquisitions for each thread over a given period of time is then counted.  While both of these qualities are configurable via command-line parameters in WebKit’s benchmark, they are fixed at 10 threads and 100ms in mine, mostly because I was lazy. The output I get on my Mac mini running OS X 10.10.5 is as follows:

1509 1509 1509 1509 1509 1509 1509 1508 1508 1508

Each line indicates the number of lock acquisitions performed by a given thread.  Notice the nearly-identical output for all the threads; this result follows from the fairness of OS X’s mutexes.

The output I get on my Linux box is quite different (aside from each thread performing significantly more lock acquisitions because of differences in processor speed, etc.):

108226 99025 103122 105539 101885 104715 104608 105590 103170 105476

The counts vary significantly between threads: Linux mutexes are not fair by default–and that’s perfectly OK.
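
For reference, the core of such a contention benchmark can be sketched in a few lines of C (a simplification of the setup described above, not the actual code from the GitHub repository):

/* Sketch of a lock-fairness benchmark: NTHREADS threads contend on a
   single pthread mutex for ~100ms, and each thread counts how many
   times it managed to acquire the lock. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 10

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int running = 1;
static uint64_t counts[NTHREADS];

static void *worker(void *arg) {
  uint64_t *count = arg;
  while (atomic_load(&running)) {
    pthread_mutex_lock(&lock);
    (*count)++;                    /* the "work" done while holding the lock */
    pthread_mutex_unlock(&lock);
  }
  return NULL;
}

int main(void) {
  pthread_t threads[NTHREADS];
  for (int i = 0; i < NTHREADS; i++)
    pthread_create(&threads[i], NULL, worker, &counts[i]);

  usleep(100 * 1000);              /* let the threads contend for 100ms */
  atomic_store(&running, 0);

  for (int i = 0; i < NTHREADS; i++)
    pthread_join(threads[i], NULL);
  for (int i = 0; i < NTHREADS; i++)
    printf("%llu ", (unsigned long long)counts[i]);
  printf("\n");
  return 0;
}

Compile with -pthread; the spread of the per-thread counts is what reveals how fair the underlying mutex implementation is.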

What’s more, the developers of OS X have recognized this and added a way to make their mutexes non-fair.  In <pthread_spis.h>, there’s an OS X-only function, pthread_mutexattr_setpolicy_np.  (pthread mutex attributes control various qualities of pthread mutexes: normal, recursively acquirable, etc.)  This particular function, supported since OS X 10.7, enables setting the fairness policy of mutexes to either _PTHREAD_MUTEX_POLICY_FAIRSHARE (the default) or _PTHREAD_MUTEX_POLICY_FIRSTFIT.  The firstfit policy is not documented anywhere, but I’m guessing that it’s something akin to the “barging” locks described in the WTF::Lock blog post: the lock is made available to whatever thread happens to get to it first, rather than enforcing a queue to acquire the lock.  (If you’re curious, the code dealing with firstfit policy can be found in Apple’s pthread_mutex.c.)
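
Configuring a mutex with the firstfit policy looks roughly like the following sketch (OS X only; error handling omitted):

#include <pthread.h>
#include <pthread_spis.h>  /* declares pthread_mutexattr_setpolicy_np */

/* Initialize a mutex with the non-fair "firstfit" policy instead of the
   default fairshare policy (supported since OS X 10.7). */
static void init_firstfit_mutex(pthread_mutex_t *m) {
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  pthread_mutexattr_setpolicy_np(&attr, _PTHREAD_MUTEX_POLICY_FIRSTFIT);
  pthread_mutex_init(m, &attr);
  pthread_mutexattr_destroy(&attr);
}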

Running the benchmark on OS X with mutexes configured with the firstfit policy yields quite different numbers:

14627 13239 13503 13720 13989 13449 13943 14376 13927 14142

The variation in these numbers is more akin to what we saw with the non-fair locks on Linux, and what’s more, the counts are almost an order of magnitude higher than with the fair locks. Maybe we should start using firstfit locks in Gecko!  I don’t know how firstfit policy locks compare to something like WTF::Lock on my Mac mini, but it’s clear that saying simply “OS mutexes are slow” doesn’t tell the whole story. And of course there are other concerns, such as the size required by locks, that motivated the WTF::Lock work.

I have vague plans of doing more benchmarking, especially on Windows, where we may want to use slim reader/writer locks rather than critical sections, and evaluating Rust’s parking_lot on more platforms.  Pull requests welcome.

Categories: Mozilla-nl planet

Mozilla Open Innovation Team: Announcing the Equal Rating Innovation Challenge Winners

Mozilla planet - wo, 29/03/2017 - 16:04

Six months ago, we created the Equal Rating Innovation Challenge to add an additional dimension to the important work Mozilla has been leading around the concept of “Equal Rating.” In addition to policy and research, we wanted to push the boundaries and find new ways to provide affordable access to the Internet while preserving net neutrality. An open call for new ideas was the ideal vehicle.

An Open and Engaging Process

The Equal Rating Innovation Challenge was founded on the belief that reaching out to the local expertise of innovators, entrepreneurs, and researchers from all around the world would be the right way for Mozilla to help bring the power of the Internet to the next billion and beyond. It has been a thrilling and humbling experience to see communities engage, entrepreneurs conceive of new ideas, and regulatory, technology, and advocacy groups start new conversations.

Through our Innovation Challenge website, www.equalrating.com, in webinars, conferences, and numerous community events within the six week submission period, we reached thousands of people around the world. Ultimately, we received 98 submissions from 27 countries, which all taken together demonstrates the viability and the potential of the Equal Rating approach. Whereas previously many people believed providing affordable access was the domain of big companies and government, we are now experiencing a groundswell of entrepreneurs and ideas celebrating the power of community to bring all of the Internet to all people.

Moderating a panel discussion with our esteemed Judges at the Equal Rating Conference in New York City

Our diverse expert Judges selected five teams as semifinalists in January. Mozilla staff from around the world provided six weeks of expert mentorship to help the semifinalists hone their projects, and on 9 March at our Equal Rating Conference in New York City, these teams presented their solutions to our panel of Judges. In keeping with Mozilla’s belief in openness and participation, we then had a one-week round of online public voting, the results of which formed part of the Judges’ final deliberation. Today, we are delighted to share the Judges’ decisions on the Equal Rating Innovation Challenge winners.

The Winners
With an almost unanimous vote, the Overall Winner of the Equal Rating Innovation Challenge, receiving US$125,000 in funding, is Gram Marg Solution for Rural Broadband. This project is spearheaded by Professor Abhay Karandikar, Dean (Faculty Affairs) and Institute Chair Professor of Electrical Engineering and Dr Sarbani Banerjee Belur, Senior Project Research Scientist, at Indian Institute of Technology (IIT) Bombay in Mumbai, India.

Dr Sarbani Banerjee Belur (India) presenting Gram Marg Solution for Rural Broadband at Mozilla’s Equal Rating Conference in New York City

Gram Marg, which translates as “roadmap” in Hindi, captured the attention of the Judges and audience by focusing on the urgent need to bring 640,000 rural villages in India online. The team reinforced the incredible potential these communities could achieve if they had online access to e-Governance services, payment and financial services, and general digital information. In order to close the digital divide and empower these rural communities, the Gram Marg team has created an ingenious and “indigenous” technology that utilizes unused white space on the TV spectrum to backhaul data from village wifi clusters to provide broadband access (frugal 5G).

The team of academics and practitioners have created a low-cost and ruggedized TV UHF device that converts a 2.4 Ghz signal to connect villages in even the most difficult terrains. Their journey has been one of resilience and perseverance as they have deployed their solution in 25 pilot villages, all while reducing costs, size, and perfecting their solution. This top prize of the Innovation Challenge is awarded to a solution the Judges recognize as creating a robustly scalable solution — Gram Marg is both technology enabler and social partner, and delivered beyond our hopes.

“All five semifinalists were equally competitive and it was really a challenge to pitch our solution among them. We are humbled by the Judges’ decision to choose our solution as the winner,” Professor Karandikar told us. “We will continue to improve our technology solution to make it more efficient. We are also working on a sustainable business model that can enable local village entrepreneurs to deploy and manage access networks. We believe that a decentralized and sustainable model is the key to the success of a technology solution for connecting the unconnected.”

As “Runner-Up” with a funding award of US$75,000, our Judges selected Afri-Fi: Free Public WiFi, led by Tim Human (South Africa). The project is an extension of the highly awarded and successful Project Isizwe, which offers 500MB of data for free per day, but the key goal of this project is to create a sustainable business model by linking together free wifi networks throughout South Africa and engaging users meaningfully with advertisers so they can “earn” free wifi.

The team presented a compelling and sophisticated way to use consumer data, protect privacy, and bolster entrepreneurship in their solution. “The team has proven how their solution for a FREE internet is supporting thriving communities in South Africa. Their approach towards community building, partnerships, developing local community entrepreneurs and inclusivity, with a goal of connecting some of the most marginalized communities, are all key factors in why they deserve this recognition and are leading the FREE Internet movement in Southern Africa”, concluded Marlon Parker, Founder of Reconstructed Living Labs, on behalf of the jury.

Finally, the “Most Novel” award worth US$30,000 goes to Bruno Vianna (Brazil) and his team from the Free Networks P2P Cooperative. Fueled by citizen science and community technology, this team is building on the energy of the free networks movement in Brazil to tackle the digital divide. Rather than focusing on technology, the Coop has created a financial and logistical model that can be tailored to each village’s norms and community. The team was able to experiment more adventurously with ways to engage communities through “barn-raising” group activities, deploying “open calls” for leadership to reinforce the democratic nature of their approach, and instituting a sense of “play” for the villagers when learning how to use the equipment. The innovative way the team deconstructed the challenge around empowering communities to build their own infrastructure in an affordable and sustainable way proved to be the deciding factor for the Judges.

Semifinalists from left to right: Steve Song (Canada), Freemium Mobile Internet (FMI), Dr Carlos Rey-Moreno (South Africa), Zenzeleni “Do it for yourselves” Networks (ZN), Bruno Vianna (Brazil), Free Networks P2P Cooperative, Tim Genders (South Africa), Afri-Fi: Free Public WiFi, Dr Sarbani Banerjee Belur (India), Gram Marg Solution for Rural Broadband

Enormous thanks to all who participated in this Innovation Challenge through their submissions, engagement in meetups and events, as well as to our expert panel of Judges for their invaluable insights and time, and to the Mozilla mentors who supported the semifinalists in advancing their projects. We also want to thank all who took part in our online community voting. During the week-long period, we received almost 6,000 votes, with Zenzeleni and Gram Marg leading as the top two vote-getters.

Mozilla started this initiative because we believe in the power of collaborative solutions to tackle big issues. We wanted to take action and encourage change. With the Innovation Challenge, we not only highlighted a broader set of solutions, and broadened the dialogue around these issues, but built new communities of problem-solvers that have strengthened the global network of people working toward connecting the next billion and beyond.

At Mozilla, our commitment to Equal Rating through policy, innovation, research, and support of entrepreneurs in the space will continue beyond this Innovation Challenge, but it will take a global community to bring all of the internet to all people. As our esteemed Judge Omobola Johnson, the former Communication Technology Minister of Nigeria and partner at venture capital fund TLcom, commented: “it’s not about the issue of the unconnected, it’s about the people on the ground who make the connection.” We couldn’t agree more!

Visit us on www.equalrating.com, join our community, let your voice be heard. We’re all in this together — and today congratulate our five final teams for their tremendous leadership, vision, and drive. They are the examples of what’s best in all of us!

Announcing the Equal Rating Innovation Challenge Winners was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

The Mozilla Blog: Announcing the Equal Rating Innovation Challenge Winners

Mozilla planet - wo, 29/03/2017 - 14:17

Six months ago, we created the Equal Rating Innovation Challenge to add an additional dimension to the important work Mozilla has been leading around the concept of “Equal Rating.” In addition to policy and research, we wanted to push the boundaries and find new ways to provide affordable access to the Internet while preserving net neutrality. An open call for new ideas was the ideal vehicle.

An Open and Engaging Process

The Equal Rating Innovation Challenge was founded on the belief that reaching out to the local expertise of innovators, entrepreneurs, and researchers from all around the world would be the right way for Mozilla to help bring the power of the Internet to the next billion and beyond. It has been a thrilling and humbling experience to see communities engage, entrepreneurs conceive of new ideas, and regulatory, technology, and advocacy groups start new conversations.

Through our Innovation Challenge website, equalrating.com, in webinars, conferences, and numerous community events within the six week submission period, we reached thousands of people around the world. Ultimately, we received 98 submissions from 27 countries, which all taken together demonstrates the viability and the potential of the Equal Rating approach. Whereas previously many people believed providing affordable access was the domain of big companies and government, we are now experiencing a groundswell of entrepreneurs and ideas celebrating the power of community to bring all of the Internet to all people.

Our diverse expert Judges selected five teams as semifinalists in January. Mozilla staff from around the world provided six weeks of expert mentorship to help the semifinalists hone their projects, and on 9 March at our Equal Rating Conference in New York City, these teams presented their solutions to our panel of Judges. In keeping with Mozilla’s belief in openness and participation, we then had a one-week round of online public voting, the results of which formed part of the Judges’ final deliberation. Today, we are delighted to share the Judges’ decisions on the Equal Rating Innovation Challenge winners.

The Winners

With an almost unanimous vote, the Overall Winner of the Equal Rating Innovation Challenge, receiving US$125,000 in funding, is Gram Marg Solution for Rural Broadband. This project is spearheaded by Professor Abhay Karandikar, Dean (Faculty Affairs) and Institute Chair Professor of Electrical Engineering and Dr Sarbani Banerjee Belur, Senior Project Research Scientist, at Indian Institute of Technology (IIT) Bombay in Mumbai, India.

Dr Sarbani Banerjee Belur (India) presenting Gram Marg Solution for Rural Broadband at Mozilla’s Equal Rating Conference in New York City

Gram Marg, which translates as “roadmap” in Hindi, captured the attention of the Judges and audience by focusing on the urgent need to bring 640,000 rural villages in India online. The team reinforced the incredible potential these communities could achieve if they had online access to e-Governance services, payment and financial services, and general digital information. In order to close the digital divide and empower these rural communities, the Gram Marg team has created an ingenious and “indigenous” technology that utilizes unused white space on the TV spectrum to backhaul data from village wifi clusters to provide broadband access (frugal 5G).

The team of academics and practitioners have created a low-cost and ruggedized TV UHF device that converts a 2.4 Ghz signal to connect villages in even the most difficult terrains. Their journey has been one of resilience and perseverance as they have deployed their solution in 25 pilot villages, all while reducing costs, size, and perfecting their solution. This top prize of the Innovation Challenge is awarded to a solution the Judges recognize as creating a robustly scalable solution – Gram Marg is both technology enabler and social partner, and delivered beyond our hopes.

“All five semifinalists were equally competitive and it was really a challenge to pitch our solution among them. We are humbled by the Judges’ decision to choose our solution as the winner,” Professor Karandikar told us. “We will continue to improve our technology solution to make it more efficient. We are also working on a sustainable business model that can enable local village entrepreneurs to deploy and manage access networks. We believe that a decentralized and sustainable model is the key to the success of a technology solution for connecting the unconnected.”

As “Runner-Up” with a funding award of US$75,000, our Judges selected Afri-Fi: Free Public WiFi, led by Tim Human (South Africa). The project is an extension of the highly awarded and successful Project Isizwe, which offers 500MB of data for free per day, but the key goal of this project is to create a sustainable business model by linking together free wifi networks throughout South Africa and engaging users meaningfully with advertisers so they can “earn” free wifi.

The team presented a compelling and sophisticated way to use consumer data, protect privacy, and bolster entrepreneurship in their solution. “The team has proven how their solution for a FREE internet is supporting thriving communities in South Africa. Their approach towards community building, partnerships, developing local community entrepreneurs and inclusivity, with a goal of connecting some of the most marginalized communities, are all key factors in why they deserve this recognition and are leading the FREE Internet movement in Southern Africa”, concluded Marlon Parker, Founder of Reconstructed Living Labs, on behalf of the jury.

Finally, the “Most Novel” award worth US$30,000 goes to Bruno Vianna (Brazil) and his team from the Free Networks P2P Cooperative. Fueled by citizen science and community technology, this team is building on the energy of the free networks movement in Brazil to tackle the digital divide. Rather than focusing on technology, the Coop has created a financial and logistical model that can be tailored to each village’s norms and community. The team was able to experiment more adventurously with ways to engage communities through “barn-raising” group activities, deploying “open calls” for leadership to reinforce the democratic nature of their approach, and instituting a sense of “play” for the villagers when learning how to use the equipment. The innovative way the team deconstructed the challenge around empowering communities to build their own infrastructure in an affordable and sustainable way proved to be the deciding factor for the Judges.

From left to right: Steve Song (Canada), Freemium Mobile Internet (FMI), Dr Carlos Rey-Moreno (South Africa), Zenzeleni “Do it for yourselves” Networks (ZN), Bruno Vianna (Brazil), Free Networks P2P Cooperative, Tim Genders (South Africa), Afri-Fi: Free Public WiFi, Dr Sarbani Banerjee Belur (India), Gram Marg Solution for Rural Broadband

Enormous thanks to all who participated in this Innovation Challenge through their submissions, engagement in meetups and events, as well as to our expert panel of Judges for their invaluable insights and time, and to the Mozilla mentors who supported the semifinalists in advancing their projects. We also want to thank all who took part in our online community voting. During the week-long period, we received almost 6,000 votes, with Zenzeleni and Gram Marg leading as the top two vote-getters.

Mozilla started this initiative because we believe in the power of collaborative solutions to tackle big issues. We wanted to take action and encourage change. With the Innovation Challenge, we not only highlighted a broader set of solutions, and broadened the dialogue around these issues, but built new communities of problem-solvers that have strengthened the global network of people working toward connecting the next billion and beyond.

At Mozilla, our commitment to Equal Rating through policy, innovation, research, and support of entrepreneurs in the space will continue beyond this Innovation Challenge, but it will take a global community to bring all of the internet to all people. As our esteemed Judge Omobola Johnson, the former Communication Technology Minister of Nigeria and partner at venture capital fund TLcom, commented: “it’s not about the issue of the unconnected, it’s about the people on the ground who make the connection.” We couldn’t agree more!

Visit us on equalrating.com, join our community, let your voice be heard. We’re all in this together – and today congratulate our five final teams for their tremendous leadership, vision, and drive. They are the examples of what’s best in all of us!

The post Announcing the Equal Rating Innovation Challenge Winners appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Mike Conley: Things I’ve Learned This Week (May 25 – May 29, 2015)

Thunderbird - mo, 01/06/2015 - 07:49
MozReview will now create individual attachments for child commits

Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.

That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.

We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:

  1. Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
  2. Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
  3. Just writing “r+” in a MozReview comment

Anyhow, this model wasn’t great, and caused a lot of confusion.

So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!

I think this makes quite a bit more sense. Hopefully you do too!

See gps’s blog post for the nitty gritty details, and some other cool MozReview announcements!

Categories: Mozilla-nl planet

Mike Conley: The Joy of Coding (Ep. 16): Wacky Morning DJ

Thunderbird - to, 28/05/2015 - 04:12

I’m on vacation this week, but the show must go on! So I pre-recorded a shorter episode of The Joy of Coding last Friday.

In this episode1, I focused on a tool I wrote that I alluded to in the last episode, which is a soundboard to use during Joy of Coding episodes.

I demo the tool, and then I explain how it works. After I finished the episode, I pushed the repository to GitHub, and you can check that out right here.

So I’ll see you next week with a full length episode! Take care!

  1. Which, several times, I mistakenly refer to as the 15th episode, and not the 16th. Whoops. 

Categories: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Calendar builds

Thunderbird - wo, 27/05/2015 - 10:26

Common (excluding Website bugs)-specific: (23)

  • Fixed: 735253 – JavaScript Error: “TypeError: calendar is null” {file: “chrome://calendar/content/calendar-task-editing.js” line: 102}
  • Fixed: 768207 – Make the cache checkbox default-on in the new calendar dialog
  • Fixed: 1049591 – Fix lots of strict warnings
  • Fixed: 1086573 – Lightning and Thunderbird disagree about timezone support in ics files
  • Fixed: 1099592 – Make JS callers of ios.newChannel call ios.newChannel2 in calendar/
  • Fixed: 1149423 – Add Windows timezone names to list of aliases
  • Fixed: 1151011 – Calendar events show up on wrong day when printing
  • Fixed: 1151440 – Choose a color not responsive when creating a New calendar in Lightning 4.0b1
  • Fixed: 1153327 – Run compare-locales with merging for Lightning
  • Fixed: 1156015 – Email scheduling fails for recipients with URN id
  • Fixed: 1158036 – Support sendMailTo for URN type attendees
  • Fixed: 1159447 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_extract.js
  • Fixed: 1159638 – Getter fails in calender-migration-dialog on first run after installation
  • Fixed: 1159682 – Provide a more appropriate “learn more” page on integrated Lightning firstrun
  • Fixed: 1159698 – Opt-out dialog has a button for “disable”, but actually the addon is removed
  • Fixed: 1160728 – Unbreak Lightning 4.0b4 beta builds
  • Fixed: 1162300 – TEST-UNEXPECTED-FAIL | xpcshell-libical.ini:calendar/test/unit/test_alarm.js | xpcshell return code: 0
  • Fixed: 1163306 – Re-enable libical tests and disable ical.js in nightly builds when binary compatibility is back
  • Fixed: 1165002 – Lightning broken, tries to load libical backend although “calendar.icaljs” defaults to “true”
  • Fixed: 1165315 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug759324.js | xpcshell return code: 1 | ###!!! ASSERTION: Deprecated, use NewChannelFromURI2 providing loadInfo arguments!
  • Fixed: 1165497 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_alarmservice.js | xpcshell return code: -11
  • Fixed: 1165726 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/testBasicFunctionality.js | testBasicFunctionality.js::testSmokeTest
  • Fixed: 1165728 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug494140.js | xpcshell return code: -11

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categories: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Thunderbird comm-central builds

Thunderbird - wo, 27/05/2015 - 10:25

Thunderbird-specific: (54)

  • Fixed: 401779 – Integrate Lightning Into Thunderbird by Default and Ship Thunderbird with Lightning Enabled
  • Fixed: 717292 – Spell check language setting for subject and body not synchronized, but temporarily appears so when changing language and depending on focus (confusing ux)
  • Fixed: 914225 – Support hotfix add-on in Thunderbird
  • Fixed: 1025547 – newmailaccount/jquery.tmpl.js, line 123: reference to undefined property def[1]
  • Fixed: 1088975 – Answering mail with sendername containing encoded special chars and comma creates two “To”-entries
  • Fixed: 1101237 – Remove distribution directory during install
  • Fixed: 1109178 – Thunderbird OAuth implementation does not work with Evernote
  • Fixed: 1110166 – Port |Bug 1102219 – Rename String.prototype.contains to String.prototype.includes| to comm-central
  • Fixed: 1113097 – Fix misuse of fixIterator
  • Fixed: 1130854 – Package Lightning with Thunderbird
  • Fixed: 1131997 – Adapt for Debugger Server code for changes in bug 1059308
  • Fixed: 1135291 – Update chat log entries added to Gloda since bug 955292 to use relative paths
  • Fixed: 1135588 – New conversations get indexed twice by gloda, leading to duplicate search results
  • Fixed: 1138154 – Plugins default to “always activate” in Thunderbird
  • Fixed: 1142879 – [meta] track Mozilla-central (Core) issues that we want to have fixed in TB38
  • Fixed: 1146698 – Chat Messages added to logs just before shutdown may not be indexed by gloda
  • Fixed: 1148330 – Font indicator doesn’t update when cursor is placed in text where core returns sans-serif (Windows). Serif and monospace don’t work (Linux).
  • Fixed: 1148512 – TEST-UNEXPECTED-FAIL | mailnews/imap/test/unit/test_dod.js | xpcshell return code: 0||1 | streamMessages – [streamMessages : 94] false == true | application crashed [@ mozalloc_abort(char const * const)]
  • Fixed: 1149059 – splitter in compose window can be resized down to completely obscure composition area
  • Fixed: 1151206 – Using a theme hides minimize, maximize and close button in composer window [Mac]
  • Fixed: 1151475 – Remove use of expression closures in mail/
  • Fixed: 1152299 – [autoconfig] Cosmetic changes for WEB.DE config
  • Fixed: 1152706 – Upgrade to Correspondents column (combined To/From column) too agressive
  • Fixed: 1152796 – chrome://messenger/content/folderDisplay.js, line 697: TypeError: this._savedColumnStates.correspondentCol is undefined
  • Fixed: 1152926 – New mail sound preview doesn’t work for default system sound on Mac OS X
  • Fixed: 1154737 – Permafail: TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0
  • Fixed: 1154747 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/session-store/test-session-store.js | test-session-store.js::test_message_pane_height_persistence
  • Fixed: 1156669 – Trash folder duplication while using IMAP with localized TB
  • Fixed: 1157236 – In-content dialogs: Port bug 1043612, bug 1148923 and bug 1141031 to TB
  • Fixed: 1157649 – TEST-UNEXPECTED-FAIL | dom/push/test/xpcshell/test_clearAll_successful.js (and most other push tests)
  • Fixed: 1158824 – Port bug 138009 to fix packaging errors | Missing file(s): bin/defaults/autoconfig/platform.js
  • Fixed: 1159448 – Thunderbird ignores proxy settings on POP3S protocol
  • Fixed: 1159627 – resource:///modules/dbViewWrapper.js, line 560: SyntaxError: unreachable code after return statement
  • Fixed: 1159630 – components/glautocomp.js, line 155: SyntaxError: unreachable code after return statement
  • Fixed: 1159676 – mailnews/mime/jsmime/test/test_custom_headers.js | run_next_test 0 – TypeError: _gRunningTest is undefined at /builds/slave/test/build/tests/xpcshell/head.js:1435 (and other jsmime tests)
  • Fixed: 1159688 – After switching/changing the window layout, dragging the splitter between threadpane and messagepane can create gray/grey area/space (misplaced notificationbox)
  • Fixed: 1159815 – Take bug 1154791 “Inline spell checker loses red underlines after a backspace is used – take two” in Thunderbird 38
  • Fixed: 1159817 – Take “Bug 1100966 – Inline spell checker loses red underlines after a backspace is used” in Thunderbird 38
  • Fixed: 1159834 – Consider taking “Bug 756984 – Changing location in editor doesn’t preserve the font when returning to end of text/line” in Thunderbird 38
  • Fixed: 1159923 – Take bug 1140105 “Can’t query for a specific font face when the selection is collapsed” in TB 38
  • Fixed: 1160105 – Fix strict mode warnings in protovis-r2.6-modded.js
  • Fixed: 1160106 – “Searching…” spinner at the bottom of gloda search results never goes away
  • Fixed: 1160114 – Strict mode warnings on faceted search
  • Fixed: 1160805 – Missing Windows and Linux nightly builds, build step set props: previous_buildid fails
  • Fixed: 1161162 – “Join Chat” doesn’t focus the newly joined MUC
  • Fixed: 1162396 – Take bug 1140617 “Pasting an image loses the composition style” in TB38
  • Fixed: 1163086 – Take bug 967494 “changing spellcheck language in one composition window affects all open and new compositions” in TB38
  • Fixed: 1163299 – “TypeError: getBrowser(…) is null” in contentAreaClick with Lightning installed and started in calendar view
  • Fixed: 1163343 – Incorrectly formatted error message “sending failed”
  • Fixed: 1164415 – Error in comment for imapEnterServerPasswordPrompt
  • Fixed: 1164658 – TypeError: Cc[‘@mozilla.org/weave/service;1’] is undefined at resource://gre/modules/FxAccountsWebChannel.jsm:227
  • Fixed: 1164707 – missing toolkit_perfmonitoring.xpt in aurora builds
  • Fixed: 1165152 – Take bug 1154894 in TB 38 branch: Disable test_plugin_default_state.js so Thunderbird can ship with plugins disabled by default
  • Fixed: 1165320 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/notification/test-notification.js

MailNews Core-specific: (30)

  • Fixed: 610533 – crash [@ nsMsgDatabase::GetSearchResultsTable(char const*, int, nsIMdbTable**)] with virtual folder
  • Fixed: 745664 – Rename Address book aaa to aaa_test, delete another address book bbb, and renamed address book aaa_test will lose its name and appear deleted after restart (dataloss! involving localized names)
  • Fixed: 777770 – get rid of nsVoidArray from /mailnews
  • Fixed: 786141 – Use nsIFile.exists() instead of stat to check the existence of the file
  • Fixed: 1069790 – Email addresses with parenthesis are not pretty-printed anymore
  • Fixed: 1072611 – Ctrl+P not working from Composition’s Print Preview window
  • Fixed: 1099587 – Make JS callers of ios.newChannel call ios.newChannel2 in mail/ and mailnews/
  • Fixed: 1130248 – |To: “foo@example.com” <foo@example.com>| becomes |”foo@example.comfoo”@example.com| when I compose mail to it
  • Fixed: 1138220 – some headers are not not properly capitalized
  • Fixed: 1141446 – Behaviour of malformed rfc2047 encoded From message header inconsistent
  • Fixed: 1143569 – User-agent error when posting to NNTP due to RFC5536 violation of Tb (user-agent header is folded just after user-agent:, “user-agent:[CRLF][SP]Mozilla…”)
  • Fixed: 1144693 – Disable libnotify usage on Linux by default for new-mail notifications (doesn’t always work after bug 858919)
  • Fixed: 1149320 – fix compile warnings in mailnews/extensions/
  • Fixed: 1150891 – Port package-manifest.in changes from Bug 1115495 – Part 2: PAC generator for browsing and system wide proxy
  • Fixed: 1151782 – Inputting 29th Feb as a birthday in the addressbook contact replaces it with 1st Mar.
  • Fixed: 1152364 – crash in Address Book via nsAbBSDirectory::GetChildNodes nsCOMArrayEnumerator::operator new(unsigned int, nsCOMArray_base const&)
  • Fixed: 1152989 – Account Manager Extensions broken in Thunderbird 37/38
  • Fixed: 1154521 – jsmime fails on long references header and e-mail gets sent and stored in Sent without headers
  • Fixed: 1155491 – Support autoconfig and manual config of gmail IMAP OAuth2 authentication
  • Fixed: 1155952 – Nesting level does not match indentation
  • Fixed: 1156691 – GUI “Edit filters”: Conditions/actions (for specfic accounts) not visible
  • Fixed: 1156777 – nsParseMailbox.cpp:505:55: error: ‘do_QueryObject’ was not declared in this scope
  • Fixed: 1158501 – Port bug 1039866 (metro code removal) and bug 1085557 (addition of socorro symbol upload API)
  • Fixed: 1158751 – Port NO_JS_MANIFEST changes | mozbuild.frontend.reader.SandboxValidationError: calendar/base/backend/icaljs/moz.build
  • Fixed: 1159255 – Build error: MSVC_ENABLE_PGO = True is not permitted to be used in mailnews/intl/moz.build
  • Fixed: 1159626 – chrome://messenger/content/accountUtils.js, line 455: SyntaxError: unreachable code after return statement
  • Fixed: 1160647 – Port |Bug 1159972 – Remove the fallible version of PL_DHashTableInit()| to comm-central
  • Fixed: 1163347 – Don’t require scope in ispdb config for OAuth2
  • Fixed: 1165737 – Fix usage of NS_LITERAL_CSTRING in mailnews, port Bug 1155963 to comm-central
  • Fixed: 1166842 – Re-enable binary extensions for comm-central

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categories: Mozilla-nl planet

Mike Conley: Things I’ve Learned This Week (May 18 – May 22, 2015)

Thunderbird - sn, 23/05/2015 - 23:54

You might have noticed that I had no “Things I’ve Learned This Week” post last week. Sorry about that – by the end of the week, I looked at my Evernote of “lessons from the week”, and it was empty. I’m certain I’d learned stuff, but I just failed to write it down. So I guess the lesson I learned last week was, always write down what you learn.

How to make your mozilla-central Mercurial clone work faster

I like Mercurial. I also like Git, but recently, I’ve gotten pretty used to Mercurial.

One complaint I hear over and over (and I’m guilty of it myself sometimes), is that “Mercurial is slow”. I’ve even experienced that slowness during some of my Joy of Coding episodes.

This past week, I was helping my awesome new intern get set up to tear into some e10s bugs, and at some point we went through this document to get her .hgrc all set up.

This document did not exist when I first started working with Mercurial – back then, I was using mq or sometimes pbranch, and grumbling about how I missed Git.

But there is some gold in this document.

gps has been doing some killer work documenting best practices with Mercurial, and this document is one of the results of his labour.

The part that’s really made the difference for me is the hgwatchman bit.

watchman is a tool that some folks at Facebook wrote to monitor changes in a folder. hgwatchman is an extension for Mercurial that takes advantage of watchman for a repository, smartly precomputing a bunch of stuff when the folder changes so that when you fire a command, like

hg status

It takes a fraction of the time it’d take without hgwatchman. A fraction.

Here’s how I set hgwatchman up on my MacBook (though you should probably go by the Mercurial for Mozillians doc as the official reference):

  1. Install watchman with brew: brew install watchman
  2. Clone the hgwatchman extension to some folder that you can easily remember and build it: hg clone https://bitbucket.org/facebook/hgwatchman; cd hgwatchman; make local
  3. Add the following to my user .hgrc, in the [extensions] section: hgwatchman = cloned-in-dir/hgwatchman/hgwatchman
  4. Make sure the extension is properly installed by running: hg help extensions
  5. hgwatchman should be listed under “enabled extensions”. If it didn’t work, keep in mind that you want to target the hgwatchman directory
  6. And then in my mozilla-central .hg/.hgrc, in the [watchman] section: mode = on
  7. Boom, you’re done!

Congratulations, hg should feel snappier now!

The next step is to try out this chg thing, though I'm still having some issues with it.

Categories: Mozilla-nl planet

Mark Banner: Using eslint alongside the Firefox Hello code base to help productivity

Thunderbird - wo, 13/05/2015 - 21:19

On Firefox Hello, we recently added the eslint linter to be run against the Hello code base. We started off with a minimal set of rules, just enough to get us something running. Now we're working on enabling more rules.

Since we enabled it, I feel like I’m able to iterate faster on patches. For example, if just as I finish typing I see something like:

eslint syntax error in sublime

I know almost immediately that I've forgotten a closing bracket and I don't have to run anything to find out – fewer run-edit-run cycles.

Now that I think about it, I'm realising it has also helped reduce the number of review nits on my patches – trivial formatting mistakes, e.g. trailing white-space or missing semi-colons, are caught automatically.

Talking of reviews: as we’re running eslint on the Hello code, we just have to apply the patch and run our tests, and we automatically get eslint output:

[Screenshot: eslint output flagging trailing spaces]

Hopefully our patch authors will be running eslint before uploading the patch anyway, but this is an additional check, and fewer things that we need to look at during review, which helps speed up that cycle as well.

I’ve also put together a global config file for eslint (see below), that I use for outside of the Hello code, on the rest of the Firefox code base (and other projects). This is enough, that, when using it in my editor it gives me a reasonable amount of information about bad syntax, without complaining about everything.

I would definitely recommend giving it a try. My patches feel faster overall, and my test runs are for testing, not stupid-mistake catching!

Want more specific details about the setup and advantages? Read on…

My Setup

For my setup, I’ve recently switched to using Sublime. I used to use Aquamacs (an emacs variant), but when eslint came along, the UI for real-time linting within emacs didn’t really seem great.

I use sublime with the SublimeLinter and SublimeLinter-contrib-eslint packages. I’m told other editors have eslint integration as well, but I’ve not looked at any of them.

You need to have eslint installed globally, or at least in your path; other than that, just follow the installation instructions given on the SublimeLinter page.

One configuration change I did have to make to the global configuration:

  • Open up a normal javascript (*.js) file.
  • Select “Preferences” -> “Settings – More” -> “Syntax Specific – User”
  • In the file that appears, set the configuration up as follows (or whatever suits you):
{ "extensions": [ "jsm", "jsx", "sjs" ] }

This makes sure Sublime treats .jsm, .jsx and .sjs files as JavaScript files, which amongst other things turns on eslint for those files.

Global Configuration

I’ve uploaded my global configuration to a gist; if it changes I’ll update it there. It isn’t intended to catch everything – there are too many inconsistencies across the code base for that to be sensible at the moment. However, it does at least allow general syntax issues to be highlighted for most files – which is obviously useful in itself.
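The gist has the real thing; purely as an illustration, a deliberately permissive global .eslintrc along these lines gives you syntax and obvious-mistake checking without drowning you in warnings (the rules and globals shown here are illustrative choices, not the contents of the gist):

    {
      "env": {
        "browser": true
      },
      "globals": {
        "Components": false
      },
      "rules": {
        "no-undef": 2,
        "no-unused-vars": 1,
        "semi": 2,
        "no-trailing-spaces": 1
      }
    }

The idea behind a split like this: undefined identifiers and missing semi-colons are almost always genuine mistakes, so they error, while unused variables and trailing white-space only warn so they don’t bury the more serious problems.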

I haven’t yet tried running it across the whole code base via eslint on the command line – there seems to be some sort of configuration issue that is messing it up and I’ve not tracked it down yet.

Firefox Hello’s Configuration

The configuration files for Hello can be found in the mozilla-central source. There are a few of these because we have both content and chrome code, and some of the content code is shared with a website that can be viewed by most browsers, and hence isn’t currently able to use all the ES6 features, whereas the chrome code can. This is another thing that eslint is good for enforcing.

Our eslint configuration is evolving at the moment, as we enable more rules, which we’re tracking in this bug.

Any Questions?

Feel free to ask any questions about eslint or the setup in the comments, or come and visit us in #loop on irc.mozilla.org (IRC info here).

Categorieën: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - to, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that both I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox gecko engine on an android linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

[Screenshots: the Firefox OS home screen, clock app, and email app]

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
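In practice that’s only a couple of lines (a minimal sketch using localForage’s promise-based API; the key and value are made up, and it assumes localforage has already been loaded):

    // Same get/set idea as localStorage, but asynchronous and backed by
    // IndexedDB where available.
    var draft = { subject: 'Hello', body: 'Just a test.' };
    localforage.setItem('draft', draft).then(function () {
      return localforage.getItem('draft');
    }).then(function (restored) {
      console.log('restored draft:', restored);
    });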

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
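To make that concrete, the boot logic amounts to something like the following sketch (the cache key, element id and decision function are illustrative names, not the email app’s actual ones):

    // Runs synchronously from the single script tag, after the stylesheet.
    var cachedHtml = localStorage.getItem('htmlCache');

    function shouldUseCache() {
      // e.g. false when we were launched to show the compose UI or a
      // notification's specific message rather than the default inbox.
      return !window.location.hash;
    }

    if (cachedHtml && shouldUseCache()) {
      // Synchronous insert: real-looking UI is in the DOM before
      // DOMContentLoaded fires, with no async read to wait on.
      document.getElementById('cards').innerHTML = cachedHtml;
    }
    // The rest of the application is lazy-loaded afterwards and takes over
    // this DOM once it arrives.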

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard Common JS loader semantics used by node.js and io.js are the first one you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.
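A loader plugin is just a module that exposes a load() function, so a template-loading plugin in the spirit of that third example could look roughly like this sketch (the localization module and its translateFragment() call are made-up stand-ins, not the email app’s actual code):

    // tmpl.js – a loader plugin: fetch HTML as text, turn it into a
    // document fragment, localize it, and hand it back.
    define(['text', 'l10n'], function (text, l10n) {
      return {
        load: function (name, parentRequire, onload, config) {
          // Reuse the standard 'text' plugin to get the raw HTML string.
          text.load(name, parentRequire, function (htmlString) {
            var frag = document.createRange()
                               .createContextualFragment(htmlString);
            l10n.translateFragment(frag);  // hypothetical localization call
            onload(frag);
          }, config);
        }
      };
    });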

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
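In AMD terms the preload itself is nothing more than an early require() of those card modules (the module ids below are made up for illustration):

    // After the message list card is shown, warm the loader's cache so a
    // later card switch doesn't pay the fetch/parse cost.
    function preloadCards(cardModuleIds) {
      require(cardModuleIds, function () {
        // Nothing to do here – the point was pulling the modules (and their
        // inlined templates) in, not using the result right now.
      });
    }

    preloadCards(['cards/compose', 'cards/message_reader']);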

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
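The shape of that split is ordinary postMessage plumbing; a minimal sketch (the file name, message types and renderMessageList() stub are all invented for illustration):

    // --- main thread ---
    var backend = new Worker('js/mail-backend.js');
    backend.onmessage = function (evt) {
      if (evt.data.type === 'inboxLoaded') {
        renderMessageList(evt.data.headers);
      }
    };
    backend.postMessage({ type: 'loadInbox', accountId: 0 });

    function renderMessageList(headers) {
      console.log('got', headers.length, 'headers to render');
    }

    // --- js/mail-backend.js (worker scope) ---
    onmessage = function (evt) {
      if (evt.data.type === 'loadInbox') {
        // ...the slow, emaily email stuff happens here, off the main thread...
        postMessage({ type: 'inboxLoaded', headers: [] });
      }
    };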

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.
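The bookkeeping boils down to mapping scrollTop to a window of message indices to keep rendered (a rough sketch with invented names; the real code also tracks scroll direction so the bigger buffer sits ahead of you):

    var SCREENS_AHEAD = 3, SCREENS_BEHIND = 3;

    function computeRenderRange(scrollTop, screenHeight, itemHeight, totalCount) {
      var perScreen = Math.ceil(screenHeight / itemHeight);
      var firstVisible = Math.floor(scrollTop / itemHeight);
      var first = Math.max(0, firstVisible - SCREENS_BEHIND * perScreen);
      var last = Math.min(totalCount - 1,
                          firstVisible + (1 + SCREENS_AHEAD) * perScreen);
      return { first: first, last: last };
    }
    // On (throttled) 'scroll' events: render/recycle nodes for this range and
    // prefetch a couple more screens' worth of headers from the database.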

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
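Concretely, the recycle step looks something like this sketch (buildSummary() and the surrounding names are invented; the real Gaia code is more involved):

    function recycleMessageNode(node, message, index, itemHeight) {
      var parent = node.parentNode;
      parent.removeChild(node);                 // leave the DOM first
      node.style.top = (index * itemHeight) + 'px';
      buildSummary(node, message);              // update subject, sender, snippet
      parent.appendChild(node);                 // re-enter as a "new" static node,
                                                // so APZ keeps it in the shared layer
    }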

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There’s also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Calendar: The Third Beta on the way to Lightning 4.0

Thunderbird - ti, 28/04/2015 - 12:21

It’s that time of year again: we have a new major release of Lightning on the horizon. About every 42 weeks, Thunderbird prepares for a major release, and we follow up with a matching major version. You may know these as Lightning 2.6 or 3.3.

In order to avoid disappointments, we do a series of beta releases before such a major release. This is where we need you. Please help out in making Lightning 4.0 a great success.

Time flies when you are preparing for releases, so we are already at Thunderbird 38.0b3 and Lightning 4.0b3. The final release will be on May 12th and there will be at least one more beta. Please download these betas and take a moment to go through all the actions you normally do on a daily basis. Create an event, accept an invitation, complete a task. You probably have your own workflow; these are of course just examples.

Here is how to get the builds. If you have found an issue, you can either leave a comment here or file a bug on bugzilla.

You may wonder what is new. I’ve gone through the bugs fixed since 3.3 and found that most issues are backend fixes that won’t be very visible. We do however have a great new feature to save copies of invitations to your calendar. This helps in case you don’t care about replying to the invitation but would still like to see it in your calendar. We also have more general improvements in invitation compatibility, performance and stability and some slight visual enhancements. The full list of changes can be found on bugzilla.

Although it’s highly unlikely that severe problems will arise, you are encouraged to make a backup before switching to beta. If it comforts you, I am using beta builds for my production profile and I don’t recall there being a time where I lost events or had to start over.

If you have questions or have found a bug, feel free to leave a comment here.

Categorieën: Mozilla-nl planet

Robert Kaiser: "Nothing to Hide"?

Thunderbird - mo, 27/04/2015 - 00:38
I've been bothered for quite a while with people telling me they "have nothing to hide anyhow" when the topic of Internet privacy comes up.

I guess that mostly comes from the impression that the whole story is our government watching (over) us and the worst thing that can happen is incrimination. While that might threaten some things, most people do nothing that is really interesting enough for a government to go into attack mode over it (or so they believe, and very firmly so). And I even agree that most governments (including the US and EU countries) actually actively seek out what they call "terrorist activities" (even though they often stretch that term in crazy ways) and/or child abuse and similar topics that the vast majority of citizens agree are a bad thing and are not part of - and the vast majority of politicians and government workers believe they act in the best interest of their citizens when "obviously fighting that" via their different programs of privacy-undermining surveillance. That said, most people seem to be OK with their government collecting data about them as long as it's not used to incriminate them (and when that happens, it's too late to protest the practice anyhow).

A lot has been said about that since the "Snowden leaks", but I think the more obvious short-term and direct threat is in corporate surveillance, which has been swept under the rug in most discussions recently - to the joy of Facebook, Google and other major players in that area. I have also seen that when depicting some obvious scenarios resulting of that, people start to think about it much more promptly and realize the effect on their daily lives (even if those are minor issues compared to government starting a manhunt against you with terror allegations or similar).

So what I start asking is:
  • Are you OK with banks determining your credit conditions based on all your comments on Facebook and your Google searches? ("Your friends say you owe them money, and that you live beyond your means, this is gonna be difficult...")
  • Are you OK with insurances changing your rates based on all that data? ("Oh, so you 'like' all those videos about dangerous sports and that deafening music, and you have some quite aggressive or even violent friends - so you see why we need to go a bit higher there, right?")
  • Are you OK with prices for flights or products in online stores (Amazon etc.) being different depending on what other things you have done on the web? ("So, you already planned that vacation at that location, good, so we can give you a higher air rate as you can't back out now anyhow.")
  • And, of course, envision ads in public or half-public locations being customized for whoever is in the area. ("You recently searched for engagement rings, so we'll show ads for them wherever you go." or "Hey, this is the third time today we sat down and a screen nearby shows Viagra ads." or "My dear daughter, why do we see ads for diapers everywhere we go?")
There are probably more examples; those are the ones that came to my mind so far. Even if those are smaller things, people can relate to them as they affect things in their own life and not scenarios that feel very theoretical to them.

And, of course, they are true to a degree even now. Banks are already buying data from Facebook, probably including "private" messages, for determining credit scores, insurances base rates on anything they can find out about you, flight rates as well as prices for some Amazon and other web shop products vary based on what you searched before - and ads both on your screen and even on postal mail get tailored to a profile built on all kinds of your online behavior. My questions above just take all of those another step forward - but a pretty realistic one in my opinion.

I hope thinking about questions like that makes people realize they might actually want to evade some of that and in the end they actually have something to hide.

And then, of course, that a non-profit like Mozilla, which doesn't seek to maximize money, can believably be on their side and help them regain some privacy where they - now - want to.
Categorieën: Mozilla-nl planet

Thunderbird Blog: Thunderbird 38 goes to beta!

Thunderbird - fr, 03/04/2015 - 11:13

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

  • Message filtering when a message is sent or archived
  • File-per-message local storage available for new accounts (maildir)
  • Contact search over multiple address books
  • Internationalized domain names for RSS feeds
  • Expanded folder pane columns showing folder sizes and message counts

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

  • Ship Lightning calendar addon with Thunderbird with an opt-out dialog
  • Use OAUTH authentication with Gmail IMAP accounts

 

Categorieën: Mozilla-nl planet
