Mozilla Nederland
The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 2 weeks 12 hours ago

Daniel Pocock: What is the risk of using proprietary software for people who prefer not to?

Wed, 12/04/2017 - 08:43

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.

A question of leadership

Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype.

Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did?

When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future.

Research (the figures usually traced back to Albert Mehrabian's studies) suggests that when communicating with people, what they see and how you communicate make up ninety-three percent of the impression you leave; what you actually say accounts for only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary, like an iPhone or Facebook, will have far more impact than the words you say.

It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.

The many faces of proprietary software

One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.

There is no need to give up

Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not.

I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.

An orderly migration to free software

In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems. There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software.

On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor.
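
As a concrete illustration of the sandboxing idea, the sketch below uses iptables on a Linux host to let a proprietary application in a virtual machine reach the local network while blocking it from phoning home. The VM address and network ranges are hypothetical, and a real deployment would need its own policy:

# Hypothetical setup: the proprietary app runs in a VM at 192.168.122.10
# behind a libvirt-style NAT bridge on a Linux host.
iptables -A FORWARD -s 192.168.122.10 -d 192.168.0.0/16 -j ACCEPT  # allow traffic to the local network
iptables -A FORWARD -s 192.168.122.10 -j DROP                      # drop everything else, including "calling home"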

As well as technical concerns about a sudden migration, promoters of free software frequently encounter political obstacles. For example, the IT manager in a company may be five years from retirement and unconcerned about the employer's long-term ability to extricate itself from a web of Microsoft licenses once he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before being able to work around a belligerent IT manager like this.

No deal is better than a bad deal

People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?

Burning bridges behind you

For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach?

Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future.

For those who hope to build a successful startup company or be part of one, how would you feel if you reach your goal and then the rug is pulled out from under you when a proprietary software vendor or cloud service you depend on changes the rules?

Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.


Adblock Plus: The plan towards offering Adblock Plus for Firefox as a Web Extension

Wed, 12/04/2017 - 08:23

TL;DR: Sometime in autumn this year the current Adblock Plus for Firefox extension is going to be replaced by another, which is more similar to Adblock Plus for Chrome. Brace for impact!

What are Web Extensions?

At some point, Web Extensions are supposed to become a new standard for creating browser extensions. The goal is to write extensions in such a way that they can run on any browser with no modifications, or only minimal ones. Mozilla and Microsoft are pursuing standardization of Web Extensions based on Google Chrome APIs. And Google? Well, they aren’t interested. Why should they be, when they have already established themselves as the extension market leader and made everybody copy their approach?

It isn’t obvious at this point how Web Extensions will develop. The lack of interest from Google isn’t the only issue here; so far the implementation of Web Extensions in Mozilla Firefox and Microsoft Edge shows very significant differences as well. It is worth noting that Web Extensions are necessarily less powerful than the classic Firefox extensions, even though many shortcomings can probably be addressed. Also, my personal view is that the differences between browsers are either going to result in more or less subtle incompatibilities or in an API which is limited by the lowest common denominator of all browsers and not good enough for anybody.
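
To make the "based on Google Chrome APIs" point concrete, here is a sketch of the API shape in question: the Chrome-style webRequest API that content blockers build on. The listener and URL pattern are purely illustrative, not actual Adblock Plus code:

// Chrome-style webRequest API, as adopted by Web Extensions (illustrative only).
chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    return { cancel: true };             // cancel the request, i.e. block it
  },
  { urls: ["*://ads.example.com/*"] },   // hypothetical filter pattern
  ["blocking"]                           // run synchronously so the request can be blocked
);

In a real extension this also requires the webRequest and webRequestBlocking permissions in the manifest.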

So why offer Adblock Plus as a Web Extension?

Because we have no other choice. Mozilla’s current plan is that Firefox 57 (scheduled for release on November 14, 2017) will no longer load classic extensions; only Web Extensions will be allowed to continue working. So we have to replace the current Adblock Plus with a Web Extension by then, or ideally even by the time Firefox 57 is published as a beta version. Otherwise Adblock Plus will simply stop working for the majority of our users.

Mind you, it is no mystery why Mozilla is striving to stop supporting classic extensions. Due to their deep integration in the browser, classic extensions are more likely to break browser functionality or to cause performance issues. They’ve also been delaying important Firefox improvements due to compatibility concerns. This doesn’t change the fact that this transition is very painful for extension developers, and many existing extensions won’t clear this hurdle. Furthermore, it would have been better if the designated successor of the classic extension platform were more mature by the time everybody is forced to rewrite their code.

What’s the plan?

Originally, we hoped to port Adblock Plus for Firefox properly. While using Adblock Plus for Chrome as a starting point would require far less effort, this extension also has much less functionality compared to Adblock Plus for Firefox. Also, when developing for Chrome we had to make many questionable compromises that we hoped to avoid with Firefox.

Unfortunately, this plan didn’t work out. Adblock Plus for Firefox is a large codebase and rewriting it all at once without introducing lots of bugs is unrealistic. The proposed solution for a gradual migration doesn’t work for us, however, due to its asynchronous communication protocols. So we are using this approach to start data migration now, but otherwise we have to cut our losses.

Instead, we are using Adblock Plus for Chrome as a starting point, and improving it to address the functionality gap as much as possible before we release this version for all our Firefox users. For the UI this means:

  • Filter Preferences: We are working on a more usable and powerful settings page than what is currently being offered by Adblock Plus for Chrome. This is going to be our development focus, but it is still unclear whether advanced features such as listing filters of subscriptions or groups for custom filters will be ready by the deadline.
  • Blockable Items: Adblock Plus for Chrome offers comparable functionality, integrated in the browser’s Developer Tools. Firefox currently doesn’t support Developer Tools integration (bug 1211859), but there is still hope for this API to be added by Firefox 57.
  • Issue Reporter: We have plans for reimplementing this important functionality. Given all the other required changes, this one has lower priority, however, and likely won’t happen before the initial release.

If you are really adventurous you can install a current development build here. There is still much work ahead, however.

What about applications other than Firefox Desktop?

The deadline only affects Firefox Desktop for now; in other applications classic extensions will still work. However, it currently looks like by Firefox 57 the Web Extensions support in Firefox Mobile will be sufficient to release a Web Extension there at the same time. If not, we still have the option to stick with our classic extension on Android.

As to SeaMonkey and Thunderbird, things aren’t looking good there. It’s doubtful that these will have noteworthy Web Extensions support by November. In fact, it’s not even clear whether they plan to support Web Extensions at all. And unlike with Firefox Mobile, we cannot publish a different build for them (Addons.Mozilla.Org only allows different builds per operating system, not per application). So our users on SeaMonkey and Thunderbird will be stuck with an outdated Adblock Plus version.

What about extensions like Element Hiding Helper, Customizations and similar?

Sadly, we don’t have the resources to rewrite these extensions. We just released Element Hiding Helper 1.4, and it will most likely remain the last Element Hiding Helper release. There are plans to integrate some comparable functionality into Adblock Plus, but it’s not clear at this point when and how that will happen.


Mozilla Addons Blog: AMO Has a New Look on Android

Tue, 11/04/2017 - 20:57

The mobile version of addons.mozilla.org (AMO) recently debuted a new appearance. It’s not a complete redesign, but rather the start of an iterative process that will take months to fully transform AMO for mobile. The new look is also a preview of what’s to come for desktop AMO. Once the mobile design elements mature, we’ll apply the same concepts to desktop, likely sometime later this year.

“Parity between the two platforms is a high priority,” says Sr. Visual Designer Philip Walmsley. “We’re using mobile to test and learn what works, and uplifting that into the desktop designs. And anything new we discover along the way on desktop will be designed back into mobile, as well.”

Our main goal was to make browsing add-ons more intuitive and effortless. To that end, the new design presents content in a cleaner, more streamlined manner. There are fewer buttons to tap, but the ones that remain are bold and clear.

Illustrated in the images above, the homepage displays a subset of categories represented primarily through iconography… The density of information on an add-on detail page is more balanced now, with only essential information in clear view… and theme previews are bigger and screenshots more prominent.

There’s a bit more color, too. In general, much of the aesthetic was in need of a modernizing overhaul. These recent changes are just the start; plenty more to come. If you’re exploring the new AMO on your Android device and spot a bug, please feel free to let us know about it.

The post AMO Has a New Look on Android appeared first on Mozilla Add-ons Blog.


Joel Maher: Project Stockwell (reduce intermittents) – April 2017

Tue, 11/04/2017 - 19:06

I am 1 week late in posting the update for Project Stockwell.  This wraps up a full quarter of work.  After developers raised a lot of concerns about a proposed new backout policy, we moved on and didn’t change too much, although we did push a little harder, and I believe we have disabled more than we fixed as a result.

Let’s look at some numbers:

Week Starting    01/02/17   02/27/17   03/24/17
Orange Factor    13.76      9.06       10.08
# P1 bugs        42         32         55
OF(P2)           7.25       4.78       5.13

As you can see, all the numbers increased in March, but overall there has been a great decrease so far in 2017.

There have been a lot of failures that have lingered for a while and are not specific to a single test.  For example:

  • windows 8 talos has a lot of crashes (work is being done in bug 1345735)
  • reftest crashes in bug 1352671.
  • general timeouts in jobs in bug 1204281.
  • and a few other leaks/timeouts/crashes/harness issues unrelated to a specific test
  • infrastructure issues and tier-3 jobs

While these are problematic, we see the overall failure rate going down.  In all the other bugs, where the test is clearly the problem, we have seen many fixes and responses from so many test owners and developers.  It is rare that we suggest disabling a test and it is not agreed upon, and where there was concern we had a reasonable solution to reduce or fix the failure.

Speaking of which, we have been tracking total bugs, fixed, disabled, etc. with whiteboard tags.  While there was a request not to use “stockwell” in the whiteboard tags and to make them more descriptive, after discussing this with many people we couldn’t come to agreement on names, on what to track, or on what we would do with the data, so for now we have kept them the same.  Here is some data:

                 03/07/17   04/11/17
total            246        379
fixed            106        170
disabled         61         91
infrastructure   11         17
unknown          44         60
needswork        24         38
% disabled       36.53%     34.87%

What is interesting is that prior to March we had disabled 36.53% of the resolved tests, but in March, when we were more “aggressive” about disabling tests, the overall percentage went down.  In fact this is a cumulative number for the year; for the month of March alone we only disabled 31.91% of the fixed tests.  Possibly if we had disabled a few more tests the overall numbers would have continued to go down instead of ticking slightly up.
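
For clarity, the “% disabled” figures are the share of disabled tests among all resolved (fixed + disabled) tests; a quick sketch of the arithmetic:

# Sanity-check the "% disabled" figures from the table above.
def pct_disabled(fixed, disabled):
    return 100.0 * disabled / (fixed + disabled)

print(round(pct_disabled(106, 61), 2))             # 36.53 - cumulative, as of 03/07/17
print(round(pct_disabled(170, 91), 2))             # 34.87 - cumulative, as of 04/11/17
print(round(pct_disabled(170 - 106, 91 - 61), 2))  # 31.91 - March alone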

A lot of changes took place on the tree in the last month; here is some interesting data on newer jobs:

  • taskcluster windows 7 tests are tier-2 for almost all windows VM tests
  • autophone is running all media tests which are not crashing or perma failing
  • disabled external media tests on linux platforms
  • added stylo mochitest and mochitest-chrome
  • fixed stylo reftests to run in e10s mode and on ubuntu 16.04

Upcoming job changes that I am aware of:

  • more stylo tests coming online
  • more linux tests moving to ubuntu 16.04
  • push to green up windows 10 taskcluster vm jobs

Regarding our tests, we are working on tracking new tests added to the tree, what components they belong in, what harness they run in, and overall how many intermittents we have for each component and harness.  Some preliminary work shows that we added 942 mochitest*/xpcshell tests in Q1 (609 were imported webgl tests, so we wrote 333 new tests, 208 of those are browser-chrome).  Given the fact that we disabled 91 tests and added 942, we are not doing so bad!

Looking forward into April and Q2, I do not see immediate changes to a policy needed; maybe in May we can finalize a policy and make it more formal.  With the recent re-org, we are now in the Product Integrity org.  This is a good fit, but dedicating full-time resources to sheriffing and tooling for the sake of Project Stockwell is not in the mission.  Some of the original work will continue as it serves many purposes.  We will be looking to formalize some of our practices and tools to make this a repeatable process, to ensure that progress can still be made towards reducing intermittents (we want <7.0) and to create a sustainable ecosystem for managing these failures and getting fixes in place.

 



Air Mozilla: Martes Mozilleros, 11 Apr 2017

Tue, 11/04/2017 - 17:00

Martes Mozilleros: Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.


This Week In Rust: This Week in Rust 177

Tue, 11/04/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's Crate of the Week is rust-skeptic, a cargo subcommand to doctest your README.md. Thanks to staticassert for the suggestion!
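
As a sketch of how skeptic is typically wired up (based on its README at the time; the file names and manifest entries here are assumptions), the Rust code blocks in README.md get extracted by a build script and run as tests:

// build.rs -- a minimal sketch; assumes `skeptic` is listed in both
// [build-dependencies] and [dev-dependencies] in Cargo.toml.
extern crate skeptic;

fn main() {
    // Extract fenced Rust code blocks from README.md and generate tests for them.
    skeptic::generate_doc_tests(&["README.md"]);
}

A test file then pulls the generated tests in with include!(concat!(env!("OUT_DIR"), "/skeptic-tests.rs"));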

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

132 pull requests were merged in the last week.

New Contributors
  • Anatol Pomozov
  • Bryan Tan
  • GitLab
  • Matthew Jasper
  • Nathan Stocks
  • Peter Gerber
  • Shiz
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Nobody expects the Rust Evangelism Strike Force! Our chief weapon is surprise, surprise and fearless concurrency... fearless concurrency and surprise... our two weapons are fearless concurrency and surprise, and ruthless efficiency our three, weapons are fearless concurrency, and surprise, and ruthless efficiency, and an almost fanatical devotion to zero-cost abstractions. Our four, no--amongst our weapons... Amongst our weaponry... are, such elements as fearless concurrency, surprise... I'll come in again.

kibwen on reddit.

Thanks to shadow31 and KillTheMule for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Julia Vallera: Mozilla DOT Clubs kick off in East Africa

Tue, 11/04/2017 - 02:17

I just got back from an inspiring visit to East Africa where I joined several colleagues from Mozilla and Digital Opportunity Trust (DOT) to work on a new project with people from Kenya, Tanzania, Rwanda, Jordan and Canada. Most of the work we did was in Nairobi and Dar Es Salaam with one short visit to Machakos. Below, I tried to capture some key experiences and learnings from the trip.

Mozilla DOT Club participants with Club Leader Vincent Juma and Regional Coordinator Dome Dennis in Machakos, Kenya. CC-BY-SA by Mozilla

Project Background

In collaboration with Digital Opportunity Trust (DOT) we are launching 30+ Mozilla Clubs in Jordan, Lebanon, Kenya, Rwanda and Tanzania over the next six months. Each country has leaders helping to organize and facilitate Clubs in their community. These leaders are experienced facilitators and mentors who were selected by DOT staff to launch the project in each location. For more information, here is a description of the benefits of the Mozilla Clubs initiative and of why Clubs are an integral component of the DOT Digital Champion program.

Why East Africa?

Kenya, Tanzania and Rwanda are three of the five countries where this project is launching. DOT has dedicated staff and a strong community in each country, and we are working closely with them to develop the project. Mozilla has a growing community in Kenya that is helping to inspire the work for this project. With limited time we were only able to visit Nairobi and Dar Es Salaam, but we were fortunate to have individuals from Jordan and Rwanda join us in Nairobi.

Traveling to Tanzania and Kenya gave us the opportunity to:

Three Trainings

Training attendees check notes while they develop vision statements for their Clubs. CC-BY-SA by Mozilla

Our main focus for this trip was to lead three in-person, day-long trainings to introduce Open Leadership practices and principles to DOT Club leaders. Training attendees included 10 DOT staff and 25 youth leaders. Each training lasted 6–8 hours and covered topics like fostering safe spaces for learning, open source licensing, project-based learning, tools for collaboration, digital inclusion, online privacy and much more. Throughout the day participants engaged with each other through deep discussion, group break-outs, physical activities and content creation.

  • Training 1: Mozilla Club Leader Training in Dar Es Salaam. An in-person, full day training for individuals facilitating Mozilla DOT Clubs in Tanzania. Participants got an introduction to Mozilla, working open, web literacy, facilitation techniques, tools and resources.
  • Training 2: Mozilla Club Leader Training in Nairobi. An in-person, one day training for individuals facilitating Mozilla DOT Clubs in Kenya and Rwanda. Participants got an introduction to Mozilla, working open, web literacy, facilitation techniques, tools and resources.
  • Training 3: Mozilla Workshop for DOT Staff. An in-person workshop for DOT staff in Tanzania, Kenya, Rwanda, Jordan and Canada. Participants got an introduction to Mozilla programs, resources and initiatives as they relate to the DOT Digital Champion program.

Through discussion and activities participants gained a better understanding of how to utilize everything Mozilla Clubs has to offer.

Developing The Training

Since December 2016 we’ve been mapping our work for this project. We set the groundwork for this training through several planning calls, a virtual orientation and lots of writing. Working with DOT was a unique opportunity for us to develop a new kind of training for an audience with existing expertise in leadership, community development and education. To complement this expertise we based our training agenda on Open Leadership, Web Literacy and Mozilla Clubs. Here are a few very important goals we had in mind:

  • Make it offline: We knew very early on that we wanted to create something that could be done entirely offline. Many of the training participants work in learning environments that have little or no access to internet and/or computers. So, we prepared printed materials and modeled something they could replicate in their local context.
  • Incorporate many opportunities for collaboration: This was a unique chance to leverage the in-person quality of this training. We incorporated several group activities that gave attendees a chance to share ideas, experiences and expertise from multiple countries, cities and villages.
  • Include several layers of hands-on learning: The content we shared throughout the training demonstrates hands-on learning, including physical games for learning HTML (Hyper Text Markup Language) and prompts for engaging discussions about technology.
  • Implement an assessment plan: As a new project this was an opportunity to initiate an ongoing assessment plan starting with the training. We brought a team of assessment experts together from Mozilla and DOT to advise us on developing a registration and training survey. We used these to gather essential feedback before and after the training.

In less than three months we made several iterations of the training agenda and requested feedback from multiple stakeholders. The trainings took place between March 13–21, 2017. CC-BY-SA by Mozilla

Running The Training

This full recap includes the detailed agenda, objectives, survey results and outcomes from the training series. It is openly licensed on Github and can be duplicated as needed.

Attendees explored how to adapt the Mozilla Clubs model in a local framework that encourages safe and open learning spaces online and offline. We discussed complicated topics like teaching in low-connectivity (“lofi”) areas, working openly, empowering learners and creating optimal learning environments. Participants shared experiences, learned from each other and connected on many different levels.

We did our best to create a fun, friendly and comfortable environment for everyone. To keep engagement high we moved around a lot and rotated between group brainstorms, activities, team-work and break time. Sometimes the room was filled with laughter and movement. Other times the room was quiet and focused. In each case attendees brought their own personality and ideas to the agenda, which added a wonderful spontaneity to the training that we were pleasantly surprised with.

Additional Experiences & Learnings

While in Nairobi I had a couple more experiences worth noting here. These were not directly related to the trainings described above, but definitely helped to contextualize the learning that happened for me while I was there.

    1. Digital Skills Observatory (DSO) Convening and Workshop:

      My group during DSO workshop. CC-BY-SA by Mozilla

      In advance of the training in Nairobi I joined 60 other people in a convening to workshop findings from the Digital Skills Observatory. This research project followed 200 first-time smartphone users throughout 2016, and studied the impact of digital skills training on their adoption and usage behaviours. During the convening we discussed findings from the project, and worked in small groups to build new ideas based on them. The workshop was a great introduction to several other organizations and people around the world who are involved in similar areas of technology and education.

    2. Site Visit to Mozilla Club Machakos, Kenya:

      Recent high school graduates learn HTML during a Mozilla DOT Club session in Machakos, Kenya. CC-BY-SA by Mozilla

      I was honored to visit a Mozilla DOT Club in action. Mozilla Club Machakos meets every week in a Red Cross facility in the town of Machakos, about an hour outside of Nairobi. The Club has 13 participants, all of whom are recent high school graduates. During my visit, I learned about their hobbies, interests and curiosities. We talked about what they were learning in the Club and how it overlaps with their interests. Some of them were planning to attend university, others were not. In all cases they chose to attend the Club and learn new skills that could help them progress in their personal and professional lives. They share computers during their Club meetings and take turns trying different tools and activities online. While I was there they were learning basic HTML using X-Ray Goggles.

    3. Dinner with Digital Inclusion leaders from Kenya:

      Local leaders discuss digital inclusion and web literacy over dinner. CC-BY-SA by Mozilla

      I had the pleasure of joining seven women for dinner to discuss and learn about the work they are doing related to digital inclusion and gender equality. They are currently working with Mozilla, United Nations and other organizations to increase opportunities for women and girls in technology. A lot of this work is based in Kenya, but extends and inspires multiple other communities around the globe. Amira Dhalla (Lead for Women and Web Literacy at Mozilla) brought us together in a wonderful and friendly exchange filled with inspiration and delicious local food.

What’s Next?

Training participants will now launch Clubs in their home countries. As they recruit Club learners and host Club events they will share their experiences on our community channels. These updates are viewable on the Mozilla Clubs Event Reporter, Mozilla Learning Forum and Clubs Facebook Group.

My learnings from this trip will continue to impact my work for months and years to come. I am inspired by the spirit and culture of the individuals I got to know during my visit. I look forward to working alongside DOT leaders as they grow web literacy and open leadership in their regions.

Special thanks to my co-facilitator Amira Dhalla and DOT Staff Roy Lamond, Christine Kelly, Dome Dennis, Judy Muriuki and Frederick Sigalla for their assistance with organizing the events. For more about this work check out Amira Dhalla’s Training Digital Leaders blogpost and photographs.


The Mozilla Blog: Mozilla Awards $365,000 to Open Source Projects as part of MOSS

Tue, 11/04/2017 - 00:46

At Mozilla we were born out of, and remain a part of, the open source and free software movement. Through the Mozilla Open Source Support (MOSS) program, we recognize, celebrate, and support open source projects that contribute to our work and to the health of the Internet.

Since our last update

We have provided a total of $365,000 in support of open source projects through MOSS.

MOSS supports SecureDrop with a quarter of a million dollars

The biggest award went to SecureDrop, a whistleblower submission system used by over 30 news organizations, maintained by the non-profit Freedom of the Press Foundation.

The $250,000 given represents the largest amount we’ve ever provided to an organization since launching the MOSS program. It will support the creation of the next version of SecureDrop, which will be easier to install, easier for journalists to use, and even more secure.

Additional awards

We have also made awards to other projects we believe will advance a free and healthy Internet:

  • $10,000 to the libjpeg-turbo project, the leading implementation of JPEG compression for photos and similar images;
  • $25,000 to LLVM, a widely-used collection of technologies for building software;
  • $30,000 to the LEAP Encryption Access Project, a nonprofit focusing on giving Internet users access to secure communication;
  • $50,000 to Tokio, a Rust project to bring easy-to-use asynchronous input and output to the language.

We believe in encouraging growth and partnerships with our awardees. Where we can, we look to structure awards in creative ways to try and unlock additional value. Here are two examples of how we did that in this cycle:

  • The OSVR project is a virtual and augmented reality platform that Mozilla uses in Firefox. They came to us with a proposal to improve their rendering pipeline; we offered to put up half of the money, if they can encourage their partner companies to provide the other half. They have until the end of June 2017 to make that happen, and we hope they succeed.
  • The Hunspell project maintains the premier open-source spell-checking engine. They proposed to rewrite their software in C++ using a more modern, streaming, embeddable design. We accepted their proposal, but also offered more funds and time to rewrite it in Rust instead. After considering carefully, the Hunspell team opted for the C++ option, but we are happy to have been able to offer them a choice.

Under the Secure Open Source arm of MOSS

We ran a major joint audit on two codebases, one of which is a fork of the other – ntp and ntpsec. ntp is a server implementation of the Network Time Protocol, whose codebase has been under development for 35 years. The ntpsec team forked ntp to pursue a different development methodology, and both versions are widely used. As the name implies, the ntpsec team suggest that their version is or will be more secure. Our auditors did find fewer security flaws in ntpsec than in ntp, but the results were not totally clear-cut.

Security audits have also been performed on the curl HTTP library, the oauth2-server authentication library, and the dovecot IMAP server.

The auditors were extremely impressed with the quality of the dovecot code in particular, writing: “Despite much effort and thoroughly all-encompassing approach, [we] only managed to assert the excellent security-standing of Dovecot. More specifically, only three minor security issues have been found in the codebase.”

Sometimes, finding nothing is better than finding something.

Applications for “Foundational Technology” and “Mission Partners” remain open, with the next batch deadline being the end of April 2017. Please consider whether a project you know of could benefit from a MOSS award.  Encourage them to apply! You can also submit a suggestion for a project which might benefit from an SOS audit.

The post Mozilla Awards $365,000 to Open Source Projects as part of MOSS appeared first on The Mozilla Blog.


Anthony Hughes: GPU Process in Beta 53

Tue, 11/04/2017 - 00:09

A few months ago I reported on an experiment I had run in Firefox Nightly 53 where I compared stability (ie. crashes) between users with and without GPU Process. Since then we’ve fixed some bugs and are now days away from releasing GPU Process in Firefox 53. In anticipation of this moment I wanted to make sure we were ready so I organized a repeat of the previous experiment but this time on our Beta users.

As anticipated the results were different than we saw on Nightly but still favourable. I’d like to highlight some of those results in this post.

Anyone with the Windows 7 Platform Update or later, with a graphics card that has not previously been blacklisted, and with multi-process enabled is supported. As it stands this represents approximately 25% of users. Since GPU Process is enabled by default for these users, the experiment randomly selects half of them and turns off GPU Process by flipping a pref. There is a small amount of noise in the data (+/- ~0.5%) since flipping the pref doesn’t actually turn off GPU Process until the first restart following the pref flip.

17% fewer driver related crashes

In the grand scheme of things, graphics driver crashes represented about 3.17% of all crashes reported. For those with GPU Process enabled the percentage was much lower (2.81%) compared to those with GPU Process disabled (3.54%). While these numbers are low, it’s worth noting that not all users are created equal. Some users may experience more driver-related crashes than others based on a multitude of factors (modernity of OS and driver updates, hardware, mixture of third-party software, etc.). It is conceivable, although not provable by this experiment, that the impact of this change would be more noticeable to users who are more prone to driver-related issues. It’s also worth noting that we managed to make this change without introducing any new driver-related crashes in the UI process, which means Firefox should be much less prone to crashing entirely because of an interaction with the driver, although content may still be affected.

22% fewer Direct3D related crashes

Another category of crashes that sees improvements from GPU Process is D3D related crashes. This category of crashes typically involves hardware accelerated content on the web. In the past we’d see these occurring in the UI process which resulted in Firefox crashing completely. Now, with GPU Process we see about a 1/5th reduction in these crashes and those that remain tend not to happen in the UI process anymore. The end-user impact is that you might have to reload a page but Firefox dies less often.

11% fewer Direct3D accelerated video crashes

More stable hardware-accelerated video is another interesting benefit of GPU Process. We see about 11% fewer DXVA (DirectX Video Accelerator) related crashes in the test group with GPU Process enabled than in the test group with it disabled. The end result should be slightly fewer crashes that take down the whole browser when viewing hardware-accelerated video on sites like YouTube.

Top crashes

Looking at the topcrash charts from Socorro shows expected movement in overall crash volumes. Browser topcrashes are down approximately 10% overall when GPU Process is enabled. Meanwhile, Content topcrashes are up approximately 8% and Plugin topcrashes are down 18%. Topcrashes in the GPU process only account for 0.13% of the overall topcrash volume. These numbers are in line with what we expected based on prior testing.

Telemetry

All of the previous metrics are based on anonymous data we receive from Socorro, ie. crash reports submitted by users. This data is extremely useful in digging into very specific details about crashes but it biases towards users who submit crash reports. Telemetry gives us less detailed information but is better at determining the broader impact of features since it represents a broader user population.

For the purposes of the experiment I compared the overall crash rate for each test group, where crash rate is defined as the number of crashes per 1,000 hours of aggregate browser usage across the entire test group population. The findings from the experiment are interesting in that we saw a slight increase in the Browser process crash rate (up 5.9% in the Enabled cohort), a smaller increase in the Content process crash rate (up 2.5% in the Enabled cohort), and a large decrease in the Plugin crash rate (down 20.6% in the Enabled cohort).

Overall, however, comparing the Enabled cohort to Firefox 52.0.2 does show more expected results with 0.51% lower browser crash rate, 18.1% higher content crash rate, and 18% lower plugin crash rate. It’s also worth noting that the crash rate for GPU Process crashes is very low, relatively, with just 0.07 crashes per 1,000 usage hours. Put in other terms that’s one GPU Process crash every 14,285 hours compared to one Browser process crash every 353 hours.
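
To make the units concrete, converting between a crash rate (crashes per 1,000 aggregate usage hours) and mean hours between crashes is a one-liner. Note that the 2.83 figure below is the Browser-process rate implied by the quoted 353 hours, an inference rather than a number from the telemetry itself:

# Crash rate is crashes per 1,000 aggregate usage hours.
def hours_between_crashes(rate_per_khours):
    return 1000.0 / rate_per_khours

print(round(hours_between_crashes(0.07)))  # ~14286 hours between GPU Process crashes
print(round(1000.0 / 353, 2))              # ~2.83 crashes/1,000 hours implied for the Browser process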

Conclusions

In the end I think we have accomplished our goal: introducing a GPU process (a foundational piece of the Quantum project) without regressing overall product stability. We’ve reduced the overall volume of some categories of graphics-related crashes while making others less prone to taking down the entire browser. One of the primary fears was that in doing so we’d introduce new ways to take down the browser but I’ve not yet found evidence that this has happened.

Of course the numbers themselves don’t tell the whole story. Now begins a deeper investigation of which crashes (ie. signatures) have changed significantly so that we can improve GPU Process further, but I think we have an excellent foundation to build from.

And of course, watching what happens as we roll out to users in Release with Firefox 53 in the coming days.


Daniel Pocock: If Alan Turing was born today, would he be a Muslim?

Mon, 10/04/2017 - 22:01

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)


Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the Allies an advantage against Hitler.

While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.

Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.

Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".

What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17-year-old Kurdish-Iranian beaten to the brink of death by a crowd in south London.

One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.

In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.

A deeper insight into Turing's fate

With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).

In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.

In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their country having been Hitler's first casualty, they were both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.

It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?


Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich:

Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?

It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?

Playing ping-pong with children

If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.

When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants?

Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things), was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them: "Take your shirt off. I'm going to attack you." How many British MPs see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.

According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.

While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing?

Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it. A country that denies itself the opportunity to be great by subjecting itself to be ruled under the "divide and conquer" mantra of the colonial era.

Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.

Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.


Mozilla VR Blog: glTF Workflow for A-Saturday-Night

Mon, 10/04/2017 - 20:58

In A-Saturday-Night, we used the glTF format for all of the 3D content. glTF (gl Transmission Format) is a new 3D file format positioning itself as "the JPEG of 3D" for the Web. glTF features JSON descriptions of entire scenes alongside binary-encoded data (e.g., vertex positions, UVs, normals) that requires no intermediate processing when uploading to the GPU.

glTF exporters and converters are fairly stable, but there are still some loose ends and things that work better than others (by the way, Khronos just hired somebody to improve the Blender exporter). In this post, I will explain the workflow that was most satisfactory for me while producing the assets for A-Saturday-Night, and I’ll share some tips and tricks along the way. That's not to say this is the one way to work with glTF; it’s just the way we’re using it today.

We use Blender for creating the assets and COLLADA as an intermediate format, and then converting them to glTF using collada2gltf. You can grab the collada2gltf binary pre-release builds at https://github.com/KhronosGroup/glTF/releases. Note that glTF v2.0 is here! Khronos is urging everyone to migrate to v2.0 quickly as there is no backwards compatibility to v1.0. A v2.0 branch in collada2gltf for updating to glTF 2.0 is almost completed.


Once I have the COLLADA file, I can convert it to glTF with collada2gltf:

collada2gltf.exe -f <assetname>.dae -o <assetname> -k

This will generate two files: assetname.gltf and assetname.bin. Copy both of them to your assets folder.

The -k command-line flag makes the glTF output use standard materials (e.g., constant, lambert, phong) instead of translating them to GLSL shaders. This is important right now since three.js has trouble loading glTF shaders (for example, issue #8869, issue #10549, and issue #1110). It also does not make sense to use fragment and vertex shaders for standard Lambert or Phong materials.

Then in our A-Frame scene, we can import the glTF file:

<a-scene>
  <a-assets>
    <a-asset-item id="head" src="head.gltf"></a-asset-item>
  </a-assets>
  <a-entity gltf-model="#head"></a-entity>
</a-scene>

I couldn’t find a way to export Constant Materials from Blender, found as the "Shadeless" checkbox in Blender’s "Material" tab. For now, the only way I know is to edit the .gltf file by hand.

Replace the material's "technique". This example replaces "PHONG" with "CONSTANT", but we could overwrite Lambert materials as well. Replace this:

"KHR_materials_common": { "doubleSided": false, "jointCount": 0, "technique": "PHONG", "transparent": false, "values": { "ambient": [ 0, 0, 0, 1 ], "diffuse": "texture_asset", "emission": [ 0, 0, 0, 1 ], "shininess": 50, "specular": [ 0, 0, 0, 1 ] } }

with this:

"KHR_materials_common": { "technique": "CONSTANT", "values": { "emission": "texture_asset" } }

If our constant material does not have any texture, we can define a color as an [r,g,b,a] value instead.
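
For example, a texture-less constant material with a flat color might look like this (the color value here is arbitrary):

"KHR_materials_common": {
  "technique": "CONSTANT",
  "values": {
    "emission": [1.0, 0.5, 0.0, 1.0]
  }
}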

Blender Tips

Here are some steps we can do in Blender before exporting to COLLADA that help to get everything okay:

  • Keep models and their textures in the same folder (we can separate different assets or kinds of assets in different folders)
  • Use relative paths in our textures: //texture.jpg instead of path/to/myproject/texture.jpg
  • In textures, specify the image nodes with the same name as the image file (without the extension)

glTF Workflow for A-Saturday-Night

To make sure normals are exported correctly and hard edges are preserved, click the "Add Custom Split Normals Data" button in the "Object Data" tab. Also make sure the "Store Edge Crease" option is unchecked (as it is by default).

(Before and after screenshots: the model's shading without and with the custom split normals data.)

  • In case something fails, we can try exporting to OBJ and importing it back into Blender:
      • Export the asset to OBJ
      • Create a new, clean scene in Blender and import the OBJ
      • Export it to COLLADA
  • Below are my COLLADA exporter options for simple assets (i.e., no animation, rigging, or hierarchies):

(Screenshot: the COLLADA exporter settings.)

Batch Convert with batchgltf

If we have a lot of models to convert, perhaps in different folders, calling collada2gltf for each one is inconvenient. So I made a tiny tool for batch converting .dae files to .gltf using collada2gltf.


The batch converter will scan all input folders for .dae files, convert them to .gltf, and save the results in the specified output folders. We can have all the .gltf files saved in the same folder or in separate folders.

You can download the batchgltf converter from https://github.com/feiss/batchgltf. Take a look at the README for requirements and instructions. batchgltf.py can also work from the command line, so we could include it in a typical Webpack/Gulp workflow.
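For illustration, a minimal version of such a batch-conversion loop might look like the following Python sketch. This is not batchgltf itself; the function and folder names are made up, and it assumes the collada2gltf binary is on your PATH:

import os
import subprocess

def batch_convert(input_dirs, output_dir):
    # Scan each input folder for .dae files and convert them with
    # collada2gltf, using the same flags as the manual invocation above.
    for input_dir in input_dirs:
        for name in os.listdir(input_dir):
            base, ext = os.path.splitext(name)
            if ext.lower() != '.dae':
                continue
            subprocess.check_call([
                'collada2gltf',  # collada2gltf.exe on Windows
                '-f', os.path.join(input_dir, name),
                '-o', os.path.join(output_dir, base),
                '-k',  # keep standard materials (see above)
            ])

batch_convert(['models/characters', 'models/props'], 'assets')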

I will try to keep it updated with the latest collada2gltf version and add support for additional 3D formats. If you have any problems or would like to collaborate on this little tool, feel free to post an issue or send a pull request! I cannot guarantee it is free of bugs; use with care and keep a backup of your files ;)

By the way, you can find all the A-Saturday-Night assets on GitHub, in both glTF and OBJ formats.


Air Mozilla: Mozilla Weekly Project Meeting, 10 Apr 2017

ma, 10/04/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


Gervase Markham: Buzzword Bingo

ma, 10/04/2017 - 14:39

This is a genuine question from a European Union public consultation:

Do you see the need for the definition of a reference architecture recommending a standardised high-level framework identifying interoperability interfaces and specific technical standards for facilitating seamless exchanges across data platforms?

Words fail me.


Cameron Kaiser: TenFourFoxBox 1.0.1 available

ma, 10/04/2017 - 05:23
As Steven Tyler howls he's Done With Mirrors from my G5's CD, TenFourFoxBox 1.0.1 is available for testing from this totally s3kr1t download location. This is a minor custodial release that bumps the "stealth" user agent to Firefox 52 on macOS Sierra and changes the official map app to OpenStreetMap, which has rather better performance. Assuming no problems, it will go live later this week, probably Wednesday Pacific time.

The Servo Blog: This Week In Servo 97

ma, 10/04/2017 - 02:30

In the last week, we landed 119 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017; Q2 plans will appear soon. Please check it out and provide feedback!

This week’s status updates are here.

Notable Additions
  • emilio improved the performance of parsing numeric CSS values.
  • emilio replaced many ad-hoc checks in the CSS selector matching with more structured and consistent logic.
  • rlhunt added support for tiling gradients in WebRender.
  • mrobinson improved the logic for deciding when to clip content.
  • stshine made text correctly inherit overflow properties from its parent element.
  • nox shared more code between the websocket handshake and regular HTTP connection.
  • gw improved the performance of more simple border rendering cases.
  • bholley replaced a hashmap with a vector when storing pseudo-element styles.
  • pyfisch implemented CSS serialization for transform functions.
  • emilio added codegen for C++ destructors in rust-bindgen.
  • jdm enabled a bunch of no-longer-intermittently-failing WebGL tests.
  • ferjm blocked scripts from being loaded when the wrong MIME type is present.
  • emilio improved the performance of selector matching by avoiding unnecessary hashing.
  • cbrewster reduced the amount of cloning required for some session history code.
  • jdm added support for running web platform tests that require HTTPS.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Mike Hoye: Planet: Secure For Now

za, 08/04/2017 - 04:25

Elevation

This is a followup to a followup – hopefully the last one for a while – about Planet. First of all, I apologize to the community for taking this long to resolve it. It turned out to have a lot more moving parts than were visible at first, and I didn’t know enough about the problem’s context to be able to solve it quickly. I owe a number of people an apology for that, first among them Ehsan who originally brought it to my attention.

The root cause of the problem was that HTTPlib2 in Python 2.x doesn’t – and apparently will never – support Server Name Indication, an important part of Transport Layer Security on shared hosts. This is probably not a big deal for anyone who doesn’t need to make legacy web-facing Python utilities interact securely with modernity, but… well. It me, as the kids say. Here we are.

For some context, our particular SSL problems manifested themselves with error messages like “Error urllib2 Python. SSL: TLSV1_ALERT_INTERNAL_ERROR ssl.c:590” behind the scenes and “internal error” in Planet proper, and I think it’s fair to feel like those messages are less than helpful. I also – no slight on my colleagues in this – don’t have a lot of say in the infrastructure Planet is running on, and it’s equally fair to say I’m not much of a programmer. Python feature-backporting is kind of a rodeo too, and I had a hard time mapping from “I’m using this version of Python on this OS” to “therefore, I have these tools available to me.” Ultimately this combination of OS constraints, library opacity and learning how (and if, where and when) SSL works (or doesn’t, and why) while working in the dated idioms of a language I only half-know didn’t add up to the smoothest experience.

I had a few options open to me, or at least I thought I did. Refactoring for Python 3.x was a non-starter, but I spent far more time than I should have trying to rewrite Planet to work directly with Requests. That turned out to be harder than I’d expected, largely because Planet code has a lot of expectations all over it about HTTPlib2 and how it behaves. I mistakenly thought re-engineering that behavior would be straightforward, and I definitely wasn’t expecting the surprising number of rusty edge cases I’d run into when my assumptions hit the real live web.

Partway through this exercise, in a curious set of coincidences, Mike Connor and I were talking about an old line – misquoted by John F. Kennedy as “Don’t ever take a fence down until you know the reason why it was put up” – by G. K. Chesterton, that went:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

Infrastructure

One nice thing about ancient software is that it builds up these fences; they look like cruft, like junk you should tear out and throw away, until you really, really understand that your code, and you, are being tested. That conversation reminded me of this blog post from Joel Spolsky, about The Worst Thing you can do with software, which smelled suspiciously like what I was right in the middle of doing.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

It’s harder to read code than to write it.

This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.

As a corollary of this axiom, you can ask almost any programmer today about the code they are working on. “It’s a big hairy mess,” they will tell you. “I’d like nothing better than to throw it out and start over.”

Why is it a mess?

“Well,” they say, “look at this function. It is two pages long! None of this stuff belongs in there! I don’t know what half of these API calls are for.”

[…] I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

The first of these fences I hit was when I discovered that HTTPlib2.Response objects are (somehow) case-insensitive dictionaries, because HTTP headers, per spec, are case-insensitive (though normal Python dictionaries very much are not, even though examining Response objects with basic tools like "print" makes them look just like a perfectly standard Python dict(), nothing to see here, move along. Which definitely has this kind of a vibe to it.) Another was hitting what might be a bug in Requests, where usually it gives you "200" as the HTTP "everything's fine" response, which Python will happily and silently turn into the integer HTTPlib2 is expecting, but sometimes gives you "200 OK", which: kaboom.

On the bright side, I did get to spend a few minutes reminiscing fondly to myself about working with Dave Humphrey way back in the day; in hindsight he warned me about this kind of thing when we were working through a similar problem. “It’s the Web. You get whatever you get, whenever you get it, and you’ll like it.”

I was mulling over all of this earlier this week when I decided to take the best (and also worst, and also last) available option: I threw out everything I’d done up to that point and just started lying to the program until it did what I wanted.

This gist is the meat of that effort; the rest of it (swap out the HTTPlib2 calls for Requests and update your error handling) is straightforward, and it's running in production now. It boils down to taking a Requests object, giving it an imaginary friend, and then standing it on that imaginary friend's shoulders, throwing a trenchcoat over it and telling it to act like a grownup. The content both calls return is identical, but the supplementary data – headers, response codes, etc. – isn't, so using this technique as a shim potentially makes Requests a drop-in replacement for HTTPlib2. On the off chance that you're facing the same problems Planet was facing, I hope it's useful to you.
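If you're curious what that trenchcoat looks like in practice, here is a minimal sketch of the idea. It is not the actual gist, and every name in it is illustrative; it just wraps a Requests response so it quacks like HTTPlib2's (response, content) pair:

import requests

class FauxResponse(dict):
    # Impersonate httplib2.Response: a dict of lower-cased headers
    # with a numeric .status attribute. (Illustrative sketch only.)
    def __init__(self, resp):
        super(FauxResponse, self).__init__(
            (k.lower(), v) for k, v in resp.headers.items())
        # requests exposes the status as an int, which sidesteps the
        # "200" vs. "200 OK" surprise described above.
        self.status = resp.status_code

def request(url, method="GET", **kwargs):
    # Drop-in-ish stand-in for httplib2.Http().request(): returns
    # the familiar (response, content) tuple, but with SNI support.
    resp = requests.request(method, url, **kwargs)
    return FauxResponse(resp), resp.content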

Again, I apologize for the delay in sorting this out, and thank you for your patience.


Aaron Klotz: Asynchronous Plugin Initialization: Requiem

vr, 07/04/2017 - 23:00

My colleague bsmedberg is going to be removing asynchronous plugin initialization in bug 1352575. Sadly the feature never became solid enough to remain enabled on release, so we cut our losses and cancelled the project early in 2016. Now that code is just a bunch of dead weight. With the deprecation of non-Flash NPAPI plugins in Firefox 52, our developers are now working on simplifying the remaining NPAPI code as much as possible.

Obviously the removal of that code does not prevent me from discussing some of the more interesting facets of that work.

Today I am going to talk about how async plugin init worked when web content attempted to access a property on a plugin’s scriptable object, when that plugin had not yet completed its asynchronous initialization.

As described on MDN, the DOM queries a plugin for scriptability by calling NPP_GetValue with the NPPVpluginScriptableNPObject constant. With async plugin init, we did not return the true NPAPI scriptable object back to the DOM. Instead we returned a surrogate object. This meant that we did not need to synchronously wait for the plugin to initialize before returning a result back to the DOM.

If the DOM subsequently called into that surrogate object, the surrogate would be forced to synchronize with the plugin. There was a limit on how much fakery the async surrogate could do once the DOM needed a definitive answer – after all, the NPAPI itself is entirely synchronous. While you may question whether the asynchronous surrogate actually bought us any responsiveness, performance profiles and measurements that I took at the time demonstrated that it bought us enough additional concurrency to make it worthwhile. A good number of plugin instantiations were able to complete in time before the DOM had made a single invocation on the surrogate.

Once the surrogate object had synchronized with the plugin, it would then mostly act as a pass-through to the plugin’s real NPAPI scriptable object, with one notable exception: property accesses.

The reason for this is not necessarily obvious, so allow me to elaborate:

The DOM usually sets up a scriptable object as follows:

this.__proto__.__proto__.__proto__
  • Where this is the WebIDL object (i.e., content's <embed> element);
  • Whose prototype is the NPAPI scriptable object;
  • Whose prototype is the shared WebIDL prototype;
  • Whose prototype is Object.prototype.

NPAPI is reentrant (some might say insanely reentrant). It is possible (and indeed common) for a plugin to set properties on the WebIDL object from within the plugin’s NPP_New.

Suppose that the DOM tries to access a property on the plugin’s WebIDL object that is normally set by the plugin’s NPP_New. In the asynchronous case, the plugin’s initialization might still be in progress, so that property might not yet exist.

In the case where the property does not yet exist on the WebIDL object, JavaScript fails to retrieve an “own” property. It then moves on to the first prototype and attempts to resolve the property on that. As outlined above, this prototype would actually be the async surrogate. The async surrogate would then be in a situation where it must absolutely produce a definitive result, so this would trigger synchronization with the plugin. At this point the plugin would be guaranteed to have finished initializing.

Now we have a problem: JS was already examining the NPAPI scriptable object when it blocked to synchronize with the plugin. Meanwhile, the plugin went ahead and set properties (including the one that we’re interested in) on the WebIDL object. By the time that JS execution resumes, it would already be looking too far up the prototype chain to see those new properties!

The surrogate needed to be aware of this when it synchronized with the plugin during a property access. If the plugin had already completed its initialization (thus rendering synchronization unnecessary), the surrogate would simply pass the property access on to the real NPAPI scriptable object. On the other hand, if a synchronization was performed, the surrogate would first retry the WebIDL object by querying for the WebIDL object’s “own” properties, and return the own property if it now existed. If no own property existed on the WebIDL object, then the surrogate would revert to its “pass through all the things” behaviour.
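Restated as pseudocode (the real implementation was C++ inside Gecko; every name below is illustrative), the surrogate's property-access logic looked roughly like this:

def surrogate_get_property(surrogate, name):
    # Sketch of the logic described above; not actual Gecko code.
    if not surrogate.plugin_initialized():
        # Block until NPP_New completes. The plugin may set new
        # properties on the WebIDL object while we wait.
        surrogate.synchronize_with_plugin()
        # JS has already walked past the WebIDL object, so re-check
        # its "own" properties for anything the plugin just added.
        own = surrogate.webidl_object.get_own_property(name)
        if own is not None:
            return own
    # Otherwise, pass the access through to the real NPAPI
    # scriptable object.
    return surrogate.real_npapi_object.get_property(name)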

If I hadn’t made the asynchronous surrogate scriptable object do that, we would have ended up with a strange situation where the DOM’s initial property access on an embed could fail non-deterministically during page load.

That’s enough chatter for today. I enjoy blogging about my crazy hacks that make the impossible, umm… possible, so maybe I’ll write some more of these in the future.


Andrew Overholt: Quantum work

vr, 07/04/2017 - 21:43

Quantum Curling

Last week we had a work week at Mozilla’s Toronto office for a bunch of different projects including Quantum DOM, Quantum Flow (performance), etc. It was great to have people from a variety of teams participate in discussions and solidify (and change!) plans for upcoming Firefox releases. There were lots of sessions going on in parallel and I wasn’t able to attend them all but some of the results were written up by the inimitable Ehsan in his fourth Quantum Flow newsletter.

Near the end of the week, Ehsan gave an impromptu walkthrough of the Gecko profiler. I'm planning to take some of the tips he gave and that were discussed and put them into the documentation for the profiler. If you're interested in helping, please let me know!

The photo above is of us going curling at the High Park Curling Club. It was a lot of fun and I was happy that only one other person had ever curled before so it was a unique experience for almost everyone!


Will Kahn-Greene: Everett v0.9 released and why you should use Everett

vr, 07/04/2017 - 16:00
What is it?

Everett is a Python configuration library.

Configuration with Everett:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • supports auto-documentation of configuration with a Sphinx autocomponent directive
  • supports easy testing with configuration override
  • can pull configuration from a variety of specified sources (environment, ini files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • supports component architectures
  • works with whatever you're writing--command line tools, web sites, system daemons, etc.

Everett is inspired by python-decouple and configman.

v0.9 released!

This release focused on overhauling the Sphinx extension. It now:

  • has an Everett domain
  • supports roles
  • indexes Everett components and options
  • looks a lot better

This was the last big thing I wanted to do before doing a 1.0 release. I consider Everett 0.9 to be a solid beta. Next release will be a 1.0.

Why you should take a look at Everett

At Mozilla, I'm using Everett 0.9 for Antenna which is running in our -stage environment and will go to -prod very soon. Antenna is the edge of the crash ingestion pipeline for Mozilla Firefox.

When writing Antenna, I started out with python-decouple, but I didn't like the way python-decouple dealt with configuration errors (it's pretty hands-off), and I really wanted to automatically generate documentation from my configuration code. Why write the same stuff twice, especially when it's a critical part of setting Antenna up and the part everyone will trip over first?

Here's the configuration documentation for Antenna:

http://antenna.readthedocs.io/en/latest/configuration.html#application

Here's the index which makes it easy to find things by component or by option (in this case, environment variables):

http://antenna.readthedocs.io/en/latest/genindex.html

When you configure Antenna incorrectly, it spits out an error message like this:

1 <traceback omitted, but it'd be here>
2 everett.InvalidValueError: ValueError: invalid literal for int() with base 10: 'foo'
3 namespace=None key=statsd_port requires a value parseable by int
4 Port for the statsd server
5 For configuration help, see https://antenna.readthedocs.io/en/latest/configuration.html

So what's here?:

  • Block 1 is the traceback so you can trace the code if you need to.
  • Line 2 is the exception type and message
  • Line 3 tells you the namespace, key, and parser used
  • Line 4 is the documentation for that specific configuration option
  • Line 5 is the "see also" documentation for the component with that configuration option

Is it beautiful? No. [1] But it gives you enough information to know what the problem is and where to go for more information.

Further, in Python 3, Everett will always raise a subclass of ConfigurationError so if you don't like the output, you can tailor it to your project's needs. [2]

First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.

If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple [3] and the os.environ.get('CONFIGVAR', 'default_value') style of configuration.
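For example, a minimal setup might look something like this (a sketch based on the Everett docs; the option name and default here are made up):

from everett.manager import ConfigManager, ConfigOSEnv

# Pull configuration from the process environment; more sources
# (ini files, dicts, ...) can be added to the list.
config = ConfigManager([ConfigOSEnv()])

# Roughly equivalent to int(os.environ.get('STATSD_PORT', '8125')),
# but with parsing, a default, and documentation that shows up in
# error messages like the one above.
statsd_port = config(
    'statsd_port',
    default='8125',
    parser=int,
    doc='Port for the statsd server',
)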

Enjoy!

[1] I would love some help here--making that information easier to parse would be great for a 1.0 release.

[2] Python 2 doesn't support exception chaining and I didn't want to stomp on the original exception thrown, so in Python 2, Everett doesn't wrap exceptions.

[3] python-decouple is a great project and does a good job at what it was built to do. I don't mean to demean it in any way. I have additional requirements that python-decouple doesn't do well and that's where I'm coming from.

Where to go for more

For more specifics on this release, see here: http://everett.readthedocs.io/en/latest/history.html#april-7th-2017

Documentation and quickstart here: https://everett.readthedocs.org/en/v0.9/

Source code and issue tracker here: https://github.com/willkg/everett

