
Mozilla Nederland
The Dutch Mozilla community

Zack Weinberg: Google Voice Search and the Appearance of Trustworthiness

Mozilla planet - Sat, 27/06/2015 - 17:50

Last week there were several bug reports [1] [2] [3] about how Chrome (the web browser), even in its fully-open-source Chromium incarnation, downloads a closed-source, binary extension from Google’s servers and installs it, without telling you it has done this, and moreover this extension appears to listen to your computer’s microphone all the time, again without telling you about it. This got picked up by the trade press [4] [5] [6] and we rapidly had a full-on Internet panic going.

If you dig into the bug reports and/or the open source part of the code involved, which I have done, it turns out that what Chrome is doing is not nearly as bad as it looks. It does download a closed-source binary extension from Google, install it, and hide it from you in the list of installed extensions (technically there are two hidden extensions involved, only one of which is closed-source, but that’s only a detail of how it’s all put together). However, it does not activate this extension unless you turn on the voice search checkbox in the settings panel, and this checkbox has always (as far as I can tell) been off by default. The extension is labeled, accurately, as having the ability to listen to your computer’s microphone all the time, but of course it does not get to do this until it is activated.

As best anyone can tell without access to the source, what the closed-source extension actually does when it’s activated is monitor your microphone for the code phrase OK Google. When it detects this phrase it transmits the next few words spoken to Google’s servers, which convert it to text and conduct a search for the phrase. This is exactly how one would expect a voice search feature to behave. In particular, a voice-activated feature intrinsically has to listen to sound all the time, otherwise how could it know that you have spoken the magic words? And it makes sense to do the magic word detection with code running on the local computer, strictly as a matter of efficiency. There is even a non-bogus business reason why the detector is closed source; speech recognition is still in the land where tiny improvements lead to measurable competitive advantage.
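The structure described above can be sketched in a few lines. This is an illustrative toy, not Google's actual code; the frame and detector interfaces are invented for the example. The privacy-relevant point is that the continuous listening happens locally, and only the few words after the hotword ever leave the machine:

```python
def hotword_loop(frames, is_hotword, transmit, query_len=3):
    """Continuously consume audio frames locally; only the few frames
    spoken after a locally-detected hotword are ever transmitted."""
    frames = iter(frames)
    for frame in frames:
        if is_hotword(frame):                          # on-device detection
            query = [next(frames) for _ in range(query_len)]
            transmit(query)                            # only this is sent out

# Toy run, with strings standing in for audio frames:
sent = []
hotword_loop(
    ["noise", "OK Google", "weather", "in", "whistler", "noise"],
    is_hotword=lambda f: f == "OK Google",
    transmit=sent.append,
)
# sent is now [["weather", "in", "whistler"]]
```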

So: this feature is not actually a massive privacy violation. However, Google could and should have put more care into making this not appear to be a massive privacy violation. They wouldn’t have had mud thrown at them by the trade press about it, and the general public wouldn’t have had to worry about it. Everyone wins. I will now dissect exactly what was done wrong and how it could have been done better.

It was a diagnostic report, intended for use by developers of the feature, that gave people the impression the extension was listening to the microphone all the time. Below is a screen shot of this diagnostic report (click for full width). You can see it on your own copy of Chrome by typing chrome://voicesearch into the URL bar; details will probably differ a little (especially if you’re not using a Mac).

[Screen shot of Google Voice Search diagnostic report, taken on Chrome 43 running on Mac OS X; Extension State: ENABLED.]

Google’s first mistake was not having anyone check this over for what it sounds like it means to someone who isn’t familiar with the code. It is very well known that when faced with a display like this, people who aren’t familiar with the code will pick out whatever bits they think they understand and ignore everything else, even if that means they completely misunderstand it. [7] In this case, people see Microphone: Yes and Audio Capture Allowed: Yes and maybe also Extension State: ENABLED and assume that this means the extension is actively listening right now. (What the developers know it means is this computer has a microphone, the extension could listen to it if it had been activated, and it’s connected itself to the checkbox in the preferences so it can be activated. And it’s hard for them to realize that anyone could think it would mean something else.)

They didn’t have anyone check it because they thought, well, who’s going to look at this who isn’t a developer? Thing is, it only takes one person to look at it, decide it looks hinky, mention it online, and now you have a media circus on your hands. Obscurity is no excuse for not doing a UX review.

Now, mistake number two becomes evident when you consider what this screen ought to say in order not to scare people who haven’t turned the feature on (and for whom this may be the first they’ve heard of it): something like

Voice Search is inactive.

(A couple of sentences about what Voice Search is and why you might want it.) To activate Voice Search, go to the preferences screen and check the box.

It would also be okay to have a duplicate checkbox right there on this screen, and to have all the same debugging information show up after you check the box.

But wait—how do developers diagnose problems with downloading the extension, which happens before the box has been checked? And that’s mistake number two. The extension should not be downloaded until the box is checked. I am not aware of any technical reason why that couldn’t have been the way it worked in the first place, and it would go a long way to reassure people that this closed-source extension can’t listen to them unless they want it to.

Note that even if the extension were open source it might still be a live question whether it does anything hinky. There’s an excellent chance that it’s a generic machine recognition algorithm that’s been trained to detect OK Google, which training appears in the code as a big lump of meaningless numbers—and there’s no way to know whether those numbers train it to detect anything besides OK Google. Maybe if you start talking about bombs the computer just quietly starts recording…

Mistake number three, finally, is something they got half-right. This is not a core browser feature. Indeed, it’s hard for me to imagine any situation where I would want this feature on a desktop computer. Hands-free operation of a mobile device, sure, but if my hands are already on a keyboard, that’s faster and less bothersome for other people in the room. So, Google implemented this frill as a browser extension—but then they didn’t expose that in the user interface. It should be an extension, and it should be visible as such. Then it needn’t take up space in the core preferences screen, even. If people want it they can get it from the Chrome extension repository like any other extension. And that would give Google valuable data on how many people actually use this feature and whether it’s worth continuing to develop.

Categories: Mozilla-nl planet

Chris Cooper: Releng & Relops weekly highlights - June 26, 2015

Mozilla planet - Sat, 27/06/2015 - 17:19

Friday, foxyeah!

It’s been a very busy and successful work week here in beautiful Whistler, BC. People are taking advantage of being in the same location to meet, plan, hack, and socialize. A special thanks to Jordan for inviting us to his place in beautiful Squamish for a BBQ!

(Note: No release engineering folks were harmed by bears in the making of this work week.)

tl;dr

Whistler: Keynotes were given by our exec team and we learned we’re focusing on quality, dating our users to get to know them better, and that WE’RE GOING TO SPACE!! We also discovered that at LEGO, Everything is Awesome now that they’re thinking around the box instead of inside or outside of it. Laura’s GoFaster project sounds really exciting, and we got a shoutout from her on the way we manage the complexity of our systems. There should be internal videos of the keynotes up next week if you missed them.

Internally, we talked about Q3 planning and goals, met with our new VP, David, met with our CEO, Chris, presented some lightning talks, and did a bunch of cross-group planning/hacking. Dustin, Kim, and Morgan talked to folks at our booth at the Science Fair. We had a cool banner and some cards (printed by Dustin) that we could hand out to tell people about try. SHIP IT!

Taskcluster: Great news; the TaskCluster team is joining us in Platform! There was lots of evangelism about TaskCluster and interest from a number of groups. There were some good discussions about operationalizing taskcluster as we move towards using it for Firefox automation in production. Pete also demoed the Generic Worker!

Puppetized Windows in AWS: Rob got the nxlog puppet module done. Mark is working on hg and NSIS puppet modules in lieu of upgrading to MozillaBuild 2.0. Jake is working on the metric-collective module. The windows folks met to discuss the future of windows package management. Q is finishing up the performance comparison testing in AWS. Morgan, Mark, and Q deployed runner to all of the try Windows hosts and one of the build hosts.

Operational: Amy has been working on some additional nagios checks. Ben, Rail, and Nick met and came up with a solid plan for release promotion. Rail and Nick worked on releasing Firefox 39 and two versions of Firefox ESR. Hal spent much of the week working with IT. Dustin and catlee got some work done on migrating treestatus to relengapi. Hal, Nick, Chris, and folks from IT, sheriffs, and dev-services debugged problems with b2g jobs. Callek deployed a new version of slaveapi. Kim, Jordan, Chris, and Ryan worked on a plan for addons. Kim worked with some new buildduty folks to bring them up to speed on operational procedures.

Thank you all, and have a safe trip home!

And here are all the details:

Taskcluster
  • We got to spend some quality time with our new TaskCluster teammates, Greg, Jonas, Wander, Pete, and John. We’re all looking forward to working together more closely.
  • Morgan convinced lots of folks that Taskcluster is super amazing, and now we have a lot of people excited to start hacking on it and moving their workloads to it.
  • We put together a roadmap for TaskCluster in Trello and identified the blockers to turning Buildbot Scheduling off.
Puppetized Windows in AWS
  • Rob has pushed out the nxlog puppet module to get nxlog working in scl3 (bug 1146324). He has a follow-on bug to modify the ec2config file for AWS to reset the log-aggregator host so that we’re aggregating to the local region instead of where we instantiate the instance (like we do with linux). This will ensure we have Windows system logs in AWS (bug 1177577).
  • The new version of MozillaBuild was released, and our plan was to upgrade to it on Windows (bug 1176111). An attempt at that showed that the way hg was compiled requires an external dll (likely something from cygwin) and needs to be run from bash. Since this would require significant changes, we’re going to install the old version of MozillaBuild and put upgrades of hg (bug 1177740) and NSIS on top of that (like we’re doing with GPO now). Future work will include splitting out all the packages and not using MozillaBuild.
  • Jake is working on the puppet module for metric-collective, our host-level stats-gathering software for Windows (similar to collectd on linux/OS X). This will give us Windows system metrics in graphite in AWS (bug 1097356).
  • We met to talk about Windows packaging and how to best integrate with puppet. Rob is starting to investigate using NuGet and Chocolatey to handle this (bugs 1175133 and 1175107).
  • Q spun up some additional instance types in AWS and is in the process of getting some more data for Windows performance after the network modifications we made earlier (bug 1159384).
  • Jordan added a new puppetized path for all windows jobs, fixing a problem we were seeing with failing sendchanges on puppetized machines (bug 1175701).
  • Morgan, Mark, and Q deployed runner to all of the try Windows hosts (bug 1055794).
Operational
  • The relops team met to perform a triage of their two bugzilla queues and closed almost 20% of the open bugs as either already done or wontfix based on changes in direction.
  • Amy has been working on some additional nagios checks for some Windows services and for AWS subnets filling up (bugs 1164441 and 793293).
  • Ben, Rail, and Nick met and came up with a solid plan for the future of release promotion.
  • Rail and Nick worked on getting Firefox 39 (and the related ESR releases) out to our end users.
  • Hal spent lots of time working with IT and the MOC, improving our relationships and workflow.
  • Dustin and catlee did some hacking to start the porting of treestatus to relengapi (one of the blockers to moving us out of PHX1).
  • Hal, Nick, Chris, and folks from IT, sheriffs, dev-services tracked down an intermittent problem with the repo-tool impacting only b2g jobs (bug 1177190).
  • Callek deployed the new version of slaveapi to support slave loans using the AWS API (bug 1177932).
  • Kim, Jordan, Chris, and Ryan discussed the initial steps for future addon support.
  • Coop (hey, that’s me) held down the buildduty fort while everyone else was in Whistler.

See you next week!


Yahoo growth stalls months after Mozilla deal - Computerworld

News collected via Google - Sat, 27/06/2015 - 13:13

Yahoo's share gains since November from a deal with Mozilla may be a clue about whether the search company can attract more users through the just-announced contract to change Internet Explorer's and Chrome's default search through installations of ...


Cameron Kaiser: 31.8.0 available (say goodbye)

Mozilla planet - Sat, 27/06/2015 - 04:15
31.8.0 is available, the last release for the 31 series (release notes, downloads, hashes). Download it and give it one last spin. 31 wasn't a high water mark for us in terms of features or performance, but it was pretty stable and did the job, so give it a salute as it rides into the sunset. It finalizes Monday PM Pacific time as usual.

I'm trying very hard to get you the 38.0.1 beta by sometime next week, probably over the July 4th weekend assuming the local pyros don't burn my house down with errant illegal fireworks, but I keep hitting showstoppers while trying to dogfood it. First it was fonts and then it was Unicode input, and then the newtab crap got unstuck again, and then the G5 build worked but the 7450 build didn't, and then, and then, and then. I'm still working on the last couple of these major bugs and then I've got some additional systems to test on before I introduce them to you. There are a couple minor bugs that I won't fix before the beta because we need enough time for the localizers to do their jobs, and MP3 support is present but is still not finished, but there will be a second beta that should address most of these problems prior to our launch with 38.0.2. Be warned of two changes right away: no more tiles in the new tab page (I never liked them anyway, but they require Electrolysis now, so that's a no-no), and Check for Updates is now moved to the Help menu, congruent with regular Firefox, since keeping it in its old location now requires substantial extra code that is no longer worth it. If you can't deal with these changes, I will hurt you very slowly.

Features that did not make the cut: Firefox Hello and Pocket, and the Cisco H.264 integration. Hello and Pocket are not in the ESR, and I wouldn't support them anyway; Hello needs WebRTC, which we still don't really support, and you can count me in with the people who don't like a major built-in browser component depending exclusively on a third-party service (Pocket). As for the Cisco integration, there will never be a build of those components for Tiger PowerPC, so there. Features that did make the cut, though, are pdf.js and Reader View. Although PDF viewing is obviously pokier compared to Preview.app, it's still very convenient, generally works well enough now that we have IonPower backing it, and is much safer. Reader View, on the other hand, works very well on our old systems. You'll really like it, especially on a G3, because it cuts out a lot of junk.

After that there are two toys you'll get to play with before 38.0.2 since I hope to introduce them widely with the 38 launch. More on that after the beta, but I'll whet your appetite a little: although the MacTubes Enabler is now officially retired, since as expected the MacTubes maintainer has thrown in the towel, thanks to these projects the MTE has not one but two potential successors, and one of them has other potential applications. (The QuickTime Enabler soldiers on, of course.)

Last but not least, I have decided to move the issues list and the wiki from Google Code to Github, and leave downloads with SourceForge. That transition will occur sometime late July before Google Code goes read-only on August 24th. (Classilla has already done this invisibly but I need to work on a stele so that 9.3.4 will be able to use Github effectively.) In the meantime, I have already publicly called Google a bunch of meaniepants and poopieheads for their shameful handling of what used to be a great service, so my work here is done.


Gervase Markham: Promises: Code vs. Policy

Mozilla planet - Sat, 27/06/2015 - 02:27

A software organization wants to make a promise, for example about its data practices: “We don’t store information on your location”. They can keep that promise in two ways: code or policy.

If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone can review the code and confirm that the promise is being kept. (It’s sometimes technically possible for the company to publish source code that does one thing, and binaries which do another, but if that was spotted, there would be major reputational damage.)

If they were keeping it in policy, they would add “We don’t store information on your location” to their privacy policy or Terms of Service. The documents can be reviewed, but in general you have to trust the company that they are sticking to their word. This is particularly so if the policy states that it does not create a binding obligation on the company. So this is a function of your view of the company’s reputation.

Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.

But some promises can’t be kept in code. For example, you can’t simply not send the user’s IP address, which normally gives coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible, and using a policy promise rather than a code promise can speed time-to-market due to reduced software complexity and increased ability to iterate.

Question: is this distinction, about where to keep your promises, useful when designing new features?

Question: is it reasonable or misguided for geeks to prefer promises kept in code?

Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?


Vladan Djeric: Announcing the Content Performance program

Mozilla planet - Fri, 26/06/2015 - 22:52
Introduction

Aaron Klotz, Avi Halachmi and I have been studying Firefox’s performance on Android & Windows over the last few weeks as part of an effort to evaluate Firefox “content performance” and find actionable issues. We’re analyzing and measuring how well Firefox scrolls pages, loads sites, and navigates between pages. At first, we’re focusing on 3 reference sites: Twitter, Facebook, and Yahoo Search.

We’re trying to find reproducible, meaningful, and common use cases on popular sites which result in noticeable performance problems or where Firefox performs significantly worse than competitors. These use cases will be broken down into tests or profiles, and shared with platform teams for optimization. This “Content Performance” project is part of a larger organizational effort to improve Firefox quality.

I’ll be regularly posting blog posts with our progress here, but you can also track our efforts on our mailing list and IRC channel:

Mailing list: https://mail.mozilla.org/listinfo/contentperf
IRC channel: #contentperf
Project wiki page: Content_Performance_Program

Summary of Current Findings (June 18)

Generally speaking, desktop and mobile Firefox scroll as well as other browsers on reference sites when there is only a single tab loaded in a single window.

  • We compared Firefox vs Chrome and IE:
    • Desktop Firefox scrolling can badly deteriorate when the machine is in power-saver mode[1] (Firefox performance relative to other browsers depends on the site)
    • Heavy activity in background tabs badly affects desktop Firefox’s scrolling performance[1] (much worse than other browsers — we need E10S)
    • Scrolling on infinitely-scrolling pages only appears janky when the page is waiting on additional data to be fetched
  • Inter-page navigation in Firefox can exhibit flicker, similar to other browsers
  • The Firefox UI locks up during page loading, unlike other browsers (need E10S)
  • Scrolling in desktop E10S (with heavy background tab activity) is only as good as the other browsers[1] when Firefox is in the process-per-tab configuration (dom.ipc.processCount >> 1)

[1] You can see Aaron’s scrolling measurements here: http://bit.ly/1K1ktf2

Potential scenarios to test next:
  • Check impact of different Firefox configurations on scrolling smoothness:
    • Hardware acceleration disabled
    • Accessibility enabled & disabled
    • Maybe: Multiple monitors with different refresh rate (test separately on Win 8 and Win 10)
    • Maybe: OMTC, D2D, DWrite, display & font scaling enabled vs disabled
      • If we had a Telemetry measurement of scroll performance, it would be easier to determine relevant characteristics
  • Compare Firefox scrolling & page performance on Windows 8 vs Windows 10
    • Compare Firefox vs Edge on Win 10
  • Test other sites in Alexa top 20 and during random browsing
  • Test the various scroll methods on reference sites (Avi has done some of this already): mouse wheel, mouse drag, arrow key, page down, touch screen swipe and drag, touchpad drag, touchpad two finger swipe, trackpoints (special casing for ThinkPads should be re-evaluated).
    • Check impact of pointing device drivers
  • Check performance inside Google web apps (Search, Maps, Docs, Sheets)
    • Examine benefits of Chrome’s network pre-fetcher on Google properties (e.g. Google search)
    • Browse and scroll simple pages when top Google apps are loaded in pinned tabs
  • Compare Firefox page-load & page navigation performance on HTTP/2 sites (Facebook & Twitter, others?)
  • Check whether our cache and pre-connector benefit perceived performance, compare vs competition
Issues to report to Platform teams
  • Worse Firefox scrolling performance with laptop in power-save mode
  • Scrolling Twitter feed with YouTube HTML5 videos is jankier in Firefox
  • bug 1174899: Scrolling on Facebook profile with many HTML5 videos eventually causes 100% CPU usage on a Necko thread + heavy CPU usage on main thread + the page stops loading additional posts (videos)
Tooling questions:
  • Find a way to measure when the page is “settled down” after loading, i.e. time until the last page-loading event. This could be measured by the page itself (similar to Octane), which would allow us to compare different browsers
  • How to reproduce dynamic websites offline?
  • Easiest way to record demos of bad Firefox & Fennec performance vs other browsers?
Decisions made so far:
  • Exclusively focus on Android 5.0+ and Windows 7, 8.1 & 10
  • Devote the most attention to single-process Nightly on desktop, but do some checks of E10S performance as well
  • Desktop APZC and network pre-fetcher are a long time away, don’t wait

Doug Belshaw: Web Literacy Map v2.0

Mozilla planet - Fri, 26/06/2015 - 12:35

I’m delighted to see that development of Mozilla’s Web Literacy Map is still continuing after my departure a few months ago.

Read, Write, Participate

Mark Surman, Executive Director of the Mozilla Foundation, wrote a blog post outlining the way forward and a working group has been put together to drive forward further activity. It’s great to see Mark Lesser being used as a bridge to previous iterations.

Another thing I’m excited to see is the commitment to use Open Badges to credential Web Literacy skills. We tinkered with badges a little last year, but hopefully there’ll be a new impetus around this.

The approach to take the Web Literacy Map from version 1.5 to version 2.0 is going to be different from the past few years. It’s going to be a ‘task force’ approach with people brought in to lend their expertise rather than a fully open community approach. That’s probably what’s needed at this point.

I’m going to give myself some space to, as my friend and former colleague Laura Hilliger said, 'disentangle myself’ from the Web Literacy Map and wider Mozilla work. However, I wish them all the best. It’s important work.

Comments? Questions? I’m @dajbelshaw on Twitter or you can email: mail@dougbelshaw.com


Planet Mozilla Interns: Willie Cheong: Maximum Business Value; Minimum Effort

Mozilla planet - Fri, 26/06/2015 - 10:44


Dineapple is an online food delivery gig that I have been working on recently. In essence, a new food item is introduced periodically, and interested customers place orders online to have their food delivered the next day.

Getting down to the initial build of the online ordering site, I started to think about the technical whats and hows. For this food delivery service, a customer places an order by making an online payment. The business then needs to know of this transaction, and have it linked to the contact information of the customer.

Oh okay, easy. Of course I’ll set up a database. I’ll store the order details inside a few tables. Then I’ll build a mini application to extract this information and generate a daily report for the cooks and delivery people to operate on. Then I started to build these things in my head. But wait, there is a simpler way to get the operations people aware of orders. We could just send an email to the people on every successful transaction to notify them of a new incoming order. But this means the business loses visibility and data portability. Scraping for relational data from a bunch of automated emails, although possible, will be a nightmare. The business needs to prepare to scale, and that means analytics.

Then I saw something that now looks so obvious I feel pretty embarrassed. Payments on the ordering service are processed using Stripe. When the HTTP request to process a payment is made, Stripe provides an option to submit additional metadata that will be tagged to the payment. There is a nice interface on the Stripe site that allows business owners to do some simple analytics on the payment data. There is also the option to export all of that data (and metadata) to CSV for more studying.
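As a sketch of what this looks like in practice (the field names below are my own invention, not Dineapple's actual schema), the order details simply ride along as metadata on the charge request, and Stripe's dashboard and CSV export take care of the reporting:

```python
def build_charge_payload(order):
    """Build the parameters for stripe.Charge.create(). The metadata
    dict is arbitrary key/value data that Stripe tags to the payment
    and exposes in its dashboard and CSV exports."""
    return {
        "amount": order["total_cents"],      # Stripe amounts are in cents
        "currency": "cad",
        "source": order["card_token"],       # token from Stripe.js checkout
        "metadata": {
            "customer_name": order["name"],
            "phone": order["phone"],
            "delivery_address": order["address"],
            "menu_item": order["item"],
        },
    }

payload = build_charge_payload({
    "total_cents": 1500,
    "card_token": "tok_visa",
    "name": "Jane Doe",
    "phone": "555-0100",
    "address": "123 Main St",
    "item": "Pineapple fried rice",
})
# The live call would then be: stripe.Charge.create(**payload)
```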

Forget about ER diagrams, forget about writing custom applications, forget about using automated emails to generate reports. Stripe is capable of doing the reporting for Dineapple, we just had to see a way to adapt the offering to fit the business’s use case.

Beyond operations reporting through Stripe, there are so many existing web services out there that can be integrated into Dineapple. Just to name a few: an obvious one would be to use Google Analytics to study site traffic, and customers' reviews of food and services could (and probably should) be handled through Yelp. Note that none of these outsourced alternatives, although significantly easier to implement, compromise on the quality of the solution for the business. Because at the end of the day, all that really matters is that the business gets what it needs.

So here’s a reminder to my future self. Spend a little more time looking around for simpler alternatives that you can take advantage of before jumping into development for a custom solution.

Engineers are builders by instinct, but that isn’t always a good thing.


Emma Irwin: #Mozlove for Tad

Mozilla planet - Fri, 26/06/2015 - 08:08

I truly believe that to make Mozilla a place worth ‘hanging your hat’, we need to get better at being ‘forces of good for each other’. I like to think this idea is catching on, but only time will tell.

This month’s #mozlove post is for Tom Farrow AKA ‘Tad’, a long-time contributor to numerous initiatives across Mozilla. Although Tad’s contribution focus is in Community Dev Ops, it’s his interest in teaching youth digital literacy that first led to a crossing of our paths. You’ll probably find it interesting to know that despite being in his sixth(!!) year of contribution to Mozilla, Tad is still a High School student in Solihull, Birmingham, UK.

Tad started contributing to Mozilla after helping a friend install Firefox on their government-issued laptop, which presented some problems. He found help on SUMO, and through being helped was inspired to become a helper and contributor himself. Tad speaks fondly of starting with SUMO, of finding friends, training and mentorship.

Originally drawn to IT and DevOps contribution for the opportunity of ‘belonging to something’, Tad has become a fixture in this space, helping design hosting platforms and evolve a multi-tenant WordPress hosting service. When I asked what was most rewarding about contributing to Community Dev Ops, he pointed to the pride of innovating a quality solution.

I’m also increasingly curious about the challenges of participation and asked about this as well.  Tad expressed some frustration around ‘access and finding the right people to unlock resources’.  I think that’s probably something that speaks to the greater challenges for the Mozilla community in understanding pathways for support.

Finally my favorite question:  “How do your friends and family relate to your volunteer efforts? Is it easy or hard to explain volunteering at Mozilla?”.

I don’t really try to explain it – my parents get the general idea, and are happy I’m gaining skills in web technology.

I think it’s very cool that, in a world of ‘learn to code’ merchandizing, Tad found his opportunity to learn and grow technical skills through participation at Mozilla :)

I want to thank Tad for taking the time to chat with me, for being such an amazing contributor, and inspiration to others around the project.

* I set a reminder in my calendar every month for these posts, which this month happened to fall during Mozilla’s Work Week in Whistler. Tad is also in Whistler – make sure you look out for him and say hello!


Aki Sasaki: on configuration

Mozilla planet - Fri, 26/06/2015 - 00:14

A few people have suggested I look at other packages for config solutions. I thought I'd record some of my thoughts on the matter. Let's look at requirements first.

Requirements
  1. Commandline argument support. When running scripts, it's much faster to specify some config via the commandline than always requiring a new config file for each config change.

  2. Default config value support. If a script assumes a value works for most cases, let's make it default, and allow for overriding those values in some way.

  3. Config file support. We need to be able to read in config from a file, and in some cases, several files. Some config values are either too long and unwieldy to pass via the commandline, and some config values contain characters that would be interpreted by the shell. Plus, the ability to use diff and version control on these files is invaluable.

  4. Multiple config file type support. json, yaml, etc.

  5. Adding the above three solutions together. The order should be: default config value -> config file -> commandline arguments. (The rightmost value of a configuration item wins.)

  6. Config definition and validation. Commandline options are constrained by the options that are defined, but config files can contain any number of arbitrary key/value pairs.

  7. The ability to add groups of commandline arguments together. Sometimes families of scripts need a common set of commandline options, but also need the ability to add script-specific options. Sharing the common set allows for consistency.

  8. The ability to add config definitions together. Sometimes families of scripts need a common set of config items, but also need the ability to add script-specific config items.

  9. Locking and/or logging any changes to the config. Changing config during runtime can wreak havoc on the debuggability of a script; locking or logging the config helps avoid or mitigate this.

  10. Python 3 support, and python 2.7 unicode support, preferably unicode-by-default.

  11. Standardized solution, preferably non-company and non-language specific.

  12. All-in-one solution, rather than having to use multiple solutions.
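The layering in requirement 5 can be sketched in a few lines. This is a minimal illustration only, not any particular package's API; the `--log-level` option is a hypothetical example:

```python
import argparse
import json

def build_config(defaults, config_files, argv):
    """Layer config sources so the rightmost wins:
    default config values -> config file(s) -> commandline arguments."""
    config = dict(defaults)
    for path in config_files:
        with open(path) as f:
            config.update(json.load(f))
    parser = argparse.ArgumentParser()
    parser.add_argument("--log-level", dest="log_level")
    args = parser.parse_args(argv)
    # Only apply options the user actually passed on the commandline,
    # so unset options don't clobber file/default values with None.
    config.update({k: v for k, v in vars(args).items() if v is not None})
    return config
```

For example, `build_config({"log_level": "info"}, [], ["--log-level", "debug"])` yields `{"log_level": "debug"}`, while omitting the commandline option leaves the default in place.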

Packages and standards

argparse

Argparse is the standardized python commandline argument parser, which is why configman and scriptharness have wrapped it to add further functionality. Its main drawbacks are lack of config file support and limited validation.

  1. Commandline argument support: yes. That's what it's written for.

  2. Default config value support: yes, for commandline options.

  3. Config file support: no.

  4. multiple config file type support: no.

  5. Adding the above three solutions together: no. The default config value and the commandline arguments are placed in the same Namespace, and you have to use the parser.get_default() method to determine whether it's a default value or an explicitly set commandline option.

  6. Config definition and validation: limited. It only covers commandline option definition+validation; there's the required flag, but no "if foo is set, bar is required" type of validation. It's possible to roll your own, but that would be script-specific rather than part of the standard.

  7. Adding groups of commandline arguments together: yes. You can take multiple parsers and make them parent parsers of a child parser, if the parent parsers have specified add_help=False.

  8. Adding config definitions together: limited, as above.

  9. The ability to lock/log changes to the config: no. argparse.Namespace will take changes silently.

  10. Python 3 + python 2.7 unicode support: yes.

  11. Standardized solution: yes, for python. No for other languages.

  12. All-in-one solution: no, for the above limitations.
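Points 5 and 7 above can be sketched briefly (the option names here are hypothetical examples):

```python
import argparse

# A shared "parent" parser: families of scripts can reuse these options
# (point 7), as long as add_help=False is set on the parent.
common = argparse.ArgumentParser(add_help=False)
common.add_argument("--workers", type=int, default=4)

parser = argparse.ArgumentParser(parents=[common])
parser.add_argument("--script-only-flag", action="store_true")

args = parser.parse_args(["--workers", "8"])

# Defaults and explicit options land in the same Namespace (point 5);
# comparing against parser.get_default() is the stock way to tell them apart.
was_set = args.workers != parser.get_default("workers")
```

Here `args.workers` is 8 and `was_set` is True; had `--workers` been omitted, the Namespace would silently carry the default 4 with nothing marking it as such.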

configman

Configman is a tool written to deal with configuration in various forms, and adds the ability to transform configs from one type to another (e.g., commandline to ini file). It also adds the ability to block certain keys from being saved or output. Its argparse implementation is deeper than scriptharness' ConfigTemplate argparse abstraction.

Its main drawbacks for scriptharness usage appear to be the lack of python 3 + py2-unicode-by-default support, and being another non-standardized solution. I've given python3 porting two serious attempts so far, and I've hit a wall on the dotdict __getattr__ hack working differently on python 3. My WIP is here if someone else wants a stab at it.

  1. Commandline argument support: yes.

  2. Default config value support: yes.

  3. Config file support: yes.

  4. Multiple config file type support: yes.

  5. Adding the above three solutions together: not as far as I can tell, but since you're left with the ArgumentParser object, I imagine it'll be the same solution to wrap configman as argparse.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: yes.

  8. Adding config definitions together: not sure, but seems plausible.

  9. The ability to lock/log changes to the config: no. configman.namespace.Namespace will take changes silently.

  10. Python 3 support: no. Python 2.7 unicode support: there are enough str() calls that it looks like unicode is a second class citizen at best.

  11. Standardized solution: no.

  12. All-in-one solution: no, for the above limitations.

docopt

Docopt simplifies the commandline argument definition and prettifies the help output. However, it's purely a commandline solution, and doesn't support adding groups of commandline options together, so it appears to be oriented towards relatively simple script configuration. It could potentially be combined with json-schema definition and validation, as could the argparse-based commandline solutions, for an all-in-two solution. More on that below.

json-schema

This looks very promising for an overall config definition + validation schema. The main drawback, as far as I can see so far, is the lack of commandline argument support.

A commandline parser could generate a config object to validate against the schema. (Bonus points for writing a function to validate a parser against the schema before runtime.) However, this would require at least two definitions: one for the schema, one for the hopefully-compliant parser. Alternately, the schema could potentially be extended to support argparse settings for various items, at the expense of full standards compatibility.

There's already a python jsonschema package.

  1. Commandline argument support: no.

  2. Default config value support: yes.

  3. Config file support: I don't think directly, but anything that can be converted to a dict can be validated.

  4. Multiple config file type support: no.

  5. Adding the above three solutions together: no.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: no.

  8. Adding config definitions together: sure, you can add dicts together via update().

  9. The ability to lock/log changes to the config: no.

  10. Python 3 support: yes. Python 2.7 unicode support: I'd guess yes since it has python3 support.

  11. Standardized solution: yes, even cross-language.

  12. All-in-one solution: no, for the above limitations.
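As a toy illustration of the definition + validation idea, here is a tiny hand-rolled subset of JSON Schema (not the jsonschema package; it only understands `type`, `required`, and `properties`, and the schema keys shown are hypothetical):

```python
def validate(instance, schema):
    """Validate a config value against a tiny subset of JSON Schema."""
    types = {"object": dict, "string": str, "integer": int,
             "boolean": bool, "array": list}
    expected = schema.get("type")
    if expected is not None and not isinstance(instance, types[expected]):
        raise ValueError("expected %s, got %r" % (expected, instance))
    # Unlike argparse, arbitrary keys are allowed; only required ones are checked.
    for key in schema.get("required", []):
        if key not in instance:
            raise ValueError("missing required key: %s" % key)
    # Recurse into any sub-schemas for keys that are present.
    for key, subschema in schema.get("properties", {}).items():
        if key in instance:
            validate(instance[key], subschema)

schema = {
    "type": "object",
    "required": ["log_level"],
    "properties": {"log_level": {"type": "string"},
                   "workers": {"type": "integer"}},
}
```

With this, `validate({"log_level": "info", "workers": 4}, schema)` passes, while a config missing `log_level` (or with a non-string value for it) raises.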

scriptharness 0.2.0 ConfigTemplate + LoggingDict or ReadOnlyDict

Scriptharness currently extends argparse and dict for its config. It checks off the most boxes in the requirements list currently. My biggest worry with the ConfigTemplate is that it isn't fully standardized, so people may be hesitant to port all of their configs to it.

An argparse/json-schema solution with enough glue code in between might be a good solution. I think ConfigTemplate is sufficiently close to that that adding jsonschema support shouldn't be too difficult, so I'm leaning in that direction right now. Configman has some nice behind-the-scenes and cross-file-type support, but the python3 and __getattr__ issues are currently blockers, and it seems like a lateral move in terms of standards.

An alternate solution may be BYOC. If the scriptharness Script takes a config object that you built from somewhere, and gives you tools that you can choose to use to build that config, that may allow for enough flexibility that people can use their preferred style of configuration in their scripts. The cost of that flexibility is familiarity between scriptharness scripts.

  1. Commandline argument support: yes.

  2. Default config value support: yes, both through argparse parsers and script initial_config.

  3. Config file support: yes. You can define multiple required config files, and multiple optional config files.

  4. Multiple config file type support: no. Mozharness had .py and .json. Scriptharness currently only supports json because I was a bit iffy about execfileing python again, and PyYAML doesn't always install cleanly everywhere. It's on the list to add more formats, though. We probably need at least one dynamic type of config file (e.g. python or yaml) or a config-file builder tool.

  5. Adding the above three solutions together: yes.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: yes.

  8. Adding config definitions together: yes.

  9. The ability to lock/log changes to the config: yes. By default, Scripts use a LoggingDict that logs runtime changes; StrictScript uses a ReadOnlyDict (same as mozharness) that prevents any changes after locking.

  10. Python 3 and python 2.7 unicode support: yes.

  11. Standardized solution: no. Extended/abstracted argparse + extended python dict.

  12. All-in-one solution: yes.
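The locking behavior in item 9 can be sketched as follows. This is in the spirit of mozharness'/scriptharness's ReadOnlyDict, not the actual implementation; a real version would also guard update(), setdefault(), and deletion:

```python
class ReadOnlyDict(dict):
    """A dict that can be locked against further item assignment."""

    def __init__(self, *args, **kwargs):
        super(ReadOnlyDict, self).__init__(*args, **kwargs)
        self._locked = False

    def lock(self):
        """Freeze the config; any later assignment raises."""
        self._locked = True

    def __setitem__(self, key, value):
        if getattr(self, "_locked", False):
            raise TypeError("config is locked; runtime changes are forbidden")
        super(ReadOnlyDict, self).__setitem__(key, value)
```

Usage: build the config, call lock() before running the script proper, and any stray `config["key"] = value` afterwards raises TypeError instead of silently changing behavior mid-run.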

Corrections, additions, feedback?

As far as I can tell there is no perfect solution here. Thoughts?

Categorieën: Mozilla-nl planet

Daily API RoundUp: Internet Game Database, Mozilla Canvas, BlockCypher - ProgrammableWeb

Nieuws verzameld via Google - do, 25/06/2015 - 23:57

ProgrammableWeb

Daily API RoundUp: Internet Game Database, Mozilla Canvas, BlockCypher
ProgrammableWeb
Mozilla Canvas API lets users edit graphics, create animations, and process videos in real time. Resources include the canvas handbook, along with specs and 11 libraries that explain how to work with canvas and SVG parsing, interactivity, game engines, ...

en meer »
Categorieën: Mozilla-nl planet

Sean McArthur: hyper v0.6

Mozilla planet - do, 25/06/2015 - 21:37

A bunch of goodies are included in version 0.6 of hyper.

Highlights
  • Experimental HTTP2 support for the Client! Thanks to the tireless work of @mlalic.
  • Redesigned Ssl support. The Server and Client can accept any implementation of the Ssl trait. By default, hyper comes with an implementation for OpenSSL, but this can now be disabled via the ssl cargo feature.
  • A thread safe Client. As in, Client is Sync. You can share a Client over multiple threads, and make several requests simultaneously.
  • Just about 90% test coverage. @winding-lines has been bumping the number ever higher.

Also, as a reminder, hyper has been following semver more closely, so breaking changes mean bumping the minor version (until 1.0). To reduce unplanned breakage, you should depend on a specific minor version, such as 0.6, and not *.
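In Cargo.toml terms (standard Cargo semver semantics):

```toml
[dependencies]
# "0.6" means ">=0.6.0, <0.7.0": patch updates are picked up,
# but a breaking 0.7 release is not.
hyper = "0.6"
```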

Categorieën: Mozilla-nl planet

Mozilla wants to make it easier to port Chrome extensions to Firefox - KultureGeek

Nieuws verzameld via Google - do, 25/06/2015 - 20:04

KultureGeek

Mozilla wants to make it easier to port Chrome extensions to Firefox
KultureGeek
To try to attract developers, Mozilla wants to take inspiration from Google's extensions API and integrate it into Firefox. This system would let developers port a Chrome extension with very few modifications ...

Google Nieuws
Categorieën: Mozilla-nl planet

Mozilla wants to make Chrome Extension ports to Firefox easier - Ghacks Technology News

Nieuws verzameld via Google - do, 25/06/2015 - 08:35

Ghacks Technology News

Mozilla wants to make Chrome Extension ports to Firefox easier
Ghacks Technology News
Add-ons are one of the cornerstones of the Firefox web browser. I know several Firefox users who stick with the browser because of extensions that they don't want to browse the web without with. Some developers moved from Firefox to Chrome when Google ...

en meer »
Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Rust 1.1 stable, the Community Subteam, and RustCamp

Mozilla planet - do, 25/06/2015 - 02:00

We’re happy to announce the completion of the first release cycle after Rust 1.0: today we are releasing Rust 1.1 stable, as well as 1.2 beta.

Read on for details of the releases, as well as some exciting new developments within the Rust community.

What’s in 1.1 Stable

One of the highest priorities for Rust after its 1.0 release has been improving compile times. Thanks to the hard work of a number of contributors, Rust 1.1 stable provides a 32% improvement in compilation time over Rust 1.0 (as measured by bootstrapping).

Another major focus has been improving error messages throughout the compiler. Again thanks to a number of contributors, a large portion of compiler errors now include extended explanations accessible using the --explain flag.

Beyond these improvements, the 1.1 release includes a number of important new features:

  • New std::fs APIs. This release stabilizes a large set of extensions to the filesystem APIs, making it possible, for example, to compile Cargo on stable Rust.
  • musl support. It’s now possible to target musl on Linux. Binaries built this way are statically linked and have zero dependencies. Nightlies are on the way.
  • cargo rustc. It’s now possible to build a Cargo package while passing arbitrary flags to the final rustc invocation.

More detail is available in the release notes.

What’s in 1.2 Beta

Performance improvements didn’t stop with 1.1 stable. Benchmark compilations are showing an additional 30% improvement from 1.1 stable to 1.2 beta; Cargo’s main crate compiles 18% faster.

In addition, parallel codegen is working again, and can substantially speed up large builds in debug mode; it gets another 33% speedup on bootstrapping on a 4 core machine. It’s not yet on by default, but will be in the near future.

Cargo has also seen some performance improvements, including a 10x speedup on large “no-op” builds (from 5s to 0.5s on Servo), and shared target directories that cache dependencies across multiple packages.

In addition to all of this, 1.2 beta includes our first support for MSVC (Microsoft Visual C): the compiler is able to bootstrap, and we have preliminary nightlies targeting the platform. This is a big step for our Windows support, making it much easier to link Rust code against code built using the native toolchain. Unwinding is not yet available – code aborts on panic – but the implementation is otherwise complete, and all rust-lang crates are now testing on MSVC as a first-tier platform.

Rust 1.2 stable will be released six weeks from now, together with 1.3 beta.

Community news

In addition to the above technical work, there’s some exciting news within the Rust community.

In the past few weeks, we’ve formed a new subteam explicitly devoted to supporting the Rust community. The team will have a number of responsibilities, including aggregating resources for meetups and other events, supporting diversity in the community through leadership in outreach, policies, and awareness-raising, and working with our early production users and the core team to help guide prioritization.

In addition, we’ll soon be holding the first official Rust conference: RustCamp, on August 1, 2015, in Berkeley, CA, USA. We’ve received a number of excellent talk submissions, and are expecting a great program.

Contributors to 1.1

As with every release, 1.1 stable is the result of work from an amazing and active community. Thanks to the 168 contributors to this release:

  • Aaron Gallagher
  • Aaron Turon
  • Abhishek Chanda
  • Adolfo Ochagavía
  • Alex Burka
  • Alex Crichton
  • Alexander Polakov
  • Alexis Beingessner
  • Andreas Tolfsen
  • Andrei Oprea
  • Andrew Paseltiner
  • Andrew Straw
  • Andrzej Janik
  • Aram Visser
  • Ariel Ben-Yehuda
  • Avdi Grimm
  • Barosl Lee
  • Ben Gesoff
  • Björn Steinbrink
  • Brad King
  • Brendan Graetz
  • Brian Anderson
  • Brian Campbell
  • Carol Nichols
  • Chris Morgan
  • Chris Wong
  • Clark Gaebel
  • Cole Reynolds
  • Colin Walters
  • Conrad Kleinespel
  • Corey Farwell
  • David Reid
  • Diggory Hardy
  • Dominic van Berkel
  • Don Petersen
  • Eduard Burtescu
  • Eli Friedman
  • Erick Tryzelaar
  • Felix S. Klock II
  • Florian Hahn
  • Florian Hartwig
  • Franziska Hinkelmann
  • FuGangqiang
  • Garming Sam
  • Geoffrey Thomas
  • Geoffry Song
  • Graydon Hoare
  • Guillaume Gomez
  • Hech
  • Heejong Ahn
  • Hika Hibariya
  • Huon Wilson
  • Isaac Ge
  • J Bailey
  • Jake Goulding
  • James Perry
  • Jan Andersson
  • Jan Bujak
  • Jan-Erik Rediger
  • Jannis Redmann
  • Jason Yeo
  • Johann
  • Johann Hofmann
  • Johannes Oertel
  • John Gallagher
  • John Van Enk
  • Jordan Humphreys
  • Joseph Crail
  • Kang Seonghoon
  • Kelvin Ly
  • Kevin Ballard
  • Kevin Mehall
  • Krzysztof Drewniak
  • Lee Aronson
  • Lee Jeffery
  • Liigo Zhuang
  • Luke Gallagher
  • Luqman Aden
  • Manish Goregaokar
  • Marin Atanasov Nikolov
  • Mathieu Rochette
  • Mathijs van de Nes
  • Matt Brubeck
  • Michael Park
  • Michael Rosenberg
  • Michael Sproul
  • Michael Wu
  • Michał Czardybon
  • Mike Boutin
  • Mike Sampson
  • Ms2ger
  • Nelo Onyiah
  • Nicholas
  • Nicholas Mazzuca
  • Nick Cameron
  • Nick Hamann
  • Nick Platt
  • Niko Matsakis
  • Oliver Schneider
  • P1start
  • Pascal Hertleif
  • Paul Banks
  • Paul Faria
  • Paul Quint
  • Pete Hunt
  • Peter Marheine
  • Philip Munksgaard
  • Piotr Czarnecki
  • Poga Po
  • Przemysław Wesołek
  • Ralph Giles
  • Raphael Speyer
  • Ricardo Martins
  • Richo Healey
  • Rob Young
  • Robin Kruppe
  • Robin Stocker
  • Rory O’Kane
  • Ruud van Asseldonk
  • Ryan Prichard
  • Sean Bowe
  • Sean McArthur
  • Sean Patrick Santos
  • Shmuale Mark
  • Simon Kern
  • Simon Sapin
  • Simonas Kazlauskas
  • Sindre Johansen
  • Skyler
  • Steve Klabnik
  • Steven Allen
  • Steven Fackler
  • Swaroop C H
  • Sébastien Marie
  • Tamir Duberstein
  • Theo Belaire
  • Thomas Jespersen
  • Tincan
  • Ting-Yu Lin
  • Tobias Bucher
  • Toni Cárdenas
  • Tshepang Lekhonkhobe
  • Ulrik Sverdrup
  • Vadim Chugunov
  • Valerii Hiora
  • Wangshan Lu
  • Wei-Ming Yang
  • Wojciech Ogrodowczyk
  • Xuefeng Wu
  • York Xiang
  • Young Wu
  • bors
  • critiqjo
  • diwic
  • gareins
  • inrustwetrust
  • jooert
  • klutzy
  • kwantam
  • leunggamciu
  • mdinger
  • nwin
  • parir
  • pez
  • robertfoss
  • sinkuu
  • tynopex
  • らいどっと
Categorieën: Mozilla-nl planet

Yunier José Sosa Vázquez: cuentaFox 3.1.1 available

Mozilla planet - do, 25/06/2015 - 01:17

A few days after the release of the version that fixed the problem with the service used to fetch quota status, cuentaFox 3.1.1 is here.

What's new?
  • The list of all users who have stored their passwords in Firefox is now displayed.

v31.1-userlist

  • Usage alerts are now displayed, but without icons, since adding an icon kept them from showing at all (tested on Linux).

v3.1.1-alertas

  • Several minor bugs were also fixed.
Signing the add-on

Starting with Firefox 41, add-on management in the browser will change, and only add-ons signed by Mozilla can be installed. An add-on signed by Mozilla means more security for users against malicious extensions and third-party programs that try to install add-ons in Firefox.

To be ready when Firefox 41 arrives, we have submitted cuentaFox for review on AMO, and we will have it here soon.

Many people still use old versions: update to cuentaFox 3.1.1

The AMO statistics panel shows that many people are still using old versions that no longer work and are not recommended. We are calling on everyone to update and to spread the news about the new release.

That said, once the add-on is approved, Firefox will update it according to each user's settings. Our intention is for the add-on to update from Firefoxmanía rather than from Mozilla, but the self-signed certificate and other problems prevent us from doing that.

stats-cuentafox

The username or password is incorrect

Many people have reported that when they try to fetch their data, an alert appears saying "The username is not valid or the password is incorrect", and they ask us to fix it, but we cannot. We are not responsible for the service provided by cuotas.uci.cu, and we do not know what it uses to verify those credentials.

If you want to help develop the add-on, you can go to GitLab (UCI) and clone the project, or leave a suggestion.

Install cuentaFox 3.1.1
Categorieën: Mozilla-nl planet

Daily API RoundUp: Mozilla WebVR, Yammer, CloudBoost, Diffbot Clients - ProgrammableWeb

Nieuws verzameld via Google - do, 25/06/2015 - 00:34

ProgrammableWeb

Daily API RoundUp: Mozilla WebVR, Yammer, CloudBoost, Diffbot Clients
ProgrammableWeb
The Mozilla WebVR API allows developers to access and integrate the functionality of Mozilla WebVR with other applications and devices. Some example API methods include integrating virtual reality devices with WebVR functionality, managing movements ...

Categorieën: Mozilla-nl planet

Mozilla wants to make it easier to port Chrome extensions to Firefox ... - soeren-hentzschel.at

Nieuws verzameld via Google - wo, 24/06/2015 - 23:48

Mozilla wants to make it easier to port Chrome extensions to Firefox ...
soeren-hentzschel.at
A few weeks ago, Mozilla developer Erik Vold also published "Chrome Tailor", an experimental tool on GitHub that is meant to turn Chrome extensions into Firefox add-ons. Which Chrome APIs are supported can be ...

Categorieën: Mozilla-nl planet

Matt Thompson: Updating our on-ramps for contributors

Mozilla planet - wo, 24/06/2015 - 21:46

I got to sit in on a great debrief / recap of the recent Webmaker go-to-market strategy. A key takeaway: we’re having promising early success recruiting local volunteers to help tell the story, evangelize for the product, and (crucially) feed local knowledge into making the product better. In short:

Volunteer contribution is working. But now we need to document, systematize and scale up our on-ramps.

Documenting and systematizing

It’s been a known issue that we need to update and improve our on-ramps for contributors across MoFo. They’re fragmented, out of date, and don’t do enough to spell out the value for contributors. Or celebrate their stories and successes.

We should prioritize this work in Q3. Our leadership development work, local user research, social marketing for Webmaker, Mozilla Club Captains and Regional Co-ordinators recruitment, the work the Participation Lab is doing — all of that is coming together at an opportune moment.

Ryan is a 15-year-old volunteer contributor to Webmaker for Android — and currently the second-most-active Java developer on the entire project.

Get the value proposition right

A key learning is: we need to spell out the concrete value proposition for contributors. Particularly in terms of training and gaining relevant work experience.

Don’t assume we know in advance what contributors actually want. They will tell us.

We sometimes assume contributors want something like certification or a badge — but what if what they *really* want is a personalized letter of recommendation, on Mozilla letterhead, from an individual mentor at Mozilla that can vouch for them and help them get a job, or get into a school program? Let’s listen.

An on-boarding and recruiting checklist

Here are some key steps in the process the group walked through. We can document / systematize / remix these as we go forward.

  • Value proposition. Start here first. What’s in it for contributors? (e.g., training, a letter of recommendation, relevant work experience?) Don’t skip this! It’s the foundation for doing this in a real way.
  • Role description. Get good at describing those skills and opportunities, in language people can imagine adding to their CV, personal bio or story, etc.
  • Open call. Putting the word out. Having the call show up in the right channels, places and networks where people will see and hear about it.
  • Application / matching. How do people express interest? How do we sort and match them?
  • On-boarding and training. These processes exist, but aren’t well-documented. We need a playbook for how newcomers get on-boarded and integrated.
  • Assigning to a specific team and individual mentor. So that they don’t feel disconnected or lost. This could be an expectation for all MoFo: each staff member will mentor at least one apprentice each quarter.
  • Goal-setting / tasking. Tickets or some other way to surface and co-ordinate the work they’re doing.
  • A letter of recommendation. Once the work is done. Written by their mentor. In a language that an employer / admission officer / local community members understand and value.
  • Certification. Could eventually also offer something more formal: badging, a certificate, something you could share on your LinkedIn profile, etc.
Next steps
Categorieën: Mozilla-nl planet
