It’s been a very busy and successful work week here in beautiful Whistler, BC. People are taking advantage of being in the same location to meet, plan, hack, and socialize. A special thanks to Jordan for inviting us to his place in beautiful Squamish for a BBQ!
(Note: No release engineering folks were harmed by bears in the making of this work week.)

tl;dr
Whistler: Keynotes were given by our exec team and we learned we’re focusing on quality, dating our users to get to know them better, and that WE’RE GOING TO SPACE!! We also discovered that at LEGO, Everything is Awesome now that they’re thinking around the box instead of inside or outside of it. Laura’s GoFaster project sounds really exciting, and we got a shoutout from her on the way we manage the complexity of our systems. There should be internal videos of the keynotes up next week if you missed them.
Internally, we talked about Q3 planning and goals, met with our new VP, David, met with our CEO, Chris, presented some lightning talks, and did a bunch of cross-group planning/hacking. Dustin, Kim, and Morgan talked to folks at our booth at the Science Fair. We had a cool banner and some cards (printed by Dustin) that we could hand out to tell people about try. SHIP IT!
Taskcluster: Great news; the TaskCluster team is joining us in Platform! There was lots of evangelism about TaskCluster and interest from a number of groups. There were some good discussions about operationalizing taskcluster as we move towards using it for Firefox automation in production. Pete also demoed the Generic Worker!
Puppetized Windows in AWS: Rob got the nxlog puppet module done. Mark is working on hg and NSIS puppet modules in lieu of upgrading to MozillaBuild 2.0. Jake is working on the metric-collective module. The windows folks met to discuss the future of windows package management. Q is finishing up the performance comparison testing in AWS. Morgan, Mark, and Q deployed runner to all of the try Windows hosts and one of the build hosts.
Operational: Amy has been working on some additional nagios checks. Ben, Rail, and Nick met and came up with a solid plan for release promotion. Rail and Nick worked on releasing Firefox 39 and two versions of Firefox ESR. Hal spent much of the week working with IT. Dustin and catlee got some work done on migrating treestatus to relengapi. Hal, Nick, Chris, and folks from IT, sheriffs, and dev-services debugged problems with b2g jobs. Callek deployed a new version of slaveapi. Kim, Jordan, Chris, and Ryan worked on a plan for addons. Kim worked with some new buildduty folks to bring them up to speed on operational procedures.
Thank you all, and have a safe trip home!
And here are all the details:

Taskcluster
- We got to spend some quality time with our new TaskCluster teammates, Greg, Jonas, Wander, Pete, and John. We’re all looking forward to working together more closely.
- Morgan convinced lots of folks that Taskcluster is super amazing, and now we have a lot of people excited to start hacking on it and moving their workloads to it.
- We put together a roadmap for TaskCluster in Trello and identified the blockers to turning Buildbot Scheduling off.
- Rob has pushed out the nxlog puppet module to get nxlog working in scl3 (bug 1146324). He has a follow-on bug to modify the ec2config file for AWS to reset the log-aggregator host so that we’re aggregating to the local region instead of where we instantiate the instance (like we do with linux). This will ensure we have Windows system logs in AWS (bug 1177577).
- The new version of MozillaBuild was released, and our plan was to upgrade to that on Windows (bug 1176111). An attempt at that showed that the way hg was compiled requires an external dll (likely something from cygwin), and needs to be run from bash. Since this would require significant changes, we’re going to install the old version of MozillaBuild and put upgrades of hg (bug 1177740) and NSIS on top of that (like we’re doing with GPO now). Future work will include splitting out all the packages and not using MozillaBuild.
- Jake is working on the puppet module for metric-collective, our host-level stats gathering software for Windows (similar to collectd on linux/OS X). This will give us Windows system metrics in graphite in AWS (bug 1097356).
- We met to talk about Windows packaging and how to best integrate with puppet. Rob is starting to investigate using NuGet and Chocolatey to handle this (bugs 1175133 and 1175107).
- Q spun up some additional instance types in AWS and is in the process of getting some more data for Windows performance after the network modifications we made earlier (bug 1159384).
- Jordan added a new puppetized path for all windows jobs, fixing a problem we were seeing with failing sendchanges on puppetized machines (bug 1175701).
- Morgan, Mark, and Q deployed runner to all of the try Windows hosts (bug 1055794).
- The relops team met to perform a triage of their two bugzilla queues and closed almost 20% of the open bugs as either already done or wontfix based on changes in direction.
- Amy has been working on some additional nagios checks for some Windows services and for AWS subnets filling up (bugs 1164441 and 793293).
- Ben, Rail, and Nick met and came up with a solid plan for the future of release promotion.
- Rail and Nick worked on getting Firefox 39 (and the related ESR releases) out to our end users.
- Hal spent lots of time working with IT and the MOC, improving our relationships and workflow.
- Dustin and catlee did some hacking to start the porting of treestatus to relengapi (one of the blockers to moving us out of PHX1).
- Hal, Nick, Chris, and folks from IT, sheriffs, dev-services tracked down an intermittent problem with the repo-tool impacting only b2g jobs (bug 1177190).
- Callek deployed the new version of slaveapi to support slave loans using the AWS API (bug 1177932).
- Kim, Jordan, Chris, and Ryan discussed the initial steps for future addon support.
- Coop (hey, that’s me) held down the buildduty fort while everyone else was in Whistler.
See you next week!
I'm trying very hard to get you the 38.0.1 beta by sometime next week, probably over the July 4th weekend assuming the local pyros don't burn my house down with errant illegal fireworks, but I keep hitting showstoppers while trying to dogfood it. First it was fonts and then it was Unicode input, and then the newtab crap got unstuck again, and then the G5 build worked but the 7450 build didn't, and then, and then, and then. I'm still working on the last couple of these major bugs and then I've got some additional systems to test on before I introduce them to you. There are a couple minor bugs that I won't fix before the beta because we need enough time for the localizers to do their jobs, and MP3 support is present but is still not finished, but there will be a second beta that should address most of these problems prior to our launch with 38.0.2. Be warned of two changes right away: no more tiles in the new tab page (I never liked them anyway, but they require Electrolysis now, so that's a no-no), and Check for Updates is now moved to the Help menu, congruent with regular Firefox, since keeping it in its old location now requires substantial extra code that is no longer worth it. If you can't deal with these changes, I will hurt you very slowly.
Features that did not make the cut: Firefox Hello and Pocket, and the Cisco H.264 integration. Hello and Pocket are not in the ESR, and I wouldn't support them anyway; Hello needs WebRTC, which we still don't really support, and you can count me in with the people who don't like a major built-in browser component depending exclusively on a third-party service (Pocket). As for the Cisco integration, there will never be a build of those components for Tiger PowerPC, so there. Features that did make the cut, though, are pdf.js and Reader View. Although PDF viewing is obviously pokier compared to Preview.app, it's still very convenient, generally works well enough now that we have IonPower backing it, and is much safer. Reader View, on the other hand, works very well on our old systems. You'll really like it especially on a G3 because it cuts out a lot of junk.
After that there are two toys you'll get to play with before 38.0.2 since I hope to introduce them widely with the 38 launch. More on that after the beta, but I'll whet your appetite a little: although the MacTubes Enabler is now officially retired, since as expected the MacTubes maintainer has thrown in the towel, thanks to these projects the MTE has not one but two potential successors, and one of them has other potential applications. (The QuickTime Enabler soldiers on, of course.)
Last but not least, I have decided to move the issues list and the wiki from Google Code to Github, and leave downloads with SourceForge. That transition will occur sometime late July before Google Code goes read-only on August 24th. (Classilla has already done this invisibly but I need to work on a stele so that 9.3.4 will be able to use Github effectively.) In the meantime, I have already publicly called Google a bunch of meaniepants and poopieheads for their shameful handling of what used to be a great service, so my work here is done.
A software organization wants to make a promise about its data practices – for example, “We don’t store information on your location”. They can keep that promise in two ways: code or policy.
If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone can review the code and confirm that the promise is being kept. (It’s sometimes technically possible for the company to publish source code that does one thing, and binaries which do another, but if that was spotted, there would be major reputational damage.)
Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.
But some promises can’t be kept in code. For example, you can’t simply not send the user’s IP address, which normally gives coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible, and using a policy promise rather than a code promise can speed time-to-market due to reduced software complexity and increased ability to iterate.
Question: is this distinction, about where to keep your promises, useful when designing new features?
Question: is it reasonable or misguided for geeks to prefer promises kept in code?
Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?
Aaron Klotz, Avi Halachmi and I have been studying Firefox’s performance on Android & Windows over the last few weeks as part of an effort to evaluate Firefox “content performance” and find actionable issues. We’re analyzing and measuring how well Firefox scrolls pages, loads sites, and navigates between pages. At first, we’re focusing on 3 reference sites: Twitter, Facebook, and Yahoo Search.
We’re trying to find reproducible, meaningful, and common use cases on popular sites which result in noticeable performance problems or where Firefox performs significantly worse than competitors. These use cases will be broken down into tests or profiles, and shared with platform teams for optimization. This “Content Performance” project is part of a larger organizational effort to improve Firefox quality.
I’ll be regularly posting blog posts with our progress here, but you can also track our efforts on our mailing list and IRC channel:
Generally speaking, desktop and mobile Firefox scroll as well as other browsers on reference sites when there is only a single tab loaded in a single window.
- We compared Firefox vs Chrome and IE:
- Desktop Firefox scrolling can badly deteriorate when the machine is in power-saver mode [1] (Firefox performance relative to other browsers depends on the site)
- Heavy activity in background tabs badly affects desktop Firefox’s scrolling performance [1] (much worse than other browsers — we need E10S)
- Scrolling on infinitely-scrolling pages only appears janky when the page is waiting on additional data to be fetched
- Inter-page navigation in Firefox can exhibit flicker, similar to other browsers
- The Firefox UI locks up during page loading, unlike other browsers (need E10S)
- Scrolling in desktop E10S (with heavy background tab activity) is only as good as the other browsers [1] when Firefox is in the process-per-tab configuration (dom.ipc.processCount >> 1)
[1] You can see Aaron’s scrolling measurements here: http://bit.ly/1K1ktf2

Potential scenarios to test next:
- Check impact of different Firefox configurations on scrolling smoothness:
- Hardware acceleration disabled
- Accessibility enabled & disabled
- Maybe: Multiple monitors with different refresh rate (test separately on Win 8 and Win 10)
- Maybe: OMTC, D2D, DWrite, display & font scaling enabled vs disabled
- If we had a Telemetry measurement of scroll performance, it would be easier to determine relevant characteristics
- Compare Firefox scrolling & page performance on Windows 8 vs Windows 10
- Compare Firefox vs Edge on Win 10
- Test other sites in Alexa top 20 and during random browsing
- Test the various scroll methods on reference sites (Avi has done some of this already): mouse wheel, mouse drag, arrow key, page down, touch screen swipe and drag, touchpad drag, touchpad two finger swipe, trackpoints (special casing for ThinkPads should be re-evaluated).
- Check impact of pointing device drivers
- Check performance inside Google web apps (Search, Maps, Docs, Sheets)
- Examine benefits of Chrome’s network pre-fetcher on Google properties (e.g. Google search)
- Browse and scroll simple pages when top Google apps are loaded in pinned tabs
- Compare Firefox page-load & page navigation performance on HTTP/2 sites (Facebook & Twitter, others?)
- Check whether our cache and pre-connector benefit perceived performance, compare vs competition
- Worse Firefox scrolling performance with laptop in power-save mode
- Scrolling Twitter feed with YouTube HTML5 videos is jankier in Firefox
- bug 1174899: Scrolling on Facebook profile with many HTML5 videos eventually causes 100% CPU usage on a Necko thread + heavy CPU usage on main thread + the page stops loading additional posts (videos)
- Find a way to measure when the page is “settled down” after loading, i.e. time until last page-loading event. This could be measured by the page itself (similar to Octane), which would allow us to compare different browsers
- How to reproduce dynamic websites offline?
- Easiest way to record demos of bad Firefox & Fennec performance vs other browsers?
- Exclusively focus on Android 5.0+ and Windows 7, 8.1 & 10
- Devote the most attention to single-process Nightly on desktop, but do some checks of E10S performance as well
- Desktop APZC and network pre-fetcher are a long time away, don’t wait
I’m delighted to see that development of Mozilla’s Web Literacy Map is still continuing after my departure a few months ago.
Mark Surman, Executive Director of the Mozilla Foundation, wrote a blog post outlining the way forward and a working group has been put together to drive forward further activity. It’s great to see Mark Lesser being used as a bridge to previous iterations.
Another thing I’m excited to see is the commitment to use Open Badges to credential Web Literacy skills. We tinkered with badges a little last year, but hopefully there’ll be a new impetus around this.
The approach to take the Web Literacy Map from version 1.5 to version 2.0 is going to be different from the past few years. It’s going to be a ‘task force’ approach with people brought in to lend their expertise rather than a fully open community approach. That’s probably what’s needed at this point.
I’m going to give myself some space to, as my friend and former colleague Laura Hilliger said, 'disentangle myself’ from the Web Literacy Map and wider Mozilla work. However, I wish them all the best. It’s important work.
Dineapple is an online food delivery gig that I have been working on recently. In essence, a new food item is introduced periodically, and interested customers place orders online to have their food delivered the next day.
Getting down to the initial build of the online ordering site, I started to think about the technical whats and hows. For this food delivery service, a customer places an order by making an online payment. The business then needs to know of this transaction, and have it linked to the contact information of the customer.
Oh okay, easy. Of course I’ll set up a database. I’ll store the order details inside a few tables. Then I’ll build a mini application to extract this information and generate a daily report for the cooks and delivery people to operate on. Then I started to build these things in my head. But wait, there is a simpler way to get the operations people aware of orders. We could just send an email to the people on every successful transaction to notify them of a new incoming order. But this means the business loses visibility and data portability. Scraping for relational data from a bunch of automated emails, although possible, will be a nightmare. The business needs to prepare to scale, and that means analytics.
Then I saw something that now looks so obvious I feel pretty embarrassed. Payments on the ordering service are processed using Stripe. When the HTTP request to process a payment is made, Stripe provides an option to submit additional metadata that will be tagged to the payment. There is a nice interface on the Stripe site that allows business owners to do some simple analytics on the payment data. There is also the option to export all of that data (and metadata) to CSV for more studying.
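As a sketch of how the order details ride along with the payment: with Stripe's Python client, arbitrary key/value pairs can be passed as metadata on the charge, so the payment record doubles as the order record. (The field names below are hypothetical, not Dineapple's actual schema.)

```python
def charge_params(amount_cents, token, order):
    """Build the parameters for a Stripe charge, tagging the customer's
    order details onto the payment as metadata. Everything here later
    shows up in Stripe's dashboard and CSV exports."""
    return {
        "amount": amount_cents,      # Stripe amounts are in cents
        "currency": "usd",
        "source": token,             # card token from Stripe.js
        "metadata": {
            "customer_name": order["name"],
            "delivery_address": order["address"],
            "item": order["item"],
        },
    }

# With the official client, this would be passed straight through:
#   import stripe
#   stripe.Charge.create(**charge_params(1500, token, order))
```

No database, no report generator: the operations people read the day's orders out of Stripe's own interface.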
Forget about ER diagrams, forget about writing custom applications, forget about using automated emails to generate reports. Stripe is capable of doing the reporting for Dineapple, we just had to see a way to adapt the offering to fit the business’s use case.
Beyond operations reporting through Stripe, there are so many existing web services out there that can be integrated into Dineapple. Just to name a few: an obvious one would be to use Google Analytics to study site traffic, and customers’ reviews of food and services could (and probably should) be integrated with Yelp. Note that none of these outsourced alternatives, although significantly easier to implement, compromise on the quality of the solution for the business. Because at the end of the day, all that really matters is that the business gets what it needs.
So here’s a reminder to my future self. Spend a little more time looking around for simpler alternatives that you can take advantage of before jumping into development for a custom solution.
Engineers are builders by instinct, but that isn’t always a good thing.
I truly believe, that to make Mozilla a place worth ‘hanging your hat‘, we need to get better at being ‘forces of good for each other’. I like to think this idea is catching on, but only time will tell.
This month’s #mozlove post is for Tom Farrow AKA ‘Tad’, a long-time contributor to numerous initiatives across Mozilla. Although Tad’s contribution focus is in Community Dev Ops, it’s his interest in teaching youth digital literacy that first led to a crossing of our paths. You’ll probably find it interesting to know that despite being in his sixth(!!) year of contribution to Mozilla – Tad is still a high school student in Solihull, Birmingham, UK.
Tad started contributing to Mozilla after helping a friend install Firefox on their government-issued laptop, which presented some problems. He found help on SUMO, and through being helped was inspired to become a helper and contributor himself. Tad speaks fondly of starting with SUMO, of finding friends, training, and mentorship.
Originally drawn to IT and DevOps contribution for the opportunity of ‘belonging to something’, Tad has become a fixture in this space, helping design hosting platforms and the evolution of a multi-tenant WordPress hosting platform. When I asked what was most rewarding about contributing to Community Dev Ops, he shared his pride in innovating a quality solution.
I’m also increasingly curious about the challenges of participation and asked about this as well. Tad expressed some frustration around ‘access and finding the right people to unlock resources’. I think that’s probably something that speaks to the greater challenges for the Mozilla community in understanding pathways for support.
Finally my favorite question: “How do your friends and family relate to your volunteer efforts? Is it easy or hard to explain volunteering at Mozilla?”.
I don’t really try to explain it – my parents get the general idea, and are happy I’m gaining skills in web technology.
I think it’s very cool that in a world of ‘learn to code’ merchandizing, Tad found his opportunity to learn and grow technical skills in participation at Mozilla :)
I want to thank Tad for taking the time to chat with me, for being such an amazing contributor, and inspiration to others around the project.
* I set a reminder in my calendar every month, which this month happens to be during Mozilla’s Work Week in Whistler. Tad is also in Whistler, make sure you look out for him – and say hello!
A few people have suggested I look at other packages for config solutions. I thought I'd record some of my thoughts on the matter. Let's look at requirements first.

Requirements
Commandline argument support. When running scripts, it's much faster to specify some config via the commandline than always requiring a new config file for each config change.
Default config value support. If a script assumes a value works for most cases, let's make it default, and allow for overriding those values in some way.
Config file support. We need to be able to read in config from a file, and in some cases, several files. Some config values are either too long and unwieldy to pass via the commandline, and some config values contain characters that would be interpreted by the shell. Plus, the ability to use diff and version control on these files is invaluable.
Multiple config file type support. json, yaml, etc.
Adding the above three solutions together. The order should be: default config value -> config file -> commandline arguments. (The rightmost value of a configuration item wins.)
Config definition and validation. Commandline options are constrained by the options that are defined, but config files can contain any number of arbitrary key/value pairs.
The ability to add groups of commandline arguments together. Sometimes families of scripts need a common set of commandline options, but also need the ability to add script-specific options. Sharing the common set allows for consistency.
The ability to add config definitions together. Sometimes families of scripts need a common set of config items, but also need the ability to add script-specific config items.
Locking and/or logging any changes to the config. Changing config during runtime can wreak havoc on the debuggability of a script; locking or logging the config helps avoid or mitigate this.
Python 3 support, and python 2.7 unicode support, preferably unicode-by-default.
Standardized solution, preferably non-company and non-language specific.
All-in-one solution, rather than having to use multiple solutions.
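The layering order in the requirements above can be sketched with plain argparse plus a JSON config file; this is just an illustration of the "rightmost value wins" rule, with hypothetical option names:

```python
import argparse
import json

def build_config(argv, defaults, config_file=None):
    """Layer config sources: defaults -> config file -> commandline.
    The rightmost source wins for any key set in multiple places."""
    config = dict(defaults)                      # 1. default config values
    if config_file is not None:
        with open(config_file) as f:
            config.update(json.load(f))          # 2. config file overrides defaults
    parser = argparse.ArgumentParser()
    parser.add_argument("--work-dir")
    parser.add_argument("--log-level")
    args = parser.parse_args(argv)
    # 3. commandline overrides both, but only for options actually passed
    config.update({k: v for k, v in vars(args).items() if v is not None})
    return config
```

None of the packages below gives you exactly this for free, which is part of why the comparison matters.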
Argparse is the standardized python commandline argument parser, which is why configman and scriptharness have wrapped it to add further functionality. Its main drawbacks are lack of config file support and limited validation.
Commandline argument support: yes. That's what it's written for.
Default config value support: yes, for commandline options.
Config file support: no.
Multiple config file type support: no.
Adding the above three solutions together: no. The default config value and the commandline arguments are placed in the same Namespace, and you have to use the parser.get_default() method to determine whether it's a default value or an explicitly set commandline option.
Config definition and validation: limited. It only covers commandline option definition+validation, and there's the required flag, but not an "if foo is set, bar is required" type of validation. It's possible to roll your own, but that would be script-specific rather than part of the standard.
Adding groups of commandline arguments together: yes. You can take multiple parsers and make them parent parsers of a child parser, if the parent parsers have specified add_help=False.
Adding config definitions together: limited, as above.
The ability to lock/log changes to the config: no. argparse.Namespace will take changes silently.
Python 3 + python 2.7 unicode support: yes.
Standardized solution: yes, for python. No for other languages.
All-in-one solution: no, for the above limitations.
Configman is a tool written to deal with configuration in various forms, and adds the ability to transform configs from one type to another (e.g., commandline to ini file). It also adds the ability to block certain keys from being saved or output. Its argparse implementation is deeper than scriptharness' ConfigTemplate argparse abstraction.
Its main drawbacks for scriptharness usage appear to be lack of python 3 + py2-unicode-by-default support, and for being another non-standardized solution. I've given python3 porting two serious attempts, so far, and I've hit a wall on the dotdict __getattr__ hack working differently on python 3. My wip is here if someone else wants a stab at it.
Commandline argument support: yes.
Default config value support: yes.
Config file support: yes.
Multiple config file type support: yes.
Adding the above three solutions together: not as far as I can tell, but since you're left with the ArgumentParser object, I imagine wrapping configman will look much like wrapping argparse.
Config definition and validation: yes.
Adding groups of commandline arguments together: yes.
Adding config definitions together: not sure, but seems plausible.
The ability to lock/log changes to the config: no. configman.namespace.Namespace will take changes silently.
Python 3 support: no. Python 2.7 unicode support: there are enough str() calls that it looks like unicode is a second class citizen at best.
Standardized solution: no.
All-in-one solution: no, for the above limitations.
Docopt simplifies the commandline argument definition and prettifies the help output. However, it's purely a commandline solution, and doesn't support adding groups of commandline options together, so it appears to be oriented towards relatively simple script configuration. It could potentially be added to json-schema definition and validation, as could the argparse-based commandline solutions, for an all-in-two solution. More on that below.

json-schema
This looks very promising for an overall config definition + validation schema. The main drawback, as far as I can see so far, is the lack of commandline argument support.
A commandline parser could generate a config object to validate against the schema. (Bonus points for writing a function to validate a parser against the schema before runtime.) However, this would require at least two definitions: one for the schema, one for the hopefully-compliant parser. Alternately, the schema could potentially be extended to support argparse settings for various items, at the expense of full standards compatibility.
There's already a python jsonschema package.
Commandline argument support: no.
Default config value support: yes.
Config file support: I don't think directly, but anything that can be converted to a dict can be validated.
Multiple config file type support: no.
Adding the above three solutions together: no.
Config definition and validation: yes.
Adding groups of commandline arguments together: no.
Adding config definitions together: sure, you can add dicts together via update().
The ability to lock/log changes to the config: no.
Python 3 support: yes. Python 2.7 unicode support: I'd guess yes since it has python3 support.
Standardized solution: yes, even cross-language.
All-in-one solution: no, for the above limitations.
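To make the above concrete, here's roughly what definition + validation looks like with the python jsonschema package; any dict-shaped config (read from a file, or built by a commandline parser) can be checked against the schema. The config keys are hypothetical.

```python
import jsonschema  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {
        "work_dir": {"type": "string"},
        "retries": {"type": "integer", "minimum": 0},
    },
    "required": ["work_dir"],
}

# A well-formed config validates silently...
jsonschema.validate({"work_dir": "build", "retries": 3}, schema)

# ...while a bad one raises jsonschema.ValidationError:
try:
    jsonschema.validate({"retries": -1}, schema)  # missing work_dir
except jsonschema.ValidationError:
    pass
```

The schema itself is plain JSON, which is what makes this a cross-language standard: the same schema file could validate configs for scripts written in any language.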
Scriptharness currently extends argparse and dict for its config. It checks off the most boxes in the requirements list currently. My biggest worry with the ConfigTemplate is that it isn't fully standardized, so people may be hesitant to port all of their configs to it.
An argparse/json-schema solution with enough glue code in between might be a good solution. I think ConfigTemplate is sufficiently close to that that adding jsonschema support shouldn't be too difficult, so I'm leaning in that direction right now. Configman has some nice behind the scenes and cross-file-type support, but the python3 and __getattr__ issues are currently blockers, and it seems like a lateral move in terms of standards.
An alternate solution may be BYOC. If the scriptharness Script takes a config object that you built from somewhere, and gives you tools that you can choose to use to build that config, that may allow for enough flexibility that people can use their preferred style of configuration in their scripts. The cost of that flexibility is familiarity between scriptharness scripts.
Commandline argument support: yes.
Default config value support: yes, both through argparse parsers and script initial_config.
Config file support: yes. You can define multiple required config files, and multiple optional config files.
Multiple config file type support: no. Mozharness had .py and .json. Scriptharness currently only supports json because I was a bit iffy about execfileing python again, and PyYAML doesn't always install cleanly everywhere. It's on the list to add more formats, though. We probably need at least one dynamic type of config file (e.g. python or yaml) or a config-file builder tool.
Adding the above three solutions together: yes.
Config definition and validation: yes.
Adding groups of commandline arguments together: yes.
Adding config definitions together: yes.
The ability to lock/log changes to the config: yes. By default Scripts use LoggingDict, which logs runtime changes; StrictScript uses a ReadOnlyDict (same as mozharness) that prevents any changes after locking.
Python 3 and python 2.7 unicode support: yes.
Standardized solution: no. Extended/abstracted argparse + extended python dict.
All-in-one solution: yes.
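The lock/log behavior is simple enough to sketch in a few lines; this is just the idea, not scriptharness' actual implementation (a real version would also have to override update(), setdefault(), etc., which bypass __setitem__ on dict subclasses):

```python
import logging

class LoggingDict(dict):
    """A dict that logs every runtime change, so config mutations
    show up in the script's log."""
    def __setitem__(self, key, value):
        logging.info("config change: %s=%r (was %r)", key, value, self.get(key))
        super().__setitem__(key, value)

class ReadOnlyDict(dict):
    """A dict that refuses any changes once locked."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._locked = False

    def lock(self):
        self._locked = True

    def __setitem__(self, key, value):
        if self._locked:
            raise TypeError("config is locked; runtime changes are not allowed")
        super().__setitem__(key, value)
```

The trade-off between the two is flexibility vs. reproducibility: LoggingDict lets you mutate config but leaves an audit trail, while ReadOnlyDict guarantees the config you logged at startup is the config the whole run used.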
As far as I can tell there is no perfect solution here. Thoughts?
A bunch of goodies are included in version 0.6 of hyper.

Highlights
- Experimental HTTP2 support for the Client! Thanks to the tireless work of @mlalic.
- Redesigned Ssl support. The Server and Client can accept any implementation of the Ssl trait. By default, hyper comes with an implementation for OpenSSL, but this can now be disabled via the ssl cargo feature.
- A thread safe Client. As in, Client is Sync. You can share a Client over multiple threads, and make several requests simultaneously.
- Just about 90% test coverage. @winding-lines has been bumping the number ever higher.
Also, as a reminder, hyper has been following semver more closely, and so, breaking changes mean bumping the minor version (until 1.0). So, to reduce unplanned breakage, you should probably depend on a specific minor version, such as 0.6, and not *.
We’re happy to announce the completion of the first release cycle after Rust 1.0: today we are releasing Rust 1.1 stable, as well as 1.2 beta.
Read on for details on the releases, as well as some exciting new developments within the Rust community.

What’s in 1.1 Stable
One of the highest priorities for Rust since 1.0 has been improving compile times. Thanks to the hard work of a number of contributors, Rust 1.1 stable provides a 32% improvement in compilation time over Rust 1.0 (as measured by bootstrapping).
Another major focus has been improving error messages throughout the compiler. Again thanks to a number of contributors, a large portion of compiler errors now include extended explanations accessible using the --explain flag.
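For example, given a mismatched-types error (error code E0308), the extended explanation can be printed like this (assuming a Rust 1.1 or later toolchain is installed; shown as an illustrative CLI fragment):

```
rustc --explain E0308
```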
Beyond these improvements, the 1.1 release includes a number of important new features:
- New std::fs APIs. This release stabilizes a large set of extensions to the filesystem APIs, making it possible, for example, to compile Cargo on stable Rust.
- musl support. It’s now possible to target musl on Linux. Binaries built this way are statically linked and have zero dependencies. Nightlies are on the way.
- cargo rustc. It’s now possible to build a Cargo package while passing arbitrary flags to the final rustc invocation.
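The cargo rustc feature mentioned above looks like this in practice: flags after `--` go only to the final rustc invocation for your package, while dependencies are compiled normally (illustrative CLI fragment; the `-C lto` flag is just one example of an arbitrary rustc flag):

```
cargo rustc -- -C lto
```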
More detail is available in the release notes.

What’s in 1.2 Beta
Performance improvements didn’t stop with 1.1 stable. Benchmark compilations are showing an additional 30% improvement from 1.1 stable to 1.2 beta; Cargo’s main crate compiles 18% faster.
In addition, parallel codegen is working again, and can substantially speed up large builds in debug mode; it gets another 33% speedup on bootstrapping on a 4 core machine. It’s not yet on by default, but will be in the near future.
Cargo has also seen some performance improvements, including a 10x speedup on large “no-op” builds (from 5s to 0.5s on Servo), and shared target directories that cache dependencies across multiple packages.
In addition to all of this, 1.2 beta includes our first support for MSVC (Microsoft Visual C++): the compiler is able to bootstrap, and we have preliminary nightlies targeting the platform. This is a big step for our Windows support, making it much easier to link Rust code against code built using the native toolchain. Unwinding is not yet available – code aborts on panic – but the implementation is otherwise complete, and all rust-lang crates are now testing on MSVC as a first-tier platform.
Rust 1.2 stable will be released six weeks from now, together with 1.3 beta.

Community news
In addition to the above technical work, there’s some exciting news within the Rust community.
In the past few weeks, we’ve formed a new subteam explicitly devoted to supporting the Rust community. The team will have a number of responsibilities, including aggregating resources for meetups and other events, supporting diversity in the community through leadership in outreach, policies, and awareness-raising, and working with our early production users and the core team to help guide prioritization.
In addition, we’ll soon be holding the first official Rust conference: RustCamp, on August 1, 2015, in Berkeley, CA, USA. We’ve received a number of excellent talk submissions, and are expecting a great program.

Contributors to 1.1
As with every release, 1.1 stable is the result of work from an amazing and active community. Thanks to the 168 contributors to this release:
- Aaron Gallagher
- Aaron Turon
- Abhishek Chanda
- Adolfo Ochagavía
- Alex Burka
- Alex Crichton
- Alexander Polakov
- Alexis Beingessner
- Andreas Tolfsen
- Andrei Oprea
- Andrew Paseltiner
- Andrew Straw
- Andrzej Janik
- Aram Visser
- Ariel Ben-Yehuda
- Avdi Grimm
- Barosl Lee
- Ben Gesoff
- Björn Steinbrink
- Brad King
- Brendan Graetz
- Brian Anderson
- Brian Campbell
- Carol Nichols
- Chris Morgan
- Chris Wong
- Clark Gaebel
- Cole Reynolds
- Colin Walters
- Conrad Kleinespel
- Corey Farwell
- David Reid
- Diggory Hardy
- Dominic van Berkel
- Don Petersen
- Eduard Burtescu
- Eli Friedman
- Erick Tryzelaar
- Felix S. Klock II
- Florian Hahn
- Florian Hartwig
- Franziska Hinkelmann
- Garming Sam
- Geoffrey Thomas
- Geoffry Song
- Graydon Hoare
- Guillaume Gomez
- Heejong Ahn
- Hika Hibariya
- Huon Wilson
- Isaac Ge
- J Bailey
- Jake Goulding
- James Perry
- Jan Andersson
- Jan Bujak
- Jan-Erik Rediger
- Jannis Redmann
- Jason Yeo
- Johann Hofmann
- Johannes Oertel
- John Gallagher
- John Van Enk
- Jordan Humphreys
- Joseph Crail
- Kang Seonghoon
- Kelvin Ly
- Kevin Ballard
- Kevin Mehall
- Krzysztof Drewniak
- Lee Aronson
- Lee Jeffery
- Liigo Zhuang
- Luke Gallagher
- Luqman Aden
- Manish Goregaokar
- Marin Atanasov Nikolov
- Mathieu Rochette
- Mathijs van de Nes
- Matt Brubeck
- Michael Park
- Michael Rosenberg
- Michael Sproul
- Michael Wu
- Michał Czardybon
- Mike Boutin
- Mike Sampson
- Nelo Onyiah
- Nicholas Mazzuca
- Nick Cameron
- Nick Hamann
- Nick Platt
- Niko Matsakis
- Oliver Schneider
- Pascal Hertleif
- Paul Banks
- Paul Faria
- Paul Quint
- Pete Hunt
- Peter Marheine
- Philip Munksgaard
- Piotr Czarnecki
- Poga Po
- Przemysław Wesołek
- Ralph Giles
- Raphael Speyer
- Ricardo Martins
- Richo Healey
- Rob Young
- Robin Kruppe
- Robin Stocker
- Rory O’Kane
- Ruud van Asseldonk
- Ryan Prichard
- Sean Bowe
- Sean McArthur
- Sean Patrick Santos
- Shmuale Mark
- Simon Kern
- Simon Sapin
- Simonas Kazlauskas
- Sindre Johansen
- Steve Klabnik
- Steven Allen
- Steven Fackler
- Swaroop C H
- Sébastien Marie
- Tamir Duberstein
- Theo Belaire
- Thomas Jespersen
- Ting-Yu Lin
- Tobias Bucher
- Toni Cárdenas
- Tshepang Lekhonkhobe
- Ulrik Sverdrup
- Vadim Chugunov
- Valerii Hiora
- Wangshan Lu
- Wei-Ming Yang
- Wojciech Ogrodowczyk
- Xuefeng Wu
- York Xiang
- Young Wu
A few days after the release of the version that fixed the problem with the service for checking quota status, cuentaFox 3.1.1 is here.

What’s new?
- The list of all users who have stored their passwords in Firefox is now shown.
- Usage alerts are now displayed, but without icons, since adding an icon prevented them from appearing (tested on Linux).
- Some minor bugs were also fixed.

Starting with Firefox 41, some changes to add-on management will arrive in the browser, and only add-ons signed by Mozilla will be installable. An add-on being signed by Mozilla means more security for users against malicious extensions and third-party programs that try to install add-ons in Firefox.

To be ready when Firefox 41 arrives, we have submitted cuentaFox for review on AMO, and we will have it available here soon.

Many people still use old versions — update to cuentaFox 3.1.1
From the AMO statistics panel we have noticed that many people are still using old versions that do not work and are not recommended. We call on everyone to update and to spread the news about the new release.

That said, once the add-on is approved, Firefox will update it according to each user’s settings. Our idea is for the add-on to update from Firefoxmanía rather than from Mozilla, but the self-signed certificate and other problems prevent us from doing that.

Many people have reported that when they try to retrieve their data, an alert appears saying “The user is not valid or the password is incorrect,” and they ask us to fix this, but we cannot: we are not responsible for the service provided by cuotas.uci.cu, nor do we know what it uses to verify that those credentials are correct.

Install cuentaFox 3.1.1
I got to sit in on a great debrief / recap of the recent Webmaker go-to-market strategy. A key takeaway: we’re having promising early success recruiting local volunteers to help tell the story, evangelize for the product, and (crucially) feed local knowledge into making the product better. In short:
Volunteer contribution is working. But now we need to document, systematize and scale up our on-ramps.

Documenting and systematizing
It’s been a known issue that we need to update and improve our on-ramps for contributors across MoFo. They’re fragmented, out of date, and don’t do enough to spell out the value for contributors or celebrate their stories and successes.
We should prioritize this work in Q3. Our leadership development work, local user research, social marketing for Webmaker, Mozilla Club Captains and Regional Co-ordinators recruitment, the work the Participation Lab is doing — all of that is coming together at an opportune moment.

Get the value proposition right
A key learning: we need to spell out the concrete value proposition for contributors, particularly in terms of training and gaining relevant work experience.
Don’t assume we know in advance what contributors actually want. They will tell us.
We sometimes assume contributors want something like certification or a badge — but what if what they *really* want is a personalized letter of recommendation, on Mozilla letterhead, from an individual mentor at Mozilla that can vouch for them and help them get a job, or get into a school program? Let’s listen.
An on-boarding and recruiting checklist
Here’s some key steps in the process the group walked through. We can document / systematize / remix these as we go forward.
- Value proposition. Start here first. What’s in it for contributors? (e.g., training, a letter of recommendation, relevant work experience?) Don’t skip this! It’s the foundation for doing this in a real way.
- Role description. Get good at describing those skills and opportunities, in language people can imagine adding to their CV, personal bio or story, etc.
- Open call. Putting the word out. Having the call show up in the right channels, places and networks where people will see and hear about it.
- Application / matching. How do people express interest? How do we sort and match them?
- On-boarding and training. These processes exist, but aren’t well-documented. We need a playbook for how newcomers get on-boarded and integrated.
- Assigning to a specific team and individual mentor. So that they don’t feel disconnected or lost. This could be an expectation for all MoFo: each staff member will mentor at least one apprentice each quarter.
- Goal-setting / tasking. Tickets or some other way to surface and co-ordinate the work they’re doing.
- A letter of recommendation. Once the work is done. Written by their mentor. In a language that an employer / admission officer / local community members understand and value.
- Certification. Could eventually also offer something more formal. Badging, a certificate, something you could share on your linked in profile, etc.
- Co-ordinate across teams. Other teams are doing similar things — need to synch up.
- Tie this to a Mozilla Learning working group. Ben can help here.
- Make it a priority in July. Add yourself to this ticket if you’re interested.
PluotSorbet makes it possible to bring J2ME apps to Firefox OS. J2ME may be a moribund platform, but it still has non-negligible market share, not to mention a number of useful apps. So it retains residual value, which PluotSorbet can extend to Firefox OS devices.
PluotSorbet is also still under development, with a variety of issues to address. To learn more about PluotSorbet, check out its README, clone its Git repository, peruse its issue tracker, and say hello to its developers in irc.mozilla.org#pluotsorbet!
The six-week pilot version of the Mozilla Tech Speakers program wrapped up at the end of May. We learned a lot, made new friends on several continents, and collected valuable practical feedback on how to empower and support volunteer Mozillians who are already serving their regional communities as technical evangelists and educators. We’ve also gathered some good ideas for how to scale a speaker program that’s relevant and accessible to technical Mozillians in communities all over the world. Now we’re seeking your input and ideas as well.
During the second half of 2015, we’ll keep working with the individuals in our pilot group (our pilot pilots) to create technical workshops and presentations that increase developer awareness and adoption of Firefox, Mozilla, and the Open Web platform. We’ll keep in touch as they submit talk proposals and develop Content Kits during the second half of the year, work with them to identify relevant conferences and events, fund speaker travel as appropriate, make sure speakers have access to the latest information (and the latest swag to distribute), and offer them support and coaching to deliver and represent!

Why we did it
Our aim is to create a strong community-driven technical speaker development program in close collaboration with Mozilla Reps and the teams at Mozilla who focus on community education and participation. From the beginning we benefited from the wisdom of Rosana Ardila, Emma Irwin, Soumya Deb, and other Mozillian friends. We decided to stand up a “minimum viable” program with trusted, invited participants—Mozillians who are active technical speakers and are already contributing to Mozilla by writing about and presenting Mozilla technology at events around the world. We were inspired by the ongoing work of the Participation Team and Speaker Evangelism program that came before us, thanks to the efforts of @codepo8, Shezmeen Prasad, and many others.
We want this program to scale and stay sustainable, as individuals come and go, and product and platform priorities evolve. We will incorporate the feedback and learnings from the current pilot into all future iterations of the Mozilla Tech Speakers program.

What we did
Participants met together weekly on a video call to practice presentation skills and impromptu storytelling, contributed to the MDN Content Kit project for sharing presentation assets, and tried out some new tools for building informative and inspiring tech talks.
Each participant received one session of personalized one-to-one speaker coaching, using “techniques from applied improvisation and acting methods” delivered by People Rocket’s team of coaching professionals. For many participants, this was a peak experience, a chance to step out of their comfort zone, stretch their presentation skills, build their confidence, and practice new techniques.
In our weekly meetings, we worked with the StoryCraft technique, and hacked it a little to make it more geek- and tech speaker-friendly. We also worked with ThoughtBox, a presentation building tool to “organize your thoughts while developing your presentation materials, in order to maximize the effectiveness of the content.” Dietrich took ThoughtBox from printable PDF to printable web-based form, but we came to the conclusion it would be infinitely more usable if it were redesigned as an interactive web app. (Interested in building this? Talk to us on IRC. You’ll find me in #techspeakers or #devrel, with new channels for questions and communication coming soon.)
We have the idea that an intuitive portable tool like ThoughtBox could be useful for any group of Mozillians anywhere in the world who want to work together on practicing speaking and presentation skills, especially on topics of interest to developers. We’d love to see regional communities taking the idea of speaker training and designing the kind of programs and tools that work locally. Let’s talk more about this.

What we learned
The pilot was ambitious, and combined several components—speaker training, content development, creating a presentation, proposing a talk—into an aggressive six-week ‘curriculum.’ The team, which included participants in eight timezones, spanning twelve+ hours, met once a week on a video call. We kicked off the program with an introduction by People Rocket and met regularly for the next six weeks.
Between scheduled meetings, participants hung out in Telegram, a secure cross-platform messaging app, sharing knowledge, swapping stickers (the virtual kind), and becoming friends. Our original ambitious plan might have been feasible if our pilots were not also university students, working developers, and involved in multiple projects and activities. But six weeks turned out to be not quite long enough to get it all done, so we focused on speaking skills—and, as it turned out, on building a global posse of talented tech speakers.

What’s next
We’re still figuring this out. We collected feedback from all participants and discovered that there’s a great appetite to keep this going. We are still fine-tuning some of the ideas around Content Kits, and the first kits are becoming available for use and re-use. We continue to support Tech Speakers as they present at conferences, organize workshops and trainings in their communities, and create their own Mozilla Tech Speakers groups with local flavor and focus.
Stay tuned: we’ll be opening a Discourse category shortly, to expand the conversation and share new ideas.

And now for some thank yous…
I’d like to quickly introduce you to the Mozilla Tech Speakers pilot pilots. You’ll be hearing from them directly in the days, weeks, months ahead, but for today, huge thanks and hugs all around, for the breadth and depth of their contributions, their passion, and the friendships we’ve formed.
Andre Garzia, @soapdog, Mozilla Rep from Rio de Janeiro, Brazil, web developer, app developer and app reviewer, who will be speaking about Web Components at Expotec at the end of this month. Also, ask him about the Webmaker team LAN Houses program just getting started now in Rio.
Andrzej Mazur, @end3r, HTML5 game developer, active Hacks blog and MDN contributor, creator of a content kit on HTML5 Game Development for Beginners, active Firefox app developer, Captain Rogers creator, and frequent tech speaker, from Warsaw, Poland.
István “Flaki” Szmozsánszky, @slsoftworks, Mozillian and Mozilla Rep, web and mobile developer from Budapest, Hungary. Passionate about Rust, Firefox OS, the web of things. If you ask him anything “mildly related to Firefox OS, be prepared with canned food and sleeping bags, because the answer might sometimes get a bit out of hand.”
Kaustav Das Modak, @kaustavdm, Mozilla Rep from Bengaluru, India; web and app developer; open source evangelist; co-founder of Applait. Ask him about Grouphone. Or catch his upcoming talk at the JSChannel conference in Bangalore in July.
Michaela R. Brown, @michaelarbrown, self-described “feisty little scrapper,” Internet freedom fighter, and Mozillian from Michigan. Michaela will share skills in San Francisco next week at the Library Freedom Project: Digital Rights in Libraries event.
Rabimba Karanjai, @rabimba, a “full-time graduate researcher, part-time hacker and FOSS enthusiast,” and 24/7 Mozillian. Before the month is out, Rabimba will speak about Firefox OS at OpenSourceBridge in Portland and at the Hong Kong Open Source conference.
Gracias. شكرا. धन्यवाद. Köszönöm. Obrigada. Dziękuję. Thank you. #FoxYeah.
I'm at the Mozilla all-hands week in Whistler. Today (Monday) was a travel day, but many of us arrived yesterday, so today I had most of the day free and chose to go on a long hike organized by Sebastian --- because I like hiking, but also because lots of exercise outside should help me adjust to the time zone.

We took a fairly new trail, the Skywalk South trail: starting in the Alpine Meadows settlement at the Rick's Roost trailhead at the end of Alpine Way, walking up to connect with the Flank trail, turning up 19 Mile Creek to wind up through forest to Iceberg Lake above the treeline, then south up and over a ridge on the Skywalk South route, connecting with the Rainbow Ridge Loop route, then down through Switchback 27 to finally reach Alta Lake Rd. This took us a bit over 8 hours including stops. We generally hiked quite fast, but some of the terrain was tough, especially the climb up to and over the ridge heading south from Iceberg Lake, which was more of a rock-climb than a hike in places! We had to get through snow in several places. We had a group of eight: four of us who did the long version and four who did a slightly shorter version by returning from Iceberg Lake the way we came.

Though I'm tired, I'm really glad we did this hike the way we did it; the weather was perfect, the scenery was stunning, and we had a good workout. I even went for a dip in Iceberg Lake, which was a little bit crazy and well worth it!
Our work week hasn’t started yet, but since I got to Whistler early I have had lots of adventures.
First the obligatory nostril-flaring over what it is like to travel with a wheelchair. As we started the trip to Vancouver I had an interesting experience with United Airlines as I tried to persuade them that it was OK for me to fold up my mobility scooter and put it into the overhead bin on the plane. Several gate agents and other people got involved telling me many reasons why this could not, should not, and never has or would happen:
* It would not fit
* It is illegal
* The United Airlines handbook says no
* The battery has to go into the cargo hold
* Electric wheelchairs must go in the cargo hold
* The scooter might fall out and people might be injured
* People need room for their luggage in the overhead bins
The Air Carrier Access Act of 1986 says,
Assistive devices do not count against any limit on the number of pieces of carry-on baggage. Wheelchairs and other assistive devices have priority for in-cabin storage space over other passengers’ items brought on board at the same airport, if the disabled passenger chooses to preboard.
In short, I boarded the airplane, and my partner Danny folded up the scooter and put it in the overhead bin. Then the pilot came out and told me that he could not allow my battery on board. One of the gate agents had told him that I had a wet cell battery (like a car battery). It is not; it is a lithium-ion battery. In fact, airlines do not allow lithium batteries in the cargo hold! The pilot, nicely, did not demand proof that it is a lithium battery. He believed me, and everyone backed down.
The reason I am stubborn about this is that I specifically have a very portable, foldable electric wheelchair so that I can fold it up and take it with me. Twice in the past few years, I have had my mobility scooters break in the cargo hold of a plane. That made my traveling very difficult! The airlines never reimbursed me for the damage. Another reason is that baggage handlers may lose the scooter, or bring it to the baggage pickup area rather than to the gate of the plane.
Onward to Whistler! We took a shuttle and I was pleasantly (and in a way, sadly) surprised that the shuttle liaison and the driver both just treated me like any other human being. What a relief! It is not so hard! This experience is so rare for me that I am going to email the shuttle company to compliment them and their employees.
The driver, Ivan, took us through Vancouver, across a bridge that is a beautiful turquoise color with stone lions at its entrance, and through Stanley Park. I particularly noticed the tiny beautiful harbor or lagoon full of boats as we got off the bridge. Then, we went up Highway 99, or the Sea to Sky Highway, to Squamish and then Whistler.
When I travel to new places I get very excited about the geology and history and all the geography! I love to read about it beforehand or during a trip.
The Sea to Sky Highway was improved in preparation for the Winter Olympics and Paralympics in 2010. Before it was rebuilt it was much twistier with more steeply graded hills and had many bottlenecks where the road was only 2 lanes. I believe it must also have been vulnerable to landslides or flooding or falling rocks in places. As part of this deal the road signs are bilingual in English and Squamish. I read a bit on the way about the ongoing work to revitalize the Squamish language.
The highway goes past Howe Sound, on your left driving up to Squamish. It is a fjord, carved by glaciers that retreated around 11,000 years ago. Take my geological knowledge with a grain of salt (or a cube of ice), but here is a basic narrative of the history. At some point there was a shallow sea here, but a quite muddy one, not one with much of a coral reef system, and the mountains were an archipelago of island volcanoes. So there are ocean floor sediments around, somewhat metamorphosed; a lot of shale.
There is a little cove near the beginning of the highway with some boats and tumble-down buildings, called Porteau Cove. Interesting history there. Then you will notice a giant building up the side of a hill, the Britannia Mining Museum. That was once the Britannia Mines, producing billions of dollars’ worth of copper, gold, and other metals. The entire hill behind the building is honeycombed with tunnels! A lot of polluted groundwater came out of this mine, damaging the coast and the bay waters, but it was recently plugged with concrete (the Millennium Plug), which improved water quality a lot, so that shellfish, fish, and marine mammals are returning to the area. The creek also has trout and salmon returning. That’s encouraging!
Then you will see huge granite cliffs and Shannon Falls. The giant monolith made me think of El Capitan in Yosemite, and also of Enchanted Rock, a huge pink granite dome in central Texas. Granite weathers and erodes in very distinctive ways; once you know them, you can recognize a granite landform from far away! I haven’t had a chance to look close up at any rocks on this trip…. Anyway, there is a lot of granite and also basalt or some other igneous extrusive rock. Our shuttle driver told me that there is columnar basalt nearby at a place called French Fry Hill.
The mountain is called Stawamus Chief Mountain. Squamish history tells us it was a longhouse turned to stone by the Transformer Brothers. I want to read more about that! Sounds like a good story! Rock climbers love this mountain.
There are some other good stories, I think one about two sisters turned to stone lions. Maybe that is why there are stone lions on the Vancouver bridge.
The rest of the drive brought us up into the snowy mountains! Whistler is only 2000 feet above sea level but the mountains around it are gorgeous!
The “village” where tourists stay is sort of a giant, upscale, outdoor shopping mall with fake streets in a dystopian labyrinth. It is very nice and pretty but it can also feel, well, weird and artificial! I have spent some time wandering around with maps, backtracking a lot when I come to dead ends and stairways. I am also playing Ingress (in the Resistance) so I have another geographical overlay on the map.
On Sunday I got some groceries and went down paved and then gravel trails to Lost Lake. It was about an hour-long trip to get there. The lake was beautiful, cold, and full of people sunbathing, having picnics, and swimming. Lots of bikes and hikers. I (nearly) ran out of battery, then realized that the lake is next to a parking lot. I got a taxi back to the Whistler Village hotel! Better for me anyway, since the hour-long scooter trip over gravel just about killed me (I took painkiller halfway there and then was just laid flat with pain anyway). Too ambitious of an expedition, sadly. I had many thoughts about the things I enjoyed when I was younger (going down every trail, and the hardest trails, and swimming a lot). Now I can think of those memories, and I can look at beautiful things and also read all the information about an area, which is enjoyable in a different way. This is just how life is and you will all come to it when you are old. I have this sneak preview…. at 46…. When I am actually old, I will have a lot of practice and will be really good at it. Have you thought about what kind of old person you would like to be, and how you will become that person?
Today I stayed closer to home just going out to Rebagliati Park. This was fabulous since it wasn’t far away, seriously 5 minutes away! It was very peaceful. I sat in a giant Adirondack chair in a flower garden overlooking the river and a covered bridge. Watching the clouds, butterflies, bees, birds, and a bear! And of course hacking the portals (Ingress again). How idyllic! I wish I had remembered to bring my binoculars. I have not found a shop in the Whistler Mall-Village that stocks binoculars. If I find some, I will buy them.
I also went through about 30 bugs tracked for Firefox 39, approved some for uplift, wontfixed others, emailed a lot of people for work, and started the RC build going. Releng was heroic in fixing some issues with the build infrastructure! But, we planned for coverage for all of us. Good planning! I was working Sunday and Monday while everyone else travelled to get here…. Because of our release schedule for Firefox it made good sense for me to get here early. It also helps that I am somewhat rested from the trip!
I went to the conference center, found the room that is the home base for the release management and other platform teams, and got help from a conference center setup guy to lay down blue tape on the floor of the room from the doorway to the back of the room. The tape marks off a corridor to be kept clear, not full of backpacks or people standing and talking in groups, so that everyone can freely get in and out of the room. I hope this works to make the space easy for me to get around in, in my wheelchair, and it will surely benefit other people as well.
At this work week I hope to learn more about what other teams are doing, any cool projects etc, especially in release engineering and in testing and automated tools, and to catch up with the Bugzilla team too. And I will be talking a bunch about the release process, how we plan and develop new Firefox features, and so on! Looking forward now to the reception and seeing everyone who I see so much online!