- BMO now has CI, running on Treeherder: https://treeherder.mozilla.org/#/jobs?repo=bmo-master
- see https://wiki.mozilla.org/BMO/Recent_Changes for a full list
- Work has started on the backend changes needed to support automatic starring, including database simplification and unification (so each tree doesn’t have its own database). Bug 1179263 tracks this work. As a side effect, Treeherder code should become less complex and easier to maintain.
- Work has started on identifying what needs to happen to turn off Bugzilla comments for intermittents and create an alternative notification mechanism. Bug 1179310 tracks this work.
- New shortcuts for the Logviewer and for Delete Classification, plus improved classification saving
- Agreed on new design for Logviewer, bug 1182178
- Design work is in progress for collapsing chunks in Treeherder in order to reduce visual noise in bug 1163064
- Evaluating alerts generated from PerfHerder
- Improvements to compare chooser and viewer inside of PerfHerder
- Work towards building a new tab switching test (bug 1166132)
- wlach did a blog post summarizing recent notable developments in perfherder: http://wrla.ch/blog/2015/07/perfherder-update/
- Automatic publishing of reviews upon pushing
- Known bug: people using cookie auth may experience bug 1181886
- Better error message when MozReview’s Bugzilla session has expired (bug 1178811)
- Pruned user database to improve user searching (bug 1171274)
- Fix automatic reviewer selection (bug 1177454)
- Work is progressing on autoland-to-inbound (bug 1128039)
- Ability to schedule Linux64 tests on try (tests not running yet due to a couple of blockers) – bug 1171033
- Working on OSX cross-compilation, which will allow us to move OSX builds to the cloud; this will make OSX builds much faster in CI.
- Autophone detects USB lock-ups and gracefully restarts. This is a huge improvement in system reliability.
- Continued work on getting Android Talos tests ported to Autophone (bug 1170685)
- Updated manifests and mozharness configs for mochitest-chrome (bug 1026290)
- Determined total-chunks requirements for Android 4.3 Debug reftests (bug 1140471)
- Re-wrote robocop harness to significantly improve run-time efficiency (bug 1179981)
- Helped RelEng resolve some problems that were preventing them from landing mozharness in the tree. This opens the door to a lot of future dev workflow improvements, including better unification of the ways we run automated tests in continuous integration and locally. We’ve wanted this for years and it’s great to see it finally happen.
- Did some work on top of jgraham’s patch to make mach use mozlog structured logging
- We had to respond to the breakup of .tests.zip into several files to keep our Jenkins instance running.
- Getting firefox-media-tests to satisfy Tier-2 Treeherder visibility requirements involves changing how Treeherder accommodates non-buildbot jobs (e.g. bug 1182299)
- Working on running multiple tests/manifests through the reftest harness as a prelude to supporting |mach try| for more test types.
- Created a patch to move mozlog.structured to the top-level package (and what was previously there to mozlog.unstructured)
- Figured out the series of steps needed to produce a usable ThreadSanitizer-enabled Linux build on our infrastructure
- Separating out gTest into a separate job in CI – bug 1179955.
- Support has been added to mozregression for downloading inbound builds from S3 – bug 1177923. More work is needed on both mozregression and mozdownload to fully adapt them to the migration of builds off the netapp.
- Work is underway to allow the Python mozillapulse client to consume from multiple exchanges – bug 1180897.
- mozregression 0.37 has been released.
- mozci 0.8.2 now allows you to use Treeherder as a source of job data.
- More memory optimizations (motivation: releng query for Chris Atlee: query slow tests)
- Run staging environment as a stability test for production
- Changed the ETL procedure so pushing changes to prod is easier (moving toward a standard procedure)
- Import Treeherder data markup into ActiveData (motivation: characterizing test failures)
This is our weekly gathering of Mozilla's Web QA team, filled with discussion of our current and future projects, ideas, demos, and fun facts.
Did an interview with George Hulme about DevOps and security
Muntner: Thinking security testing through and automating as much as possible will yield results, but that can happen with or without devops. I’m not saying devops is invalid, rather that it alone is not responsible for good outcomes. Thinking that an approach delivers more than it really does is only a false sense of security, arguably worse than awareness of insufficient security.
Secure systems and software development practices like command-safe APIs, network-layer features in TLS, HTTP layer features like CSP, improvements in application and protocol layer firewalls, developers learning to do proper encoding for the appropriate output context, automated testing with tools like OWASP ZAP or commercial equivalents as appropriate for the type of application are all high-impact but have nothing to do with devops.
DevOps.com: Security should be part of the flow, an integral part of QA, security and functional testing.
Muntner: Security isn’t a state, it’s a process. It’s a verb, not a noun. Security ‘what’ should be part of the workflow? Security activities and tests, personnel, all of the above? Should a security organization report to the business management and governance side of management, or the technical side? And why is DevOps better for security maturity than separation of duties?
In spite of being a programmer, I’m not much of a DIY person when it comes to computer hardware. For example, I’ve never built a computer from parts, or performed maintenance more complicated than a RAM or hard drive upgrade.
One thing I’ve been doing that has a DIY air to it, though, is cleaning the dust from the inside of my desktop computer using a can of compressed air – a habit I picked up from one of my first-year university roommates.
The first time I did this, I was very hesitant, afraid that I would break something. It went smoothly, though, and I continued cleaning my computer this way regularly (about once a year) without any trouble.
Until last weekend, that is.
After last weekend’s cleaning, my computer booted up fine, and everything seemed OK, but a short while after booting it up, I stepped away from it to talk on the phone, and returned to find it mysteriously powered off.
Powering it back on led to more strangeness: the computer itself powered on fine, but the monitors were receiving no signal, just as if the computer were off.
I powered it off again, and opened the case back up to inspect the internals, thinking that perhaps the cleaning loosened or dislodged a connector; however, everything seemed to be in order.
Powering it on one more time, the monitors were working again, and everything seemed fine. I was ready to write the mysterious symptoms off as a fluke and move on to other things, but within 20 or so minutes, the computer suddenly powered off again.
This time, though, I was sitting in front of it when it did, and I got a fraction-of-a-second glimpse of a Windows dialog opening up before the power-off. I didn’t get a chance to read what it said, but it made me realize that rather than the power-off being a pure hardware failure, it could be something triggered by the OS for some reason.
So I powered on again (monitors working fine this time), and took a look at the Windows system event log, and indeed, there were “Error” entries whose times matched the sudden shutdowns. Most of the event information was pretty cryptic, but once I realized you can double-click on the event to get more details, there was a descriptive message: “System shutdown due to graphics card overheating”.
That explained why the computer was shutting down after running for a short while, and also why the monitors weren’t engaging that one time I powered it back on (the graphics card must not have had a chance to cool down enough). It also gave me a direction to continue investigating in.
I researched the problem of graphics cards overheating a bit, and found that the problem was commonly caused by a fan malfunctioning, or airflow being obstructed by dust.
So I powered off again, opened up the case, and inspected the fans. I saw two: a case fan and a CPU fan (and possibly one inside the PSU, but that was enclosed so I wasn’t sure); the graphics card didn’t appear to have its own fan. The fans seemed to be in order; to be sure, I powered the computer on with the case open and verified that the fans were spinning fine. Nor did I detect any obstruction to airflow.
Nonetheless, the overheating and subsequent shutdown recurred.
I downloaded a program to monitor the internal temperatures of the computer, and verified that the graphics card did indeed get very hot – while the CPU temperatures remained around 40-45°C, the graphics card’s temperature would slowly rise over time, reaching close to 110°C, which seemed to be the point where the shutdown was triggered.
Determined to get to the bottom of the issue, I opened the case again and decided to try to remove the graphics card and inspect it more closely; I never got around to removing it, though, because in the process I discovered the cause of the problem.
It turns out the graphics card did have its own fan: a small one, oriented horizontally, built into the bottom of the platform that held the card. You had to be looking at it from underneath, which I didn’t do before, to see it.
This fan wasn’t spinning, and it was readily apparent why: there were large clumps of dust in it, too clumped together to have been dislodged by the compressed air. In fact, most likely the compressed air treatment caused additional dust from further above to collect there, pushing the fan over the edge to the point where it couldn’t spin any more.
Cleaning out the dust with some tweezers, the fan started working again, the graphics card stayed cool, and all was well.
This sort of problem and diagnosis is probably very trivial for a lot of tinkerers, but for me it was exploring new ground. I’m glad that I persisted in fixing the problem myself and didn’t resort to bringing my computer in to a repair shop.
As of today, ~15.6% of commits landing in Firefox in July have gone through MozReview or have been produced on machines that have used MozReview. This is still a small percentage of overall commits. But, signs are that the percentage is going up. Last month, about half as many commits exhibited the same signature. It's only July 16 and we've already passed the total from June.
What I find interesting is the differences between commits that have gone through MozReview versus the rest. When you look at the diff statistics (a quick proxy of change size), we find that MozReview commits tend to be smaller. The median adds as reported by diff stat (basically lines that were changed) is 12 for MozReview versus 17 elsewhere. The average is 58 for MozReview versus 100 elsewhere. For number of files modified, MozReview averages 2.59 versus elsewhere's 2.71. (These numbers exclude some specific large commits that appeared to be bulk imports of external projects and drove up the non-MozReview figures.)
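The kind of comparison above is easy to reproduce once you have per-commit added-line counts. The numbers below are made up purely for illustration (they are chosen so the summary statistics happen to match the medians and means reported here), not real data:

```python
from statistics import mean, median

# Illustrative per-commit "lines added" counts, NOT real data; chosen so
# the summary statistics match the figures reported above.
mozreview_adds = [5, 8, 12, 12, 20, 291]  # commits that went through MozReview
other_adds = [9, 15, 17, 40, 419]         # commits that landed elsewhere

print(f"MozReview: median={median(mozreview_adds):g} mean={mean(mozreview_adds):g}")
print(f"elsewhere: median={median(other_adds):g} mean={mean(other_adds):g}")
# MozReview: median=12 mean=58
# elsewhere: median=17 mean=100
```

Note how a couple of large outlier commits can pull the mean far away from the median, which is why both are worth reporting.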
It's entirely possible the root cause behind the discrepancy is a side-effect of the population of MozReview users: perhaps MozReview users just write smaller commits. However, I'd like to think it's because MozReview makes it easier to manage multiple commits and people are taking advantage of that (this is an explicit design goal of MozReview). Whatever the root cause, I'm glad diffs are smaller. As I've written about before, smaller commits are easier to review and land, thus enabling projects to move faster.
I have a quarterly goal to remove the requirement for a Mozilla LDAP account to push to MozReview. That will allow first time contributors to use MozReview. This will be a huge win, as we can do much more magic in the MozReview world than we can from vanilla Bugzilla (automatic bug filing, automatic reviewer assignment, etc). Unofficially, I'd like to have more than 50% of Firefox commits go through MozReview by the end of the year.
Many “free” wifi hotspots give you a limited time per computer. If you’re traveling light and forgot to bring extra devices, it’s easy to give a Linux laptop multiple personalities:

```shell
$ ip link
1: lo
2: wlp4s0
3: enp0s25
$ ip link set dev wlp4s0 down
$ macchanger -r wlp4s0
$ ip link set dev wlp4s0 up
```
... And then connect to the wifi and jump through its silly captive portal hoops again!
Changing your MAC address occasionally can be part of a healthy security diet, making your device slightly more difficult to track, as well.
I’m blogging about the development of a new product at Mozilla; look here for my other posts in this series
Another late-binding scheme that is already necessary is to get away from direct protocol matching when a new object shows up in a system of objects. In other words, if someone sends you an object from halfway around the world it will be unusual if it conforms to your local protocols. At some point it will be easier to have it carry even more information about itself–enough so its specifications can be “understood” and its configuration into your mix done by the more subtle matching of inference.
This higher computational finesse will be needed as the next paradigm shift–that of pervasive networking–takes place over the next five years. Objects will gradually become active agents and will travel the networks in search of useful information and tools for their managers. Objects brought back into a computational environment from halfway around the world will not be able to configure themselves by direct protocol matching as do objects today. Instead, the objects will carry much more information about themselves in a form that permits inferential docking. Some of the ongoing work in specification can be turned to this task.
An object, sent over the network; it does not exactly have a common protocol, class, or API, but enough information so it can be understood, matched up with some function or purpose according to inference. We could also assume given this is from Alan Kay that the vision here is that code, not just data, is part of the object and information (though to consider code to be information: that is quite a challenge to our modern sensibilities).
When I read this, it struck me that we have these objects all around us. The web page: remote, transferable, transformable, embodying functionality and data, with rich information suitable for inference.
The web page has a kind of minimal protocol, though nothing is entirely forbidden in how it is interpreted. For instance the page is named in its <title>. But probably it has a better name in its <meta name=og:title>, should one exist; nothing is truly formal except by how it will be conventionally interpreted. The protocol is flexible. It has internal but opaque state. The object can initiate activity in a few ways, primarily XMLHttpRequest and a small number of APIs available to it. The page receives copious input in the form of events.
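As a toy sketch of that informal naming protocol (the class and function names here are my own, not from any spec): a consumer that prefers og:title and falls back to <title>, using only Python’s standard library:

```python
from html.parser import HTMLParser

class TitleInferrer(HTMLParser):
    """Infer a page's "best" name: prefer an og:title meta tag, fall back to <title>."""

    def __init__(self):
        super().__init__()
        self.og_title = None
        self.title = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Open Graph formally uses property="og:title", but pages in the
        # wild also write name="og:title"; accept both.
        if tag == "meta" and "og:title" in (attrs.get("property"), attrs.get("name")):
            self.og_title = attrs.get("content")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def infer_name(html):
    parser = TitleInferrer()
    parser.feed(html)
    return parser.og_title or (parser.title.strip() if parser.title else None)

page = '<head><title>t</title><meta name="og:title" content="A Better Name"></head>'
print(infer_name(page))  # A Better Name
```

Nothing forces a page to honor this convention; the “protocol” is just how consumers have agreed to interpret what they find.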
It’s an impoverished object in so many ways. And it’s hardly what you would call universal, it’s always representing visual pages for the browser. When programming if the browser isn’t our intended audience then we choose something like JSON or REST: one dead data, one a possessed and untransferable object (I would assert that in REST the object is the server and not the document).
And yet the web page is an incredible object! Web pages are sophisticated and well cared for. Our understanding of them is meticulously documented, including the ambiguity. The web stack is something that has not just been “defined” or “fixed”, but also discovered. Web pages contain gateways into a tremendous number of systems, defined around a surprisingly small set of operations.
But we don’t look at them as objects, we don’t try to deduce or infer much about them. They don’t look like the objects we would define were we to design such a system. But if we shift our gaze from design to discovery then the wealth becomes apparent: these might not be the objects we would ask for, but given the breadth and comprehensiveness of web pages they are the objects we should use. And they actually work, they do a ton of useful things.
Stepping back from the specific product of PageShot, this is the broad direction that excites me: to understand and make use of these objects that are all around us. (Objects to which Mozilla, with its user agent, has unique access.) But we need to look more broadly at what we can do with these objects. PageShot tries one fairly small thing: capture the visual state at one moment, maybe do something with that state. If we just had a handful of these operations, exposed properly (not trapped in the depths of monolithic browsers) I think there are some incredible things to be done. Maybe even a way to bridge from the ad hoc to something more formal; as crazy as the web page execution model seems, it has some nice features, and is the widest deployed sandboxing execution model we have.
I find this all exciting, but I am somewhat half-hearted in my excitement. Reading The Early History Of Smalltalk there’s a certain spirit to their work that I love and often despair at recreating. There is a visionary aspect, but I think more importantly they took a holistic approach. There’s something exciting about opening your mind to far-off concepts (a vision) and then trying to tie them together creatively, trying different approaches in an effort to maintain simplicity and avoid compromises. The computing systems they worked on were like Microworlds of their own creation: they could redefine problems, throw away state, reinvent any interface they chose. And maybe that is also available to us: only when we hopelessly despair about problems we cannot fix are we trapped by our legacy. That is, if you accept the web as it is there is a freedom, an agency in that, because you’ve put aside the things you can’t change.
I suspect Alan Kay would take a dim view of this whole notion. He is not a fan of the web. Another observation from that history:
Four techniques used together—persistent state, polymorphism, instantiation, and methods-as-goals for the object—account for much of the power. None of these require an “object-oriented language” to be employed—ALGOL 68 can almost be turned to this style—an OOPL merely focuses the designer’s mind in a particular fruitful direction. However, doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming.
I can’t even begin to phrase web pages in these terms. State is a mess: much hosted on remote servers, some in the URL, some in the process of the running page, some in cookies or localStorage, all of it constantly being copied and thrown away. Is the URL the class and the HTML served over HTTP the instantiation? These are just painful contortions to find analogs. Methods-as-goals is the one that seems most interesting and challenging, because I cannot quite identify the goals behind this whole endeavour. Automation? Insight? Detection? Creation? Is it different from what Google is doing with its spiders? Is there something distinct about interpretation in the context of a user agent? And when the objects are not willing – I am proposing we bend pages to our will, wresting control from the expectations of site owners – can you do any delegation? Is there an object waiting to be smithed that encapsulates the page?
More tensions than resolutions. Wish I had time to bathe in those tensions a bit longer.
Bug 1175934 – [B2G] Add support to build blobfree images
has landed and is now available on TaskCluster:
What is blobfree? See https://developer.mozilla.org/en-US/Firefox_OS/Building#Building_a_blob_free_full_system_zip
That’s right: if you follow Bug 1166276 – (b2g-addon) [meta] Getting a B2G Installer Addon, you will see that there’s an add-on for desktop Firefox that will allow you to flash your device, and these blobfree images are to be made available to the public.
\o/ Dev team!