Firefox wants to speed up the web with Quantum
In short: they keep getting heavier. Mozilla has decided that it needs to update its Firefox browser. To that end, the non-profit organization yesterday announced Project Quantum, which should make the browser drastically faster by the end of 2017.
Mozilla wants to put a successor to the Gecko engine in Firefox by the end of next year
Mozilla wants Firefox to have a new rendering engine by the end of next year. Mozilla calls the engine Project Quantum, and it is built for modern hardware with multicore processors and the modern web, the browser maker claims. Quantum will ...
and more »
Mozilla strives for performance boost with new Project Quantum
As the web becomes less about static webpages and more about intricate web apps, browsers are being pushed to their limits to display interactive content without lag and erratic frame rates. Today, in a blog post, Mozilla outlined the development of a ...
Firefox Quantum project aims for a radically faster web (CNET)
all 2 news articles » (Google News)
Mozilla pushes the White House to do more to prevent cyberattacks
Mozilla is pressing the White House to do more to prevent cyberattacks by revealing details of security vulnerabilities, in an effort to prevent another massive internet outage which last week left millions unable to access major websites and services.
Mozilla Presses White House to Do More to Prevent Cyberattacks (The Intercept)
'Happy' one-year anniversary, crypto petition (Politico)
all 3 news articles »
Mozilla to launch new "Quantum" web engine at the end of 2017
Next year Mozilla will launch a new web engine for Firefox that should make websites load extremely quickly, which in turn should make for a better user experience. According to the open-source developer, "Project Quantum" will ...
Over the past year, our top priority for Firefox was the Electrolysis project to deliver a multi-process browsing experience to users. Running Firefox in multiple processes greatly improves security and performance. This is the largest change we’ve ever made to Firefox, and we’ll be rolling out the first stage of Electrolysis to 100% of Firefox desktop users over the next few months.
But, that doesn’t mean we’re all out of ideas in terms of how to improve performance and security. In fact, Electrolysis has just set us up to do something we think will be really big.
We’re calling it Project Quantum.
Quantum is our effort to develop Mozilla’s next-generation web engine and start delivering major improvements to users by the end of 2017. If you’re unfamiliar with the concept of a web engine, it’s the core of the browser that runs all the content you receive as you browse the web. Quantum is all about making extensive use of parallelism and fully exploiting modern hardware. Quantum has a number of components, including several adopted from the Servo project.
The resulting engine will power a fast and smooth user experience on both mobile and desktop operating systems — creating a “quantum leap” in performance. What does that mean? We are striving for performance gains from Quantum that will be so noticeable that your entire web experience will feel different. Pages will load faster, and scrolling will be silky smooth. Animations and interactive apps will respond instantly, and be able to handle more intensive content while holding consistent frame rates. And the content most important to you will automatically get the highest priority, focusing processing power where you need it the most.
So how will we achieve all this?
Web browsers first appeared in the era of desktop PCs. Those early computers only had single-core CPUs that could only process commands in a single stream, so they truly could only do one thing at a time. Even today, in most browsers an individual web page runs primarily on a single thread on a single core.
But nowadays we browse the web on phones, tablets, and laptops that have much more sophisticated processors, often with two, four or even more cores. Additionally, it’s now commonplace for devices to incorporate one or more high-performance GPUs that can accelerate rendering and other kinds of computations.
One other big thing that has changed over the past fifteen years is that the web has evolved from a collection of hyperlinked static documents to a constellation of rich, interactive apps. Developers want to build, and consumers expect, experiences with zero latency, rich animations, and real-time interactivity. To make this possible we need a web platform that allows developers to tap into the full power of the underlying device, without having to agonize about the complexities that come with parallelism and specialized hardware.
And so, Project Quantum is about developing a next-generation engine that will meet the demands of tomorrow’s web by taking full advantage of all the processing power in your modern devices. Quantum starts from Gecko, and replaces major engine components that will benefit most from parallelization, or from offloading to the GPU. One key part of our strategy is to incorporate groundbreaking components of Servo, an independent, community-based web engine sponsored by Mozilla. Initially, Quantum will share a couple of components with Servo, but as the projects evolve we will experiment with adopting even more.
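As a toy illustration of the kind of parallelism described above (and not Mozilla's actual implementation), independent subtrees of a page can have their styles resolved concurrently, since they share no mutable state. The `compute_style` function and the node shape here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_style(node):
    # Hypothetical per-node style resolution: start from the inherited
    # parent style, then apply the node's own declarations on top.
    inherited = node.get("parent_style", {})
    return {**inherited, **node.get("declarations", {})}

def style_subtrees(subtrees):
    # Independent subtrees have no shared mutable state, so they can
    # be resolved on separate threads (or separate cores, in a real
    # engine written in a language without an interpreter lock).
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compute_style, subtrees))

subtrees = [
    {"parent_style": {"color": "black"}, "declarations": {"color": "red"}},
    {"parent_style": {"font-size": "16px"}, "declarations": {}},
]
styles = style_subtrees(subtrees)
```

The point is only structural: work that was historically done on one thread per page can be split across cores once the engine guarantees the pieces are independent.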
A number of the Quantum components are written in Rust. If you’re not familiar with Rust, it’s a systems programming language that runs blazing fast, while simplifying development of parallel programs by guaranteeing thread and memory safety. In most cases, Rust code won’t even compile unless it is safe.
We’re taking on a lot of separate but related initiatives as part of Quantum, and we’re revisiting many old assumptions and implementations. The high-level approach is to rethink many fundamental aspects of how a browser engine works. We’ll be re-engineering foundational building blocks, like how we apply CSS styles, how we execute DOM operations, and how we render graphics to your screen.
Quantum is an ambitious project, but users won’t have to wait long to start seeing improvements roll out. We’re going to ship major improvements next year, and we’ll iterate from there. A first version of our new engine will ship on Android, Windows, Mac, and Linux. Someday we hope to offer this new engine for iOS, too.
We’re confident Quantum will deliver significantly improved performance. If you’re a developer and you’d like to get involved, you can learn more about Quantum on the Mozilla wiki, and explore ways that you can contribute. We hope you’ll take the Quantum leap with us.
One of my Q3 goals was to migrate the Legacy Test Pilot users into our new Test Pilot program (some background on the two programs). The previous program was similar in that people could give feedback on experiments, but different enough that we didn't feel comfortable simply moving the users to the new program without some kind of notification and opting-in.
We decided the best way to do that was simply push out a new version of the legacy add-on which opened a new tab to the Test Pilot website and then uninstalled itself. This lets people interested in testing experiments know about the new program without being overbearing. Worst case scenario, they close the tab and have one less add-on loading every time Firefox is started.
In our planning meeting it was suggested that getting three percent of users from the old program to the new would be a reasonable compromise between realistic and optimistic. I guffawed, suggested that the audience had already opted-in once, and put 6% in as our goal and figured it would be way higher. Spoiler alert: I was wrong.
I'll spare you the pain of writing the add-on (most of the trouble was that the legacy add-on was so old you had to restart Firefox to uninstall it which really broke up the flow). On August 24th, we pushed the update to the old program.
In the daily usage graph, you can see we successfully uninstalled ourselves from several hundred thousand profiles, but we still have a long tail that doesn't seem to be uninstalling. Fortunately, AMO has really great statistics dashboards (these stats are public, by the way) and we can dig a little deeper. So, as of today there are around 150k profiles with the old add-on still installed. About half of those are reporting running the latest version (the one that uninstalls itself) and about half are disabled by the user. I suspect those halves overlap and account for 75k of the installed profiles.
The second 75k profiles are on older add-on versions and are not upgrading to a new version. There could be many reasons when we're dealing with profiles this old: they could be broken, they might not have write permissions to their profile, their network traffic could be being blocked, an internet security suite could be misbehaving, etc. I don't think there is much more we can do for these folks right now, unfortunately.
Let's talk about the overall goal though - how many people joined the new program as a result of the new tab?
As of the end of Q3, we had just over 26k conversions making for a 3.6% conversion rate. Quite close to what was suggested in the original meeting by the people who do this stuff for a living, and quite short of my brash guess.
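The conversion-rate arithmetic is straightforward. The audience figure below is an assumption for illustration (chosen so the numbers from the post roughly line up), not a number reported in the post:

```python
def conversion_rate(conversions, audience):
    # Conversions divided by the prompted audience, as a percentage.
    return 100.0 * conversions / audience

# Hypothetical audience size: roughly the number of profiles the
# prompt reached (the post only says "several hundred thousand").
rate = conversion_rate(26_000, 722_000)
```

With ~26k conversions, an audience of roughly 722k would yield the reported 3.6% rate.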
Overall we got a 0.6 score on the quarterly OKR.
Since I'm writing this post a few weeks after the end of Q3, I can see that we're continuing to get about 80 new users per day from the add-on prompt. Overall that makes for about 28.5k total conversions as of Oct 27th.
Once again I resolve to write about my work at Mozilla as a Firefox release manager. It’s hard to do, because even the smallest thing could fill LONG paragraphs with many links! Since I keep daily notes on what I work on, let me try translating that in brief. When moved, maybe I’ll go into depth.
This week we are coming into the home stretch of a 7-week release cycle. “My” release right now is Firefox 49, which has already shipped; I’m still juggling problems, responses, and triage for it every day. In a week and a half we were scheduled to release Firefox 50; today, after some discussion, we pushed that schedule back by a week.
Meanwhile, I am also helping a new release manager (Gerry) to go through tracked bugs, new regressions, top crash reports, and uplift requests for Aurora/Developer Edition (Firefox 51). I’m going through uplift requests for Firefox 45.5.0esr, the extended support release. There’s still more – I paid some attention to our “update orphaning” project to bring users stuck on older versions of Firefox forward to the current, better and safer versions.
As usual, this means talking to developers and managers across pretty much all the teams at Mozilla, so it is never boring. Our goal is to get fixes and improvements as fast as possible while making sure, as best we can, that those fixes aren’t causing worse problems. We also have the interesting challenges of working across many time zones around the world.
Today I had a brief 1:1 meeting with my manager and went to the Firefox Product cross-functional meeting, which I always find useful as it brings together many teams. There was a long Firefox team all-hands discussion, and then I skipped going to another hour-long triage meeting with the platform/Firefox engineering managers. Whew! We had a lively discussion over the last couple of days about a performance regression (Bug 1304434). The issues are complicated to sort out. Everyone involved is super smart and the discussions have a collegial quality. No one is “yelling at each other”; we regularly challenge each other’s assumptions and are free to disagree – usually in public view on a mailing list or in our bug tracker. This is part of why I really love Mozilla. While we can get a bit heated and stressed, overall the culture is good. YMMV of course.
By that time (11am) I had been working since 7:30am, setting many queries in bugs, on IRC, and in emails into motion, and making a lot of small but oddly difficult decisions. Often this meant exercising my wontfix powers on bugs — deferring uplift (aka “backport”) to 50 or 51, or leaving a fix in 52 to ride the trains to release some time next year. Feeling pretty good, I headed out to have lunch and work from a cafe downtown (Mazarine – the turkey salad sandwich was very good!).
This afternoon I’m focusing on ESR 45, and Aurora 51, doing a bit more bug triage. There are a couple of ESR uplifts stressing me out — seriously, I was having kittens over these patches — but now that we have an extra week until we release, it feels like a better position for asking for a 2nd code review, a bit more time for QA, and so on.
Heading out soon for drinks with friends across the street from this cafe, and then to the Internet Archive’s 20th anniversary party. Yay, Internet Archive!
The Bugzilla Project developers meeting.
I want to tell you about an important new Balrog feature that we're working on. But I also want to tell you about how we planned it, because I think that part is even more interesting than the project itself.

The Project
Balrog is Mozilla’s update server. It is responsible for deciding which updates to deliver for a given update request. Because updates deliver arbitrary code to users, bad data in the update server could result in orphaning users, or be used as an attack vector to infect users with malware. It is crucial that we make it more difficult for a single user’s account to make changes that affect large populations of users. Not only does this provide some footgun protection, but it safeguards our users from attacks if an account is compromised or an employee goes rogue.
While the current version of Balrog has a notion of permissions, most people effectively have carte-blanche access to one or more products. This means that an under-caffeinated Release Engineer could ship the wrong thing, or a single compromised account can begin an attack. Requiring multiple different accounts to sign off on any sensitive changes will protect us against both of these scenarios.
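The core check behind a multiple sign off scheme can be sketched in a few lines. This is an illustrative sketch of the idea, not Balrog's actual schema or API; the role names are hypothetical:

```python
def signoffs_satisfied(signoffs, required):
    # signoffs: {username: role}; required: {role: minimum count}.
    # Each user counts at most once, so a single account (careless or
    # compromised) can never satisfy a requirement of two sign offs.
    counts = {}
    for user, role in signoffs.items():
        counts[role] = counts.get(role, 0) + 1
    return all(counts.get(role, 0) >= n for role, n in required.items())

# A sensitive change might require one Release Engineering and one
# Release Management sign off before it takes effect.
required = {"releng": 1, "relman": 1}
ok = signoffs_satisfied({"alice": "releng", "bob": "relman"}, required)
not_ok = signoffs_satisfied({"alice": "releng"}, required)
```

Keying the requirement on roles rather than specific people is what allows "some propose, others approve" workflows like the Release Management example below.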
Multiple sign offs may also be used to enhance Balrog’s ability to support workflows that are more reflective of reality. For example, the Release Management team are the final gatekeepers for most products (ie: we can’t ship without their sign off), but they are usually not the people in the best place to propose changes to Rules. A multiple sign off system that supports different types of roles would allow some people to propose changes and others to sign off on them.

The Planning Process
Earlier this year I blogged about Streamlining the throttled rollout of Firefox releases, which was the largest Balrog project to date at the time. While we did some up-front planning for it, it took significantly longer to implement than I'd originally hoped. This isn't uncommon for software projects, but I was still very disappointed with the slow pace. One of the biggest reasons for this was discovering new edge cases or implementation difficulties after we were deep into coding. Often this would result in needing to rework code that was thought to be finished already, or require new non-trivial enhancements to be made. For Multiple Signoff, I wanted to do better. Instead of a few hours of brainstorming, we've taken a more formal approach with it, and I'd like to share both the process and the plan we've come up with.

Setting Requirements
I really enjoy writing code. I find it intellectually challenging and fun. This quality is usually very useful, but I think it can be a hindrance in the early stages of large projects, as I tend to jump straight to thinking about implementation before even knowing the full set of requirements. Recognizing this, I made a conscious effort to purge implementation-related thoughts until we had a full set of requirements for Multiple Signoff reviewed by all stakeholders. Forcing myself not to go down the (more fun) path of thinking about code made me spend more time thinking about what we want and why we want it. All of this, particularly the early involvement of stakeholders, uncovered many hidden requirements and things that not everyone agreed on. I believe that identifying them at such an early stage made them much easier to resolve, largely because there was no sunk cost to consider.

Planning the Implementation
Once our full set of requirements was written, I was amazed at how obvious much of the implementation was. New objects and pieces of data stood out like neon signs, and I simply plucked them out of the requirements section. Most of the interactions between them came very naturally as well. I wrote some use cases that acted almost as unit tests for the implementation proposal, and they identified a lot of edge cases and bugs in its first pass. In retrospect, I probably should've written the use cases at the same time as the requirements. Between all of that and another round of review from stakeholders, I have significantly more confidence that the proposed implementation will look like the actual implementation than I have had with any other project of similar size.

Bugs and Dependencies
Just like the implementation flowed easily from the requirements, the bugs and dependencies between them were easy to find by rereading the implementation proposal. In the end, I identified 18 distinct pieces of work, and filed them all as separate bugs. Because the dependencies were easy to identify, I was able to convince Bugzilla to give me a decent graph that helps identify the critical path, and which bugs are ready to be worked on.
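The "which bugs are ready to be worked on" question is just a dependency-graph query: a bug is ready when all of its blockers are resolved. A minimal sketch of that logic (illustrative bug numbers, not the actual Multiple Signoff bugs):

```python
def ready_bugs(depends_on, resolved):
    # depends_on: {bug_id: set of bug_ids blocking it}.
    # A bug is ready when it is still open and every blocker
    # has already been resolved.
    return sorted(
        bug for bug, blockers in depends_on.items()
        if bug not in resolved and blockers <= resolved
    )

deps = {
    1: set(),    # no blockers: on the critical path, ready now
    2: {1},      # blocked by bug 1
    3: {1, 2},   # blocked by both
}
print(ready_bugs(deps, resolved=set()))  # [1]
print(ready_bugs(deps, resolved={1}))    # [2]
```

Re-running the query as bugs close reproduces what the Bugzilla dependency graph shows visually.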
But will it even help?
Overall, we probably spent a couple of person-weeks of active time on the requirements and implementation proposal. This isn't an overwhelming amount of time upfront, but it's enough that it's important to know whether it was worthwhile. That's a question that can only be answered in retrospect. If the work goes faster and the implementation has less churn, I think it's safe to say that it was time well spent. Both of those are relatively easy to measure, so I hope to be able to judge this fairly objectively in the end.

The Plan
If you're interested in reading the full set of requirements, implementation plan, and use cases, I've published them here.
mconley livehacks on real Firefox bugs while thinking aloud.
This is the SUMO weekly call.
Mozilla slaps ban on China's WoSign: Firefox drops trust for certs over 'deception'
Firefox-maker Mozilla will ban newly-issued digital certificates from WoSign and StartCom from January. Image: ZDNet/Mozilla. Starting in January, any website using a new certificate from Qihoo 360-owned certificate authority WoSign will have troubles ...
FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.
This email contains information about:
- Real-Time communications dev-room and lounge,
- speaking opportunities,
- volunteering in the dev-room and lounge,
- related events around FOSDEM, including the XMPP summit,
- social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
- the Planet aggregation sites for RTC blogs
The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.
The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.
To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.
To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities
Note: if you used FOSDEM Pentabarf before, please use the same account/username
Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.
Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.
Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?
FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines
The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.
In the "Submission notes", please tell us about:
- the purpose of your talk
- any other talk applications (dev-rooms, lightning talks, main track)
- availability constraints and special needs
You can use HTML and links in your bio, abstract and description.
If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.
We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.
Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.
Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed
To make the dev-room and lounge run successfully, we are looking for volunteers:
- FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
- organizing one or more restaurant bookings (depending upon number of participants) for the evening of Saturday, 4 February
- participation in the Real-Time lounge
- helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
- circulating this Call for Participation (text version) to other mailing lists
See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits
The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.
We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners
The traditional FOSDEM beer night occurs on Friday, 3 February.
On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss
If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.
If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

- All projects: Free-RTC Planet (http://planet.freertc.org), contact firstname.lastname@example.org
- XMPP: Planet Jabber (http://planet.jabber.org), contact email@example.com
- SIP: Planet SIP (http://planet.sip5060.net), contact firstname.lastname@example.org
- SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact email@example.com
Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact
The dev-room administration team:
OK, I’ve given up the charade that these are weekly now. Welcome back.
Thanks to an amazing effort from Ed Morley and the rest of the Treeherder team, Treeherder has been migrated to Heroku, giving us significantly more flexible infrastructure.
Git-internal is no longer a standalone single point of failure (SPOF)! A warm standby host is running, and repository mirroring is in place. We now also have a fully matching staging environment for testing.
Improve Release Pipeline:
Aki and Catlee attended the security offsite and came away with todo items and a list to prioritize to improve release security.
Aki released scriptworker 0.8.0; this gives us signed chain of trust artifacts from scriptworkers, and gpg key management for chain of trust verification.
Improve CI Pipeline:
We now have nightly Linux64, Linux32 and Android 4.0 API15+ builds running on the date branch on taskcluster. Kim’s work to refactor the nightly task graph to transform the existing build “kind” into a signing “kind” has made adding new platforms quite straightforward. See https://bugzilla.mozilla.org/show_bug.cgi?id=1267426 and https://bugzilla.mozilla.org/show_bug.cgi?id=1277579 for more details.
There is still some remaining setup to be done, mostly around updates and moving artifacts into the proper locations (beetmover). Releng will then begin internal testing of these new nightlies (essentially dogfooding) to ensure that important things like updates are working correctly before we uplift this code to mozilla-central.
We hope to make that switch for Linux/Android nightlies within the next month, with Mac and Windows coming later this quarter.
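The "transform a build kind into a signing kind" refactor mentioned above can be sketched as a function that derives one signing task per build task. The task shape here is illustrative, not the actual taskcluster schema:

```python
def add_signing_tasks(build_tasks):
    # For each build task, emit a signing task that depends on it.
    # Adding a new platform then only requires a new build task;
    # the signing task falls out of the transform automatically.
    signing = []
    for task in build_tasks:
        signing.append({
            "kind": "signing",
            "label": task["label"].replace("build-", "signing-"),
            "depends": [task["label"]],
            "platform": task["platform"],
        })
    return signing

builds = [{"kind": "build",
           "label": "build-linux64-nightly",
           "platform": "linux64"}]
signing = add_signing_tasks(builds)
```

This is why the refactor made adding new platforms "quite straightforward": the graph grows mechanically from the build definitions.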
During a recent tree closing window (TCW), the database team managed to successfully switch the buildbot database from MyISAM to InnoDB format for improved stability. This is something we’ve wanted to do for many years and it’s good to see it finally done.
Release:

We’re currently on beta 10 for Firefox 50. This is noteworthy because in the next release cycle Firefox 52 will be uplifted to the Aurora (Developer Edition) channel, and Firefox 52 will be the last version of Firefox to support Windows XP, Windows Vista, and Universal binaries on Mac. Firefox 52 is due for release in March of 2017. Don’t worry though: all these platforms will move to the Firefox 52 ESR branch, where they will continue to receive security updates for another year beyond that.
See you soon!
Last week’s cyber attack on Dyn that blocked access to popular websites like Amazon, Spotify, and Twitter is the latest example of the increasing threats to Internet security, making it more important that we acknowledge cybersecurity is a shared responsibility. Governments, companies, and users all need to work together to protect Internet security.
This is why Mozilla applauds Sens. Angus King Jr. (I-ME) and Martin Heinrich (D-NM) for calling on President Obama to establish enduring government-wide policies for the discovery, review, and sharing of security vulnerabilities. They suggest creating bug bounty programs and formalizing the Vulnerabilities Equities Process (VEP) – the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates.
“The recent intrusions into United States networks and the controversy surrounding the Federal Bureau of Investigation’s efforts to access the iPhone used in the San Bernardino attacks have underscored for us the need to establish more robust and accountable policies regarding security vulnerabilities,” Senators King and Heinrich wrote in their letter.
Mozilla prioritizes the privacy and security of users and we work to find and fix vulnerabilities in Firefox as quickly as possible. We created one of the first bug bounty programs more than 10 years ago to encourage security researchers to report security vulnerabilities.
Mozilla has also called for five specific, important reforms to the VEP:
- All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
- All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
- Independent oversight and transparency into the processes and procedures of the VEP must be created.
- The VEP Executive Secretariat should live within the Department of Homeland Security because they have built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US CERT).
- The VEP should be codified in law to ensure compliance and permanence.
These changes to the discovery, review, and sharing of security vulnerabilities would be a great start to strengthening the shared responsibility of cybersecurity and reducing the countless cyber attacks we see today.
We are switching the sign in provider in Pontoon from Persona to Firefox Accounts. This means you will have to connect your existing Pontoon profile with a Firefox Account before continuing to use Pontoon. You need to do this before November 1, 2016 by following these steps:
1. Go to https://pontoon.mozilla.org/ and sign in with Persona as usual.
2. After you’re redirected to the Firefox Accounts migration page, click Sign in with Firefox Account and follow the instructions.
3. And that’s it! From this point on, you can log in with your Firefox Account.
Note that the email address of your Firefox Account and Pontoon account do not need to match. And if you don’t have a Firefox Account yet, you will be able to create it during the sign in process.
November 1, 2016 will be the last day for you to sign in to Pontoon using Persona and connect your existing Pontoon profile with a Firefox Account. We recognize this is an inconvenience, and we apologize for it. Unfortunately, it is out of our control.
Huge thanks to Jarek for making the migration process so simple!
The past month has seen some significant and exciting improvements to Balrog. We've had the usual flow of feature work, but also a lot of improvements to the infrastructure and Docker image structure. Let's have a look at all the great work that's been done!

Core Features
Most recently, two big changes have landed that allow us to use multifile updates for SystemAddons. This type of update configuration lets us simplify the configuration of Rules and Releases, which is one of the main design goals of Balrog. Part of this project involved implementing "fallback" Releases, which are used if an incoming request fails a throttle dice roll. This benefits Firefox updates as well, because it will allow us to continue serving updates to older users while we're in the middle of a throttled rollout.
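The throttle dice roll with a fallback can be sketched as follows. This is a simplified illustration of the concept, not Balrog's actual rule-evaluation code:

```python
import random

def choose_release(primary, fallback, throttle_percent, rng=random):
    # A request "wins" the dice roll with probability throttle_percent.
    # Before fallback Releases existed, losers simply got no update;
    # now they are served the fallback Release instead, so older users
    # keep receiving the previous update mid-rollout.
    if rng.random() * 100 < throttle_percent:
        return primary
    return fallback

# Simulate a 25% throttled rollout over 1000 update requests
# (Release names are hypothetical).
rng = random.Random(42)
served = [choose_release("Firefox-50-build2", "Firefox-49-build1", 25, rng)
          for _ in range(1000)]
```

Roughly a quarter of requests get the new Release and the rest keep getting the old one, instead of nothing.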
Some house cleaning work has been done to remove attributes that Firefox no longer supports, and add support for the new "backgroundInterval" attribute. The latter will give us server-side control of how fast Firefox downloads updates, which we hope will help speed up uptake of new Releases.
There's been some significant refactoring of our Domain Whitelist system. This doesn't change the way it works at all, but cleaning up the implementation has paved the way for some future enhancements.

General Improvements
E-mails are now sent to a mailing list for some types of changes to Balrog's database. This serves as an alert system that ensures unexpected changes don't go unnoticed. In a similar vein, we also started aggregating exceptions to CloudOps' Sentry instance, which has already uncovered numerous production-only errors that had gone unnoticed for months.
Significant improvements have been made to the way our Docker images are structured. Instead of sharing one single Dockerfile for production and local dev, we've split them out. This has allowed the production image to get a lot smaller (mostly thanks to Benson's changes). On the dev side, it has let us improve the local development workflow - all code (including frontend) is now automatically rebuilt and reloaded when changed on the host machine. And thanks to Stefan, we even support development on Windows now!
We now have a script that extracts the "active data" from the production database. When imported into a fresh database (ie: a local database, or stage), it will serve exactly the same updates as production, without all of the unnecessary history. This should make it much easier to reproduce issues locally, and to verify that stage is functioning correctly.
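The core of such an extraction is keeping only the Releases that some Rule still points at, and dropping everything else (including history). A sketch of that filtering logic, with illustrative field names rather than Balrog's actual schema:

```python
def extract_active_data(rules, releases):
    # Collect every Release a Rule references, either as its primary
    # mapping or as its fallback, then keep only those Releases.
    live = set()
    for rule in rules:
        live.add(rule.get("mapping"))
        live.add(rule.get("fallbackMapping"))
    return {name: blob for name, blob in releases.items() if name in live}

rules = [{"mapping": "Firefox-50", "fallbackMapping": "Firefox-49"}]
releases = {"Firefox-50": {}, "Firefox-49": {}, "Firefox-3.6": {}}
active = extract_active_data(rules, releases)
```

Importing only the live rows into a fresh database reproduces production's update decisions without its years of accumulated history.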
Finally, something that's been on my wishlist for a long time finally happened as well: the Balrog Admin API Client code has been moved into the Balrog repo! Because it is so closely linked with the server side API, integrating them makes it much easier to keep them in sync.

The People
The work above was only possible because of all the great contributors to Balrog. A big thanks goes to Ninad, Johan, Varun, Stefan, Simon, Meet, Benson, and Njira for all their time and hard work on Balrog!
Google To Make Certificate Transparency Mandatory As Mozilla Bans New ...
Google announced that, starting in October 2017, all publicly trusted website certificates will have to comply with Chrome's Certificate Transparency policy to be trusted by its Chrome browser. Mozilla also unveiled its plans to respond to the ...
Chris has been around the web block several times and knows a lot about standards and how developers make them applicable to various environments. He has worked on various browsers and is deeply passionate about the open web and about empowering developers with standards and great browsers.
Here are the questions we covered:
- One of the worries with Web Components was that it would allow developers to hide a lot of complexity in custom elements. Do we have a problem understanding that modules are meant to be simple?
- Isn’t part of the issue that the web was built on the premise of documents, and that the nature of modules has to be forced into it? CSS has cascade in its name, yet modules shouldn’t inherit styles from the document.
- One thing that seems to be wasteful is that a lot of research that went into helper libraries in the past dies with them. YUI had a lot of great information about animation and interaction. Can we prevent this somehow?
- Do you feel that hacks die faster these days? Is a faster browser release schedule the solution to keeping short-term solutions from clogging up the web?
- It amazes me what browsers let me do these days, creating working layouts and readable fonts for me. Do you think developers don’t appreciate the complexity of standards and CSS enough?