Mozilla rebrand enters design development stage
Creative Review (blog)
Mozilla announced it would be rebranding back in June and has taken the unusual step of documenting the creative process online. The company has set up an 'open design' blog and has been posting content at each stage of the rebrand, inviting discussion ...
With the change of the season, we’ve worked hard to release a new version of Firefox that delivers the best possible experience across desktop and Android.
Expanding Multiprocess Support
Last month, we began rolling out the most significant update in our history, adding multiprocess capabilities to Firefox on desktop, which means Firefox is more responsive and less likely to freeze. In fact, our initial tests show a 400% improvement in overall responsiveness.
Our first phase of the rollout included users without add-ons. In this release, we’re expanding support for a small initial set of compatible add-ons as we move toward a multiprocess experience for all Firefox users in 2017.
Desktop Improvements to Reader Mode
This update also brings two improvements to Reader Mode. This feature strips away clutter like buttons, ads and background images, and changes the page’s text size, contrast and layout for better readability. Now we’re adding the option for the text to be read aloud, which means Reader Mode will narrate your favorite articles, allowing you to listen and browse freely without any interruptions.
We also expanded the ability to customize in Reader Mode so you can adjust the text and fonts, as well as the voice. Additionally, if you’re a night owl like some of us, you can read in the dark by changing the theme from light to dark.
Offline Page Viewing on Android
On Android, we’re now making it possible to access some previously viewed pages when you’re offline or have an unstable connection. This means you can interact with much of your previously viewed content when you don’t have a connection. The feature works with many pages, though it is dependent on your specific device specs. Give it a try by opening Firefox while your phone is in airplane mode.
We’re continuing to work on updates and new features that make your Firefox experience even better. Download the latest Firefox for desktop and Android and let us know what you think.
Mozilla Patching Firefox Certificate Pinning Vulnerability
Mozilla is expected tomorrow to patch a critical vulnerability in Firefox's automated update process for extensions that should put the wraps on a confusing set of twists surrounding this bug. The flaw also affected the Tor Browser and was patched ...
Over the last weekend I was reinstalling my older MacBook Pro (late 2011 model) again after replacing its hard drive with a fresh, modern 512GB SSD from Crucial. That change was really necessary given that simple file operations took about a minute, even though every system tool claimed the HDD was fine.
So after installing Mavericks I moved my home folder to another partition to make it easier to reinstall OS X later. But as it turned out, that is not so easy, especially given that OS X doesn’t yet support mounting encrypted partitions other than the system partition during start-up. If you have only a single user, you will be locked out after the home dir move and a reboot. That’s what I experienced. The fix in such a situation is to put OS X back into the “post install” state and create a new administrator account via single-user mode. With this account you can at least sign in again, and after unlocking the other encrypted partition you will have access to your original account again.
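For reference, this is the commonly documented single-user-mode recipe I mean (standard OS X steps, nothing specific to my setup):

```
# Boot holding Cmd+S to get a single-user shell, then:
/sbin/fsck -fy
/sbin/mount -uw /
rm /var/db/.AppleSetupDone   # makes OS X re-run the first-boot setup assistant
reboot
# On the next boot, the setup assistant creates a fresh administrator account.
```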
Having to first log in via an account whose data is still hosted on the system partition was not a workable solution for me. So I kept looking for a way to unlock the second encrypted partition during startup. After some searching I finally found a tool which actually lets me do this. It’s called Unlock and can be found on GitHub. To make it work, it installs a LaunchDaemon which retrieves the encryption password from the System keychain and unlocks the partition during start-up. To be on the safe side I compiled the code myself with Xcode and installed it with some small modifications to the install script (I may well contribute those modifications back to the repository :).
In case you have similar needs, I hope this post helps you avoid the hassles I experienced.
Last week, we wrote about the shared responsibility of protecting Internet security. Today, we want to dive deeper into this issue and focus on one very important obligation governments have: proper disclosure of security vulnerabilities.
Software vulnerabilities are at the root of so much of today’s cyber insecurity. The revelations of recent attacks on the DNC, the state electoral systems, the iPhone, and more have all stemmed from software vulnerabilities. Security vulnerabilities can be created inadvertently by the original developers, or they can be developed or discovered by third parties. Sometimes governments acquire, develop, or discover vulnerabilities and use them in hacking operations (“lawful hacking”). Either way, once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company, so that the developer can fix the problem and protect their users. We need to work with governments on how they handle vulnerabilities to ensure they are responsible partners in making this a reality today.
In the U.S., the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates is called the Vulnerabilities Equities Process (VEP). The VEP was established in 2010, but not operationalized until the Heartbleed vulnerability in 2014, which reportedly affected two thirds of the Internet. At that time, White House Cybersecurity Coordinator Michael Daniel wrote in a blog post that the Obama Administration has a presumption in favor of disclosing vulnerabilities. But policy by blog post is not particularly binding on the government, and as Daniel himself admits, “there are no hard and fast rules” to govern the VEP.
It has now been two years since Heartbleed and the U.S. government’s blog post, but we haven’t seen improvement in the way vulnerability disclosure is being handled. Just one example is the alleged hack of the NSA by the Shadow Brokers, which resulted in the public release of NSA “cyberweapons”, including “zero day” vulnerabilities that the government knew about and apparently had been exploiting for years. Companies like Cisco and Fortinet, whose products were affected by these zero day vulnerabilities, had just that: zero days to develop fixes to protect users before the vulnerabilities could be exploited by hackers.
The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.
Cybersecurity is a shared responsibility, and that means we all must do our part – technology companies, users, and governments. The U.S. government could go a long way in doing its part by putting transparent and accountable policies in place to ensure it is handling vulnerabilities appropriately and disclosing them to affected companies. We aren’t seeing this happen today. Still, with some reforms, the VEP can be a strong mechanism for ensuring the government is striking the right balance.
More specifically, we recommend five important reforms to the VEP:
- All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
- All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
- Independent oversight and transparency into the processes and procedures of the VEP must be created.
- The VEP Executive Secretariat should live within the Department of Homeland Security, which has built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US-CERT).
- The VEP should be codified in law to ensure compliance and permanence.
These changes would improve the state of cybersecurity today.
We’ll dig into the details of each of these recommendations in a blog post series from the Mozilla Policy team over the coming weeks – stay tuned for that.
Today, you can watch Heather West, Mozilla Senior Policy Manager, discuss this issue at the New America Open Technology Institute event on the topic of “How Should We Govern Government Hacking?” The event can be viewed here.
Mozilla fixes man-in-the-middle vulnerability in Firefox - Computer ...
On September 20, Mozilla will release an update for Firefox that closes a man-in-the-middle hole. The hole affects users of add-ons and was closed on Friday in the ...
Mozilla patches serious Firefox hole on September 20 - Security.NL
A couple weeks ago I started writing a game; i-can-management is the directory I made for the project, so that’ll be the codename for now. I’m going to write these updates to journal the process of making this game. As I’m going through this process alone, you’ll see all aspects of game development as I go through them. That means some weeks may be art heavy, while others may be about game rules or engine refactoring. I also want to give a glimpse into how I’m feeling about the project and the rules I make for myself.
Speaking of rules, those are going to be a central part of how I actually keep this project moving forward.
- Optimize only when necessary. This seems obvious, but folks define necessary differently. 60 frames per second with 750×750 tiles on the screen is my current benchmark for whether I need to optimize. I’ll be adding numbers for load times and other aspects once they grow beyond a size that feels comfortable.
- Abstractions are expensive; use them sparingly. This is something I learned from a Jonathan Blow talk I mention in my previous post. Abstractions can increase or remove flexibility. On one hand, reusing components may allow more rapid iteration. On the other hand, it may take considerable effort to make systems communicate that weren’t designed to pass messages. I’m making it clear in each effort whether I’m in exploration mode, where I work mostly with just one function, or in architect mode, where I’m trying to make the next feature a little easier to implement. This may mean 1000-line functions and lots of global-like state for a while until I understand how the data will be used. Or it may mean abstracting a concept like the camera into a struct because the data is always used together.
- Try the easier-to-implement answer before trying the better answer. I have two goals with this. First, it means I get to start trying stuff faster, so I know whether I want to pursue it or am kinda off on the idea. Maybe the first implementation will show that some other subsystem needs features first, so I decide to delay the more correct answer. In short: quicker to test, and it exposes unexpected requirements. The other goal is to explore building games in a more holistic way. Knowing a quick and dirty way to implement something may help when trying to throw an idea together really quickly. Then knowing how to evolve that code into a better long-term solution means the next games, or ideas that cross-pollinate, are faster to compose because the underlying concepts are better known.
The last couple weeks have been an exploration of OpenGL via glium, the library I’m using to access OpenGL from Rust and to abstract away window creation. I’d only ever run the example before this dive into building a game. Compared to what I remember of doing this in C++, the abstraction it provides for window creation and interaction, via the glutin library, is pretty great. I was able to create a window of whatever size, hook up keyboard and mouse events, and render to the screen pretty fast after going through the tutorial in the glium book.
This brings me to one of the first frustrating points in this project. So many things are focused on 3d these days that finding resources for 2d rendering is harder. When you do find them, they are for old versions of OpenGL or use libraries to handle much of the tile rendering. I was hoping to find an article like “I built a 2d tile engine that is pretty fast and these are the techniques I used!”, but no such luck. OpenGL guides go immediately into 3d space after getting past basic polygons. But it just means I get to explore more, which is probably a good thing.
I already had a deterministic map generator built to use as the source of the tiles on the screen. So I copied and pasted some of the matrices from the glium book and then tweaked the numbers I was using for my tiles until they showed up on the screen and looked OK. From here I was pretty stoked. I mean, if I have 25×40 tiles on the screen, what more could someone ask for? I didn’t know how to make triangle strips work well enough for the tiles to be drawn all at once, so I drew each tile to the screen separately, calculating everything on every frame.
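To give an idea of what that brute-force version looks like, here is a rough sketch: each tile gets its own little quad (two triangles, six vertices), rebuilt from scratch and drawn separately on every frame. The Vertex type and function names are stand-ins for my actual glium structs, so treat them as illustrative.

```rust
// Stand-in for the glium vertex struct (glium's implement_vertex! macro
// would normally make this usable in a vertex buffer).
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    color: [f32; 3],
}

// Build the six vertices for one tile's quad; in the brute-force version
// this runs for every tile on every frame.
fn quad_for_tile(x: u32, y: u32, tile_size: f32, color: [f32; 3]) -> [Vertex; 6] {
    let (x0, y0) = (x as f32 * tile_size, y as f32 * tile_size);
    let (x1, y1) = (x0 + tile_size, y0 + tile_size);
    // Two triangles covering the tile.
    [[x0, y0], [x1, y0], [x1, y1], [x0, y0], [x1, y1], [x0, y1]]
        .map(|position| Vertex { position, color })
}
```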
I started to add numbers here and there to see how to adjust the camera in different directions. I didn’t understand the math I was working with yet, so I was mostly treating it like a black box: I would add or multiply numbers and recompile to see if it did anything. I quickly realized I needed it to be more dynamic, so I added detection for mouse scrolling. Since I’m on my MacBook most of the time I’m doing development, I can scroll vertically as well as horizontally, making for a natural panning feel.
I noticed that my rendering had a few quirks, and I didn’t understand any of the math being used, so I went seeking more sources of information on how these transforms work. At first I was directed to the OpenGL transformations page, which set me on the right path, including a primer on the linear algebra I needed. Unfortunately, it quickly turned toward 3d graphics and I didn’t quite understand how to apply it to my use case. In looking for more resources I found Solarium Programmers’ OpenGL 101 page, which spends more time on orthographic projections, which are what I wanted for my 2d game.
Over a few sessions I rewrote all the math to use a coordinate system I understood. This was greatly satisfying, but if I hadn’t started by ignoring the math, I wouldn’t have had a testbed to check whether I actually understood it. A good lesson to remember: if you can ignore a detail for a bit and keep going, prioritize getting something working, then transform it into something you understand more thoroughly.
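For the curious, the core of an orthographic projection is small enough to show in full. This is a minimal sketch of the kind of matrix I ended up with (column-major, as glium’s uniforms expect; the actual code in the game is organized differently): it just maps a world-space rectangle onto OpenGL’s -1..1 clip space.

```rust
// Map world coordinates in [left, right] x [bottom, top] onto OpenGL's
// clip space of [-1, 1] x [-1, 1], with near/far fixed at -1/1 since a
// 2d tile map doesn't need real depth.
fn ortho(left: f32, right: f32, bottom: f32, top: f32) -> [[f32; 4]; 4] {
    [
        [2.0 / (right - left), 0.0, 0.0, 0.0],
        [0.0, 2.0 / (top - bottom), 0.0, 0.0],
        [0.0, 0.0, -1.0, 0.0],
        [
            -(right + left) / (right - left),
            -(top + bottom) / (top - bottom),
            0.0,
            1.0,
        ],
    ]
}
```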
I have more I learned in the last week, but this post is getting quite long. I hope to write a post this week about changing from drawing individual tiles to using a single triangle strip for the whole map.
In the coming week my goal is to get mouse clicks interacting with the map. This involves figuring out which tile the mouse has clicked, which I’ve learned isn’t trivial. In parallel I’ll be developing the first set of tiles using Pyxel Edit and hopefully integrating them into the game. Then my map will become richer than just some flat-colored tiles.
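Here is a hypothetical sketch of the tile-picking math I’m working toward: undo the camera’s pan and zoom to get from window pixels back into world space, then divide by the tile size. The camera fields and TILE_SIZE are stand-ins for whatever the real camera struct ends up holding, and the y-flip assumes window coordinates grow downward while world coordinates grow upward.

```rust
const TILE_SIZE: f32 = 16.0; // placeholder; my real tiles may differ

// Convert a mouse position in window pixels into a tile index, given a
// camera that pans by (camera_x, camera_y) world units and scales by zoom.
fn tile_under_cursor(
    mouse_x: f32,
    mouse_y: f32,
    window_height: f32,
    camera_x: f32,
    camera_y: f32,
    zoom: f32,
) -> (i32, i32) {
    // Flip y so the origin is at the bottom-left, like world space.
    let screen_y = window_height - mouse_y;
    let world_x = mouse_x / zoom + camera_x;
    let world_y = screen_y / zoom + camera_y;
    (
        (world_x / TILE_SIZE).floor() as i32,
        (world_y / TILE_SIZE).floor() as i32,
    )
}
```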
Here is a screenshot of the game so far, for posterity’s sake. It shows 750×750 tiles with a deterministic weighted distribution between grass, water, and dirt:
In the last week, we landed 68 PRs in the Servo organization’s repositories.

Planning and Status
Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.
This week’s status updates are here.
Special thanks to canaltinova for their work on implementing the matrix interpolation algorithms for CSS3 transform animations. This allows the rotate(), perspective() and matrix() functions (both 2D and 3D) to be interpolated, as well as interpolations between arbitrary transformations, though the last bit is yet to be implemented. In the process of implementation, we had to deal with many spec bugs, as well as implementation bugs in other browsers, which complicated things immensely – it’s very hard to tell whether your code has a mistake or the spec itself is wrong in complicated algorithms like these. Great work, canaltinova!

Notable Additions
- glennw added support for scrollbars
- canaltinova implemented the matrix decomposition/interpolation algorithm
- nox landed a rustup to the 9/14 rustc nightly
- ejpbruel added a websocket server for use in the remote debugging protocol
- creativcoder implemented the postMessage() API for ServiceWorkers
- ConnorGBrewster made Servo recycle session entries when reloading
- mrobinson added support for transforming rounded rectangles
- glennw improved webrender startup times by making shaders compile lazily
- canaltinova fixed a bug where we don’t normalize the axis of rotate() CSS transforms
- peterjoel added the DOMMatrix and DOMMatrixReadOnly interfaces
- Ms2ger corrected an unsound optimization in event dispatch
- tizianasellitto made DOMTokenList iterable
- aneeshusa excised SubpageId from the codebase, using PipelineId instead
- gilbertw1 made the HTTP authentication cache use origins instead of full URLs
- jmr0 fixed the event suppression logic for pages that have navigated
- zakorgy updated some WebBluetooth APIs to match new specification changes
Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot
Some screencasts of matrix interpolation at work:
This one shows all the basic transformations together (running a tweaked version of this page). The 3d rotate, perspective, and matrix transformations were enabled by the recent changes.
Servo’s new scrollbars!
Mozilla will patch zero-day Firefox bug to fiddle man-in-the-middle diddle
Mozilla will patch a flaw in its Firefox browser that could allow well-resourced attackers to launch man-in-the-middle impersonation attacks that also affects the Tor anonymity network. The flaw was first noticed by researchers describing the attacks ...
Busy week without much getting done on bugs. The W3C is heading to Lisbon for TPAC, so tune of the week: Amalia Rodrigues. I’ll be there in spirit.

Webcompat Life
Progress this week:

326 open issues
----------------------
needsinfo        12
needsdiagnosis  106
needscontact      8
contactready     28
sitewait        158
----------------------
You are welcome to participate
- 21 lines for the new devtools debugger
- A lot of administrative tasks this last week: job interviews for the Web Compatibility Engineer position, a meeting with Yahoo! Japan about a Yahoo! homepage Web Compatibility issue, account transition for work, and catching up with internal video announcements. It doesn't feel productive, but these tasks are necessary and sometimes very useful.
(a selection of some of the bugs worked on this week).
- Yet another case of appearance: none implemented in Blink, this time for meter.
- Document how to write tests on webcompat.com using test fixtures.
- ToWrite: Amazon prefetching resources with <object> for Firefox only.
Google and Mozilla Block Access to 'The Pirate Bay'; TPB is Run by FBI to Catch ... - University Herald
PC Mag reported that Google and Mozilla are denying users access to "The Pirate Bay" download pages. When downloading an actual torrent, users are met with warning messages. For Chrome, "The site ahead contains harmful programs" and Firefox ...
The TFSA is a savings account for Canadians that was introduced in 2009.
As a quick check I wanted to see how much or how little my TFSA had changed against what it should be. That meant double-checking how much room I had in the TFSA each year. So this is a quick calculation of the theoretical case: you are able to invest the maximum amount each year, at the beginning of the year, and get a 5% return (after fees) on it.

Year   Maximum      Total invested   Compounded
2009   $5,000.00    $5,000.00        $5,250.00
2010   $5,000.00    $10,000.00       $10,762.50
2011   $5,000.00    $15,000.00       $16,550.63
2012   $5,000.00    $20,000.00       $22,628.16
2013   $5,500.00    $25,500.00       $29,534.56
2014   $5,500.00    $31,000.00       $36,786.29
2015   $10,000.00   $41,000.00       $49,125.61
2016   $5,500.00    $46,500.00       $57,356.89
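If you want to reproduce the compounded column, the arithmetic is just “deposit at the start of the year, then grow the whole balance by 5%”. A short sketch, with the contribution limits hard-coded from the table above:

```rust
fn main() {
    // (year, TFSA contribution limit)
    let limits = [
        (2009, 5_000.0_f64), (2010, 5_000.0), (2011, 5_000.0), (2012, 5_000.0),
        (2013, 5_500.0), (2014, 5_500.0), (2015, 10_000.0), (2016, 5_500.0),
    ];
    let (mut invested, mut balance) = (0.0, 0.0);
    for (year, limit) in limits {
        invested += limit;
        // Deposit on January 1st, then one year of 5% growth (after fees).
        balance = (balance + limit) * 1.05;
        println!("{year}: invested ${invested:.2}, compounded ${balance:.2}");
    }
}
```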
This always raises the question for me of what a reasonable rate to calculate with is these days. It always used to be 10%, but that's very hard to get now. Since 2006 the annualized return on the S&P 500 has been 5.158%, for example. Perhaps 5% is too conservative a number.
I don’t understand. CyanogenMod 13 introduced a new Weather widget and lock screen support. Great! Unfortunately, the widget requires specific providers for weather services, and CM does not ship any in the default installation. There is a Weather Underground provider, which works, but the only other provider I found (the Yahoo! Weather provider) does not work on my CM without Google Play.
I would much prefer an OpenWeatherMap provider. CyanogenMod has a GitHub repository for one, but there is no APK anywhere (and certainly not one on F-Droid). Fortunately, I found a blog post which describes how simple it is to build the APK from the given code. Unfortunately, the author did not provide the APK on his site. I am not sure whether there is a catch, but here is mine.
Back in February 2016, I started my journey as a professional game developer. I joined Sparkypants to work on the backend for Dropzone. That was about 7 months ago at the time of writing. I didn’t enter the game development world in the standard ways. I wasn’t at one of the various schools with game dev programs, I didn’t intern at a studio, and I haven’t spent much of my personal development time building my own indie games. I had, on the other hand, spent years building backend services, writing dev tools, competing in AI competitions, and building a slew of half-finished open source projects. In short, I was not a game developer when I started.
The stark contrast in my background works to my advantage in many parts of my job. Most of our engineers haven’t worked on backend services and haven’t needed to scale that sort of infrastructure. My lead and friend Johannes has been instrumental in many of my successes so far at the company. He has a background in backend development as well as game development and has often been a translator and guide as I learn what being a game developer means.
At first, I assumed my contrast would work itself out naturally and I’d just become a game developer by osmosis. If I am surrounded by folks doing this and I’m actively developing a game, I will become a game developer. But that presupposes success, which was only coming to me in limited amounts. The other conclusion would be leaving game development because I wasn’t compatible with it, something I’m unwilling to accept at this time.
I shared with Johannes my concerns about not fitting the culture at Sparkypants, as well as some productivity worries. I’ve learned over the years that if I’m feeling problems like this, my boss may be as well. Johannes, with his typical wonderfully encouraging personality, reminded me that there are large aspects of my personality that fit the culture; maybe just my development style and conflict resolution needed work. Among other advice, he recommended this talk by Jonathan Blow to show me a mental model closer to how many of the other developers operate.
That talk by Jonathan Blow spends a fair amount of its time on the topic of optimization. Whether it is using data-oriented techniques to make data-series processing faster, or drawing in a specific way to make the graphics card use less memory, or any number of topics, optimization comes up in nearly every game development talk or article at some point. His point, though, was that we often spend too much time optimizing the wrong things. If you’ve been in computer science for a bit you’ve inevitably heard at least a fragment of the following quote from Donald Knuth; if not, you’re in for a treat, this is a good one:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
The “premature optimization is the root of all evil” line is the part most folks quote, implying the rest. I had heard this, quoted it, and used it as justification for doing or not doing things many times in the past. But I’d also forgotten it; I’d apply it when it was convenient for me, but not generally to my software development. Blow starts with the more traditional overthinking of algorithms and code in general that most people bring up when they speak on premature optimization. Then he follows on with the idea that selecting data structures is a form of optimization. That follow-on was a segue to the point that any time you are thinking about a problem, you should keep in mind whether it is the most important or urgent problem for you to think about.
At the end of the day, your job as a game developer is not to optimize for speed or correctness, but to optimize for fun. This means trying a lot of ideas and throwing many of them out. If you spend a lot of time optimizing a feature for a million users and only some folks in the company use it before you decide to remove it, you’ve wasted a lot of effort. Maybe not completely, since you’ve probably learned during the process, but that effort could have gone into other features or parts of the system that actually need attention. This shift in thinking has me letting go of details in more cases, spending less time on projects, and focusing on “functional” over “correct and scalable.”
The day after watching that talk and discussing it with Johannes, I attended RustConf and saw a series of amazing talks on Rust and programming in general. Of particular note for changing my mental model was Julia Evans’s closing keynote about learning systems programming with Rust. There were so many things that struck me during that talk, but I’ll focus on the couple that were most relevant.
First and foremost was the humility in the talk. Julia’s self-described experience level was “intermediate developer”, while she has about as many years of experience as I have, and I considered myself a more “senior developer.” At many points over the last couple years I’ve wrestled with this, considering myself senior and then seeing evidence that I’m not. As a more confident person, it is an easy trap for me to fall into. I’m in my first year as a game developer; regardless of other experience, I’m a junior game developer at best.
Starting to internalize this humility has resulted in fighting my coworkers less when they bring up topics that I think I have enough knowledge to weigh in on. The more experienced folks at work have decades of building games behind them. I’m not saying my input to these discussions is worthless; I still have a lot to contribute. But I’ve been able to check my ego at the door more easily and collaborate on topics instead of being contrary.
The humility in the talk makes another major concept from it, life long learning, take on a new light. I’ve always been striving for more knowledge in the computer science space, so life long learning isn’t new to me, but like the optimization discussion above there is more nuance to be discovered. Having humility when trying to learn makes the experience so much richer for all parties. Teachers being humble will not over explain a topic and recognize that their way is not the only way. Learners being humble will be more receptive to ideas that don’t fit their current mental model and seek more information about them.
This post has become quite long, so I’ll try to wrap things up and use further blog posts to explore these ideas with more concrete examples. Writing this has been a mechanism for me to understand some of this change in myself as well as help others who may end up in similar shoes.
If this blog post were a tweet, I think it’d be summarized as “Pay attention to the important things, check your ego at the door, and keep learning”, which I’m sure would get me some retweets and stars or hearts or whatever. And if someone else said it, I’d go “of course, yeah, folks mess this up all the time!” But there is so much more nuance in those ideas. I now realize I’m just a very junior game developer with some other, sometimes relevant, experience; I have so much to learn from my peers and am extremely excited to do so.
If you have additional resources that you think I or others who read this would find valuable, please comment below or send me a tweet.
Mozilla patches serious Firefox hole on September 20
Next Tuesday, September 20, Mozilla will release a security update for a serious vulnerability through which an attacker can infect users who have installed extensions with malware. The security hole was already ... yesterday in the ...
Earlier this week, security researchers published reports that Firefox and Tor Browser were vulnerable to “man-in-the-middle” (MITM) attacks under special circumstances. Firefox automatically updates installed add-ons over an HTTPS connection. As a backup protection measure against mis-issued certificates, we also “pin” Mozilla’s web site certificates, so that even if an attacker manages to get an unauthorized certificate for our update site, they will not be able to tamper with add-on updates.
Due to flaws in the process we used to update “Preloaded Public Key Pinning” in our releases, the pinning for add-on updates became ineffective for Firefox release 48 starting September 10, 2016 and ESR 45.3.0 on September 3, 2016. As of those dates, an attacker who was able to get a mis-issued certificate for a Mozilla Web site could cause any user on a network they controlled to receive malicious updates for add-ons they had installed.
Users who have not installed any add-ons are not affected. However, Tor Browser contains add-ons and therefore all Tor Browser users are potentially vulnerable. We are not presently aware of any evidence that such malicious certificates exist in the wild and obtaining one would require hacking or compelling a Certificate Authority. However, this might still be a concern for Tor users who are trying to stay safe from state-sponsored attacks. The Tor Project released a security update to their browser early on Friday; Mozilla is releasing a fix for Firefox on Tuesday, September 20.
To help users who have not updated Firefox recently, we have also enabled Public Key Pinning Extension for HTTP (HPKP) on the add-on update servers. Firefox will refresh its pins during its daily add-on update check and users will be protected from attack after that point.
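For illustration, an HPKP response header looks roughly like this; the hashes below are placeholders rather than the actual pins served by the update servers, and the spec requires a backup pin alongside the active one:

```
Public-Key-Pins: pin-sha256="PLACEHOLDER+hash+of+active+key+in+base64=";
                 pin-sha256="PLACEHOLDER+hash+of+backup+key+in+base64=";
                 max-age=5184000; includeSubDomains
```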
Mozilla checks if Firefox is affected by same malware vulnerability as Tor
The vulnerability allows an attacker who has a man-in-the-middle position and is able to obtain a forged certificate to impersonate Mozilla servers, Tor officials warned in an advisory. From there, the attacker could deliver a malicious update for ...
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...
With Easy Passwords I develop a product which could be considered a Last Pass competitor. In this particular case, however, my interest was sparked by reports of two Last Pass security vulnerabilities (1, 2) which were published recently. It’s a fascinating case study, given that Last Pass is considered security software and as such should be hardened against attacks.
I decided to dig into Last Pass 4.1.21 (the latest version for Firefox at that point) in order to see what their developer team did wrong. The reported issues sounded like there might be structural problems behind them. The first surprise was the way Last Pass is made available to users, however: on Addons.Mozilla.Org you only get the outdated Last Pass 3 as the stable version; the current Last Pass 4 is offered on the development channel, and Last Pass actively encourages users to switch to the development channel.
My starting point was the already reported vulnerabilities and the approach the Last Pass developers took to address them. In the process I discovered two similar vulnerabilities and a third one with even more disastrous consequences. All issues have been reported to Last Pass and are resolved as of Last Pass 4.1.26.

Password autocomplete
Last Pass on the other hand supports filling in passwords without any user interaction whatsoever, even though that feature doesn’t seem to be enabled by default. But that’s not even the main issue: as Mathias Karlsson realized, the code recognizing which website you are on is deeply flawed. So you don’t need to control a website to steal passwords for it; you can make Last Pass think that your website malicious.com is actually twitter.com and then fill in your Twitter password. This is possible because Last Pass uses a huge regular expression to parse URLs, and this part of it is particularly problematic:

```
(?:(([^:@]*):?([^:@]*))?@)?
```
This is meant to match the username/password part before the hostname, but it will actually skip anything up to a @ character in the URL. So if that @ character is in the path part of the URL, the regular expression will happily consider the real hostname part of the username and interpret whatever follows the @ character as the hostname. Oops. Luckily, Last Pass had already recognized that issue even before Karlsson’s findings. Their solution? Add one more regular expression and replace all @ characters following the hostname with %40. Why not change the regular expression so that it won’t match slashes? Beats me.
The bug that Karlsson found was that this band-aid code only replaced the last @ character but not any previous ones (a greedy regular expression). In response, Last Pass added more hack-foo to ensure that other @ characters are replaced as well, not by fixing the bug (using a non-greedy regular expression) but by making the code run multiple times. My bug report then pointed out that this code still wasn’t working correctly for data: URLs or URLs like http://email@example.com:firstname.lastname@example.org/. While it’s not obvious whether these issues are still exploitable, this piece of code is simply too important to have such bugs.
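To make the greedy vs. non-greedy distinction concrete, here is a small demonstration (not Last Pass’s actual code, just the Rust regex crate run against the URL from my report):

```rust
use regex::Regex; // regex = "1" in Cargo.toml

fn main() {
    let input = "http://email@example.com:firstname.lastname@example.org/";
    // Greedy: `.*` grabs as much as possible, so this matches up to the LAST @.
    let greedy = Regex::new(r"^(.*)@").unwrap();
    // Non-greedy: `.*?` grabs as little as possible, stopping at the FIRST @.
    let lazy = Regex::new(r"^(.*?)@").unwrap();
    assert_eq!(
        &greedy.captures(input).unwrap()[1],
        "http://email@example.com:firstname.lastname"
    );
    assert_eq!(&lazy.captures(input).unwrap()[1], "http://email");
    println!("both assertions passed");
}
```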
Of course, improving regular expressions isn’t really the solution here. Last Pass just shouldn’t do its own thing when parsing URLs; it should let the browser do it instead. This would completely eliminate the potential for Last Pass and the browser disagreeing on the hostname of the current page. Modern browsers offer the URL object for that; old ones still allow achieving the same effect by creating a link element. And guess what? In their fix, Last Pass is finally doing the right thing. But rather than just sticking with the result returned by the URL object, they compare it to the output of their regular expression. Guess they are really attached to that one…
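As an aside, the same “let a real URL parser decide” principle is easy to demonstrate outside the browser too. Here the Rust url crate stands in for the browser’s URL object: it knows the authority section ends at the first slash, while a naive regular expression like the one above would latch onto the @ and report twitter.com:

```rust
use url::Url; // url = "2" in Cargo.toml

fn main() {
    let tricky = "http://malicious.com/innocent@twitter.com/";
    let parsed = Url::parse(tricky).expect("valid URL");
    // A real parser resolves the hostname unambiguously.
    assert_eq!(parsed.host_str(), Some("malicious.com"));
    println!("hostname: {:?}", parsed.host_str());
}
```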
Communication channels

I didn’t know the details of the other report when I looked at the source code; I only knew that it somehow managed to interfere with the extension’s internal communication. But how is that even possible? All browsers provide secure APIs that allow different extension parts to communicate with each other, without any websites listening in or interfering. To my surprise, Last Pass doesn’t limit itself to these communication channels and relies quite heavily on window.postMessage() in addition. The trouble with this API is that anybody could be sending messages, so receivers should always verify the origin of a message. As Tavis Ormandy discovered, this is exactly what Last Pass failed to do.
In the code that I saw, origin checks had already been added to most message receivers. However, I discovered another communication mechanism: any website could add a form with the id="lpwebsiteeventform" attribute. Submitting this form triggered special actions in Last Pass and could even produce a response; e.g. the getversion action would retrieve details about the Last Pass version. There are also plenty of actions which sound less harmless, such as those related to setting up and verifying multifactor authentication.
For my proof of concept, however, I went with actions that were easier to call. There was the get_browser_history_tlds action, for example, which would retrieve a list of websites from your browsing history. And there were setuuid and getuuid actions which allowed saving an identifier in the Last Pass preferences that could not be removed by regular means (unlike cookies).
Last Pass resolved this issue by restricting this communication channel to the lastpass.com and lastpass.eu domains. So now these are the only websites that can read out your browsing history. What do they need it for? Beats me.

Full compromise
When looking into other interactions with websites, I noticed this piece of code (reduced to the relevant parts):

```js
var src = window.frameElement.getAttribute("lpsrc");
if (src && 0 < src.indexOf("lpblankiframeoverlay.local"))
  window.location.href = g_url_prefix + "overlay.html" + src.substring(src.indexOf("?"));
```
This is how Last Pass injects its user interface into websites on Firefox: since content scripts don’t have the necessary privileges to load extension pages into frames, they create a frame with an attribute like lpsrc="http://lpblankiframeoverlay.local/?parameters". Later, the code above (which has the necessary privileges) looks at the frame and loads the extension page with the correct parameters.
Of course, a website can create this frame as well. And it can use a value for lpsrc that doesn’t contain question marks, which will make the code above add the entire attribute value to the URL. This allows the website to load any Last Pass page, not just overlay.html. Doesn’t seem to be a big deal but there is a reason why websites aren’t allowed to load extension pages: these pages often won’t expect this situation and might do something stupid.
The security issues discovered in Last Pass are not an isolated incident. The base concept of the extension seems sound; for example, the approach they use to derive the encryption key and to encrypt your data before sending it to the server is secure as far as I can tell. The weak point, however, is the Last Pass browser extension, which necessarily deals with decrypted data. This extension currently violates best practices, which opens up unnecessary attack surfaces; the reported security vulnerabilities are a consequence of that. Then again, if Tavis Ormandy is right, then Last Pass is in good company.