Mozilla releases Firefox 49.0.1
Only a few days after the release of Firefox 49.0, Mozilla has published an update to Firefox 49.0.1. As with both out-of-band updates for Firefox 48, this update is also owed to the third-party software WebSense.
Diversity and inclusion mean more than having people of different demographics in a group. They also mean having the resulting diversity of perspectives included in the group's decision-making and action in a fundamental way.
I’ve had this experience lately, and it demonstrated to me both why it can be hard and why it’s so important. I’ve been working on a project where I’m the individual contributor doing the bulk of the work. This isn’t because there’s a big problem or conflict; instead it’s something I feel needs my personal touch. Once the project is complete, I’m happy to describe it with specifics. For now, I’ll describe it generally.
There’s a decision to be made. I connected with the person whose comfort with the idea mattered most to me, to make sure it sounded good. I checked with our outside attorney in case there was something I should know. I checked with the group of people who are most closely affected and who would lead the decision and implementation if we proceed. I received lots of positive responses.
Then one last person from my first round of vetting checked in with me and spoke up. He was sorry for the delay, but he had concerns. He wants us to explore a number of different options before deciding whether we’ll go forward at all, and if so, how.
At first I had that sinking feeling of “Oh bother, look at this. I am so sure we should do this and now there’s all this extra work and time and maybe change. Ugh!” I got up and walked around a bit and did a few things that put me in a positive frame of mind. Then I realized — we had added this person to the group for two reasons. First, he’s awesome — both creative and effective. Second, he has a different perspective. We say we value that different perspective. We often seek out his opinion precisely because of that perspective.
This is the first time his perspective has pushed me to do more, or to do something differently, or perhaps even prevented me from doing something that I think I want to do. So this is the first time the different perspective is doing more than reinforcing what seemed right to me.
That led me to think “OK, got to love those different perspectives,” a little ruefully. But as I’ve been thinking about it I’ve come to internalize the value and to appreciate this perspective. I expect the end result will be more deeply thought out than I had planned. And it will take me longer to get there. But the end result will have investigated some key assumptions I started with. It will be better thought out, and better able to respond to challenges. It will be stronger.
I still can’t say I’m looking forward to the extra work. But I am looking forward to a decision that has a much stronger foundation. And I’m looking forward to the extra learning I’ll be doing, which I believe will bring ongoing value beyond this particular project.
I want to build Mozilla into an example of what a trustworthy organization looks like. I also want to build Mozilla so that it reflects experience from our global community and isn’t living in a geographic or demographic bubble. Having great people be part of a diverse Mozilla is part of that. Creating a welcoming environment that promotes the expression and positive reaction to different perspectives is also key. As we learn more and more about how to do this we will strengthen the ways we express our values in action and strengthen our overall effectiveness.
Mozilla Firefox 49.0 and Thunderbird 45.3 Land in All Supported Ubuntu OSes
Today, September 22, 2016, Chris Coulson from Canonical published two security advisories to inform the Ubuntu Linux community about the availability of the latest Mozilla products in all supported releases. Mozilla announced the other day that its ...
Mozilla Narrows Down New Logo Options to Four
Softpedia News (blog)
The Mozilla Foundation has cut down, refined, and proposed new logos for its rebranding project that started three months ago, after initially putting forward seven logos for user review in mid-August. Realizing that most people still identify Firefox ...
Mozilla Gets Rid of Firefox Hello in Firefox 49
In October 2014, as part of the Firefox 34 beta release, Mozilla introduced its Firefox Hello communications technology enabling users to make calls directly from the browser. On Sept. 20, 2016, Mozilla formally removed support for Firefox Hello as ...
Bug that hit Firefox and Tor browsers was hard to spot—now we know why (Ars Technica)
Software update: Mozilla Firefox 49.0
Mozilla has released version 49 of its Firefox web browser. Among other changes, version 49 updates the login manager so that it can now use credentials that were saved for an unsecured HTTP connection ...
Mozilla Patches Certificate Pinning Vulnerability in Firefox
As expected, Mozilla patched a highly scrutinized flaw in its automated update process for add-ons in Firefox, specifically around the expiration of certificate pins. The vulnerability allowed attackers to intercept encrypted browser traffic, inject a ...
Mozilla slowly grants multi-process Firefox to more users (Computerworld)
Mozilla Launches Firefox 49 (PC Perspective)
New Mozilla Firefox Expands Multi-Process Support And More (Geeky Gadgets)
Apple patches holes in iCloud, Safari, macOS and iTunes
Alongside Mozilla and Symantec, Apple has also released important security updates for its users. Among them is a new macOS version named Sierra; this release of Apple's operating system patches a total of 65 vulnerabilities.
Mozilla closes 18 holes in Firefox 49, ends support for old Macs
Mozilla has released a new version of Firefox that fixes 18 security vulnerabilities, four of them so severe that, in the worst case, an attacker could take over vulnerable systems if users visit a malicious or hacked ...
Mozilla slowly grants multi-process Firefox to more users
Mozilla today upgraded Firefox to version 49, and said it is expanding the pool of users who receive the multiple process browser that started reaching a few customers weeks ago. "In this release, we're expanding support for a small initial set of ...
Mozilla Patching Firefox Certificate Pinning Vulnerability (Threatpost)
Firefox 49 arrives with Reader Mode improvements, offline viewing on Android ... (VentureBeat)
Mozilla Launches Firefox 49 (PC Perspective)
Mozilla shortlists four designs in open-source rebrand project
It is working with design consultancy Johnson Banks on its open-source rebrand project, which has seen it seeking feedback from the Mozilla community and general public through the comments section on the Mozilla blog, social media and live events over ...
Mozilla rebrand enters design development stage
Creative Review (blog)
Mozilla announced it would be rebranding back in June and has taken the unusual step of documenting the creative process online. The company has set up an 'open design' blog and has been posting content at each stage of the rebrand, inviting discussion ...
With the change of the season, we’ve worked hard to release a new version of Firefox that delivers the best possible experience across desktop and Android.
Expanding Multiprocess Support
Last month, we began rolling out the most significant update in our history, adding multiprocess capabilities to Firefox on desktop, which means Firefox is more responsive and less likely to freeze. In fact, our initial tests show a 400% improvement in overall responsiveness.
Our first phase of the rollout included users without add-ons. In this release, we’re expanding support for a small initial set of compatible add-ons as we move toward a multiprocess experience for all Firefox users in 2017.
Desktop Improvement to Reader Mode
This update also brings two improvements to Reader Mode. This feature strips away clutter like buttons, ads and background images, and changes the page’s text size, contrast and layout for better readability. Now we’re adding the option for the text to be read aloud, which means Reader Mode will narrate your favorite articles, allowing you to listen and browse freely without any interruptions.
We also expanded the ability to customize in Reader Mode so you can adjust the text and fonts, as well as the voice. Additionally, if you’re a night owl like some of us, you can read in the dark by changing the theme from light to dark.
Offline Page Viewing on Android
On Android, we’re now making it possible to access some previously viewed pages when you’re offline or have an unstable connection. This means you can interact with much of your previously viewed content even without a connection. The feature works with many pages, though it depends on your specific device. Give it a try by opening Firefox while your phone is in airplane mode.
We’re continuing to work on updates and new features that make your Firefox experience even better. Download the latest Firefox for desktop and Android and let us know what you think.
Mozilla Patching Firefox Certificate Pinning Vulnerability
Mozilla is expected tomorrow to patch a critical vulnerability in Firefox's automated update process for extensions that should put the wraps on a confusing set of twists surrounding this bug. The flaw also affected the Tor Browser and was patched ...
Over the last weekend I was reinstalling my older MacBook Pro (late 2011 model) after replacing its hard drive with a fresh 512 GB SSD from Crucial. That change was really necessary, given that simple file operations took about a minute, even though every system tool claimed the HDD was fine.
So after installing Mavericks I moved my home folder to another partition, to make it easier to reinstall OS X again later. But as it turned out, it is not that easy, especially since OS X does not yet support mounting additional encrypted partitions besides the system partition during start-up. If you have only a single user account, you will be locked out after moving the home directory and rebooting. That is exactly what I experienced. To fix such a situation, put OS X back into the “post install” state and create a new administrator account via single-user mode. With this account you can at least sign in again, and after unlocking the other encrypted partition you will have access to your original account again.
Having to first log in via an account whose data is still hosted on the system partition is not a workable solution for me. So I kept looking for a way to unlock the second encrypted partition during start-up. After some searching I finally found a tool that actually lets me do this. It’s called Unlock and can be found on Github. To make it work, it installs a LaunchDaemon which retrieves the encryption password from the System keychain and unlocks the partition during start-up. To be on the safe side I compiled the code myself with Xcode and installed it with some small modifications to the install script (I may well contribute those modifications back to the repository :).
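For readers with a similar setup, the approach Unlock takes can be approximated from the command line. This is a hedged sketch, not the tool's actual install script: the volume UUID is a placeholder, and the exact diskutil/security invocations may differ between OS X versions.

```shell
# Sketch: store the passphrase for a second Core Storage volume in the
# System keychain, then unlock it — roughly what a LaunchDaemon would run
# at start-up. LVUUID is a placeholder for your logical volume's UUID.
LVUUID="11111111-2222-3333-4444-555555555555"

# One-time setup (as root): save the passphrase in the System keychain.
sudo security add-generic-password -a "$LVUUID" -s "$LVUUID" \
    -w "your-passphrase" /Library/Keychains/System.keychain

# At boot, the daemon reads the passphrase back and unlocks the volume.
sudo security find-generic-password -s "$LVUUID" -w \
    /Library/Keychains/System.keychain |
  sudo diskutil coreStorage unlockVolume "$LVUUID" -stdinpassphrase
```

Piping the passphrase via `-stdinpassphrase` avoids exposing it in the process list, which is why a keychain lookup plus a pipe is preferable to passing `-passphrase` directly.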
In case you have similar needs, I hope this post helps you avoid the hassles I experienced.
Last week, we wrote about the shared responsibility of protecting Internet security. Today, we want to dive deeper into this issue and focus on one very important obligation governments have: proper disclosure of security vulnerabilities.
Software vulnerabilities are at the root of so much of today’s cyber insecurity. The revelations of recent attacks on the DNC, the state electoral systems, the iPhone, and more, have all stemmed from software vulnerabilities. Security vulnerabilities can be created inadvertently by the original developers, or they can be developed or discovered by third parties. Sometimes governments acquire, develop, or discover vulnerabilities and use them in hacking operations (“lawful hacking”). Either way, once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company so that the developer can fix the problem and protect their users. We need to work with governments on how they handle vulnerabilities to ensure they are responsible partners in making this a reality today.
In the U.S., the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates is called the Vulnerabilities Equities Process (VEP). The VEP was established in 2010, but not operationalized until the Heartbleed vulnerability in 2014, which reportedly affected two thirds of the Internet. At that time, White House Cybersecurity Coordinator Michael Daniel wrote in a blog post that the Obama Administration has a presumption in favor of disclosing vulnerabilities. But policy by blog post is not particularly binding on the government, and as Daniel himself admitted, “there are no hard and fast rules” to govern the VEP.
It has now been two years since Heartbleed and the U.S. government’s blog post, but we haven’t seen improvement in the way that vulnerability disclosure is being handled. Just one example is the alleged hack of the NSA by the Shadow Brokers, which resulted in the public release of NSA “cyberweapons”, including “zero day” vulnerabilities that the government knew about and apparently had been exploiting for years. Companies like Cisco and Fortinet, whose products were affected by these zero day vulnerabilities, had just that: zero days to develop fixes to protect users before the vulnerabilities were possibly exploited by hackers.
The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.
Cybersecurity is a shared responsibility, and that means we all must do our part – technology companies, users, and governments. The U.S. government could go a long way in doing its part by putting transparent and accountable policies in place to ensure it is handling vulnerabilities appropriately and disclosing them to affected companies. We aren’t seeing this happen today. Still, with some reforms, the VEP can be a strong mechanism for ensuring the government is striking the right balance.
More specifically, we recommend five important reforms to the VEP:
- All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
- All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
- Independent oversight and transparency into the processes and procedures of the VEP must be created.
- The VEP Executive Secretariat should live within the Department of Homeland Security because it has built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US-CERT).
- The VEP should be codified in law to ensure compliance and permanence.
These changes would improve the state of cybersecurity today.
We’ll dig into the details of each of these recommendations in a blog post series from the Mozilla Policy team over the coming weeks – stay tuned for that.
Today, you can watch Heather West, Mozilla Senior Policy Manager, discuss this issue at the New America Open Technology Institute event on the topic of “How Should We Govern Government Hacking?” The event can be viewed here.
Mozilla fixes man-in-the-middle hole in Firefox - Computer ...
On September 20 Mozilla is releasing an update for Firefox that closes a man-in-the-middle hole. The hole affects users of add-ons and was fixed on Friday in the ...
Mozilla closes serious Firefox hole on September 20 (Security.NL)
A couple weeks ago I started writing a game, and i-can-management is the directory I made for the project, so that’ll be the codename for now. I’m going to write these updates to journal the process of making this game. As I’m going through this process alone, you’ll see all aspects of game development as I go through them. That means some weeks may be art heavy, while others cover game rules, or maybe engine refactoring. I also want to give a glimpse of how I’m feeling about the project and the rules I make for myself.
Speaking of rules, those are going to be a central theme in how I actually keep this project moving forward.
- Optimize only when necessary. This seems obvious, but folks define “necessary” differently. 60 frames per second with 750×750 tiles on the screen is my current benchmark for whether I need to optimize. I’ll be adding numbers for load times and other aspects once they grow beyond a size that feels comfortable.
- Abstractions are expensive, use them sparingly. This is something I learned from a Jonathan Blow talk I mentioned in my previous post. Abstractions can add or remove flexibility. On one hand, reusing components may allow more rapid iteration. On the other hand, it may take considerable effort to make systems communicate that weren’t designed to pass messages. I’m making it clear in each effort whether I’m in exploration mode, where I work mostly with just one function, or in architect mode, where I’m trying to make the next feature a little easier to implement. This may mean 1000-line functions and lots of global-like state for a while until I understand how the data will be used. Or it may mean abstracting a concept like the camera into a struct because the data is always used together.
- Try the easier-to-implement answer before trying the better answer. I have two goals with this. First, it means I get to start trying stuff faster, so I know whether I want to pursue it or whether I’m cooling on the idea. Maybe this first implementation will show that some other subsystem needs features first, so I decide to delay the more correct answer. In short: quicker to test, and it exposes unexpected requirements. The other goal is to explore building games in a more holistic way. Knowing a quick-and-dirty way to implement something may help when trying to throw an idea together really quickly. Then, knowing how to evolve that code into a better long-term solution means future games and cross-pollinating ideas are faster to compose, because the underlying concepts are better known.
The last couple weeks have been an exploration of OpenGL via glium, the library I’m using to access OpenGL from Rust as well as to abstract away window creation. I’d only ever run the examples before this dive into building a game. Compared to what I remember of doing this in C++, the abstraction it provides for window creation and interaction, via the glutin library, is pretty great. I was able to create a window of whatever size, hook up keyboard and mouse events, and render to the screen pretty quickly after going through the tutorial in the glium book.
This brings me to one of the first frustrating points in this project. So many resources focus on 3D these days that finding material on 2D rendering is harder. What you do find is for old versions of OpenGL, or uses libraries to handle much of the tile rendering. I was hoping to find an article like “I built a 2D tile engine that is pretty fast, and these are the techniques I used!”, but no such luck. OpenGL guides go straight into 3D space after covering basic polygons. But it just means I get to explore more, which is probably a good thing.
I already had a deterministic map generator built to use as the source of the tiles on the screen. So I copied and pasted some of the matrices from the glium book and then tweaked the numbers I was using for my tiles until they showed up on the screen and looked OK. From there I was pretty stoked. I mean, if I have 25×40 tiles on the screen, what more could someone ask for? I didn’t know how to make triangle strips work well for drawing all the tiles at once, so I drew each tile to the screen separately, calculating everything on every frame.
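For illustration, drawing each tile separately amounts to emitting one quad (two triangles) per tile every frame. This is a hedged sketch, not the project's actual code; the `Vertex` layout is just the usual glium-style position attribute:

```rust
// Hypothetical sketch: one quad (two triangles, six vertices) per tile.
// In glium these vertices would be uploaded into a vertex buffer each frame.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Vertex {
    position: [f32; 2],
}

// Build the six vertices covering tile (x, y) in world units.
fn tile_quad(x: u32, y: u32, size: f32) -> [Vertex; 6] {
    let (x0, y0) = (x as f32 * size, y as f32 * size);
    let (x1, y1) = (x0 + size, y0 + size);
    [
        Vertex { position: [x0, y0] },
        Vertex { position: [x1, y0] },
        Vertex { position: [x0, y1] },
        // second triangle of the quad
        Vertex { position: [x1, y0] },
        Vertex { position: [x1, y1] },
        Vertex { position: [x0, y1] },
    ]
}

fn main() {
    // The tile at grid position (2, 3) with 1.0-unit tiles starts at (2.0, 3.0).
    let quad = tile_quad(2, 3, 1.0);
    println!("{:?}", quad[0].position);
}
```

At 750×750 tiles this is over three million vertices recomputed per frame, which is exactly why moving to a single triangle strip becomes attractive later.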
I started to add numbers here and there to see how to adjust the camera in different directions. I didn’t understand the math I was working with yet, so I mostly treated it like a black box: I would add or multiply numbers and recompile to see if anything changed. I quickly realized I needed it to be more dynamic, so I added detection for mouse scrolling. Since I’m on my MacBook most of the time I’m developing, I can scroll vertically as well as horizontally, which makes for a natural panning feel.
I noticed that my rendering had a few quirks, and I didn’t understand any of the math being used, so I went looking for more sources on how these transforms work. At first I was directed to the OpenGL transformations page, which set me on the right path, including a primer on the linear algebra I needed. Unfortunately, it quickly turned toward 3D graphics, and I didn’t quite understand how to apply it to my use case. Looking for more resources, I found Solarium Programmers’ OpenGL 101 page, which spends more time on orthographic projections, which is what I wanted for my 2D game.
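An orthographic projection for a 2D game boils down to one fixed 4×4 matrix. The sketch below is an assumption about the standard formula rather than the project's code, written column-major as OpenGL expects, mapping a pixel-space rectangle onto the -1..1 clip space:

```rust
// Hedged sketch of a standard orthographic projection matrix,
// column-major as OpenGL (and glium's uniform! macro) expects.
fn ortho(left: f32, right: f32, bottom: f32, top: f32, near: f32, far: f32) -> [[f32; 4]; 4] {
    [
        [2.0 / (right - left), 0.0, 0.0, 0.0],
        [0.0, 2.0 / (top - bottom), 0.0, 0.0],
        [0.0, 0.0, -2.0 / (far - near), 0.0],
        [
            // last column translates the view rectangle onto -1..1
            -(right + left) / (right - left),
            -(top + bottom) / (top - bottom),
            -(far + near) / (far - near),
            1.0,
        ],
    ]
}

fn main() {
    // An 800x600 window with the origin in the bottom-left corner.
    let m = ortho(0.0, 800.0, 0.0, 600.0, -1.0, 1.0);
    println!("{:?}", m[3]); // the translation column
}
```

With a matrix like this, tile positions can stay in plain pixel or tile coordinates, and the projection alone handles mapping them onto the screen.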
Over a few sessions I rewrote all the math to use a coordinate system I understood. This was greatly satisfying, but if I hadn’t started by ignoring the math, I wouldn’t have had a testbed to check whether I actually understood it. A good lesson to remember: if you can ignore a detail for a bit and keep going, prioritize getting something working, then transform it into something you understand more thoroughly.
I learned more in the last week, but this post is getting quite long. I hope to write a post this week about changing from drawing individual tiles to using a single triangle strip for the whole map.
In the coming week my goal is to get mouse clicks interacting with the map. This involves figuring out which tile the mouse has clicked, which I’ve learned isn’t trivial. In parallel I’ll be developing the first set of tiles using Pyxel Edit and hopefully integrating them into the game. Then my map will become richer than just some flat-colored tiles.
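In the simplest case, picking a tile reduces to undoing the camera transform. A minimal sketch, under the assumptions (not from the post) that the camera is a plain pixel offset and tiles are axis-aligned squares:

```rust
// Hypothetical sketch of screen-to-tile picking: translate the cursor
// into world space, then divide by the tile size and round down.
fn tile_under_cursor(mouse: (f64, f64), camera: (f64, f64), tile_size: f64) -> (i64, i64) {
    let world_x = mouse.0 + camera.0;
    let world_y = mouse.1 + camera.1;
    (
        (world_x / tile_size).floor() as i64,
        (world_y / tile_size).floor() as i64,
    )
}

fn main() {
    // With 32-pixel tiles and no camera offset, a click at (35, 70)
    // lands on tile (1, 2).
    println!("{:?}", tile_under_cursor((35.0, 70.0), (0.0, 0.0), 32.0));
}
```

The hard part in practice is that the mouse position arrives in window coordinates while the tiles live behind a projection, so the real version has to invert whatever projection and camera matrices are in play, which is where it stops being trivial.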
Here is a screenshot of the game so far, for posterity’s sake. It shows 750×750 tiles with a deterministic weighted distribution between grass, water, and dirt:
In the last week, we landed 68 PRs in the Servo organization’s repositories.
Planning and Status
Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown on the roadmap page.
This week’s status updates are here.
Special thanks to canaltinova for their work on implementing the matrix interpolation algorithms for CSS3 transform animation. This allows (both 2D and 3D) rotate(), perspective() and matrix() functions to be interpolated, as well as interpolations between arbitrary transformations, though the last bit is yet to be implemented. In the process of implementation, we had to deal with many spec bugs, as well as implementation bugs in other browsers, which complicated things immensely – it’s very hard to tell whether your code has a mistake or the spec itself is wrong in complicated algorithms like these. Great work, canaltinova!
Notable Additions
- glennw added support for scrollbars
- canaltinova implemented the matrix decomposition/interpolation algorithm
- nox landed a rustup to the 9/14 rustc nightly
- ejpbruel added a websocket server for use in the remote debugging protocol
- creativcoder implemented the postMessage() API for ServiceWorkers
- ConnorGBrewster made Servo recycle session entries when reloading
- mrobinson added support for transforming rounded rectangles
- glennw improved webrender startup times by making shaders compile lazily
- canaltinova fixed a bug where we don’t normalize the axis of rotate() CSS transforms
- peterjoel added the DOMMatrix and DOMMatrixReadOnly interfaces
- Ms2ger corrected an unsound optimization in event dispatch
- tizianasellitto made DOMTokenList iterable
- aneeshusa excised SubpageId from the codebase, using PipelineId instead
- gilbertw1 made the HTTP authentication cache use origins instead of full URLs
- jmr0 fixed the event suppression logic for pages that have navigated
- zakorgy updated some WebBluetooth APIs to match new specification changes
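The core idea behind the matrix decomposition/interpolation work above can be sketched in a few lines. This is a simplified illustration, not Servo's implementation: it handles only 2D matrices and ignores skew, which the real CSS algorithm must also decompose:

```rust
// Hedged sketch of 2D transform interpolation: decompose each matrix
// [a, b, c, d, e, f] into translation, rotation, and scale, linearly
// interpolate each component, then recompose. Skew is ignored here.
fn decompose(m: [f64; 6]) -> ([f64; 2], f64, [f64; 2]) {
    let [a, b, c, d, e, f] = m;
    let sx = (a * a + b * b).sqrt();
    let rotation = b.atan2(a);
    // determinant divided by sx gives the y scale (the sign handles flips)
    let sy = (a * d - b * c) / sx;
    ([e, f], rotation, [sx, sy])
}

fn lerp(a: f64, b: f64, t: f64) -> f64 {
    a + (b - a) * t
}

fn interpolate(m1: [f64; 6], m2: [f64; 6], t: f64) -> [f64; 6] {
    let (t1, r1, s1) = decompose(m1);
    let (t2, r2, s2) = decompose(m2);
    let (tx, ty) = (lerp(t1[0], t2[0], t), lerp(t1[1], t2[1], t));
    let r = lerp(r1, r2, t);
    let (sx, sy) = (lerp(s1[0], s2[0], t), lerp(s1[1], s2[1], t));
    // recompose: scale, then rotate, then translate
    let (cos, sin) = (r.cos(), r.sin());
    [cos * sx, sin * sx, -sin * sy, cos * sy, tx, ty]
}

fn main() {
    let identity = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0];
    let rot90 = [0.0, 1.0, -1.0, 0.0, 0.0, 0.0];
    // Halfway between identity and a 90-degree rotation is a 45-degree rotation.
    println!("{:?}", interpolate(identity, rot90, 0.5));
}
```

Interpolating the decomposed components rather than the raw matrix entries is what keeps a rotation animation rotating instead of collapsing through a degenerate matrix midway.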
Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!
Screenshot
Some screencasts of matrix interpolation at work:
This one shows all the basic transformations together (running a tweaked version of this page). The 3D rotate, perspective, and matrix transformations were enabled by the recent change.
Servo’s new scrollbars!