Mozilla Nederland
The Dutch Mozilla community

Ben Kelly: That Event Is So Fetch

Mozilla planet - mo, 23/02/2015 - 16:00

The Service Workers builds have been updated as of yesterday, February 22:

Firefox Service Worker Builds

Notable contributions this week were:

  • Josh Matthews landed Fetch Event support in Nightly. This is important, of course, because without the Fetch Event you cannot actually intercept any network requests with your Service Worker (see the sketch after this list). | bug 1065216
  • Catalin Badea landed more of the Service Worker API in Nightly, including the ability to communicate with the Service Worker using postMessage(). | bug 982726
  • Nikhil Marathe landed some more of his spec implementations to handle unloading documents correctly and to treat activations atomically. | bug 1041340 | bug 1130065
  • Andrea Marchesini landed fixes for FirefoxOS discovered by the team in Paris. | bug 1133242
  • Jose Antonio Olivera Ortega contributed a work-in-progress patch to force Service Worker scripts to update when dom.serviceWorkers.test.enabled is set. | bug 1134329
  • I landed my implementation of the Fetch Request and Response clone() methods. | bug 1073231
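
Taken together, these pieces enable the canonical Service Worker pattern: intercept a request with the fetch event, inspect a cloned copy of the Response, and receive messages from pages via postMessage. A minimal sketch using the current API shape (the file name sw.js and the logging are illustrative, not from these builds):

// Register the worker from a page:
//   navigator.serviceWorker.register("/sw.js");

// sw.js: runs in the ServiceWorkerGlobalScope.
self.addEventListener("fetch", function (event) {
  // Intercept the network request and answer it ourselves.
  event.respondWith(
    fetch(event.request).then(function (response) {
      // clone() gives us a second readable copy of the body,
      // so we can inspect it and still return the original.
      response.clone().text().then(function (body) {
        console.log(event.request.url + " -> " + body.length + " chars");
      });
      return response;
    })
  );
});

self.addEventListener("message", function (event) {
  // Pages reach the worker with postMessage(...).
  console.log("page says: " + event.data);
});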

As always, please let us know if you run into any problems. Thank you for testing!

Categorieën: Mozilla-nl planet

Mozilla Release Management Team: Firefox 36 beta10 to rc

Mozilla planet - mo, 23/02/2015 - 15:52

For the RC build, we landed a few last-minute changes. We disabled <meta referrer> because of a last-minute issue, we landed a compatibility fix for add-ons and, last but not least, some graphics crash fixes.

Note that an RC2 has been built from the same source code in order to tackle the AMD CPU bug (see comment #22).

  • 11 changesets
  • 32 files changed
  • 220 insertions
  • 48 deletions

Extensions (occurrences): cpp (6), js (5), jsm (2), ini (2), xml (1), sh (1), json (1), hgtags (1), h (1)

Modules (occurrences): mobile (12), gfx (5), browser (5), toolkit (3), dom (2), testing (1), parser (1)

List of changesets:

  • Robert Strong: Bug 945192 - Followup to support Older SDKs in loaddlls.cpp. r=bbondy a=Sylvestre - cce919848572
  • Armen Zambrano Gasparnian: Bug 1110286 - Pin mozharness to 2264bffd89ca instead of production. r=rail a=testing - 948a2c2e31d4
  • Jared Wein: Bug 1115227 - Loop: Add part of the UITour PageID to the Hello tour URLs as a query parameter. r=MattN, a=sledru - 1a2baaf50371
  • Boris Zbarsky: Bug 1134606 - Disable <meta referrer> in Firefox 36 pending some loose ends being sorted out. r=sstamm, a=sledru - 521cf86d194b
  • Milan Sreckovic: Bug 1126918 - NewShSurfaceHandle can return null. Guard against it. r=jgilbert, a=sledru - 89cfa8ff9fc5
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - 2f2abd6ffebb
  • Matt Woodrow: Bug 1127925 - Lazily open shared handles in DXGITextureHostD3D11 to avoid holding references to textures that might not be used. r=jrmuizel, a=sledru - 47ec64cc562f
  • Rob Wu: Bug 1128478 - sdk/panel's show/hide events not emitted if contentScriptWhen != 'ready'. r=erikvold, a=sledru - c2a6bab25617
  • Matt Woodrow: Bug 1128170 - Use UniquePtr for TextureClient KeepAlive objects to make sure we don't leak them. r=jrmuizel, a=sledru - 67d9db36737e
  • Hector Zhao: Bug 1129287 - Fix not rejecting partial name matches for plugin blocklist entries. r=gfritzsche, a=sledru - 7d4016a05dd3
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - a2ffa9047bf4

Categorieën: Mozilla-nl planet

Adam Lofting: The week ahead 23 Feb 2015

Mozilla planet - mo, 23/02/2015 - 12:40

First, I’ll note that even taking the time to write these short ‘note to self’ type blog posts each week is harder to do than I expected. Like so many priorities, the long-term important things often battle with the short-term urgent things. And that’s in a culture where working open is more than just acceptable; it’s encouraged.

Anyway, I have some time this morning sitting in an airport to write this, and I have some time on a plane to catch up on some other reading and writing that hasn’t made it to the top of the todo list for a few weeks. I may even get to post a blog post or two in the near future.

This week, I have face-to-face time with lots of colleagues in Toronto. Which means a combination of planning, meetings, running some training sessions, and working on tasks where timezone parity is helpful. It’s also the design team work week, and though I’m too far gone from design work to contribute anything pretty, I’m looking forward to seeing their work and getting glimpses of the future Webmaker product. Most importantly maybe, for a week like this, I expect unexpected opportunities to arise.

One of my objectives this week is working with Ops to decide where my time is best spent this year to have the most impact, and to set my goals for the year. That will get us closer to a metrics strategy this year, to improve on last year’s ‘reactive’ style of work.

If you’re following along for the exciting stories of my shed>to>office upgrades: I don’t have much to show today, but I’m building a new desk next, and insulation is my new favourite thing. This photo shows the visible difference in heat loss after fitting my first square of insulation material to the roof.

Categorieën: Mozilla-nl planet

Mozilla considers blacklist for Superfish certificate - Security.nl

Nieuws verzameld via Google - mo, 23/02/2015 - 12:04

Mozilla considers blacklist for Superfish certificate
Security.nl
Mozilla is considering putting the Superfish certificate that was installed on Lenovo laptops on a blacklist. That much is apparent from a discussion on Mozilla's Bugzilla, where developers discuss problems and bugs in Mozilla software. ...

Categorieën: Mozilla-nl planet

The Mozilla Blog: MWC 2015: Experience the Latest Firefox OS Devices, Discover what Mozilla is Working on Next

Mozilla planet - mo, 23/02/2015 - 11:14

Preview TVs and phones powered by Firefox OS and demos such as an NFC payment prototype at the Mozilla booth. Hear Mozilla speakers discuss privacy, innovation for inclusion, and the future of the internet.

Panasonic unveiled their new line of 4K Ultra HD TVs powered by Firefox OS at their convention in Frankfurt today. The Panasonic 2015 4k UHD (Ultra HD) LED VIERA TV, which will be shipping this spring, will also be showcased at Mozilla’s stand at Mobile World Congress 2015 in Barcelona. Like last year, Firefox OS will take its place in Hall 3, Stand 3C30, alongside major operators and device manufacturers.

Mozilla’s stand at Mobile World Congress 2015, Hall 3, Stand 3C30

In addition to the Panasonic TV and the latest Firefox OS smartphones announced, visitors have the opportunity to learn more about Mozilla’s innovation projects during talks at the “Fox Den” at Mozilla’s stand, Hall 3, Stand 3C30. Just one example from the demo program:

Mozilla, in collaboration with its partners at Deutsche Telekom Innovation Labs (centers in Silicon Valley and Berlin) and T-Mobile Poland, developed the design and implementation of Firefox OS’s NFC infrastructure to enable several applications including mobile payments, transportation services, door access and media sharing. The mobile wallet demo, covering ‘MasterCard® Contactless’ technology together with a few non-payment functionalities, will be showcased in “Fox Den” talks.

Visit www.firefoxos.com/mwc for the full list of topics and schedule of “Fox Den” talks.

How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

Schedule of Events and Speaking Appearances

Hear from Mozilla executives on trending topics in mobile at the following sessions:

‘Digital Inclusion: Connecting an additional one billion people to the mobile internet’ Seminar

Executive Director of the Mozilla Foundation Mark Surman will join a seminar that will explore the barriers and opportunities relating to the growth of mobile connectivity in developing markets, particularly in rural areas.
Date: Monday 2 March 12:00 – 13:30 CET
Location: GSMA Seminar Theatre CC1.1

‘Ensuring User-Centred Privacy in a Connected World’ Panel

Denelle Dixon-Thayer, SVP of business and legal affairs at Mozilla, will take part in a session that explores user-centric privacy in a connected world.
Date: Monday, 2 March 16:00 – 17:30 CET
Location: Hall 4, Auditorium 3

‘Innovation for Inclusion’ Keynote Panel

Mozilla Executive Chairwoman and Co-Founder Mitchell Baker will discuss how mobile will continue to empower individuals and societies.
Date: Tuesday, 3 March 11:15 – 12:45 CET
Location: Hall 4, Auditorium 1 (Main Conference Hall)

‘Connected Citizens, Managing Crisis’ Panel

Mark Surman, Executive Director of the Mozilla Foundation, will contribute to a panel on how mobile technology is playing an increasingly central role in shaping responses to some of the most critical humanitarian problems facing the global community today.
Date: Tuesday, 3 March 14:00 – 15:30 CET
Location: Hall 4, Auditorium 2

‘Defining the Future of the Internet’ Panel

Andreas Gal, CTO at Mozilla, will take part in a session that explores the future of the Internet, bringing together industry leaders at the forefront of the net neutrality debate.
Date: Wednesday, 4 March 15:15 – 16:15 CET
Location: Hall 4, Auditorium 5

More information:

  • Please visit Mozilla and experience Firefox OS in Hall 3, Stand 3C30, at the Fira Gran Via, Barcelona from March 2-5, 2015
  • To learn more about Mozilla at MWC, please visit: www.firefoxos.com/mwc
  • For further details or to schedule a meeting at the show please contact press@mozilla.com
  • For additional resources, such as high-resolution images and b-roll video, visit: https://blog.mozilla.org/press
Categorieën: Mozilla-nl planet

Nick Thomas: FileMerge bug

Mozilla planet - mo, 23/02/2015 - 10:23

FileMerge is a nice diff and merge tool for OS X, and I use it a lot for larger code reviews where lots of context is helpful. It also supports intra-line diff, which comes in pretty handy.

filemerge screenshot

However, in recent releases (at least in v2.8, which comes as part of Xcode 6.1), it assumes you want to be merging and shows that bottom pane. Adjusting it away doesn’t persist to the next time you use it, *gnash gnash gnash*.

The solution is to open a terminal and offer this incantation:

defaults write com.apple.FileMerge MergeHeight 0

Unfortunately, if you use the merge pane then you’ll have to do that again. Dear Apple, pls fix!
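
If you ever want the merge pane back, deleting the override should restore the default behavior. That is my assumption from how defaults works in general, not something confirmed against FileMerge:

defaults delete com.apple.FileMerge MergeHeight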

Categorieën: Mozilla-nl planet

Mozilla to ditch Adobe Flash - GMA News

Nieuws verzameld via Google - mo, 23/02/2015 - 09:54

Mozilla to ditch Adobe Flash
GMA News
Adobe's ubiquitous but oft-targeted Flash Player is about to lose support from yet another Internet entity - Mozilla's popular open-source Firefox browser. Mozilla is now experimenting with Project Shumway, a new technology that plays Flash content ...

Categorieën: Mozilla-nl planet

Ahmed Nefzaoui: A Bit Of Consulting: Entering the RTL Web Market [Part 1]

Mozilla planet - mo, 23/02/2015 - 09:34

As I’m writing this article, we (Mozilla and everyone else working together on Firefox OS) have never been closer to releasing a version of the Firefox Operating System with this much RTL (Right-To-Left UI) support inside. As the person who was/is responsible for most of it, I can tell you it is a pretty damn competitive implementation that *no* one else currently has.

v2.2 is the version we’re talking about here. And as you already know by now (since, you know, I assume you know about Firefox OS if you’re reading this), the OS is web-based, which means it relates a lot to the web; in fact it *is* the web we want!

So I decided to write a little blog post for anyone out there who wants to get started extending their web products/websites/services support to the RTL Market.
For starters, the RTL market is anywhere in the world where the native language is written from right to left. This means North Africa, the Middle East and parts of Asia.

RTL Is NOT Translating To Arabic

Localizing for RTL means more than translating your website into Arabic, Farsi, Urdu or Hebrew and calling it a day. It might sound harsh, but here’s my advice: do RTL right or don’t bother. If you half-arse it, it will be obvious, and you will lose money and credibility. Know who you’re talking to and how they use the web, and you’ll be that much closer to a meaningful connection with your users.

RTL Is UI, Patterns And More

So there’s this one time, after a fresh Ubuntu install, when I used Gnome instead of Unity. While personalizing it, I chose “Arabic (Tunisia)” as the time & calendar setting, and suddenly 4:45 PM became literally PM 45:4.
Seriously, WTF ^^ I can’t read this even though I’m a native Arabic speaker (so a native RTL user). The time pattern is flipped and so on, but it’s still WRONG; we don’t read time that way.
So, moral of the story: RTL is not about flipping the UI wherever you see a horizontal list of items near each other.

So I ended up switching to English (UK) for my time and date format. And again, if you do RTL wrong it will be obvious and people will not use it.
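
The web platform can actually get this right for you: a locale-aware formatter keeps both the digits and the field order correct instead of naively mirroring the string. A minimal sketch using the Intl API (locale choices illustrative; exact output varies by implementation):

// Format the same moment for Arabic (Tunisia) and English (UK).
var when = new Date(2015, 1, 23, 16, 45);
var ar = new Intl.DateTimeFormat("ar-TN", { hour: "numeric", minute: "numeric" });
var en = new Intl.DateTimeFormat("en-GB", { hour: "numeric", minute: "numeric" });
console.log(ar.format(when)); // something like "4:45 م", never "PM 45:4"
console.log(en.format(when)); // "16:45"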

Back to the topic: in RTL there are exceptions that you should know about before kicking off. Don’t be shy to ask the community, especially the open source community; basically any native speaker can give you valid advice about most of the UI.

Feel free to drop in any questions you’ve got :)

Categorieën: Mozilla-nl planet

Daniel Stenberg: Bug finding is slow in spite of many eyeballs

Mozilla planet - mo, 23/02/2015 - 07:39
“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast, and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way, if we just fund the eyeballs to watch. I don’t think that’s how the saying was originally intended.)

Because in reality, many many bugs are never really found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years it doesn’t necessarily mean that any of all the people who casually read the code or single-stepped over it will actually ever discover the flaws in the logic. The last few years several world-shaking bugs turned out to have existed for decades until discovered. In code that had been read by lots of people – over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are probably the most important bugs we’ve had, but also because those are the ones we have the most carefully noted meta-data for, like when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far throughout our history, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they survived about 30 releases each. If bugs truly are shallow, this is still certainly not a fast process.

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters that would explain the time they took to get found? Nope, I would say that every single one of them is pretty obvious once you spot it, and none of them takes very long for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs are around longer before they are found, but since the project is also growing older all the time it sort of comes naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of flaws is aging slower than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in the number of lines of code, and yet the reports were submitted like this (now it goes from oldest on the left to most recent on the right – click for the large version):

(chart: security reports per year)

Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013, and we got half of all our reports during the last two years. Clearly we have gotten more eyes on the code, or perhaps users pay more attention to problems or are generally more likely to see the security angle of problems? It is hard to say, but clearly the frequency of security reports has increased a lot lately. (Note that I here count the report year, not the year we announced the particular problems, as announcements sometimes happened the following year if a report arrived late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; during the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked them with a bunch of descriptive keywords for each flaw, and then I wrote up a script to see how frequently the keywords are used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords ended up like this, showing number of flaws and the keyword:

9 TLS
9 HTTP
8 cert-check
8 buffer-overflow

6 info-leak
3 URL-parsing
3 openssl
3 NTLM
3 http-headers
3 cookie

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, and HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a mistake that sneaks in, but I don’t think 27% of the problems tells us that this is a problem we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.
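
His script isn't shown in the post, but a tally like that is only a few lines. A minimal sketch, assuming each flaw is annotated with a list of keywords (the ids and annotations here are illustrative):

var flaws = [
  { id: "flaw-1", keywords: ["TLS", "cert-check"] },
  { id: "flaw-2", keywords: ["HTTP", "buffer-overflow"] },
  // ... one entry per logged vulnerability
];

// Count how many flaws each keyword appears on.
var counts = {};
flaws.forEach(function (flaw) {
  flaw.keywords.forEach(function (kw) {
    counts[kw] = (counts[kw] || 0) + 1;
  });
});

// Print "count keyword" lines, most frequent first.
Object.keys(counts)
  .sort(function (a, b) { return counts[b] - counts[a]; })
  .forEach(function (kw) { console.log(counts[kw] + " " + kw); });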

The discussion following this blog post is on hacker news.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 71

Mozilla planet - mo, 23/02/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

The big news

Rust 1.0.0-alpha.2 was released on Friday, but keep using nightlies. Six more weeks until the beta, which should become 1.0. Only six more weeks.

What's cooking on master?

157 pull requests were merged in the last week, and 15 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • Adam Jacob
  • Alexander Bliskovsky
  • Brian Brooks
  • caipre
  • Darrell Hamilton
  • Dave Huseby
  • Denis Defreyne
  • Elantsev Serj
  • Henrik Schopmans
  • Ingo Blechschmidt
  • Jormundir
  • Lai Jiangshan
  • posixphreak
  • Ryan Riginding
  • Wesley Wiser
  • Will
  • wonyong kim
Approved RFCs

This covers two weeks, since last week I wasn't able to review RFCs in time.

New RFCs

Friend of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'friends of the tree'.

This week's friend of the tree was ... Toby Scrace.

"Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could log in to whomever the previously logged in user was regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree."

Quote of the Week

<Manishearth> In other news, I have r+ on rust now :D
<Ms2ger> No good deed goes unpunished

From #servo. Thanks to SimonSapin for the tip.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Categorieën: Mozilla-nl planet

Mozilla mulls Superfish torpedo - The Register

Nieuws verzameld via Google - mo, 23/02/2015 - 03:30

The Register

Mozilla mulls Superfish torpedo
The Register
Mozilla may neuter the likes of Superfish by blacklisting dangerous root certificates revealed less than a week ago to be used in Lenovo laptops. The move will be another blow against Superfish, which is under a sustained barrage of criticism for its ...
Lenovo Releases 'Crapware'; Tool To Remove 'Superfish' Hidden Adware - Frontline Desk
Lenovo releases tool to remove Superfish 'crapware' - The Next Digit

Categorieën: Mozilla-nl planet

Nick Desaulniers: Hidden in Plain Sight - Public Key Crypto

Mozilla planet - snein, 22/02/2015 - 20:48

How is it possible for us to communicate securely when there’s the possibility of a third party eavesdropping on us? How can we communicate private secrets through public channels? How do such techniques enable us to bank online and carry out other sensitive transactions on the Internet while trusting numerous relays? In this post, I hope to explain public key cryptography, with actual code examples, so that the concepts are a little more concrete.

First, please check out this excellent video on public key crypto:

Hopefully that explains the gist of the technique, but what might it actually look like in code? Let’s take a look at example code in JavaScript using the Node.js crypto module. We’ll later compare the upcoming WebCrypto API and look at a TLS handshake.

Meet Alice. Meet Bob. Meet Eve. Alice would like to send Bob a secret message. Alice would not like Eve to view the message. Assume Eve can intercept, but not tamper with, everything Alice and Bob try to share with each other.

Alice chooses a modular exponential key group, such as modp14, then creates a public and private key pair.

var crypto = require("crypto");

var group = "modp14";
var aliceDH = crypto.getDiffieHellman(group);
aliceDH.generateKeys();

A modular exponential key group is simply a “sufficiently large” prime number, paired with a generator (specific number), such as those defined in RFC2412 and RFC3526.
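
If you are curious what those parameters actually are, the DiffieHellman object can show you; a quick sketch (the prime is large, so the output is long):

console.log(aliceDH.getPrime("hex"));     // the large prime, p
console.log(aliceDH.getGenerator("hex")); // the generator, g ("02" for modp14)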

The public key is meant to be shared; it is ok for Eve to know the public key. The private key must not ever be shared, not even with the person you are communicating with.

Alice then shares her public key and group with Bob.

Public Key: <Buffer 96 33 c5 9e b9 07 3e f2 ec 56 6d f4 1a b4 f8 4c 77 e6 5f a0 93 cf 32 d3 22 42 c8 b4 7b 2b 1f a9 55 86 05 a4 60 17 ae f9 ee bf b3 c9 05 a9 31 31 94 0f ... >
Group: modp14

Bob now creates a public and private key pair with the same group as Alice.

var bobDH = crypto.getDiffieHellman(group);
bobDH.generateKeys();

Bob shares his public key with Alice.

Public key: <Buffer ee d7 e2 00 e5 82 11 eb 67 ab 50 20 30 81 b1 74 7a 51 0d 7e 2a de b7 df db cf ac 57 de a4 f0 bd bc b5 7e ea df b0 3b c3 3a e2 fa 0e ed 22 90 31 01 67 ... >

Alice and Bob now compute a shared secret.

var aliceSecret = aliceDH.computeSecret(bobDH.getPublicKey(), null, "hex");
var bobSecret = bobDH.computeSecret(aliceDH.getPublicKey(), null, "hex");

Alice and Bob have now derived a shared secret from each others’ public keys.

aliceSecret === bobSecret; // => true

Meanwhile, Eve has intercepted Alice and Bob’s public keys and group. Eve tries to compute the same secret.

var eveDH = crypto.getDiffieHellman(group);
eveDH.generateKeys();

var eveSecret = eveDH.computeSecret(aliceDH.getPublicKey(), null, "hex");
eveSecret === aliceSecret; // => false

This is because Alice’s secret is derived from Alice and Bob’s private keys, which Eve does not have. Eve may not realize her secret is not the same as Alice and Bob’s until later.

That was asymmetric crypto: each party used different keys. The shared secret may now be used in symmetric encryption, where both parties use the same key.

Alice creates a symmetric block cypher using her favorite algorithm, a hash of their secret as a key, and random bytes as an initialization vector.

var alg = "aes-256-ctr";
var hash = "sha256";
var aliceIV = crypto.randomBytes(16);
var aliceHashedSecret = crypto.createHash(hash).update(aliceSecret).digest();
var aliceCypher = crypto.createCipheriv(alg, aliceHashedSecret, aliceIV);

Alice then uses her cypher to encrypt her message to Bob.

var cypherText = aliceCypher.update("I love you");

Alice then sends the cypher text, cypher, hash, and initialization vector to Bob.

cypherText: <Buffer bd 29 96 83 fa a8 7d 9c ea 90 ab>
cypher: aes-256-ctr
hash: sha256
iv: <Buffer ... >

Bob now constructs a symmetric block cypher using the algorithm and initialization vector from Alice, and a hash of their shared secret.

var bobHashedSecret = crypto.createHash(hash).update(bobSecret).digest();
var bobCypher = crypto.createDecipheriv(alg, bobHashedSecret, aliceIV);

Bob now decyphers the encrypted message (cypher text) from Alice.

var plainText = bobCypher.update(cypherText);
console.log(plainText.toString()); // => "I love you"

Eve has intercepted the cypher text, cypher, hash, and initialization vector, and tries to decrypt the message.

var eveHashedSecret = crypto.createHash(hash).update(eveSecret).digest();
var eveCypher = crypto.createDecipheriv(alg, eveHashedSecret, aliceIV);
console.log(eveCypher.update(cypherText).toString()); // => ��_r](�i)

Here’s where Eve realizes her secret is not correct.

This prevents passive eavesdropping, but not active man-in-the-middle (MITM) attacks. For example, how does Alice know that the messages she was supposedly receiving from Bob actually came from Bob, not Eve posing as Bob?

Today, we use a system of certificates to provide authentication. This system certainly has its flaws, but it is what we use today. This is a more advanced topic that won’t be covered here. Trust is a funny thing.

What’s interesting to note is that the prime and generator used to generate Diffie-Hellman public and private keys are referred to by strings naming the corresponding modular exponential key groups, ie “modp14”. WebCrypto’s API instead has you specify the generator and large prime number yourself in a Typed Array. I’m not sure why this is: whether it gives you finer-grained control, or lets you support newer groups before the implementation does. To me, it seems like a source of errors waiting to be made; hopefully someone will make a library to provide these prime/generator pairs.

One issue with my approach is that I assumed that Alice and Bob both had support for the same hashing algorithms, modular exponential key group, and symmetric block cypher. In the real world, this is not always the case. Instead, it is much more common for the client to broadcast publicly all of the algorithms it supports, and for the server to pick one. This list of algorithms is called a “suite,” ie “cypher suite.” I learned this the hard way recently, trying to upgrade the cypher suite on my ssh server and finding out that my client did not support the latest cyphers. In this case, Alice and Bob might not have the same versions of Node.js, which statically link their own versions of OpenSSL. Thus, one should use crypto.getCiphers() and crypto.getHashes() before assuming the party they’re communicating with can do the math to decrypt. We’ll see “cypher suites” come up again in TLS handshakes. The NSA publishes a list of endorsed cryptographic components, for what it’s worth. There are also neat tricks we can do to prevent the message from being decrypted at a later time should the private key be compromised and the encrypted message recorded, called Perfect Forward Secrecy.
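
A minimal sketch of that capability check using the functions just mentioned (the suite object shape is mine, not a Node API):

// What this build of Node/OpenSSL can do:
var myCiphers = crypto.getCiphers(); // e.g. includes "aes-256-ctr"
var myHashes = crypto.getHashes();   // e.g. includes "sha256"

function supportsSuite(suite) {
  return myCiphers.indexOf(suite.cypher) !== -1 &&
         myHashes.indexOf(suite.hash) !== -1;
}

supportsSuite({ cypher: "aes-256-ctr", hash: "sha256" }); // => true on recent builds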

Let’s take a look now at how a browser does a TLS handshake. Here’s a capture from Wireshark of me navigating to https://google.com. First we have a TLSv1.2 Client Hello to start the handshake. Here we can see a list of the cypher suites.

Next is the response from the server, a TLSv1.2 Server Hello. Here you can see the server has picked a cypher to use.

The server then sends its certificate, which contains a copy of its public key.

Now that we’ve agreed on a cypher suite, the client sends its public key. The server sets up a session so that it may abbreviate the handshake in the future. Finally, the client may now start making requests to the server with encrypted application data.
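
You can observe the result of this negotiation from Node as well; a small sketch with the tls module (the host and the sample output are illustrative):

var tls = require("tls");

var socket = tls.connect(443, "google.com", function () {
  // Which cypher suite did the server pick for this session?
  console.log(socket.getCipher());
  // e.g. { name: 'ECDHE-RSA-AES128-GCM-SHA256', version: 'TLSv1/SSLv3' }
  console.log(socket.getPeerCertificate().subject); // the certificate's subject
  socket.end();
});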

For more information on TLS handshakes, you should read Ilya Grigorik’s High Performance Browser Networking book chapter TLS Handshake, Mozilla OpSec’s fantastic wiki, and this excellent Stack Exchange post. As you might imagine, all of these back and forth trips made during the TLS handshake add latency overhead when compared to unencrypted HTTP requests.

I hope this post helped you understand how we can use cryptography to exchange secret information through public channels. This isn’t enough information to implement a perfectly secure system; end to end security means one single mistake can compromise the entire system. Peer review and open source, battle tested implementations go a long way.

A cryptosystem should be secure even if everything about the system, except the
key, is public knowledge.

Kerckhoffs’s principle

I wanted to write this post because I believe abstinence-only crypto education isn’t working, and I can’t stand it when anyone acts like part of a cabal, preaching from their ivory tower to those trying to learn new things. Someone will surely cite Javascript Cryptography Considered Harmful, which, while valid, misses my point of simply trying to show people more concrete basics with code examples. The first crypto system you implement will have its holes, but you can’t go from ignorance of crypto to perfect knowledge without implementing a few imperfect systems. Don’t be afraid to; just don’t start by trying to protect high-value data. Crypto is dangerous, because it can be difficult to impossible to tell when your system fails. Assembly is also akin to juggling knives, but at least you’ll usually segfault if you mess up and program execution will halt.

With upcoming APIs like Service Workers requiring TLS, protocols like HTTP2, pushes for all web traffic to be encrypted, and shitty things governments, politicians, and ISPs do, web developers are going to have to start boning up on their crypto knowledge.

What are your recommendations for correctly learning crypto? Leave me some thoughts in the comments below.

Categorieën: Mozilla-nl planet

Rizky Ariestiyansyah: Github Pages Auto Publication Based on Master Git Commits

Mozilla planet - snein, 22/02/2015 - 19:31

A simple configuration to auto-publish the commits on master to gh-pages: just add a few lines of code (git commands) to the .git/config file. Here is the code to mirror the master branch to gh-pages.
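
A common way to do this is a second push refspec on the remote, so that every push of master also updates gh-pages. A sketch, assuming the remote is named origin (these commands edit .git/config for you):

git config --add remote.origin.push "+refs/heads/master:refs/heads/master"
git config --add remote.origin.push "+refs/heads/master:refs/heads/gh-pages"

# After this, a plain `git push origin` pushes master and
# force-updates gh-pages to match it.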

Categorieën: Mozilla-nl planet

Microsoft Teams Up With Mozilla; IE and Spartan Browsers Getting Firefox ... - Chinatopix

Nieuws verzameld via Google - snein, 22/02/2015 - 14:50

Chinatopix

Microsoft Teams Up With Mozilla; IE and Spartan Browsers Getting Firefox ...
Chinatopix
Mozilla has expressed its excitement that one of its rival browsers has joined forces with them in adopting the asm.js code. Meanwhile, the Chakra team is praising asm.js for greatly improving web program performance. They added this is the reason they ...

Categorieën: Mozilla-nl planet

Nick Fitzgerald: Memory Management In Oxischeme

Mozilla planet - snein, 22/02/2015 - 09:00

I've recently been playing with the Rust programming language, and what better way to learn a language than to implement a second language in the language one wishes to learn?! It almost goes without saying that this second language being implemented should be Scheme. Thus, Oxischeme was born.

Why implement Scheme instead of some other language? Scheme is a dialect of LISP and inherits the simple parenthesized list syntax of its LISP-y origins, called s-expressions. Thus, writing a parser for Scheme syntax is rather easy compared to doing the same for most other languages' syntax. Furthermore, Scheme's semantics are also minimal. It is a small language designed for teaching, and writing a metacircular interpreter (ie a Scheme interpreter written in Scheme itself) takes only a few handfuls of lines of code. Finally, Scheme is a beautiful language: its design is rooted in the elegant λ-calculus.

Scheme is not "close to the metal" and doesn't provide direct access to the machine's hardware. Instead, it provides the illusion of infinite memory and its structures are automatically garbage collected rather than explicitly managed by the programmer. When writing a Scheme implementation in Scheme itself, or any other language with garbage collection, one can piggy-back on top of the host language's garbage collector, and use that to manage the Scheme's structures. This is not the situation I found myself in: Rust is close to the metal, and does not have a runtime with garbage collection built in (although it has some other cool ideas regarding lifetimes, ownership, and when it is safe to deallocate an object). Therefore, I had to implement garbage collection myself.

Faced with this task, I had a decision to make: tracing garbage collection or reference counting?

Reference counting is a technique where we keep track of the number of other things holding references to any given object or resource. When new references are taken, the count is incremented. When a reference is dropped, the count is decremented. When the count reaches zero, the resource is deallocated and it decrements the reference count of any other objects it holds a reference to. Reference counting is great because once an object becomes unreachable, it is reclaimed immediately and doesn't sit around consuming valuable memory space while waiting for the garbage collector to clean it up at some later point in time. Additionally, the reclamation process happens incrementally and program execution doesn't halt while every object in the heap is checked for liveness. On the negative side, reference counting runs into trouble when it encounters cycles. Consider the following situation:

A -> B
^    |
|    v
D <- C

A, B, C, and D form a cycle and all have a reference count of one. Nothing from outside of the cycle holds a reference to any of these objects, so it should be safe to collect them. However, because each reference count will never get decremented to zero, none of these objects will be deallocated. In practice, the programmer must explicitly use (potentially unsafe) weak references, or the runtime must provide a means for detecting and reclaiming cycles. The former defeats the general purpose, don't-worry-about-it style of managed memory environments. The latter is equivalent to implementing a tracing collector in addition to the existing reference counting memory management.

Tracing garbage collectors start from a set of roots and recursively traverse object references to discover the set of live objects in the heap graph. Any object that is not an element of the live set cannot be used again in the future, because the program has no way to refer to that object. Therefore, the object is available for reclaiming. This has the advantage of collecting dead cycles, because if the cycle is not reachable from the roots, then it won't be in the live set. The cyclic references don't matter because those edges are never traversed. The disadvantage is that, without a lot of hard work, when the collector is doing its bookkeeping, the program is halted until the collector is finished analyzing the whole heap. This can result in long, unpredictable GC pauses.

Reference counting is to tracing as yin is to yang. The former operates on dead, unreachable objects while the latter operates on live, reachable things. Fun fact: every high performance GC algorithm (such as generational GC or reference counting with "purple" nodes and trial deletion) uses a mixture of both, whether it appears so on the surface or not. "A Unified Theory of Garbage Collection" by Bacon, et al. discusses this in depth.

I opted to implement a tracing garbage collector for Oxischeme. In particular, I implemented one of the simplest GC algorithms: stop-the-world mark-and-sweep. The steps are as follows:

  1. Stop the Scheme program execution.
  2. Mark phase. Trace the live heap starting from the roots and add every reachable object to the marked set.
  3. Sweep phase. Iterate over each object x in the heap:
    • If x is an element of the marked set, continue.
    • If x is not an element of the marked set, reclaim it.
  4. Resume execution of the Scheme program.

Because the garbage collector needs to trace the complete heap graph, any structure that holds references to a garbage collected type must participate in garbage collection by tracing the GC things it is holding alive. In Oxischeme, this is implemented with the oxischeme::heap::Trace trait, whose implementation requires a trace function that returns an iterable of GcThings:

pub trait Trace {
    /// Return an iterable of all of the GC things referenced by
    /// this structure.
    fn trace(&self) -> IterGcThing;
}

Note that Oxischeme separates tracing (generic heap graph traversal) from marking (adding live nodes in the heap graph to a set). This enables using Trace to implement other graph algorithms on top of the heap graph. Examples include computing dominator trees and retained sizes of objects, or finding the set of retaining paths of an object that you expected to be reclaimed by the collector, but hasn't been.

If we were to introduce a Trio type that contained three cons cells, we would implement tracing like this:

struct Trio {
    first: ConsPtr,
    second: ConsPtr,
    third: ConsPtr,
}

impl Trace for Trio {
    fn trace(&self) -> IterGcThing {
        let refs = vec!(GcThing::from_cons_ptr(self.first),
                        GcThing::from_cons_ptr(self.second),
                        GcThing::from_cons_ptr(self.third));
        refs.into_iter()
    }
}

What causes a garbage collection? As we allocate GC things, GC pressure increases. Once that pressure crosses a threshold — BAM! — a collection is triggered. Oxischeme's pressure application and threshold are very naive at the moment: every N allocations a collection is triggered, regardless of size of the heap or size of individual allocations.

A root is an object in the heap graph that is known to be live and reachable. When marking, we start tracing from the set of roots. For example, in Oxischeme, the global environment is a GC root.

In addition to permanent GC roots, like the global environment, sometimes it is necessary to temporarily root GC things referenced by pointers on the stack. Garbage collection can be triggered by any allocation, and it isn't always clear which Rust functions (or other functions called by those functions, or even other functions called by those functions called from the first function, and so on) might allocate a GC thing, triggering collection. The situation we want to avoid is a Rust function using a temporary variable that references a GC thing, then calling another function which triggers a collection and collects the GC thing that was referred to by the temporary variable. That results in the temporary variable becoming a dangling pointer. If the Rust function accesses it again, that is Undefined Behavior: it might still get the value it was pointing at, or it might be a segfault, or it might be a freshly allocated value being used by something else! Not good!

let a = pointer_to_some_gc_thing;
function_which_can_trigger_gc();
// Oops! A collection was triggered and dereferencing this
// pointer leads to Undefined Behavior!
*a;

There are two possible solutions to this problem. The first is conservative garbage collection, where we walk the native stack and, if any value on the stack looks like it might be a pointer and, when coerced to a pointer, happens to point to a GC thing in the heap, we assume that it is in fact a pointer. Under this assumption, it isn't safe to reclaim the object pointed to, and so we treat that GC thing as a root. Note that this strategy is simple and easy to retrofit because it doesn't involve changes in any code other than adding the stack scanning, but it results in false positives. The second solution is precise rooting. With precise rooting, it is the responsibility of the Rust function's author to explicitly root and unroot pointers to GC things used in variables on the stack. The advantage this provides is that there are no false positives: you know exactly which stack values are pointers to GC things. The disadvantage is the requirement of explicitly telling the GC about every pointer to a GC thing you ever reference on the stack.

Almost every modern, high performance tracing collector for managed runtimes uses precise rooting because it is a prerequisite* of a moving collector: a GC that relocates objects while performing collection. Moving collectors are desirable because they can compact the heap, creating a smaller memory footprint for programs and better cache locality. They can also implement pointer bumping allocation, that is both simpler and faster than maintaining a free list. Finally, they can split the heap into generations. Generational GCs gain performance wins from the empirical observation that most allocations are short lived, and those objects that are most recently allocated are most likely to be garbage, so we can focus the GC's efforts on them to get the most bang for our buck. Precise rooting is a requirement for a moving collector because it has to update all references to point to the new address of each moved GC thing. A conservative collector doesn't know for sure if a given value on the stack is a reference to a GC thing or not, and if the value just so happens not to be a reference to a GC thing (it is a false positive), and the collector "helpfully" updates that value to the moved address, then the collector is introducing migraine-inducing bugs into the program execution.

* Technically, there do exist some moving and generational collectors that are "mostly copying" and conservatively mark the stack but precisely mark the heap. These collectors only move objects which are not conservatively reachable.

Oxischeme uses precise rooting, but is not a moving GC (yet). Precise rooting is implemented with the oxischeme::heap::Rooted<T> smart pointer RAII type, which roots its referent upon construction and unroots it when the smart pointer goes out of scope and is dropped.

Using precise rooting and Rooted, we can solve the dangling stack pointer problem like this:

{
    // The pointed to GC thing gets rooted when wrapped
    // with `Rooted`.
    let a = Rooted::new(heap, pointer_to_some_gc_thing);
    function_which_can_trigger_gc();
    // Dereferencing `a` is now safe, because the referent is
    // a GC root, and can't be collected!
    *a;
}
// `a` is now out of scope, and its referent is unrooted.

With all of that out of the way, here is the implementation of our mark-and-sweep collector:

impl Heap {
    pub fn collect_garbage(&mut self) {
        self.reset_gc_pressure();

        // First, trace the heap graph and mark everything that
        // is reachable.
        let mut pending_trace = self.get_roots();
        while !pending_trace.is_empty() {
            let mut newly_pending_trace = vec!();
            for thing in pending_trace.drain() {
                if !thing.is_marked() {
                    thing.mark();
                    for referent in thing.trace() {
                        newly_pending_trace.push(referent);
                    }
                }
            }
            pending_trace.append(&mut newly_pending_trace);
        }

        // Second, sweep each `ArenaSet`.
        self.strings.sweep();
        self.activations.sweep();
        self.cons_cells.sweep();
        self.procedures.sweep();
    }
}

Why do we have four calls to sweep, one for each type that Oxischeme implements? To explain this, first we need to understand Oxischeme's allocation strategy.

Oxischeme does not allocate each individual object directly from the operating system. In fact, most Scheme "allocations" do not actually perform any allocation from the operating system (eg, call malloc or Box::new). Oxischeme uses a set of oxischeme::heap::Arenas, each of which have a preallocated object pool with each item in the pool either being used by live GC things, or waiting to be used in a future allocation. We keep track of an Arena's available objects with a "free list" of indices into its pool.

type FreeList = Vec<usize>;

/// An arena from which to allocate `T` objects from.
pub struct Arena<T> {
    pool: Vec<T>,

    /// The set of free indices into `pool` that are available
    /// for allocating an object from.
    free: FreeList,

    /// During a GC, if the nth bit of `marked` is set, that
    /// means that the nth object in `pool` has been marked as
    /// reachable.
    marked: Bitv,
}

When the Scheme program allocates a new object, we remove the first entry from the free list and return a pointer to the object at that entry’s index in the object pool. If every Arena is at capacity (ie, its free list is empty), a new Arena is allocated from the operating system and its object pool is used for the requested Scheme allocation.

impl<T: Default> ArenaSet<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        for arena in self.arenas.iter_mut() {
            if !arena.is_full() {
                return arena.allocate();
            }
        }

        let mut new_arena = Arena::new(self.capacity);
        let result = new_arena.allocate();
        self.arenas.push(new_arena);
        result
    }
}

impl<T: Default> Arena<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        match self.free.pop() {
            Some(idx) => {
                let self_ptr : *mut Arena<T> = self;
                ArenaPtr::new(self_ptr, idx)
            },
            None => panic!("Arena is at capacity!"),
        }
    }
}

For simplicity, Oxischeme has separate arenas for separate types of objects. This sidesteps the problem of finding an appropriately sized free block of memory when allocating different sized objects from the same pool, the fragmentation that can occur because of that, and lets us use a plain old vector as the object pool. However, this also means that we need a separate ArenaSet<T> for each T object that a Scheme program can allocate and why oxischeme::heap::Heap::collect_garbage has four calls to sweep().

During the sweep phase of Oxischeme's garbage collector, we return the entries of any dead object back to the free list. If the Arena is empty (ie, the free list is full) then we return the Arena's memory to the operating system. This prevents retaining the peak amount of memory used for the rest of the program execution.

impl<T: Default> Arena<T> {
    pub fn sweep(&mut self) {
        self.free = range(0, self.capacity())
            .filter(|&n| {
                !self.marked.get(n)
                    .expect("marked should have length == self.capacity()")
            })
            .collect();

        // Reset `marked` to all zero.
        self.marked.set_all();
        self.marked.negate();
    }
}

impl<T: Default> ArenaSet<T> {
    pub fn sweep(&mut self) {
        for arena in self.arenas.iter_mut() {
            arena.sweep();
        }

        // Deallocate any arenas that do not contain any
        // reachable objects.
        self.arenas.retain(|a| !a.is_empty());
    }
}

This concludes our tour of Oxischeme's current implementation of memory management, allocation, and garbage collection for Scheme programs. In the future, I plan to make Oxischeme's collector a moving collector, which will pave the way for a compacting and generational GC. I might also experiment with incrementalizing marking for lower latency and shorter GC pauses, or making sweeping lazy. Additionally, I intend to make the Rust compiler treat operations on un-rooted GC pointers as unsafe, but I haven't settled on an implementation strategy yet. I would also like to experiment with writing a syntax extension for the Rust compiler so that it can derive Trace implementations, and they don't need to be written by hand.

Thanks to Tom Tromey and Zach Carter for reviewing drafts.

Categorieën: Mozilla-nl planet

Tantek Çelik: November Project Book Survey Answers #NP_Book

Mozilla planet - snein, 22/02/2015 - 07:35

The November Project recently wrapped up a survey for a book project. I had the tab open and finally submitted my answers, but figured why not post them on my own site as well. Some of this I've blogged about before, some of it is new.

The basics
Tribe Location
San Francisco
Member Name
Tantek Çelik
Date of Birth
March 11th
Profession
Internet
Date and Location of First NP Workout
2013-10-30 Alamo Square, San Francisco, CA, USA
Contact Info
tantek.com
Pre-NP fitness

Describe your pre-NP fitness background and routine.

  • 2011 started mixed running/jogging/walking every week, short distances 0.5-3 miles.
  • 2008 started bicycling regularly around SF
  • 2007 started rock climbing, eventually 3x a week
  • 1998 started regular yoga and pilates as part of recovering from a back injury
First hear about NP

How did you first hear about the group?

I saw chalkmarks in Golden Gate Park for "NovemberProject 6:30am Kezar!" and thought what the heck is that? 6:30am? Sounds crazy. More: Learning About NP

First NP workout

Recount your first workout, along with the vibe, and how they may have differed from your expectations.

My first NovemberProject workout was a 2013 NPSF PR Wednesday workout, and it was the hardest physical workout I'd ever done. However before it destroyed me, I held my hand up as a newbie, and was warmly welcomed and hugged. My first NP made a strong positive impression. More: My First Year at NP: Newbie

Meeting BG and Bojan

For those who've crossed paths, what was your first impression of BG? Of Bojan?

I first met BG and Bojan at a traverbal Boston destination deck workout. BG and Bojan were (are!) larger than life, with voices to match. Yet their booming matched with self-deprecating humor got everyone laughing and feeling like they belonged.

First Harvard Stadium workout

Boston Only: If you had a particularly memorable newbie meeting and virgin workout at Harvard Stadium, I'd like to know about it for a possible separate section. If so, please describe.

My first Boston Harvard Stadium workout was one to remember. Two days after my traverbal Boston destination deck workout, I joined the newbie orientation since I hadn't done the stadium before. I couldn't believe how many newbies there were. By the time we got to the starting steps I was ready to bolt. I completed 26 sections, far more than I thought I would.

Elevated my fitness

How has NP elevated your fitness level? How have you measured this?

NP has made me a lot faster. After a little over 6 months of NPSF, I cut over 16 minutes in my Bay To Breakers 12km personal record.

Affected personal life

Give an example of how NP has affected your personal life and/or helped you overcome a challenge.

NP turned me from a night person to a morning person, with different activities, and different people. NP inspired me to push myself to overcome my inability to run hills, one house at a time until I could run half a block uphill, then I started running NPSF hills. More: My First Year at NP: Scared of Hills

Impacted relationship with my city

How has NP impacted your relationship with your city?

I would often run into NPSF regulars on my runs to and from the workout, so I teamed up with a couple of them and started an unofficial "rungang". We posted times and corners of our running routes, including to hills. NPSF founder Laura challenged our rungang to run ~4 miles (more than halfway across San Francisco) south to a destination hills workout at Bernal Heights, and a few of us did. After similar pre-workout runs north to the Marina and east to the Embarcadero, I feel like I can confidently run to anywhere in the city, which is an amazing feeling.

Why rapid traction?

Why do you think NP has gained such traction so rapidly?

Two words: community positivity. Yes there's a workout too, but there are lots of workout groups. What makes NP different (beyond that it's free), are the values of community and barrier-breaking positivity that the leaders instill into every single workout. More: My First Year at NP: Positive Community — Just Show Up

Most memorable moment

Describe your most memorable workout or a quintessential NP moment.

Catching the positivity award when it was thrown at me across half of NPSF. (Photo: Tantek holding up the NPSF positivity award, backlit by the rising sun.)

Weirdest thing

Weirdest thing about NP?

That so many people get up before sunrise, never mind in sub-freezing temperatures in many cities, to go to a workout. Describe that to anyone who isn't in NP, and it sounds beyond weird.

NP and regular life

How has NP bled into your "regular" life? (Do you inadvertently go in for a hug when meeting a new client? Do you drop F-bombs at inopportune times? Have you gone from a cubicle brooder to the meeting goofball? Are you kinder to strangers?)

I was already a bit of a hugger, but NP has taught me to better recognize when people might quietly want (or be ok with) a hug, even outside of NP. #huglife

The Positivity Award

If you've ever won the Positivity Award, please describe that moment and what it meant to you.

It's hard to describe. I certainly was not expecting it. I couldn't believe how excited people were that I was getting it. There was a brief moment of fear when Braden tossed it at me over dozens of my friends, all the sound suddenly muted while I watched it flying, hands outstretched. Caught it solidly with both hands, and I could hear again. It was a beautiful day, the sun had just risen, and I could see everyone's smiling faces. More than the award itself, it meant a lot to me to see the joy in people's faces.

Non-NP friends and family

What do your non-NP friends and family think of your involvement?

My family is incredibly supportive and ecstatic with my increased fitness. My non-NP friends are anywhere from curious (at best) to wary or downright worried that it's a cult, which they only half-jokingly admit.

NP in one word

Describe NP in one word.

Community

Additional Thoughts

Additional thoughts? Include them here.

You can follow my additional thoughts on NP, fitness, and other things on my site & blog: tantek.com.

Categorieën: Mozilla-nl planet

Cameron Kaiser: Biggus diskus (plus: 31.5.0 and how to superphish in your copious spare time)

Mozilla planet - snein, 22/02/2015 - 06:33

One of the many great advances that Mac OS 9 had over the later operating system was the extremely flexible (and persistent!) RAM disk feature, which I use on almost all of my OS 9 systems to this day as a cache store for Classilla and as a temporary work area. It's not just for laptops!

While OS X can configure and use RAM disks, of course, it's not as nicely integrated as the RAM Disk in Classic is and it isn't natively persistent, though the very nice Esperance DV prefpane comes pretty close to duplicating the earlier functionality. Esperance will let you create a RAM disk up to 2GB in size, which for most typical uses of a transient RAM disk (cache, scratch volume) would seem to be more than enough, and can back it up to disk when you exit. But there are some heavy duty tasks that 2GB just isn't enough for -- what if you, say, wanted to compile a PowerPC fork of Firefox in one, he asked nonchalantly, picking a purpose at random not at all intended to further this blog post?

The 2GB cap actually originates from two specific technical limitations. The first applies to G3 and G4 systems: they can't have more than 2GB total physical RAM anyway. Although OS X RAM disks are "sparse" and only actually occupy the amount of RAM needed to store their contents, if you filled up a RAM disk with 2GB of data even on a 2GB-equipped MDD G4 you'd start spilling memory pages to the real hard disk and thrashing so badly you'd be worse off than if you had just used the hard disk in the first place. The second limit applies to G5 systems too, even in Leopard -- the RAM disk is served by /System/Library/PrivateFrameworks/DiskImages.framework/Resources/diskimages-helper, a 32-bit process limited to a 4GB address space minus executable code and mapped-in libraries (it didn't become 64-bit until Snow Leopard). In practice this leaves exactly 4629672 512-byte disk blocks, or approximately 2.26GB, as the largest possible standalone RAM disk image on PowerPC. A full single-architecture build of TenFourFox takes about 6.5GB. Poop.

It dawned on me during one of my careful toilet thinking sessions that the way awound, er, around this pwoblem was a speech pathology wefewwal to RAID volumes together. I am chagrined that others had independently come up with this idea before, but let's press on anyway. At this point I'm assuming you're going to do this on a G5, because doing this on a G4 (or, egad, G3) would be absolutely nuts, and that your G5 has at least 8GB of RAM. The performance improvement we can expect depends on how the RAM disk is constructed (10.4 gives me the choices of concatenated, i.e., you move from component volume process to component volume process as they fill up, or striped, i.e., the component volume processes are interleaved [RAID 0]), and how much the tasks being performed on it are limited by disk access time. Building TenFourFox is admittedly a rather CPU-bound task, but there is a non-trivial amount of disk access, so let's see how we go.

Since I need at least 6.5GB, I decided the easiest way to handle this was 4 2+GB images (roughly 8.3GB all told). Obviously, the 8GB of RAM I had in my Quad G5 wasn't going to be enough, so (an order to MemoryX and) a couple days later I had a 16GB memory kit (8 x 2GB) at my doorstep for installation. (As an aside, this means my quad is now pretty much maxed out: between the 16GB of RAM and the Quadro FX 4500, it's now the most powerful configuration of the most powerful Power Mac Apple ever made. That's the same kind of sheer bloodymindedness that puts 256MB of RAM into a Quadra 950.)

Now to configure the RAM disk array. I ripped off a script from someone on Mac OS X Hints and modified it to be somewhat more performant. Here it is (it's a shell script you run in the Terminal, or you could use Platypus or something to make it an app; works on 10.4 and 10.5):


% cat ~/bin/ramdisk
#!/bin/sh

# Bail out if the array is already mounted.
/bin/test -e /Volumes/BigRAM && exit

# Create and format four ~2.2GB RAM disk images in parallel;
# each gets its own backing diskimages-helper process.
diskutil erasevolume HFS+ r1 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r2 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r3 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r4 \
`hdiutil attach -nomount ram://4629672` &
wait

# Aggregate the four volumes into one striped (RAID 0) array.
diskutil createRAID stripe BigRAM HFS+ \
/Volumes/r1 /Volumes/r2 /Volumes/r3 /Volumes/r4

Notice that I'm using stripe here -- you would substitute concat for stripe above if you wanted that mode, but read on first before you do that. Open Disk Utility prior to starting the script and watch the side pane as it runs if you want to understand what it's doing. You'll see the component volume processes start, reconfigure themselves, get aggregated, and then the main array come up. It's sort of a nerdily beautiful disk image ballet.
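Once the array is up, a quick sanity check from the Terminal confirms it mounted at full size -- a minimal sketch, using only the same stock tools the scripts here already rely on:

% df -h /Volumes/BigRAM
% diskutil checkRAID BigRAM

df should report BigRAM at its full aggregated capacity, and checkRAID should list all the component volumes (the same output the unmounter script below parses).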

One complication, however, is that you can't simply unmount the array and expect the component RAM volumes to go away by themselves; instead, you have to go seek and kill the component volumes first, and then the array will go away by itself. If you fail to do that, you'll run out of memory verrrrry quickly because the RAM will not be reclaimed! Here's a script for that too. I haven't tested it on 10.5, but I don't see why it wouldn't work there either.


% cat ~/bin/noramdisk
#!/bin/sh

# Nothing to do if the array isn't mounted.
/bin/test -e /Volumes/BigRAM || exit

# Unmount the array so its components can be modified.
diskutil unmountDisk /Volumes/BigRAM

# Pull the diskn device for each component volume out of the
# checkRAID listing and eject it; when the last component goes,
# OS X removes the array itself.
diskutil checkRAID BigRAM | tail -5 | head -4 | \
cut -c 3-10 | grep -v 'Unknown' | \
sed 's/s3//' | xargs -n 1 diskutil eject

This script needs a little explanation. What it does is unmount the RAM disk array so it can be modified, then goes through the list of its component volumes, isolates the diskn device that backs each one, and ejects those devices. When all the array's components are gone, OS X removes the array, and that's it. Naturally, shutting down or restarting will also wipe the array away.

(If you want to use these scripts for a different sized array, adjust the number of diskutil erasevolume lines in the mounter script, and make sure the last line has the right number of images [like /Volumes/r1 /Volumes/r2 by themselves for a 2-image array]. In the unmounter script, change the tail and head parameters to 1+images and images respectively [e.g., tail -3 | head -2 for a 2-image array].)
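To make that concrete, here is what the mounter would look like for a 2-image (roughly 4.5GB) array -- just a sketch following the parenthetical above, under the hypothetical name ramdisk2; the matching unmounter change would be tail -3 | head -2:

% cat ~/bin/ramdisk2
#!/bin/sh

/bin/test -e /Volumes/BigRAM && exit

# Only two ~2.2GB slices this time.
diskutil erasevolume HFS+ r1 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r2 \
`hdiutil attach -nomount ram://4629672` &
wait
diskutil createRAID stripe BigRAM HFS+ /Volumes/r1 /Volumes/r2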

Since downloading the source code from Mozilla is network-bound (especially on my network), I just dumped it to the hard disk, and patched it on the disk as well so that a problem with the array would not require downloading and patching everything again. Once that was done, I made a copy on the RAM disk with hg clone esr31g /Volumes/BigRAM/esr31g and started the build. My hard disk, for comparison, is a 7200rpm 64MB-buffer Western Digital SATA drive; remember that all PowerPC OS X-compatible controllers only support SATA I. Here are the timings, with the Quad G5 in Highest performance mode:

hard disk: 2 hours 46 minutes
concatenated: 2 hours 15 minutes (18.7% improvement)
striped: 2 hours 8 minutes (22.9% improvement)
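For reference, each timed run boiled down to copying the tree onto the target volume and kicking off the build, along these lines -- a sketch only, since make -f client.mk build is the generic entry point for mozilla-esr31-derived trees and the exact TenFourFox invocation may differ:

% hg clone esr31g /Volumes/BigRAM/esr31g
% cd /Volumes/BigRAM/esr31g
% time make -f client.mk build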

Considering how much of this is limited by the speed of the processors, this is a rather nice boost, and I bet it will be even faster with unified builds in 38ESR (these are somewhat more disk-bound, particularly during linking). Since I've just saved almost two hours of build time over all four CPU builds, this is the way I intend to build TenFourFox in the future.

The 5.2% delta observed here between striping and concatenation doesn't look very large, but it is statistically significant, and actually the difference is larger than this test would indicate -- if our task were primarily disk-bound, the gulf would be quite wide. The reason striping is faster here is because each 2GB slice of the RAM disk array is an independent instance of diskimages-helper, and since we have four slices, each slice can run on one of the Quad's cores. By spreading disk access equally among all the processes, we share it equally over all the processors and achieve lower latency and higher efficiencies. This would probably not be true if we had fewer cores, and indeed for dual G5s two slices (or concatenating four) may be better; the earliest single processor G5s should almost certainly use concatenation only.

Some of you will ask how this compares to an SSD, and frankly I don't know. Although I've done some test builds in an SSD, I've been using a Patriot Blaze SATA III drive connected to my FW800 drive toaster to avoid problems with interfacing, so I doubt any numbers I'd get off that setup would be particularly generalizable and I'd rather use the RAM disk anyhow because I don't have to worry about TRIM, write cycles or cleaning up. However, I would be very surprised if an SSD in a G5 achieved speeds faster than RAM, especially given the (comparatively, mind you) lower SATA bandwidth.

And, with that, 31.5.0 is released for testing (release notes, hashes, downloads). This only contains ESR security/stability fixes; you'll notice the changesets hash the same as 31.4.0 because they are, in fact, the same. The build finalizes Monday PM Pacific as usual.

31.5.0 would have been out earlier (experiments with RAM disks notwithstanding) except that I was waiting to see what Mozilla would do about the Superfish/Komodia debacle: the fact that Lenovo was loading adware that MITM-ed HTTPS connections on their PCs ("Superfish") was bad enough, but the secret root certificate it possessed had an easily crackable private key password allowing a bad actor to create phony certificates, and now it looks like the company that developed the technology behind Superfish, Komodia, has by their willful bad faith actions caused the same problem to exist hidden in other kinds of adware they power.

Assuming you were not tricked into accepting their root certificate in some other fashion (their nastyware doesn't run on OS X and near as I can tell never has), your Power Mac is not at risk, but these kinds of malicious, malfeasant and incredibly ill-constructed root certificates need to be nuked from orbit (as well as the companies that try to sneak them onto users' machines; I suggest napalm, castration and feathers), and they will be marked as untrusted in future versions of TenFourFox and Classilla so that false certificates signed with them will not be honoured under any circumstances, even by mistake. Unfortunately, it's also yet another example of how the roots are the most vulnerable part of secure connections (previously, previously).

Development on IonPower continues. Right now I'm trying to work out a serious bug with Baseline stubs and not having a lot of luck; if I can't get this working by 38.0, we'll ship 38 with PPCBC (targeting a general release by 38.0.2 in that case). But I'm trying as hard as I can!

Categorieën: Mozilla-nl planet

Lenovo releases tool to purge Superfish 'crapware' - Computerworld

Nieuws verzameld via Google - sn, 21/02/2015 - 19:01

Computerworld

Lenovo releases tool to purge Superfish 'crapware'

To serve ads on encrypted websites, Superfish installed a self-signed root certificate into the Windows certificate store, as well as into Mozilla's certificate store for the Firefox browser and Thunderbird email client. That Superfish certificate then ...

Related coverage:

  • How to Remove Superfish Adware from your Lenovo Laptop: All you Need to Know - International Business Times, India Edition
  • Superfish Malware Can Now Be Removed By Windows Defender - ValueWalk
  • Microsoft updates Windows Defender to remove Superfish infection - ZDNet
Categorieën: Mozilla-nl planet

Marco Zehe: Social networks and accessibility: A not so sad picture

Mozilla planet - sn, 21/02/2015 - 10:12

This post originally was written in December 2011 and had a slightly different title. Fortunately, the landscape has changed dramatically since then, so it is finally time to update it with more up to date information.

Social networks are part of many people’s lives nowadays. In fact, if you’re reading this, chances are pretty high that you came from Twitter, Facebook or some other social network. The majority of referrers to my blog posts come from social networks nowadays; those who read me via an RSS feed seem to be getting fewer and fewer.

So let’s look at some of the well-known social networks and see what their state of accessibility is nowadays, both when considering the web interface as well as the native apps for mobile devices most of them have.

In recent years, several popular social networks moved from a fixed relaunch schedule for their services to a more agile, incremental development cycle. Also, most, if not all, of the social network providers we’ll look at below have added personnel dedicated to either implementing accessibility or training other engineers in accessibility skills. Those efforts show great results. There is overall less breakage of accessibility features, and if something breaks, the teams are usually very quick to react to reports, and the broken feature is fixed in a near-future update. So let’s have a look!

Twitter

Twitter has come a long way since I wrote the initial version of this post. New Twitter was here to stay, but ever since a very skilled engineer boarded the ship, a huge improvement has taken place. One can nowadays use Twitter with keyboard shortcuts to navigate tweets, reply, favorite, retweet and do all sorts of other actions. Screen reader users might want to try turning off their virtual buffers and really use the web site like a desktop app. It works really quite well! I also recommend taking a look at the keyboard shortcut list, and memorizing them when you use Twitter more regularly. You’ll be much much more productive! I wrote something more about the Twitter accessibility team in 2013.

Clients

Fortunately, there are a lot of accessible clients out there that allow access to Twitter. The Twitter app for iOS is very accessible now for both iPhone and iPad. The Android client is very accessible, too. Yes, there is the occasional breakage of a feature, but as stated above, the team is very good at reacting to bug reports and fixing them. Twitter releases updates very frequently now, so one doesn’t have to wait long for a fix.

There’s also a web client called Easy Chirp (formerly Accessible Twitter) by Mr. Web Axe Dennis Lembree. It’s now in incarnation 2. This one is marvellous, it offers all the features one would expect from a Twitter client, in your browser, and it’s all accessible to people with varying disabilities! It uses all the good modern web standard stuff like WAI-ARIA to make sure even advanced interaction is done accessibly. I even know many non-disabled people using it for its straight forward interface and simplicity. One cool feature it has is that you can post images and provide an alternative description for visually impaired readers, without having to spoil the tweet where the picture might be the punch line. You just provide the alternative description in an extra field, and when the link to the picture is opened, the description is provided right there. How fantastic is that!

For iOS, there are two more Apps I usually recommend to people. For the iPhone, my Twitter client of choice was, for a long time, TweetList Pro, an advanced Twitter client that has full VoiceOver support, and they’re not even too shy to say it in their app description! They have such things as muting users, hash tags or clients, making it THE Twitter client of choice for many for all intents and purposes. The reason why I no longer use it as my main Twitter client is the steep decline of updates. It’s now February 2015, and as far as I know, it hasn’t even been updated to iOS 8 yet. The last update was some time in October 2013, so it lags behind terribly in recent Twitter API support changes, doesn’t support the iPhone 6 and 6 Plus screens natively, etc.

Another one, which I use on the iPhone and iPad, is Twitterrific by The Icon Factory. Their iPhone and iPad app is fully accessible, the Mac version, on the other hand, is totally inaccessible and outdated. On the Mac, I use the client Yorufukurou (night owl).

Oh yes and if you’re blind and on Windows, there are two main clients available, TheQube, and Chicken Nugget. TheQube is designed specifically for the blind with hardly any visual UI, and it requires a screen reader or at least installed speech synthesizer to talk. Chicken Nugget can be run in UI or non-UI mode, and in non-UI mode, definitely also requires a screen reader to run. Both are updated frequently, so it’s a matter of taste which one you choose.

In short, for Twitter, there is a range of clients, one of which, the EasyChirp web application, is truly cross-platform and useable anywhere, others are for specific platforms. But you have accessible means to get to Twitter services without having to use their web site.

Facebook

Facebook has come a long, long way since my original post as well. When I wrote about the web site originally, it had just relaunched and completely broken accessibility. I’m happy to report that nowadays, the FB desktop and mobile sites are both largely accessible, and Facebook also has a dedicated team that responds to bug reports quickly. They also have a training program in place where they teach other Facebook engineers the skills to make new features accessible and keep existing ones that way when they get updated. I wrote more about the Facebook accessibility changes here, and things have constantly gotten better since then.

Clients

Like the web interfaces, the iOS and Android clients for Facebook and Messenger have come a long way and frequently receive updates to fix remaining accessibility problems. Yes, here too, there’s the occasional breakage, but the team is very responsive to bug reports in this area, too, and since FB updates their apps on a two week basis, sometimes even more often if critical issues are discovered, waiting for fixes usually doesn’t take long. If you’re doing messaging on the desktop, you can also integrate FaceBook Messenger/Chat with Skype, which is very accessible on both Mac and Windows. Some features like group chats are, however, reserved for the Messenger clients and web interface.

Google Plus

Google Plus anyone? :) It was THE most hyped thing of the summer of 2011, and as fast as summer went, so did people lose interest in it. Even Google seems to be slowly but surely abandoning it, cutting back bit by bit on the requirement to have a Google+ account for certain activities. But in terms of accessibility, it is actually quite OK nowadays. As with many of their widgets, Google+ profits from Google reusing components found in Gmail and elsewhere, giving both keyboard accessibility and screen reader information exposure. Their Android app was also quite accessible when I last tried it in the summer of 2014. Their iOS app still seems to be in pretty bad shape, which is surprising considering how well Gmail, Hangouts, and even the Google Docs apps work nowadays. I don’t use it much, even though I recreated an account some time in 2013. But whenever I happen to stumble in, I’m not as dismayed as I was when I wrote the original version of this post.

Yammer

Yammer is an enterprise social network that we at Mozilla, and people at a lot of other companies, use for some internal communication. It was bought by Microsoft some time in 2012, and since then, a lot of its accessibility issues have been fixed. When you tweet them, you usually get a response pointing to a bug entry form, and issues are dealt with satisfactorily.

iOS client

The iOS client is updated quite frequently. It has problems on and off, but the experience got more stable in recent versions, so one can actually use it.

identi.ca

identi.ca from Status.net is a microblogging service similar to Twitter. And unlike Twitter, it’s accessible out of the box! This is good since it does not have a wealth of clients supporting it like Twitter does, so with its own interface being accessible right away, this is a big help! It is, btw, the only open-source social network in these tests. Mean anything? Probably!

Conclusion

All the social networks I tested either made significant improvements over the last three years or, in the case of the last candidate, remained accessible throughout.

In looking for reasons why this is, there are two that come to mind immediately. For one, the introduction of skilled and dedicated personnel versed in accessibility matters, or willing to dive in deep and really get the hang of it. These big companies finally understood the social responsibility they have when providing a social network, and leveraged the fact that there is a wealth of information out there on accessible web design. And there’s a community that is willing to help if pinged!

Another reason is that these companies realized that putting in accessibility up-front, making inclusive design decisions, and increasing the test coverage to include accessibility right away not only reduces the cost as opposed to making it bolt-on, but also helps to make a better product for everybody.

A suggestion remains: Look at what others are doing! Learn from them! Don’t be shy to ask questions! If you look at what others have been doing, you can draw from it! They’ll do that with the stuff you put out there, too! And don’t be shy to talk about the good things you do! The Facebook accessibility team does this in monthly updates where they highlight stuff they fixed in the various product lines. I’ve seen signs of that from Twitter engineers, but not as consistently as with Facebook. Talking about successes in accessibility also serves as an encouragement to others to put inclusive design patterns into their workflows.

Categorieën: Mozilla-nl planet
