Mozilla Nederland: The Dutch Mozilla community

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - mo, 23/02/2015 - 20:00

Mozilla Weekly Project Meeting: the Monday Project Meeting

Categorieën: Mozilla-nl planet

Andreas Gal: Search works differently than you may think

Mozilla planet - mo, 23/02/2015 - 19:26

Search is the main way we all navigate the Web, but it works very differently than you may think. In this blog post I will try to explain how it worked in the past, why it works differently today and what role you play in the process.

The services you use for searching, like Google, Yahoo and Bing, are called search engines. The very name suggests that they go through a huge index of Web pages to find every one that contains the words you are searching for. 20 years ago search engines indeed worked this way: they would “crawl” the Web and index it, making the content available for text searches.

As the Web grew larger, searches would often find the same word or phrase on more and more pages. This was starting to make search results less and less useful, because humans don’t like to read through huge lists to manually find the page that best matches their search. A search for the word “door” on Google, for example, gives you more than 1.9 billion results. It’s impractical — even impossible — for anyone to look through all of them to find the most relevant page.

Google finds about 1.9 billion results for the search query “door”.

To help navigate the ever growing Web, search engines introduced algorithms to rank results by their relevance. In 1996, two Stanford graduate students, Larry Page and Sergey Brin, discovered a way to use the information available on the Web itself to rank results. They called it PageRank.

Pages on the Web are connected by links. Each link contains anchor text that explains to readers why they should follow the link. The link itself points to another page that the author of the source page felt was relevant to the anchor text. Page and Brin discovered that they could rank results by analyzing the incoming links to a page and treating each one as a vote for its quality. A result is more likely to be relevant if many links point to it using anchor text that is similar to the search terms. Page and Brin founded a search engine company in 1998 to commercialize the idea: Google.
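To make the voting idea concrete, here is a toy sketch of it in JavaScript: a simplified power iteration over a made-up four-page web. This is an illustration of the principle only, not Google's actual algorithm.

```javascript
// Toy power-iteration sketch of the PageRank idea: each link is a vote,
// and a page's score is split across its outgoing links.
// Illustrative only; this is not Google's production algorithm.
const links = {        // page -> pages it links to
  a: ['c'],
  b: ['c'],
  c: ['a'],
  d: ['c'],
};
const pages = Object.keys(links);
const damping = 0.85;  // probability of following a link vs. jumping anywhere

// Start with an even score for every page, then iterate to a fixed point.
let rank = Object.fromEntries(pages.map((p) => [p, 1 / pages.length]));
for (let i = 0; i < 50; i++) {
  const next = Object.fromEntries(
    pages.map((p) => [p, (1 - damping) / pages.length])
  );
  for (const p of pages) {
    const share = rank[p] / links[p].length; // split this page's vote
    for (const target of links[p]) next[target] += damping * share;
  }
  rank = next;
}

// 'c' collects the most incoming links, so it ends up ranked highest.
console.log(rank);
```

Run it and page `c`, with three incoming links, comes out on top, while the unlinked-to pages sit at the baseline score.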

PageRank worked so well that it completely changed the way people interact with search results. Because PageRank correctly offered the most relevant results at the top of the page, users started to pay less attention to anything below that. This also meant that pages that didn’t appear on top of the results page essentially started to become “invisible”: users stopped finding and visiting them.

To experience the “invisible Web” for yourself, head over to Google and try to look through more than just the first page of results. So few users ever wander beyond the first page that Google doesn’t even bother displaying all the 1.9 billion search results it claims to have found for “door.” Instead, the list just stops at page 63, about 100 million pages short of what you would have expected.

Despite reporting over 1.9 billion results, in reality Google’s search results for “door” are quite finite and end at page 63.

With publishers and online commerce sites competing for that small number of top search results, a new business was born: search engine optimization (or SEO). There are many different methods of SEO, but the principal goal is to game the PageRank algorithm in your favor by increasing the number of incoming links to your own page and tuning the anchor text. With sites competing for visitors — and billions in online revenue at stake — PageRank eventually lost this arms race. Today, links and anchor text are no longer useful to determine the most relevant results and, as a result, the importance of PageRank has dramatically decreased.

Search engines have since evolved to use machine learning to rank results. People perform 1.2 trillion searches a year on Google alone  — that’s about 3 billion a day and 40,000 a second. Each search becomes part of this massive query stream as the search engine simultaneously “sees” what billions of people are searching for all over the world. For each search, it offers a range of results and remembers which one you considered most relevant. It then uses these past searches to learn what’s most relevant to the average user to provide the most relevant results for future searches.
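That feedback loop can be sketched in a few lines. The pages, queries and signal below are all made up, and real engines combine far richer signals than raw clicks, but the shape of the idea is this:

```javascript
// Toy sketch of ranking by learning from the query stream: remember which
// result users picked for each query and surface the most-picked one first.
// Illustrative only; real engines combine far richer signals than clicks.
const clicks = new Map(); // query -> Map(url -> times chosen)

function recordClick(query, url) {
  if (!clicks.has(query)) clicks.set(query, new Map());
  const forQuery = clicks.get(query);
  forQuery.set(url, (forQuery.get(url) || 0) + 1);
}

function rankResults(query, candidates) {
  const forQuery = clicks.get(query) || new Map();
  // Most-clicked first; unknown results keep their original order.
  return [...candidates].sort(
    (a, b) => (forQuery.get(b) || 0) - (forQuery.get(a) || 0)
  );
}

// Earlier users searched "door" and mostly chose the same result...
recordClick('door', 'https://example.com/doors');
recordClick('door', 'https://example.com/doors');
recordClick('door', 'https://example.org/door-history');

// ...so the next search sees that result on top.
console.log(rankResults('door', [
  'https://example.org/door-history',
  'https://example.com/doors',
]));
```

Every search both consumes and produces the ranking signal, which is exactly the role shift described next.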

Machine learning has made text search all but obsolete. Search engines can answer 90% or so of searches by looking at previous search terms and results. They no longer search the Web in most cases — they instead search past searches and respond based on the preferred result of previous users.

This shift from PageRank to machine learning also changed your role in the process. Without your searches — and your choice of results — a search engine couldn’t learn and provide future answers to others. Every time you use a search engine, the search engine uses you to rank its results on a massive scale. That makes you its most important asset.


Filed under: Mozilla
Categorieën: Mozilla-nl planet

David Tenser: User Success – We’re hiring!

Mozilla planet - mo, 23/02/2015 - 18:18

Just a quick +1 to Roland’s plug for the Senior Firefox Community Support Lead:

  • Ever loved a piece of software so much that you learned everything you could about it and helped others with it?
  • Ever coordinated an online community? Especially one around supporting users?
  • Ever measured and tweaked a website’s content so that more folks could find it and learn from it?

Got 2 out of 3 of the above?

Then come work with me (Firefox works closely with my area: Firefox for Android and, in the future, iOS via cloud services like Sync) and the rest of my colleagues on the fab Mozilla User Success team (especially my fantastic, Firefox-savvy colleagues over at User Advocacy).

And as a super extra bonus: you’ll also work with our fantastic community, like all Mozilla employees do, AND with Firefox product management, marketing and engineering.

Take a brief detour and head over to Roland’s blog to get a sense of one of the awesome people you’d get to work closely with in this exciting role (trust me, you’ll want to work with Roland!). After that, I hope you know what to do! :)


Categorieën: Mozilla-nl planet

Ben Kelly: That Event Is So Fetch

Mozilla planet - mo, 23/02/2015 - 16:00

The Service Worker builds have been updated as of yesterday, February 22:

Firefox Service Worker Builds

Notable contributions this week were:

  • Josh Matthews landed Fetch Event support in Nightly. This is important, of course, because without the Fetch Event you cannot actually intercept any network requests with your Service Worker. | bug 1065216
  • Catalin Badea landed more of the Service Worker API in Nightly, including the ability to communicate with the Service Worker using postMessage(). | bug 982726
  • Nikhil Marathe landed some more of his spec implementations to handle unloading documents correctly and to treat activations atomically. | bug 1041340 | bug 1130065
  • Andrea Marchesini landed fixes for FirefoxOS discovered by the team in Paris. | bug 1133242
  • Jose Antonio Olivera Ortega contributed a work-in-progress patch to force Service Worker scripts to update when dom.serviceWorkers.test.enabled is set. | bug 1134329
  • I landed my implementation of the Fetch Request and Response clone() methods. | bug 1073231
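For anyone wanting to try the new Fetch Event support, a handler looks roughly like the sketch below. In a real Service Worker the platform provides `self` and `fetch`; the stubs here are stand-ins added so the interception logic itself can run anywhere.

```javascript
// Sketch of what a fetch-event handler enables, runnable outside a browser.
// In a real Service Worker, `self` and `fetch` are provided by the platform;
// the stubs below are stand-ins so the interception logic itself can run here.
const listeners = {};
const self = { addEventListener: (type, fn) => { listeners[type] = fn; } };
const fetch = (url) => ({ url, body: 'from network' }); // network stub

// A tiny "cache" of responses the worker can answer without the network.
const cache = {
  '/app/index.html': { url: '/app/index.html', body: 'from cache' },
};

// The Service Worker logic: intercept requests, serve cached responses
// when available, and fall through to the network otherwise.
self.addEventListener('fetch', (event) => {
  const cached = cache[event.request.url];
  event.respondWith(cached || fetch(event.request.url));
});

// Simulate the browser dispatching a fetch event at the worker.
function dispatchFetch(url) {
  let response;
  listeners.fetch({ request: { url }, respondWith: (r) => { response = r; } });
  return response;
}

console.log(dispatchFetch('/app/index.html').body); // from cache
console.log(dispatchFetch('/other.css').body);      // from network
```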

As always, please let us know if you run into any problems. Thank you for testing!

Categorieën: Mozilla-nl planet

Mozilla Release Management Team: Firefox 36 beta10 to rc

Mozilla planet - mo, 23/02/2015 - 15:52

For the RC build, we landed a few last-minute changes: we disabled <meta referrer> because of a last-minute issue, we landed a compatibility fix for add-ons and, last but not least, some graphics crash fixes.

Note that an RC2 has been built from the same source code in order to tackle the AMD CPU bug (see comment #22).

  • 11 changesets
  • 32 files changed
  • 220 insertions
  • 48 deletions

Extensions changed (occurrences): cpp: 6, js: 5, jsm: 2, ini: 2, xml: 1, sh: 1, json: 1, hgtags: 1, h: 1

Modules changed (occurrences): mobile: 12, gfx: 5, browser: 5, toolkit: 3, dom: 2, testing: 1, parser: 1

List of changesets:

  • Robert Strong: Bug 945192 - Followup to support Older SDKs in loaddlls.cpp. r=bbondy a=Sylvestre - cce919848572
  • Armen Zambrano Gasparnian: Bug 1110286 - Pin mozharness to 2264bffd89ca instead of production. r=rail a=testing - 948a2c2e31d4
  • Jared Wein: Bug 1115227 - Loop: Add part of the UITour PageID to the Hello tour URLs as a query parameter. r=MattN, a=sledru - 1a2baaf50371
  • Boris Zbarsky: Bug 1134606 - Disable <meta referrer> in Firefox 36 pending some loose ends being sorted out. r=sstamm, a=sledru - 521cf86d194b
  • Milan Sreckovic: Bug 1126918 - NewShSurfaceHandle can return null. Guard against it. r=jgilbert, a=sledru - 89cfa8ff9fc5
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - 2f2abd6ffebb
  • Matt Woodrow: Bug 1127925 - Lazily open shared handles in DXGITextureHostD3D11 to avoid holding references to textures that might not be used. r=jrmuizel, a=sledru - 47ec64cc562f
  • Rob Wu: Bug 1128478 - sdk/panel's show/hide events not emitted if contentScriptWhen != 'ready'. r=erikvold, a=sledru - c2a6bab25617
  • Matt Woodrow: Bug 1128170 - Use UniquePtr for TextureClient KeepAlive objects to make sure we don't leak them. r=jrmuizel, a=sledru - 67d9db36737e
  • Hector Zhao: Bug 1129287 - Fix not rejecting partial name matches for plugin blocklist entries. r=gfritzsche, a=sledru - 7d4016a05dd3
  • Ryan VanderMeulen: Merge beta to m-r. a=merge - a2ffa9047bf4

Categorieën: Mozilla-nl planet

Adam Lofting: The week ahead 23 Feb 2015

Mozilla planet - mo, 23/02/2015 - 12:40

First, I’ll note that writing these short ‘note to self’ blog posts each week takes time and is harder to do than I expected. Like so many priorities, the long-term important things often battle with the short-term urgent things. And that’s in a culture where working open is more than just acceptable, it’s encouraged.

Anyway, I have some time this morning sitting in an airport to write this, and I have some time on a plane to catch up on some other reading and writing that hasn’t made it to the top of the todo list for a few weeks. I may even get to post a blog post or two in the near future.

This week, I have face-to-face time with lots of colleagues in Toronto. Which means a combination of planning, meetings, running some training sessions, and working on tasks where timezone parity is helpful. It’s also the design team work week, and though I’m too far gone from design work to contribute anything pretty, I’m looking forward to seeing their work and getting glimpses of the future Webmaker product. Most importantly maybe, for a week like this, I expect unexpected opportunities to arise.

One of my objectives this week is working with Ops to decide where my time is best spent this year to have the most impact, and to set my goals for the year. That will get us closer to a metrics strategy this year, improving on last year’s ‘reactive’ style of work.

If you’re following along for the exciting stories of my shed-to-office upgrades: I don’t have much to show today, but I’m building a new desk next, and insulation is my new favourite thing. This photo shows the visible difference in heat loss after fitting my first square of insulation material to the roof.

Categorieën: Mozilla-nl planet

Mozilla considers blacklist for Superfish certificate - Security.nl

Nieuws verzameld via Google - mo, 23/02/2015 - 12:04

Mozilla considers blacklist for Superfish certificate
Security.nl
Mozilla is considering putting the Superfish certificate that was installed on Lenovo laptops on a blacklist. This is apparent from a discussion on Mozilla's Bugzilla, where developers discuss problems and bugs in Mozilla software. Because of the ...

and more »
Categorieën: Mozilla-nl planet

The Mozilla Blog: MWC 2015: Experience the Latest Firefox OS Devices, Discover what Mozilla is Working on Next

Mozilla planet - mo, 23/02/2015 - 11:14

Preview TVs and phones powered by Firefox OS and demos such as an NFC payment prototype at the Mozilla booth. Hear Mozilla speakers discuss privacy, innovation for inclusion and future of the internet.

Panasonic unveiled their new line of 4K Ultra HD TVs powered by Firefox OS at their convention in Frankfurt today. The Panasonic 2015 4K UHD (Ultra HD) LED VIERA TV, which will be shipping this spring, will also be showcased at Mozilla’s stand at Mobile World Congress 2015 in Barcelona. Like last year, Firefox OS will take its place in Hall 3, Stand 3C30, alongside major operators and device manufacturers.

Mozilla’s stand at Mobile World Congress 2015, Hall 3, Stand 3C30

In addition to the Panasonic TV and the latest Firefox OS smartphones announced, visitors have the opportunity to learn more about Mozilla’s innovation projects during talks at the “Fox Den” at Mozilla’s stand, Hall 3, Stand 3C30. Just one example from the demo program:

Mozilla, in collaboration with its partners at Deutsche Telekom Innovation Labs (with centers in Silicon Valley and Berlin) and T-Mobile Poland, designed and implemented Firefox OS’s NFC infrastructure to enable several applications, including mobile payments, transportation services, door access and media sharing. The mobile wallet demo, covering ‘MasterCard® Contactless’ technology together with a few non-payment features, will be showcased in the “Fox Den” talks.

Visit www.firefoxos.com/mwc for the full list of topics and schedule of “Fox Den” talks.

How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

Schedule of Events and Speaking Appearances

Hear from Mozilla executives on trending topics in mobile at the following sessions:

‘Digital Inclusion: Connecting an additional one billion people to the mobile internet’ Seminar

Executive Director of the Mozilla Foundation Mark Surman will join a seminar that will explore the barriers and opportunities relating to the growth of mobile connectivity in developing markets, particularly in rural areas.
Date: Monday 2 March 12:00 – 13:30 CET
Location: GSMA Seminar Theatre CC1.1

‘Ensuring User-Centred Privacy in a Connected World’ Panel

Denelle Dixon-Thayer, SVP of business and legal affairs at Mozilla, will take part in a session that explores user-centric privacy in a connected world.
Date: Monday, 2 March 16:00 – 17:30 CET
Location: Hall 4, Auditorium 3

‘Innovation for Inclusion’ Keynote Panel

Mozilla Executive Chairwoman and Co-Founder Mitchell Baker will discuss how mobile will continue to empower individuals and societies.
Date: Tuesday, 3 March 11:15 – 12:45 CET
Location: Hall 4, Auditorium 1 (Main Conference Hall)

‘Connected Citizens, Managing Crisis’ Panel

Mark Surman, Executive Director of the Mozilla Foundation, will contribute to a panel on how mobile technology is playing an increasingly central role in shaping responses to some of the most critical humanitarian problems facing the global community today.
Date: Tuesday, 3 March 14:00 – 15:30 CET
Location: Hall 4, Auditorium 2

‘Defining the Future of the Internet’ Panel

Andreas Gal, CTO at Mozilla, will take part in a session that explores the future of the Internet, bringing together industry leaders to the forefront of the net neutrality debate.
Date: Wednesday, 4 March 15:15 – 16:15 CET
Location: Hall 4, Auditorium 5

More information:

  • Please visit Mozilla and experience Firefox OS in Hall 3, Stand 3C30, at the Fira Gran Via, Barcelona from March 2-5, 2015
  • To learn more about Mozilla at MWC, please visit: www.firefoxos.com/mwc
  • For further details or to schedule a meeting at the show please contact press@mozilla.com
  • For additional resources, such as high-resolution images and b-roll video, visit: https://blog.mozilla.org/press
Categorieën: Mozilla-nl planet

Nick Thomas: FileMerge bug

Mozilla planet - mo, 23/02/2015 - 10:23

FileMerge is a nice diff and merge tool for OS X, and I use it a lot for larger code reviews where lots of context is helpful. It also supports intra-line diff, which comes in pretty handy.

filemerge screenshot

However in recent releases, at least in v2.8 which comes as part of XCode 6.1, it assumes you want to be merging and shows that bottom pane. Adjusting it away doesn’t persist to the next time you use it, *gnash gnash gnash*.

The solution is to open a terminal and offer this incantation:

defaults write com.apple.FileMerge MergeHeight 0

Unfortunately, if you use the merge pane then you’ll have to do that again. Dear Apple, pls fix!

Categorieën: Mozilla-nl planet

Mozilla to ditch Adobe Flash - GMA News

Nieuws verzameld via Google - mo, 23/02/2015 - 09:54

Mozilla to ditch Adobe Flash
GMA News
Adobe's ubiquitous but oft-targeted Flash Player is about to lose support from yet another Internet entity - Mozilla's popular open-source Firefox browser. Mozilla is now experimenting with Project Shumway, a new technology that plays Flash content ...

Categorieën: Mozilla-nl planet

Ahmed Nefzaoui: A Bit Of Consulting: Entering the RTL Web Market [Part 1]

Mozilla planet - mo, 23/02/2015 - 09:34

As I write this, we (Mozilla and everyone else working together on Firefox OS) have never been closer to releasing a version of the Firefox Operating System with this much RTL (right-to-left UI) support built in. As the person who was/is responsible for most of it, I can tell you it is a pretty damn competitive implementation that *no* one else currently has.

The version we’re talking about here is v2.2. And as you already know by now (since you’re reading this, I assume you know about Firefox OS), the OS is web-based, which means it relates a lot to the web; in fact, it *is* the web we want!

So I decided to write a little blog post for anyone out there who wants to get started extending their web products/websites/services support to the RTL Market.
For starters, the RTL market is anywhere in the world where a country has a language written from right to left as its native language. That means North Africa, the Middle East and parts of Asia.

RTL Is NOT Translating To Arabic

Localizing for RTL means more than translating your website into Arabic, Farsi, Urdu or Hebrew and calling it a day. It might sound harsh, but here’s my advice: do RTL right or don’t bother. If you half-arse it, it will be obvious, and you will lose money and credibility. Know who you’re talking to and how they use the web, and you’ll be that much closer to a meaningful connection with your users.

RTL Is UI, Patterns And More

There was this one time, after a fresh Ubuntu install where I used GNOME instead of Unity, when I chose “Arabic (Tunisia)” as my time & calendar setting, and suddenly 4:45 PM became literally PM 45:4.
Seriously, WTF ^^ I can’t read this even though I’m a native Arabic speaker (and so a native RTL user). Even with the time pattern flipped, it’s still WRONG; we don’t read time that way.
So, moral of the story: RTL is not about flipping the UI wherever you see a horizontal list of items next to each other.

So I ended up switching to English (UK) for my time and date format. And again, if you do RTL wrong it will be obvious and people will not use it.
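For what it's worth, locale-aware libraries can get this right for you today. The ECMAScript Intl API is one example (a sketch of mine, not something from the original setup story; the exact Arabic output depends on the runtime's locale data):

```javascript
// Sketch: the ECMAScript Intl API applies per-locale time patterns for you.
// Exact output depends on the runtime's locale data, so treat the Arabic
// string as illustrative rather than byte-exact.
const when = new Date(Date.UTC(2015, 1, 23, 16, 45)); // 23 Feb 2015, 16:45 UTC

const enGB = new Intl.DateTimeFormat('en-GB', {
  hour: 'numeric', minute: 'numeric', hour12: false, timeZone: 'UTC',
}).format(when);

const arTN = new Intl.DateTimeFormat('ar-TN', {
  hour: 'numeric', minute: 'numeric', timeZone: 'UTC',
}).format(when);

console.log(enGB); // "16:45"
console.log(arTN); // the same time, rendered per Arabic (Tunisia) conventions
```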

Back to the topic: in RTL there are exceptions you should know about before kicking off. Don’t be shy to ask the community, especially the open source community; basically any native speaker can give you valid advice about most of the UI.

Feel free to drop in any questions you’ve got :)

Categorieën: Mozilla-nl planet

Daniel Stenberg: Bug finding is slow in spite of many eyeballs

Mozilla planet - mo, 23/02/2015 - 07:39
“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast, and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way if we just fund the eyeballs to watch. I don’t think that’s what the saying originally intended.)

Because in reality, many, many bugs are never actually found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years, it doesn’t necessarily mean that any of the people who casually read the code or single-stepped over it will ever discover the flaws in its logic. In the last few years, several world-shaking bugs turned out to have existed for decades until discovered, in code that had been read by lots of people, over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are probably the most important bugs we’ve had, but also because those are the ones we have the most carefully noted metadata for: when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far throughout our history, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they survived about 30 releases each. If bugs truly are shallow, this is still certainly not a fast process.

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters that would explain the time they took to get found? Nope. I would say that every single one of them is pretty obvious once you spot it, and none of them takes very long for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs are around longer before they are found, but since the project is also growing older all the time it sort of comes naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of flaws is aging slower than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in the number of lines of code, and yet the reports were submitted like this (now it goes from the oldest on the left to the most recent on the right – click for the large version):

vuln-trend

Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013 and we got half of all our reports during the last two years. Clearly we have gotten more eyes on the code or perhaps users pay more attention to problems or are generally more likely to see the security angle of problems? It is hard to say but clearly the frequency of security reports has increased a lot lately. (Note that I here count the report year, not the year we announced the particular problems, as they sometimes were done on the following year if the report happened late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; over the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked each of them with a bunch of descriptive keywords, and then I wrote a script to see how frequently the keywords were used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords ended up like this (number of flaws, then the keyword):

9 TLS
9 HTTP
8 cert-check
8 buffer-overflow
6 info-leak
3 URL-parsing
3 openssl
3 NTLM
3 http-headers
3 cookie
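A keyword tally like this can be sketched in a few lines of script. The flaw entries below are illustrative stand-ins, not the real curl vulnerability metadata:

```javascript
// Sketch of the kind of keyword tally described above. The flaw entries
// here are illustrative stand-ins, not the real curl vulnerability data.
const flaws = [
  { id: 'flaw-1', keywords: ['TLS', 'cert-check'] },
  { id: 'flaw-2', keywords: ['HTTP', 'buffer-overflow'] },
  { id: 'flaw-3', keywords: ['TLS', 'info-leak'] },
];

// Count how many flaws carry each keyword.
const counts = {};
for (const flaw of flaws) {
  for (const kw of flaw.keywords) {
    counts[kw] = (counts[kw] || 0) + 1;
  }
}

// Print the keywords, most frequent first, in "count keyword" form.
const ranked = Object.entries(counts).sort((a, b) => b[1] - a[1]);
for (const [kw, n] of ranked) console.log(n, kw);
```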

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, and HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a mistake that sneaks in, and I don’t think their 27% share of the problems tells us that this is something we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.

The discussion following this blog post is on hacker news.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 71

Mozilla planet - mo, 23/02/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

The big news

Rust 1.0.0-alpha.2 was released on Friday, but keep using nightlies. Six more weeks until the beta, which should become 1.0. Only six more weeks.

What's cooking on master?

157 pull requests were merged in the last week, and 15 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • Adam Jacob
  • Alexander Bliskovsky
  • Brian Brooks
  • caipre
  • Darrell Hamilton
  • Dave Huseby
  • Denis Defreyne
  • Elantsev Serj
  • Henrik Schopmans
  • Ingo Blechschmidt
  • Jormundir
  • Lai Jiangshan
  • posixphreak
  • Ryan Riginding
  • Wesley Wiser
  • Will
  • wonyong kim
Approved RFCs

This covers two weeks, since last week I wasn't able to review RFCs in time.

New RFCs

Friend of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'friends of the tree'.

This week's friend of the tree was ... Toby Scrace.

"Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could log in to whomever the previously logged in user was regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree."

Quote of the Week

<Manishearth> In other news, I have r+ on rust now :D
<Ms2ger> No good deed goes unpunished

From #servo. Thanks to SimonSapin for the tip.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Categorieën: Mozilla-nl planet

Mozilla mulls Superfish torpedo - The Register

Nieuws verzameld via Google - mo, 23/02/2015 - 03:30

The Register

Mozilla mulls Superfish torpedo
The Register
Mozilla may neuter the likes of Superfish by blacklisting dangerous root certificates revealed less than a week ago to be used in Lenovo laptops. The move will be another blow against Superfish, which is under a sustained barrage of criticism for its ...
Lenovo Releases 'Crapware'; Tool To Remove 'Superfish' Hidden AdwareFrontline Desk
Lenovo releases tool to remove Superfish 'crapware'The Next Digit

alle 75 nieuwsartikelen »
Categorieën: Mozilla-nl planet

Nick Desaulniers: Hidden in Plain Sight - Public Key Crypto

Mozilla planet - snein, 22/02/2015 - 20:48

How is it possible for us to communicate securely when there’s the possibility of a third party eavesdropping on us? How can we communicate private secrets through public channels? How do such techniques enable us to bank online and carry out other sensitive transactions on the Internet while trusting numerous relays? In this post, I hope to explain public key cryptography, with actual code examples, so that the concepts are a little more concrete.

First, please check out this excellent video on public key crypto:

Hopefully that explains the gist of the technique, but what might it actually look like in code? Let’s take a look at example code in JavaScript using the Node.js crypto module. We’ll later compare the upcoming WebCrypto API and look at a TLS handshake.

Meet Alice. Meet Bob. Meet Eve. Alice would like to send Bob a secret message. Alice would not like Eve to view the message. Assume Eve can intercept, but not tamper with, everything Alice and Bob try to share with each other.

Alice chooses a modular exponential key group, such as modp14, then creates a public and private key.

var crypto = require("crypto");

var group = "modp14";
var aliceDH = crypto.getDiffieHellman(group);
aliceDH.generateKeys();

A modular exponential key group is simply a "sufficiently large" prime number paired with a generator (a specific number), such as those defined in RFC2412 and RFC3526.

The public key is meant to be shared; it is ok for Eve to know the public key. The private key must never be shared, not even with the person you are communicating with.

Alice then shares her public key and group with Bob.

Public Key: <Buffer 96 33 c5 9e b9 07 3e f2 ec 56 6d f4 1a b4 f8 4c 77 e6 5f a0 93 cf 32 d3 22 42 c8 b4 7b 2b 1f a9 55 86 05 a4 60 17 ae f9 ee bf b3 c9 05 a9 31 31 94 0f ... >
Group: modp14

Bob now creates a public and private key pair with the same group as Alice.

var bobDH = crypto.getDiffieHellman(group);
bobDH.generateKeys();

Bob shares his public key with Alice.

Public key: <Buffer ee d7 e2 00 e5 82 11 eb 67 ab 50 20 30 81 b1 74 7a 51 0d 7e 2a de b7 df db cf ac 57 de a4 f0 bd bc b5 7e ea df b0 3b c3 3a e2 fa 0e ed 22 90 31 01 67 ... >

Alice and Bob now compute a shared secret.

var aliceSecret = aliceDH.computeSecret(bobDH.getPublicKey(), null, "hex");
var bobSecret = bobDH.computeSecret(aliceDH.getPublicKey(), null, "hex");

Alice and Bob have now derived a shared secret from each others’ public keys.

aliceSecret === bobSecret; // => true
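Written out, the reason both sides arrive at the same value is just modular exponentiation, where p and g are the group's prime and generator, and a and b are Alice's and Bob's private keys:

```latex
A = g^{a} \bmod p \qquad B = g^{b} \bmod p
s_{\text{Alice}} = B^{a} \bmod p = g^{ba} \bmod p = g^{ab} \bmod p = A^{b} \bmod p = s_{\text{Bob}}
```

Eve only ever sees g, p, A, and B; recovering a or b from them is the discrete logarithm problem, which is believed to be intractable for sufficiently large primes.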

Meanwhile, Eve has intercepted Alice and Bob’s public keys and group. Eve tries to compute the same secret.

var eveDH = crypto.getDiffieHellman(group);
eveDH.generateKeys();

var eveSecret = eveDH.computeSecret(aliceDH.getPublicKey(), null, "hex");
eveSecret === aliceSecret; // => false

This is because Alice’s secret is derived from Alice and Bob’s private keys, which Eve does not have. Eve may not realize her secret is not the same as Alice and Bob’s until later.

That was asymmetric cryptography, which uses different keys. The shared secret may now be used in symmetric encryption, which uses the same key on both sides.

Alice creates a symmetric block cypher using her favorite algorithm and a hash of their shared secret as the key.

var alg = "aes-256-ctr";
var hash = "sha256";
var aliceHashedSecret = crypto.createHash(hash).update(aliceSecret).digest("binary");
// createCipher derives a key and initialization vector for us
// from the password.
var aliceCypher = crypto.createCipher(alg, aliceHashedSecret);

Alice then uses her cypher to encrypt her message to Bob.

var cypherText = aliceCypher.update("...");

Alice then sends the cypher text, cypher, and hash to Bob.

cypherText: <Buffer bd 29 96 83 fa a8 7d 9c ea 90 ab>
cypher: aes-256-ctr
hash: sha256

Bob now constructs a symmetric block cypher using the algorithm from Alice, and a hash of their shared secret.

var bobHashedSecret = crypto.createHash(hash).update(bobSecret).digest("binary");
var bobCypher = crypto.createDecipher(alg, bobHashedSecret);

Bob now decyphers the encrypted message (cypher text) from Alice.

var plainText = bobCypher.update(cypherText).toString();
console.log(plainText); // => "I love you"

Eve has intercepted the cypher text, cypher, and hash, and tries to decrypt the message.

var eveHashedSecret = crypto.createHash(hash).update(eveSecret).digest("binary");
var eveCypher = crypto.createDecipher(alg, eveHashedSecret);
console.log(eveCypher.update(cypherText).toString());
// => ��_r](�i)

Here’s where Eve realizes her secret is not correct.

This prevents passive eavesdropping, but not active man-in-the-middle (MITM) attacks. For example, how does Alice know that the messages she was supposedly receiving from Bob actually came from Bob, not Eve posing as Bob?

Today, we use a system of certificates to provide authentication. This system certainly has its flaws, but it is what we use today. This is a more advanced topic that won't be covered here. Trust is a funny thing.

What’s interesting to note is that the prime/generator pairs used to create Diffie-Hellman public and private keys are referred to by strings naming the corresponding modular exponential key groups, ie “modp14”. Web crypto’s API instead gives you finer-grained control: you specify the generator and the large prime yourself in a Typed Array. I’m not sure why this is; perhaps it allows newer groups to be supported before the implementation catches up? To me, it seems like a source for errors to be made; hopefully someone will make a library that provides these prime/generator pairs.

One issue with my approach is that I assumed that Alice and Bob both had support for the same hashing algorithms, modular exponential key group, and symmetric block cypher. In the real world, this is not always the case. Instead, it is much more common for the client to broadcast publicly all of the algorithms it supports, and for the server to pick one. This list of algorithms is called a “suite,” ie a “cypher suite.” I learned this the hard way recently, trying to upgrade the cypher suite on my ssh server and finding out that my client did not support the latest cyphers. In this case, Alice and Bob might not have the same versions of Node.js, which statically link their own versions of OpenSSL. Thus, one should use crypto.getCiphers() and crypto.getHashes() before assuming the other party can do the math to decrypt. We’ll see cypher suites come up again in TLS handshakes. The NSA publishes a list of endorsed cryptographic components, for what it’s worth. There are also neat tricks we can do to prevent the message from being decrypted at a later time, should the private key be compromised and the encrypted message recorded, called Perfect Forward Secrecy.
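To make the negotiation idea concrete, here is a minimal sketch of suite selection. The suite contents below are hypothetical examples, not a real cypher suite; an actual Node.js client would consult crypto.getCiphers() and crypto.getHashes() for its true capabilities.

```javascript
// Toy cypher-suite negotiation: the client advertises the algorithms it
// supports, in preference order, and the server picks the first one that
// it also supports.
function negotiate(clientSuite, serverSuite) {
  for (var i = 0; i < clientSuite.length; i++) {
    if (serverSuite.indexOf(clientSuite[i]) !== -1) {
      return clientSuite[i];
    }
  }
  return null; // nothing in common: the handshake fails
}

var clientSuite = ["aes-256-ctr", "aes-128-ctr"];
var serverSuite = ["aes-128-ctr", "des-ede3-cbc"];
negotiate(clientSuite, serverSuite); // => "aes-128-ctr"
```

The same first-match-by-client-preference shape is what a TLS server does with the cypher suites offered in the Client Hello.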

Let’s take a look now at how a browser does a TLS handshake. Here’s a capture from Wireshark of me navigating to https://google.com. First we have a TLSv1.2 Client Hello to start the handshake. Here we can see a list of the cypher suites.

Next is the response from the server, a TLSv1.2 Server Hello. Here you can see the server has picked a cypher to use.

The server then sends its certificate, which contains a copy of its public key.

Now that we’ve agreed on a cypher suite, the client now sends its public key. The server sets up a session, that way it may abbreviate the handshake in the future. Finally, the client may now start making requests to the server with encrypted application data.

For more information on TLS handshakes, you should read Ilya Grigorik’s High Performance Browser Networking book chapter TLS Handshake, Mozilla OpSec’s fantastic wiki, and this excellent Stack Exchange post. As you might imagine, all of these back and forth trips made during the TLS handshake add latency overhead when compared to unencrypted HTTP requests.

I hope this post helped you understand how we can use cryptography to exchange secret information through public channels. This isn’t enough information to implement a perfectly secure system; end to end security means one single mistake can compromise the entire system. Peer review and open source, battle tested implementations go a long way.

A cryptosystem should be secure even if everything about the system, except the
key, is public knowledge.

Kerckhoffs’s principle

I wanted to write this post because I believe abstinence-only crypto education isn’t working, and I can’t stand when anyone acts like part of a cabal, preaching from an ivory tower at those trying to learn new things. Someone will surely cite Javascript Cryptography Considered Harmful, which, while valid, misses my point of simply trying to show people the basics more concretely, with code examples. The first crypto system you implement will have its holes, but you can’t go from ignorance of crypto to perfect knowledge without implementing a few imperfect systems. Don’t be afraid to; just don’t start by trying to protect high-value data. Crypto is dangerous, because it can be difficult to impossible to tell when your system fails. Assembly is also akin to juggling knives, but at least you’ll usually segfault if you mess up and program execution will halt.

With upcoming APIs like Service Workers requiring TLS, protocols like HTTP2, pushes for all web traffic to be encrypted, and shitty things governments, politicians, and ISPs do, web developers are going to have to start boning up on their crypto knowledge.

What are your recommendations for correctly learning crypto? Leave me some thoughts in the comments below.

Categorieën: Mozilla-nl planet

Rizky Ariestiyansyah: Github Pages Auto Publication Based on Master Git Commits

Mozilla planet - snein, 22/02/2015 - 19:31

A simple configuration to automatically publish commits on the master branch to gh-pages: just add a few lines of git configuration to the .git/config file. Here is the configuration to mirror the master branch to gh-pages.
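A sketch of such a configuration (the remote name origin and the repository URL here are placeholders, not taken from the original post):

```
[remote "origin"]
    url = git@github.com:username/repository.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    push = +refs/heads/master:refs/heads/gh-pages
    push = +refs/heads/master:refs/heads/master
```

With both push refspecs in place, a plain `git push` publishes every commit on master to the gh-pages branch as well.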

Categorieën: Mozilla-nl planet

Microsoft Teams Up With Mozilla; IE and Spartan Browsers Getting Firefox ... - Chinatopix

Nieuws verzameld via Google - snein, 22/02/2015 - 14:50

Chinatopix

Microsoft Teams Up With Mozilla; IE and Spartan Browsers Getting Firefox ...
Chinatopix
Mozilla has expressed its excitement that one of its rival browsers has joined forces with them in adopting the asm.js code. Meanwhile, the Chakra team is praising asm.js for greatly improving web program performance. They added this is the reason they ...

Categorieën: Mozilla-nl planet

Nick Fitzgerald: Memory Management In Oxischeme

Mozilla planet - snein, 22/02/2015 - 09:00

I've recently been playing with the Rust programming language, and what better way to learn a language than to implement a second language in the language one wishes to learn?! It almost goes without saying that this second language being implemented should be Scheme. Thus, Oxischeme was born.

Why implement Scheme instead of some other language? Scheme is a dialect of LISP and inherits the simple parenthesized list syntax of its LISP-y origins, called s-expressions. Thus, writing a parser for Scheme syntax is rather easy compared to doing the same for most other languages' syntax. Furthermore, Scheme's semantics are also minimal. It is a small language designed for teaching, and writing a metacircular interpreter (ie a Scheme interpreter written in Scheme itself) takes only a few handfuls of lines of code. Finally, Scheme is a beautiful language: its design is rooted in the elegant λ-calculus.

Scheme is not "close to the metal" and doesn't provide direct access to the machine's hardware. Instead, it provides the illusion of infinite memory and its structures are automatically garbage collected rather than explicitly managed by the programmer. When writing a Scheme implementation in Scheme itself, or any other language with garbage collection, one can piggy-back on top of the host language's garbage collector, and use that to manage the Scheme's structures. This is not the situation I found myself in: Rust is close to the metal, and does not have a runtime with garbage collection built in (although it has some other cool ideas regarding lifetimes, ownership, and when it is safe to deallocate an object). Therefore, I had to implement garbage collection myself.

Faced with this task, I had a decision to make: tracing garbage collection or reference counting?

Reference counting is a technique where we keep track of the number of other things holding references to any given object or resource. When new references are taken, the count is incremented. When a reference is dropped, the count is decremented. When the count reaches zero, the resource is deallocated and it decrements the reference count of any other objects it holds a reference to. Reference counting is great because once an object becomes unreachable, it is reclaimed immediately and doesn't sit around consuming valuable memory space while waiting for the garbage collector to clean it up at some later point in time. Additionally, the reclamation process happens incrementally and program execution doesn't halt while every object in the heap is checked for liveness. On the negative side, reference counting runs into trouble when it encounters cycles. Consider the following situation:

A -> B
^    |
|    v
D <- C

A, B, C, and D form a cycle and all have a reference count of one. Nothing from outside of the cycle holds a reference to any of these objects, so it should be safe to collect them. However, because each reference count will never get decremented to zero, none of these objects will be deallocated. In practice, the programmer must explicitly use (potentially unsafe) weak references, or the runtime must provide a means for detecting and reclaiming cycles. The former defeats the general purpose, don't-worry-about-it style of managed memory environments. The latter is equivalent to implementing a tracing collector in addition to the existing reference counting memory management.
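The bookkeeping above can be illustrated with a toy reference-count table (a JavaScript sketch for illustration only; the names and structure are invented and nothing like this exists in Oxischeme):

```javascript
// Each entry is the number of incoming references to that object.
// Inside the A -> B -> C -> D -> A cycle, every object is referenced
// exactly once.
var counts = { A: 1, B: 1, C: 1, D: 1 };

function incref(name) { counts[name] += 1; }

function decref(name) {
  counts[name] -= 1;
  if (counts[name] === 0) {
    // A real collector would deallocate `name` here, then decref
    // everything `name` points to.
  }
}

// A root outside the cycle briefly references A, then drops it:
incref("A");
decref("A");

// Every count is still 1, so none of A, B, C, or D is ever reclaimed,
// even though nothing outside the cycle can reach them.
```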

Tracing garbage collectors start from a set of roots and recursively traverse object references to discover the set of live objects in the heap graph. Any object that is not an element of the live set cannot be used again in the future, because the program has no way to refer to that object. Therefore, the object is available for reclaiming. This has the advantage of collecting dead cycles, because if the cycle is not reachable from the roots, then it won't be in the live set. The cyclic references don't matter because those edges are never traversed. The disadvantage is that, without a lot of hard work, when the collector is doing its bookkeeping, the program is halted until the collector is finished analyzing the whole heap. This can result in long, unpredictable GC pauses.

Reference counting is to tracing as yin is to yang. The former operates on dead, unreachable objects while the latter operates on live, reachable things. Fun fact: every high performance GC algorithm (such as generational GC or reference counting with "purple" nodes and trial deletion) uses a mixture of both, whether it appears so on the surface or not. "A Unified Theory of Garbage Collection" by Bacon et al. discusses this in depth.

I opted to implement a tracing garbage collector for Oxischeme. In particular, I implemented one of the simplest GC algorithms: stop-the-world mark-and-sweep. The steps are as follows:

  1. Stop the Scheme program execution.
  2. Mark phase. Trace the live heap starting from the roots and add every reachable object to the marked set.
  3. Sweep phase. Iterate over each object x in the heap:
    • If x is an element of the marked set, continue.
    • If x is not an element of the marked set, reclaim it.
  4. Resume execution of the Scheme program.

Because the garbage collector needs to trace the complete heap graph, any structure that holds references to a garbage collected type must participate in garbage collection by tracing the GC things it is holding alive. In Oxischeme, this is implemented with the oxischeme::heap::Trace trait, whose implementation requires a trace function that returns an iterable of GcThings:

pub trait Trace {
    /// Return an iterable of all of the GC things referenced by
    /// this structure.
    fn trace(&self) -> IterGcThing;
}

Note that Oxischeme separates tracing (generic heap graph traversal) from marking (adding live nodes in the heap graph to a set). This enables using Trace to implement other graph algorithms on top of the heap graph. Examples include computing dominator trees and retained sizes of objects, or finding the set of retaining paths of an object that you expected to be reclaimed by the collector, but hasn't been.

If we were to introduce a Trio type that contained three cons cells, we would implement tracing like this:

struct Trio {
    first: ConsPtr,
    second: ConsPtr,
    third: ConsPtr,
}

impl Trace for Trio {
    fn trace(&self) -> IterGcThing {
        let refs = vec!(GcThing::from_cons_ptr(self.first),
                        GcThing::from_cons_ptr(self.second),
                        GcThing::from_cons_ptr(self.third));
        refs.into_iter()
    }
}

What causes a garbage collection? As we allocate GC things, GC pressure increases. Once that pressure crosses a threshold — BAM! — a collection is triggered. Oxischeme's pressure application and threshold are very naive at the moment: every N allocations a collection is triggered, regardless of size of the heap or size of individual allocations.

A root is an object in the heap graph that is known to be live and reachable. When marking, we start tracing from the set of roots. For example, in Oxischeme, the global environment is a GC root.

In addition to permanent GC roots, like the global environment, sometimes it is necessary to temporarily root GC things referenced by pointers on the stack. Garbage collection can be triggered by any allocation, and it isn't always clear which Rust functions (or other functions called by those functions, or even other functions called by those functions called from the first function, and so on) might allocate a GC thing, triggering collection. The situation we want to avoid is a Rust function using a temporary variable that references a GC thing, then calling another function which triggers a collection and collects the GC thing that was referred to by the temporary variable. That results in the temporary variable becoming a dangling pointer. If the Rust function accesses it again, that is Undefined Behavior: it might still get the value it was pointing at, or it might be a segfault, or it might be a freshly allocated value being used by something else! Not good!

let a = pointer_to_some_gc_thing;
function_which_can_trigger_gc();
// Oops! A collection was triggered and dereferencing this
// pointer leads to Undefined Behavior!
*a;

There are two possible solutions to this problem. The first is conservative garbage collection, where we walk the native stack and if any value on the stack looks like it might be a pointer and if coerced to a pointer happens to point to a GC thing in the heap, we assume that it is in fact a pointer. Under this assumption, it isn't safe to reclaim the object pointed to, and so we treat that GC thing as a root. Note that this strategy is simple and easy to retrofit because it doesn't involve changes in any code other than adding the stack scanning, but it results in false positives. The second solution is precise rooting. With precise rooting, it is the responsibility of the Rust function's author to explicitly root and unroot pointers to GC things used in variables on the stack. The advantage this provides is that there are no false positives: you know exactly which stack values are pointers to GC things. The disadvantage is the requirement of explicitly telling the GC about every pointer to a GC thing you ever reference on the stack.

Almost every modern, high performance tracing collector for managed runtimes uses precise rooting because it is a prerequisite* of a moving collector: a GC that relocates objects while performing collection. Moving collectors are desirable because they can compact the heap, creating a smaller memory footprint for programs and better cache locality. They can also implement pointer bumping allocation, which is both simpler and faster than maintaining a free list. Finally, they can split the heap into generations. Generational GCs gain performance wins from the empirical observation that most allocations are short lived, and those objects that are most recently allocated are most likely to be garbage, so we can focus the GC's efforts on them to get the most bang for our buck. Precise rooting is a requirement for a moving collector because it has to update all references to point to the new address of each moved GC thing. A conservative collector doesn't know for sure if a given value on the stack is a reference to a GC thing or not, and if the value just so happens not to be a reference to a GC thing (it is a false positive), and the collector "helpfully" updates that value to the moved address, then the collector is introducing migraine-inducing bugs into the program execution.

* Technically, there do exist some moving and generational collectors that are "mostly copying" and conservatively mark the stack but precisely mark the heap. These collectors only move objects which are not conservatively reachable.

Oxischeme uses precise rooting, but is not a moving GC (yet). Precise rooting is implemented with the oxischeme::heap::Rooted<T> smart pointer RAII type, which roots its referent upon construction and unroots it when the smart pointer goes out of scope and is dropped.

Using precise rooting and Rooted, we can solve the dangling stack pointer problem like this:

{
    // The pointed to GC thing gets rooted when wrapped
    // with `Rooted`.
    let a = Rooted::new(heap, pointer_to_some_gc_thing);
    function_which_can_trigger_gc();
    // Dereferencing `a` is now safe, because the referent is
    // a GC root, and can't be collected!
    *a;
}
// `a` is now out of scope, and its referent is unrooted.

With all of that out of the way, here is the implementation of our mark-and-sweep collector:

impl Heap {
    pub fn collect_garbage(&mut self) {
        self.reset_gc_pressure();

        // First, trace the heap graph and mark everything that
        // is reachable.
        let mut pending_trace = self.get_roots();
        while !pending_trace.is_empty() {
            let mut newly_pending_trace = vec!();
            for thing in pending_trace.drain() {
                if !thing.is_marked() {
                    thing.mark();
                    for referent in thing.trace() {
                        newly_pending_trace.push(referent);
                    }
                }
            }
            pending_trace.append(&mut newly_pending_trace);
        }

        // Second, sweep each `ArenaSet`.
        self.strings.sweep();
        self.activations.sweep();
        self.cons_cells.sweep();
        self.procedures.sweep();
    }
}

Why do we have four calls to sweep, one for each type that Oxischeme implements? To explain this, first we need to understand Oxischeme's allocation strategy.

Oxischeme does not allocate each individual object directly from the operating system. In fact, most Scheme "allocations" do not actually perform any allocation from the operating system (eg, call malloc or Box::new). Oxischeme uses a set of oxischeme::heap::Arenas, each of which has a preallocated object pool with each item in the pool either being used by live GC things, or waiting to be used in a future allocation. We keep track of an Arena's available objects with a "free list" of indices into its pool.

type FreeList = Vec<usize>;

/// An arena from which to allocate `T` objects from.
pub struct Arena<T> {
    pool: Vec<T>,

    /// The set of free indices into `pool` that are available
    /// for allocating an object from.
    free: FreeList,

    /// During a GC, if the nth bit of `marked` is set, that
    /// means that the nth object in `pool` has been marked as
    /// reachable.
    marked: Bitv,
}

When the Scheme program allocates a new object, we remove the first entry from the free list and return a pointer to the object at that entry's index in the object pool. If every Arena is at capacity (ie, its free list is empty), a new Arena is allocated from the operating system and its object pool is used for the requested Scheme allocation.

impl<T: Default> ArenaSet<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        for arena in self.arenas.iter_mut() {
            if !arena.is_full() {
                return arena.allocate();
            }
        }

        let mut new_arena = Arena::new(self.capacity);
        let result = new_arena.allocate();
        self.arenas.push(new_arena);
        result
    }
}

impl<T: Default> Arena<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        match self.free.pop() {
            Some(idx) => {
                let self_ptr : *mut Arena<T> = self;
                ArenaPtr::new(self_ptr, idx)
            },
            None => panic!("Arena is at capacity!"),
        }
    }
}

For simplicity, Oxischeme has separate arenas for separate types of objects. This sidesteps the problem of finding an appropriately sized free block of memory when allocating different sized objects from the same pool, avoids the fragmentation that can occur because of that, and lets us use a plain old vector as the object pool. However, this also means that we need a separate ArenaSet<T> for each T object that a Scheme program can allocate, which is why oxischeme::heap::Heap::collect_garbage has four calls to sweep().

During the sweep phase of Oxischeme's garbage collector, we return the entries of any dead object back to the free list. If the Arena is empty (ie, the free list is full) then we return the Arena's memory to the operating system. This prevents retaining the peak amount of memory used for the rest of the program execution.

impl<T: Default> Arena<T> {
    pub fn sweep(&mut self) {
        self.free = range(0, self.capacity())
            .filter(|&n| {
                !self.marked.get(n)
                    .expect("marked should have length == self.capacity()")
            })
            .collect();

        // Reset `marked` to all zero.
        self.marked.set_all();
        self.marked.negate();
    }
}

impl<T: Default> ArenaSet<T> {
    pub fn sweep(&mut self) {
        for arena in self.arenas.iter_mut() {
            arena.sweep();
        }

        // Deallocate any arenas that do not contain any
        // reachable objects.
        self.arenas.retain(|a| !a.is_empty());
    }
}

This concludes our tour of Oxischeme's current implementation of memory management, allocation, and garbage collection for Scheme programs. In the future, I plan to make Oxischeme's collector a moving collector, which will pave the way for a compacting and generational GC. I might also experiment with incrementalizing marking for lower latency and shorter GC pauses, or making sweeping lazy. Additionally, I intend to declare to the Rust compiler that operations on un-rooted GC pointers are unsafe, but I haven't settled on an implementation strategy yet. I would also like to experiment with writing a syntax extension for the Rust compiler so that it can derive Trace implementations, and they don't need to be written by hand.

Thanks to Tom Tromey and Zach Carter for reviewing drafts.

Categorieën: Mozilla-nl planet

Tantek Çelik: November Project Book Survey Answers #NP_Book

Mozilla planet - snein, 22/02/2015 - 07:35

The November Project recently wrapped up a survey for a book project. I had the tab open and finally submitted my answers, but figured why not post them on my own site as well. Some of this I've blogged about before, some of it is new.

The basics
Tribe Location
San Francisco
Member Name
Tantek Çelik
Date of Birth
March 11th
Profession
Internet
Date and Location of First NP Workout
2013-10-30 Alamo Square, San Francisco, CA, USA
Contact Info
tantek.com
Pre-NP fitness

Describe your pre-NP fitness background and routine.

  • 2011 started mixed running/jogging/walking every week, short distances 0.5-3 miles.
  • 2008 started bicycling regularly around SF
  • 2007 started rock climbing, eventually 3x a week
  • 1998 started regular yoga and pilates as part of recovering from a back injury
First hear about NP

How did you first hear about the group?

I saw chalkmarks in Golden Gate Park for "NovemberProject 6:30am Kezar!" and thought what the heck is that? 6:30am? Sounds crazy. More: Learning About NP

First NP workout

Recount your first workout, along with the vibe, and how they may have differed from your expectations.

My first NovemberProject workout was a 2013 NPSF PR Wednesday workout, and it was the hardest physical workout I'd ever done. However before it destroyed me, I held my hand up as a newbie, and was warmly welcomed and hugged. My first NP made a strong positive impression. More: My First Year at NP: Newbie

Meeting BG and Bojan

For those who've crossed paths, what was your first impression of BG? Of Bojan?

I first met BG and Bojan at a traverbal Boston destination deck workout. BG and Bojan were (are!) larger than life, with voices to match. Yet their booming matched with self-deprecating humor got everyone laughing and feeling like they belonged.

First Harvard Stadium workout

Boston Only: If you had a particularly memorable newbie meeting and virgin workout at Harvard Stadium, I'd like to know about it for a possible separate section. If so, please describe.

My first Boston Harvard Stadium workout was one to remember. Two days after my traverbal Boston destination deck workout, I joined the newbie orientation since I hadn't done the stadium before. I couldn't believe how many newbies there were. By the time we got to the starting steps I was ready to bolt. I completed 26 sections, far more than I thought I would.

Elevated my fitness

How has NP elevated your fitness level? How have you measured this?

NP has made me a lot faster. After a little over 6 months of NPSF, I cut over 16 minutes in my Bay To Breakers 12km personal record.

Affected personal life

Give an example of how NP has affected your personal life and/or helped you overcome a challenge.

NP turned me from a night person to a morning person, with different activities, and different people. NP inspired me to push myself to overcome my inability to run hills, one house at a time until I could run half a block uphill, then I started running NPSF hills. More: My First Year at NP: Scared of Hills

Impacted relationship with my city

How has NP impacted your relationship with your city?

I would often run into NPSF regulars on my runs to and from the workout, so I teamed up with a couple of them and started an unofficial "rungang". We posted times and corners of our running routes, including to hills. NPSF founder Laura challenged our rungang to run ~4 miles (more than halfway across San Francisco) south to a destination hills workout at Bernal Heights and a few of us did. After similar pre-workout runs North to the Marina, and East to the Embarcadero, I feel like I can confidently run to anywhere in the city, which is an amazing feeling.

Why rapid traction?

Why do you think NP has gained such traction so rapidly?

Two words: community positivity. Yes there's a workout too, but there are lots of workout groups. What makes NP different (beyond that it's free), are the values of community and barrier-breaking positivity that the leaders instill into every single workout. More: My First Year at NP: Positive Community — Just Show Up

Most memorable moment

Describe your most memorable workout or a quintessential NP moment.

Catching the positivity award when it was thrown at me across half of NPSF. Photo: Tantek holding up the NPSF positivity award, backlit by the rising sun.

Weirdest thing

Weirdest thing about NP?

That so many people get up before sunrise, never mind in sub-freezing temperatures in many cities, to go to a workout. Describe that to anyone who isn't in NP, and it sounds beyond weird.

NP and regular life

How has NP bled into your "regular" life? (Do you inadvertently go in for a hug when meeting a new client? Do you drop F-bombs at inopportune times? Have you gone from a cubicle brooder to the meeting goofball? Are you kinder to strangers?)

I was already a bit of a hugger, but NP has taught me to better recognize when people might quietly want (or be ok with) a hug, even outside of NP. #huglife

The Positivity Award

If you've ever won the Positivity Award, please describe that moment and what it meant to you.

It's hard to describe. I certainly was not expecting it. I couldn't believe how excited people were that I was getting it. There was a brief moment of fear when Braden tossed it at me over dozens of my friends: all sound suddenly muted while I watched it flying, hands outstretched. I caught it solidly with both hands, and I could hear again. It was a beautiful day, the sun had just risen, and I could see everyone's smiling faces. More than the award itself, it meant a lot to me to see the joy in people's faces.

Non-NP friends and family

What do your non-NP friends and family think of your involvement?

My family is incredibly supportive and ecstatic about my increased fitness. My non-NP friends range from curious at best to wary, or downright worried that it's a cult, which they only half-jokingly admit.

NP in one word

Describe NP in one word.

Community

Additional Thoughts

Additional thoughts? Include them here.

You can follow my additional thoughts on NP, fitness, and other things on my site & blog: tantek.com.
