
Mozilla Nederland: the Dutch Mozilla community

Chris AtLee: RelEng 2015 (part 1)

Mozilla planet - di, 26/05/2015 - 20:51

Last week, I had the opportunity to attend RelEng 2015 - the 3rd International Workshop of Release Engineering. This was a fantastic conference, and I came away with lots of new ideas for things to try here at Mozilla.

I'd like to share some of my thoughts and notes I took about some of the sessions. As of yet, the speakers' slides aren't collected or linked to from the conference website. Hopefully they'll get them up soon! The program and abstracts are available here.

For your sake (and mine!) I've split up my notes into a few separate posts. This post covers the introduction and keynote.

tl;dr

"Continuous deployment" of web applications is basically a solved problem today. What remains is for organizations to adopt best practices. Mobile/desktop applications remain a challenge.

Cisco relies heavily on collecting and analyzing metrics to inform their approach to software development. Statistically speaking, quality is the best driver of customer satisfaction. There are many aspects to product quality, but new lines of code introduced per release gives a good predictor of how many new bugs will be introduced. It's always challenging to find enough resources to focus on software quality; being able to correlate quality to customer satisfaction (and therefore market share, $$$) is one technique for getting organizational support for shipping high quality software. Common release criteria such as bugs found during testing, or bug fix rate, are used to inform stakeholders as to the quality of the release.

Introductory Session

Bram Adams and Foutse Khomh kicked things off with an overview of "continuous deployment" over the last 5 years. Back in 2009 we were already talking about systems where pushing to version control would trigger tens of thousands of tests, and do canary deployments up to 50 times a day.

Today we see companies like Facebook demonstrating that continuous deployment of web applications is basically a solved problem. Many organizations are still trying to implement these techniques. Mobile [and desktop!] applications still present a challenge.

Keynote

Pete Rotella from Cisco discussed how he and his team measured and predicted release quality for various projects at Cisco. His team is quite focused on data and analytics.

Cisco has relatively long release cycles compared to what we at Mozilla are used to now. They release 2-3 times per year, with each release representing approximately 500kloc of new code. Their customers really like predictable release cycles, and also don't like releases that are too frequent. Many of their customers have their own testing / validation cycles for releases, and so are only willing to update for something they deem critical.

Pete described how he thought software projects had four degrees of freedom in which to operate, and how quality ends up being the one sacrificed most often in order to compensate for constraints in the others:

  • resources (people / money): It's generally hard to hire more people or find room in the budget to meet the increasing demands of customers. You also run into the mythical man month problem by trying to throw more people at a problem.

  • schedule (time): Having standard release cycles means organizations don't usually have a lot of room to push out the schedule so that features can be completed properly.

    I feel that at Mozilla, the rapid release cycle has helped us out to some extent here. The theory is that if your feature isn't ready for the current version, it can wait for the next release which is only 6 weeks behind. However, I do worry that we have too many features trying to get finished off in aurora or even in beta.

  • content (features): Another way to get more room to operate is to cut features. However, it's generally hard to cut content or features, because those are what customers are most interested in.

  • quality: Pete believes this is where most organizations steal resources from to make up for people/schedule/content constraints. It's a poor long-term play, and despite "quality is our top priority" being the Official Party Line, most organizations don't invest enough here. What's working against quality?

    • plethora of releases: lots of projects / products / special requests for releases. Attempts to reduce the # of releases have failed on most occasions.
    • monetization of quality is difficult. Pete suggests tying the cost of a poor quality release to this. How many customers will we lose with a buggy release?
    • having RelEng and QA embedded in Engineering teams is a problem; they should be independent organizations so that their recommendations can have more weight.
    • "control point exceptions" are common. e.g. VP overrides recommendations of QA / RelEng and ships the release.

Why should we focus on quality? Pete's metrics show that it's the strongest driver of customer satisfaction. Your product's customer satisfaction needs to be more than 4.3/5 to get more than marginal market share.

How can RelEng improve metrics?

  • simple dashboards
  • actionable metrics - people need to know how to move the needle
  • passive - use existing data. everybody's stretched thin, so requiring other teams to add more metadata for your metrics isn't going to work.
  • standardized quality metrics across the company
  • informing engineering teams about risk
  • correlation with customer experience.

Interestingly, handling the backlog of bugs has minimal impact on customer satisfaction. In addition, there's substantial risk introduced whenever bugs are fixed late in a release cycle. There's an exponential relationship between new lines of code added and # of defects introduced, and therefore customer satisfaction.

Another good indicator of customer satisfaction is the number of "Customer found defects" - i.e. the number of bugs found and reported by their customers vs. bugs found internally.

Pete's data shows that if they can find more than 80% of the bugs in a release prior to it being shipped, then the remaining bugs are very unlikely to impact customers. He uses lines of code added for previous releases, and historical bug counts per version, to estimate the number of bugs introduced in the current version given the new lines of code added. This 80% figure represents one of their "Release Criteria". If less than 80% of predicted bugs have been found, then the release is considered risky.
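As a purely hypothetical illustration of how such a criterion might be computed (the linear model, the helper name, and every number here are invented for the example, not Cisco's actual formula):

```c
#include <assert.h>

// Hypothetical sketch of the 80% release criterion: predict defects from
// new lines of code and a historical bugs-per-kloc rate, then flag the
// release as risky if fewer than 80% of predicted bugs were found in
// testing. All names and numbers are invented for illustration.
int release_is_risky(double new_kloc, double historical_bugs_per_kloc,
                     int bugs_found_in_testing) {
    double predicted_bugs = new_kloc * historical_bugs_per_kloc;
    return bugs_found_in_testing < 0.8 * predicted_bugs;
}
```

With 500 kloc of new code and a historical rate of 2 bugs per kloc, 1000 bugs are predicted, so finding fewer than 800 in testing would mark the release risky.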

Another "Release Criteria" Pete discussed was the weekly rate of fixing bugs. Data shows that good quality releases have the weekly bug fix rate drop to 43% of the maximum rate at the end of the testing cycle. This data demonstrates that changes late in the cycle have a negative impact on software quality. You really want to be fixing fewer and fewer bugs as you get closer to release.
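The same criterion can be phrased as a tiny check (the helper and its name are invented here; only the 43% figure comes from the talk):

```c
#include <assert.h>

// Sketch of the fix-rate criterion: a release looks healthy if the bug-fix
// rate in the final week of the testing cycle has fallen to at most 43% of
// the peak weekly rate. Invented helper for illustration.
int fix_rate_criterion_met(int peak_weekly_fixes, int final_weekly_fixes) {
    return final_weekly_fixes <= 0.43 * peak_weekly_fixes;
}
```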

I really enjoyed Pete's talk! There are definitely a lot of things to think about, and how we might apply them at Mozilla.

Categorieën: Mozilla-nl planet

Andrea Marchesini: Console API in workers and DevTools

Mozilla planet - di, 26/05/2015 - 20:04

Debugging Service Worker code can be a complex process for web developers. For this reason, we are putting extra effort into improving the debugging tools for Firefox. This blog post focuses on our work on the Console API and how it can be used in any kind of Worker and shown in the DevTools WebConsole.

As you probably know, Firefox has supported the Console API in Workers since February 2014 (bug 965860). We have since extended that support to other kinds of Workers (January 2015 - bug 1058644). However, until now, messages logged with the Console API in those Workers were not shown in the DevTools WebConsole.

I am pleased to say that, in Nightly, and soon in release builds, you can now see the log messages from Shared Workers, Service Workers and Add-on Workers by enabling them from the ‘logging’ menu. Here is a screenshot:

This works with remote debugging too! Happy debugging!

Categorieën: Mozilla-nl planet

Air Mozilla: Reaching more users through better accessibility

Mozilla planet - di, 26/05/2015 - 19:27

Reaching more users through better accessibility: Using Firefox OS to test your apps for better accessibility and usability of your mobile web application

Categorieën: Mozilla-nl planet

Report: Mozilla's Firefox OS to shift focus from $25 smartphones toward more ... - FierceWireless

Nieuws verzameld via Google - di, 26/05/2015 - 18:56

The report, citing an internal company email it obtained that was sent from Mozilla CEO Chris Beard, indicates that the company is going to focus on new software and services, moving beyond smartphones and even on working with partners to build flip ...
Mozilla Drops $25-Smartphone Plan And Shifts Focus On Firefox OS Development (Tech Times)
Mozilla: Firefox for iOS is in 'active development' (Inquirer)
Mozilla U-turns low-cost smartphone tack (ITWeb)
Load The Game; India Today
Categorieën: Mozilla-nl planet

Air Mozilla: Privacy features for Firefox for Android

Mozilla planet - di, 26/05/2015 - 17:23

Privacy features for Firefox for Android: Supporting privacy on the mobile web with built-in features and add-ons

Categorieën: Mozilla-nl planet

Air Mozilla: Martes mozilleros

Mozilla planet - di, 26/05/2015 - 17:00

Martes mozilleros: Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Categorieën: Mozilla-nl planet

Air Mozilla: Servo (the parallel web browser) and YOU!

Mozilla planet - di, 26/05/2015 - 15:55

Servo (the parallel web browser) and YOU! A beginner's guide to contributing to Servo

Categorieën: Mozilla-nl planet

Air Mozilla: SilexLabs Test

Mozilla planet - di, 26/05/2015 - 15:30

SilexLabs Test Silex Labs

Categorieën: Mozilla-nl planet

Daniel Stenberg: 2015 curl user poll analysis

Mozilla planet - di, 26/05/2015 - 08:23

My full 30 page document with all details and analyses of the curl user poll 2015 is now available. It shows details of all the questions, most of them with a comparison with last year’s survey. The write-ins are also full of good advice, wisdom and some signs of ignorance or unawareness.

I hope all curl hackers and others generally interested in the project can use my “report” to learn something about our users and their view of the project and our products.

Let’s use this to guide us going forward.


Categorieën: Mozilla-nl planet

Byron Jones: happy bmo push day!

Mozilla planet - di, 26/05/2015 - 06:49

the following changes have been pushed to bugzilla.mozilla.org:

  • [1166280] max-width missing from bottom-actions, causing “top” and “new bug” buttons misalignment on the right edge of window
  • [1163868] Include requests from others in RequestNagger
  • [1164321] fix chrome issues
  • [1166616] disabling of reviewer last-seen doesn’t work for users that don’t have a last_seen_date
  • [1166777] Script for generating API keys
  • [1166623] move MAX_REVIEWER_LAST_SEEN_DAYS_AGO from a constant to a parameter
  • [1135164] display a warning on show_bug when an unassigned bug has a patch attached
  • [1158513] Script to convert MozReview attachments from parents to children
  • [1168190] “See Also” should accept webcompat.com/issues/ – prefixed URLs

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categorieën: Mozilla-nl planet

Mozilla to ditch $25 phones, to focus on efficiency instead of cost: Report - Tech2

Nieuws verzameld via Google - di, 26/05/2015 - 05:05

According to a report by CNET, Chief Executive Chris Beard has revealed in an email that Mozilla has changed its mobile strategy. Under a new 'Ignite' initiative, the company plans to introduce phones with 'compelling' features and not just with lower ...
Mozilla Decides To Ditch $25 Firefox Smartphone, To Focus on Quality, Rather ... (Trak.in blog)
Mozilla Drops Out $25 Smartphone, Begins New Ignite Initiative (Yibada, English Edition)
Mozilla to abandon $25 Firefox phones, may explore Android app compatibility ... (Times of India)
Latin Post; STGIST
Categorieën: Mozilla-nl planet

Web tracking puts lead in your saddlebags, finds Mozilla study - The Register

Nieuws verzameld via Google - di, 26/05/2015 - 01:59

You already know that too many tracking cookies will slow Web page loading down to a crawl. Now, a study by Mozilla and Columbia University quantifies the problem. According to Columbia's Georgios Kontaxis and Mozilla's Monica Chew, spiking the ...
Firefox's optional Tracking Protection reduces load time for top news sites by 44% (VentureBeat)
Firefox's tracking cookie blacklist reduces website load time by 44% (Ars Technica UK blog)
Categorieën: Mozilla-nl planet

Not Technologism, Alchemy.

David Ascher - di, 26/05/2015 - 00:50

Heated argument...

A couple of years ago, I was part of a panel with the remarkable Evgeny Morozov, at an event hosted by the Museum of Vancouver.  The event wasn’t remarkable, except that I ended up in a somewhat high-energy debate with Mr. Morozov, without particularly understanding why.  He ascribed to me beliefs I didn’t hold, which made a rigorous intellectual argument (which I generally enjoy) quite elusive and baffling.

It took a follow-up discussion with my political theorist brother to elucidate it.  In particular, he helped me realize that while for decades I’ve described myself as a technologist, depending on your definition of that word, I was misrepresenting myself.  Starting with my first job in my mid teens, my career (modulo an academic side-trip) has essentially been to bring technological skills to bear on a situation.  Using the word technologist to describe myself felt right, without much analysis.  As my brother pointed out, using an academic lexicon, the word technologist can be used to mean “someone who thinks technology is the answer”.  Using that definition, capital is to capitalist as technology is to technologist.  Once I understood that, I realized why Mr. Morozov was arguing with me.  He thought I believed in technologism, much like the way the New Yorker describes Marc Andreessen (I’m no Andreessen in many ways, but I do think he’d have been a better foil for Morozov).  It also made me realize that he either didn’t understand what Mozilla is trying to do, or didn’t believe me/us.

Using that definition, I am not a technologist any more than I am a capitalist or a regulator[ist?].  I resist absolutes, as it seems to me blatantly obvious (and probably boring) that except for rhetorical/marketing purposes (such as Morozov’s), the choices which one should make when building or changing a system, at every scale from the individual to the society, must be thoughtfully balanced.  This balance is hard to articulate and even harder to maintain, in a political environment which blurs subtlety and a network environment which amplifies differences.

 

The Purpose of Argument

 

It is in this balance that lies evolutionary progress.  I have been re-re-watching The Wire, thinking about Baltimore.  In season 3, one of the district commanders, sick of the status quo, tries to create a “free zone” for unimpeded drug dealing, an Amsterdam-in-Baltimore, in order to draw many of the associated social ills away from the rest of the district.  In doing this, he uses his authority (and fearlessness, thanks to his impending retirement), and combines management techniques, marketing, and positive and negative incentives, to try to fix a systemic social problem.  I want more stories like that (especially less fictional ones).

I don’t believe that technology is “the answer,” just like police vans aren’t the answer.  That said, ignoring the impact that technology can have on societal (or business!) challenges is just as silly as saying that technology holds the answer.  Yes, technology choices are often political, and that’s just fine, especially if we invite non-techies to help make those political decisions.  When it comes to driving behavioral change, tech is a tool like many others, from empathy and consumer marketing to capital and incentives.  Let’s understand and use all of these tools.

Human psychology explains why marketing works, and marketing has definitely been used for silly reasons in the past. Still, if part of your goal involves getting people to do something, refusing to use human psychology to further your well-intentioned outcome is cutting off your nose to spite your face.  Similarly, it’s unarguable that those who wield gigantic capital have had outsize influence in our society, and some of them are sociopaths. Still, refusing to consider how one could use capital and incentives to drive behavior is an equally flawed limit.  Lastly, obviously, technology can be dehumanizing. But technology can also, if wielded thoughtfully, be liberating.  Each has power, and power isn’t intrinsically evil.

1+1+1 = magic

Tech companies of the last decade have shown the outsize impact that these three disciplines, when wielded in coordinated fashion, can have on the world.  Uber’s success isn’t due to smart deployment of tech, psychology or capital.  Uber’s success is due to a brilliant combination of all three (and more).  I’ll leave it to history to figure out whether Uber’s impact will be net positive or negative based on metrics that I care about, but the magical quality that combination unleashed is awe inspiring.

 


 

This effective combination of disciplines is a critical step which I think eludes many people and organizations, due to specialization without coordination.  It is relatively clear how to become a better technologist or marketer.  The ladder of capitalist success is equally straightforward.  But figuring out how to blend those disciplines and unlock magic is elusive, both within a career path and within organizations.

I wonder if it’s possible to label this pursuit, in a gang-signal sort of way. Venture capitalists, marketers, and technologists all gain a lot by simply affiliating with these fields.  I find any one of these dimensions deeply boring, and the combinations so much more provocative.  Who’s with me and what do you call yourselves?  Combinatorists? Alchemists?

Categorieën: Mozilla-nl planet


Aki Sasaki: introducing scriptharness

Mozilla planet - ma, 25/05/2015 - 22:32

I found myself missing mozharness at various points over the past 10 months. Several things kept me from using it at my then-new job:

  • Even though we had kept mozharness.base.* largely non-Mozilla-specific, the mozharness clone-and-run model meant there was a lot of Mozilla-specific code that came along with it.

  • The monolithic BaseScript + mixins model had a very steep barrier of entry. There's a significant learning curve, and scripts need to be fully ported to mozharness to take advantage of its features.

I had wanted to address these issues for years, but never had time to devote fully to harness-specific development.

Now I do.

Introducing scriptharness 0.1.0:

I'm proud of this. I'm also aware it's not mature [yet], and it's currently missing some functionality.

There are some ideas I'd love to explore before 1.0.0:

  • multiple Script objects with threading and separate logs

  • Config Type Definitions

  • rethink how to enable/disable actions. I could keep mozharness' --add-action clobber --no-upload structure, or play with --actions +clobber -upload or something. (The - before the upload in the latter might cause argparse issues?)

  • also, explore Maven-style actions (all actions before target action are enabled) and actions with dependencies on other actions. I prefer keeping each action independent, idempotent, and individually targetable, but I can see someone wanting the other behavior for certain scripts.

  • I've already split out strings from code in a number of places, for unit testing. I'm curious what it would take to make scriptharness localizable, and if there would be demand for it.

  • ahal suggested adding structured logging; I'd love to investigate that.

I already have 0.2.0 on the brain. I'd love any feedback or patches.



Categorieën: Mozilla-nl planet

Aki Sasaki: mozharness turns 5

Mozilla planet - ma, 25/05/2015 - 22:21

Five years ago today, I landed the first mozharness commit in my user repo. (github)

starting something, or wasting my time. Log.py + a scratch trunk_nightly.json

The project had three initial goals:

  • First and foremost, I was tasked with building a multi-locale Fennec on Maemo. This was a more complex task than could sanely fit in a buildbot factory.

  • The Mozilla Releng team was already discussing pulling logic out of buildbot factories and into client-side scripts. I had been wanting to write a second version of my script framework idea. The first version was closed-source, perl, and very company- and product-specific. The second would improve on the first in every way, while keeping its three central principles of full logging, flexible config, and modular actions.

  • Finally, at that point I was still a Perl developer learning Python. I tend to learn languages by writing a project from scratch in that new language; this was my opportunity.

Multi-locale Fennec became a reality, and then we started adding projects to mozharness, one by one.

As of last July, mozharness was the client-side engine for the majority of Mozilla's CI and release infrastructure. I still see plenty of activity in bugmail and IRC these days. I'll be the first to point out its shortcomings, but I think overall it has been a success.

Happy birthday, mozharness!



Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - ma, 25/05/2015 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting

Categorieën: Mozilla-nl planet

Nick Desaulniers: Interpreter, Compiler, JIT

Mozilla planet - ma, 25/05/2015 - 17:35

Interpreters and compilers are interesting programs, themselves used to run or translate other programs, respectively. The programs being interpreted might be written in languages like JavaScript, Ruby, Python, PHP, and Perl, while those being compiled are written in C, C++, and to some extent Java and C#.

Taking the time to do translation to native machine code ahead of time can result in better performance at runtime, but an interpreter can get to work right away without spending any time translating. There happens to be a sweet spot somewhere in between interpretation and compilation that combines the best of both worlds. Such a technique is called Just In Time (JIT) compiling. While interpreting, compiling, and JIT’ing code might sound radically different, they’re actually strikingly similar. In this post, I hope to show how similar by comparing the code for an interpreter, a compiler, and a JIT compiler for the language Brainfuck in around 100 lines of C code each.

All of the code in the post is up on GitHub.

Brainfuck is an interesting, if hard to read, language. It only has eight operations it can perform > < + - . , [ ], yet is Turing complete. There’s nothing really to lex; each character is a token, and if the token is not one of the eight operators, it’s ignored. There’s also not much of a grammar to parse; the forward jumping and backwards jumping operators should be matched for well formed input, but that’s about it. In this post, we’ll skip over validating input assuming well formed input so we can focus on the interpretation/code generation. You can read more about it on the Wikipedia page, which we’ll be using as a reference throughout.

A Brainfuck program operates on a 30,000 element byte array initialized to all zeros. It starts off with a data pointer that initially points to the first element in the data array, or “tape.” In C code for an interpreter that might look like:

    // Initialize the tape with 30,000 zeroes.
    unsigned char tape[30000] = { 0 };

    // Set the pointer to point at the left most cell of the tape.
    unsigned char* ptr = tape;

Then, since we’re performing an operation for each character in the Brainfuck source, we can have a for loop over every character with a nested switch statement containing case statements for each operator.

The first two operators, > and < increment and decrement the data pointer.

    case '>': ++ptr; break;
    case '<': --ptr; break;

One caveat: because the interpreter is written in C and we're representing the tape as a plain array without validating our inputs, there's potential for a buffer overrun, since we're not performing bounds checks. Again, we're punting and assuming well formed input to keep the code and the point more precise.

Next up are the + and - operators, which increment and decrement the cell pointed to by the data pointer by one.

    case '+': ++(*ptr); break;
    case '-': --(*ptr); break;

The operators . and , provide Brainfuck’s only means of input and output: . writes the value pointed to by the data pointer to stdout as an ASCII character, while , reads one byte from stdin as an ASCII value and writes it to the cell pointed to by the data pointer.

    case '.': putchar(*ptr); break;
    case ',': *ptr = getchar(); break;
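Putting the cases shown so far together, a sketch of the runnable interpreter core might look like this (the bracket operators are still to come; the `run` helper name is invented here):

```c
#include <stdio.h>
#include <string.h>

// 30,000-cell tape, zero-initialized, shared with the cases below.
static unsigned char tape[30000];

// Dispatch loop covering the six non-bracket operators shown so far.
// Every character that isn't an operator is silently ignored.
void run(const char* input) {
    unsigned char* ptr = tape;
    size_t len = strlen(input);
    for (size_t i = 0; i < len; ++i) {
        switch (input[i]) {
            case '>': ++ptr;            break;
            case '<': --ptr;            break;
            case '+': ++(*ptr);         break;
            case '-': --(*ptr);         break;
            case '.': putchar(*ptr);    break;
            case ',': *ptr = getchar(); break;
            // '[' and ']' are handled next.
            default: break;
        }
    }
}
```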

Finally, our looping constructs, [ and ]. From the definition on Wikipedia for [: if the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command and for ]: if the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.

I interpret that as:

    case '[':
      if (!(*ptr)) {
        int loop = 1;
        while (loop > 0) {
          unsigned char current_char = input[++i];
          if (current_char == ']') {
            --loop;
          } else if (current_char == '[') {
            ++loop;
          }
        }
      }
      break;
    case ']':
      if (*ptr) {
        int loop = 1;
        while (loop > 0) {
          unsigned char current_char = input[--i];
          if (current_char == '[') {
            --loop;
          } else if (current_char == ']') {
            ++loop;
          }
        }
      }
      break;

Where the variable loop keeps track of open brackets for which we’ve not yet seen a matching close bracket, a.k.a. our nesting depth.

So we can see the interpreter is quite basic: in around 50 SLOC we’re able to read a byte and immediately perform an action based on the operator. How we perform that operation might not be the fastest, though.
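To see the whole loop in one place, here is a condensed version of the same logic as a function. The bf_run name and the output buffer are my own framing (the post's version reads a file and writes to stdout), but the per-operator behavior is identical:

```c
#include <string.h>

/* Minimal Brainfuck evaluator: no ',' support, output captured in `out`.
 * Assumes well-formed input, just like the interpreter in the post. */
static void bf_run(const char* input, char* out) {
  unsigned char tape[30000];
  memset(tape, 0, sizeof(tape));
  unsigned char* ptr = tape;   /* the data pointer */
  size_t n = strlen(input);
  size_t o = 0;
  for (size_t i = 0; i < n; ++i) {
    switch (input[i]) {
      case '>': ++ptr; break;
      case '<': --ptr; break;
      case '+': ++(*ptr); break;
      case '-': --(*ptr); break;
      case '.': out[o++] = (char)*ptr; break;
      case '[':
        if (!(*ptr)) {               /* scan forward to matching ] */
          int loop = 1;
          while (loop > 0) {
            char c = input[++i];
            if (c == ']') --loop;
            else if (c == '[') ++loop;
          }
        }
        break;
      case ']':
        if (*ptr) {                  /* scan backward to matching [ */
          int loop = 1;
          while (loop > 0) {
            char c = input[--i];
            if (c == '[') --loop;
            else if (c == ']') ++loop;
          }
        }
        break;
    }
  }
  out[o] = '\0';
}
```

Running it on the classic "8 × 8 + 1" program shows a single 'A' (ASCII 65) come out the other end.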

How about if we want to compile the Brainfuck source code to native machine code? Well, we need to know a little bit about our host machine’s Instruction Set Architecture (ISA) and Application Binary Interface (ABI). The rest of the code in this post will not be as portable as the above C code, since it assumes an x86-64 ISA and UNIX ABI. Now would be a good time to take a detour and learn more about writing assembly for x86-64. The interpreter is even portable enough to build with Emscripten and run in a browser!

For our compiler, we’ll iterate over every character in the source file again, switching on the recognized operator. This time, instead of performing an action right away, we’ll print assembly instructions to stdout. Doing so requires running the compiler on an input file, redirecting stdout to a file, then running the system assembler and linker on that file. We’ll stick with just compiling, and leave assembling (though it’s not too difficult) and linking out (for now).

First, we need to print a prologue for our compiled code:

const char* const prologue =
  ".text\n"
  ".globl _main\n"
  "_main:\n"
  " pushq %rbp\n"
  " movq %rsp, %rbp\n"
  " pushq %r12\n"        // store callee saved register
  " subq $30008, %rsp\n" // allocate 30,008 B on stack, and realign
  " leaq (%rsp), %rdi\n" // address of beginning of tape
  " movl $0, %esi\n"     // fill with 0's
  " movq $30000, %rdx\n" // length 30,000 B
  " call _memset\n"      // memset
  " movq %rsp, %r12";
puts(prologue);

During the linking phase, we’ll make sure to link in libc so we can call memset. What we’re doing is backing up the callee saved registers we’ll be using, stack allocating the tape, realigning the stack (x86-64 ABI point #1), copying the address of the only item on the stack into a register for our first argument, setting the second argument to the constant 0, the third arg to 30000, then calling memset. Finally, we use the callee saved register %r12 as our data pointer, which holds an address into the tape on the stack.

We can expect the call to memset to result in a segfault if we simply subtract 30000 B and don’t realign for the two registers (64 b, i.e. 8 B, each) we pushed on the stack. The stack is misaligned upon function entry in x86-64, so the first pushed register aligns it on a 16 B boundary and the second misaligns it again; since 30000 is a multiple of 16, we allocate an additional 8 B on the stack to restore alignment (x86-64 ABI point #1).
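A quick way to convince yourself of the arithmetic is to track %rsp modulo 16 through the prologue. This is a toy model of the stack pointer, not real stack manipulation (the function name is mine):

```c
/* Track %rsp mod 16 through the compiled prologue: entry is misaligned
 * by 8 (the call pushed an 8 B return address), each push subtracts 8,
 * and subq subtracts the allocation size. Returns the final rsp mod 16;
 * 0 means correctly aligned for the call to memset. */
static int rsp_mod16_after_prologue(int alloc_bytes) {
  int rsp = 8;                    /* misaligned on function entry */
  rsp = (rsp + 16 - 8) % 16;      /* pushq %rbp: -8 == +8 (mod 16) */
  rsp = (rsp + 16 - 8) % 16;      /* pushq %r12 */
  return (rsp + 16 - alloc_bytes % 16) % 16; /* subq $alloc_bytes, %rsp */
}
```

With the extra 8 B (30008) the result is 0, aligned; with a bare 30000 it stays misaligned by 8, which is exactly the segfault scenario described above.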

Moving the data pointer (>, <) and modifying the pointed-to value (+, -) are straightforward:

case '>':
  puts(" inc %r12");
  break;
case '<':
  puts(" dec %r12");
  break;
case '+':
  puts(" incb (%r12)");
  break;
case '-':
  puts(" decb (%r12)");
  break;

For output, ., we need to copy the pointed-to byte into the register for the first argument to putchar. We explicitly zero out the register before calling putchar, since it takes an int (32 b), but we’re only copying a char (8 b) (look up C’s type promotion rules for more info). x86-64 has an instruction that does both, movzXX, where the first X is the source size (b, w) and the second is the destination size (w, l, q). Thus movzbl moves a byte (8 b) into a double word (32 b). %rdi and %edi are the same register, but %rdi is the full 64 b register, while %edi is the lowest (or least significant) 32 b.
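You can see the same zero-extension requirement from C itself: an unsigned char zero-extends when widened to int (what movzbl does), while a signed char sign-extends (what movsbl would do). A small illustration, not from the original post:

```c
/* Widening a byte to an int: unsigned chars zero-extend, which is the
 * movzbl behavior; signed chars sign-extend, the movsbl behavior. */
static int widen_unsigned(unsigned char c) { return c; /* zero-extend */ }
static int widen_signed(signed char c)     { return c; /* sign-extend */ }
```

For the byte 0xFF these give 255 and -1 respectively, so emitting the wrong widening instruction would hand putchar a very different int.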

case '.':
  // move byte to double word and zero upper bits since putchar takes an
  // int.
  puts(" movzbl (%r12), %edi");
  puts(" call _putchar");
  break;

Input (,) is easy; call getchar, then move the resulting lowest byte into the cell pointed to by the data pointer. %al is the lowest 8 b of the 64 b %rax register.

case ',':
  puts(" call _getchar");
  puts(" movb %al, (%r12)");
  break;

As usual, the looping constructs ([ & ]) are much more work. We have to match up jumps with labels, but in an assembly program labels must be unique. One way to solve this: whenever we encounter an opening bracket, push a monotonically increasing number, representing the number of opening brackets we’ve seen so far, onto a stack-like data structure. Then we do our comparison and jump to the label that will be produced by the matching close bracket. Next, we insert our starting label, and finally increment the number of brackets seen.

case '[':
  stack_push(&stack, num_brackets);
  puts(" cmpb $0, (%r12)");
  printf(" je bracket_%d_end\n", num_brackets);
  printf("bracket_%d_start:\n", num_brackets++);
  break;

For close brackets, we pop the number of brackets seen (or rather, number of pending open brackets which we have yet to see a matching close bracket) off of the stack, do our comparison, jump to the matching start label, and finally place our end label.

case ']':
  stack_pop(&stack, &matching_bracket);
  puts(" cmpb $0, (%r12)");
  printf(" jne bracket_%d_start\n", matching_bracket);
  printf("bracket_%d_end:\n", matching_bracket);
  break;
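The stack_push and stack_pop helpers aren't shown in the post; a minimal fixed-capacity sketch (my own, under the assumption that bracket nesting stays below a small limit) could look like:

```c
#include <assert.h>

#define STACK_CAP 64

struct stack {
  int items[STACK_CAP];
  int top; /* number of items currently on the stack */
};

/* Push a value; asserts rather than handling overflow, for brevity. */
static void stack_push(struct stack* s, int value) {
  assert(s->top < STACK_CAP);
  s->items[s->top++] = value;
}

/* Pop the most recently pushed value into *value. */
static void stack_pop(struct stack* s, int* value) {
  assert(s->top > 0);
  *value = s->items[--s->top];
}
```

LIFO order is exactly what bracket matching needs: the most recent unmatched [ is always the one a ] closes.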

So for sequential loops ([][]) we can expect the relevant assembly to look like:

cmpb $0, (%r12)
je bracket_0_end
bracket_0_start:
cmpb $0, (%r12)
jne bracket_0_start
bracket_0_end:

cmpb $0, (%r12)
je bracket_1_end
bracket_1_start:
cmpb $0, (%r12)
jne bracket_1_start
bracket_1_end:

and for nested loops ([[]]), we can expect assembly like the following (note the difference in the order of numbered start and end labels):

cmpb $0, (%r12)
je bracket_0_end
bracket_0_start:
cmpb $0, (%r12)
je bracket_1_end
bracket_1_start:
cmpb $0, (%r12)
jne bracket_1_start
bracket_1_end:
cmpb $0, (%r12)
jne bracket_0_start
bracket_0_end:

Finally, we need an epilogue to clean up the stack and callee saved registers after ourselves.

const char* const epilogue =
  " addq $30008, %rsp\n" // clean up tape from stack.
  " popq %r12\n"         // restore callee saved register
  " popq %rbp\n"
  " ret\n";
puts(epilogue);

The compiler makes modifying and running a Brainfuck program a pain: it takes a couple of extra commands to compile the Brainfuck program to assembly, assemble the assembly into an object file, link it into an executable, and run it, whereas with the interpreter we can just run it. The trade-off is that the compiled version is quite a bit faster. How much faster? Let’s save that for later.

Wouldn’t it be nice if there was a translation & execution technique that didn’t force us to compile our code every time we changed it and wanted to run it, but also offered performance closer to that of compiled code? That’s where a JIT compiler comes in!

For the basics of JITing code, make sure you read my previous article on the basics of JITing code in C. We’re going to follow the same technique of creating executable memory, copying bytes into that memory, casting it to a function pointer, then calling it. Just like the interpreter and the compiler, we’re going to do a unique action for each recognized token. What’s different is that for each operator, we’re going to push opcodes into a dynamic array, that way it can grow based on our sequential reading of input and will simplify our calculation of relative offsets for branching operations.
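The dynamic array (vector_push and friends) is also left to the reader; a bare-bones growable byte buffer along these lines would do. The names match the post's usage, but the implementation is my guess:

```c
#include <stdlib.h>
#include <string.h>

struct vector {
  char* data;
  size_t size;     /* bytes used */
  size_t capacity; /* bytes allocated */
};

/* Append `len` bytes, doubling capacity as needed. Allocation-failure
 * handling is elided for brevity. */
static void vector_push(struct vector* v, const char* bytes, size_t len) {
  if (v->size + len > v->capacity) {
    size_t new_cap = v->capacity ? v->capacity * 2 : 64;
    while (new_cap < v->size + len) new_cap *= 2;
    v->data = realloc(v->data, new_cap);
    v->capacity = new_cap;
  }
  memcpy(v->data + v->size, bytes, len);
  v->size += len;
}

static void vector_destroy(struct vector* v) {
  free(v->data);
  v->data = NULL;
  v->size = v->capacity = 0;
}
```

The running `size` is doing double duty here: it is also the offset of the next instruction, which is what makes the relocation bookkeeping later on so convenient.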

The other special thing we’re going to do is pass the addresses of our libc functions (memset, putchar, and getchar) into our JIT’ed function at runtime. This avoids those kooky stub functions you might see in a disassembled executable. That means we’ll be invoking our JIT’ed function like:

typedef void* fn_memset (void*, int, size_t);
typedef int fn_putchar (int);
typedef int fn_getchar ();
void (*jitted_func) (fn_memset, fn_putchar, fn_getchar) = mem;
jitted_func(memset, putchar, getchar);

Where mem is our mmap’ed executable memory with our opcodes copied into it, and the typedef’s are for the respective function signatures for our function pointers we’ll be passing to our JIT’ed code. We’re kind of getting ahead of ourselves, but knowing how we will invoke the dynamically created executable code will give us an idea of how the code itself will work.

The prologue is quite involved, so we’ll take it a step at a time. First, we have the usual prologue:

char prologue [] = {
  0x55, // push rbp
  0x48, 0x89, 0xE5, // mov rsp, rbp

Then we want to back up our callee saved registers that we’ll be using. Expect horrific and difficult to debug bugs if you forget to do this.

  0x41, 0x54, // pushq %r12
  0x41, 0x55, // pushq %r13
  0x41, 0x56, // pushq %r14

At this point, %rdi will contain the address of memset, %rsi will contain the address of putchar, and %rdx will contain the address of getchar, see x86-64 ABI point #2. We want to store these in callee saved registers before calling any of them, else they may clobber %rdi, %rsi, or %rdx since they’re not “callee saved,” rather “call clobbered.” See x86-64 ABI point #4.

  0x49, 0x89, 0xFC, // movq %rdi, %r12
  0x49, 0x89, 0xF5, // movq %rsi, %r13
  0x49, 0x89, 0xD6, // movq %rdx, %r14

At this point, %r12 will contain the address of memset, %r13 will contain the address of putchar, and %r14 will contain the address of getchar.

Next up is allocating 30008 B on the stack:

  0x48, 0x81, 0xEC, 0x38, 0x75, 0x00, 0x00, // subq $30008, %rsp

This is our first hint at how numbers whose value is larger than the maximum representable value in a byte are represented on x86-64. Where in this instruction is the value 30008? The answer is the 4 byte sequence 0x38, 0x75, 0x00, 0x00. The x86-64 architecture is “Little Endian,” which means that the least significant byte (LSB) comes first and the most significant byte (MSB) comes last. When humans do math, they typically represent numbers the other way, or “Big Endian.” Thus we write decimal ten as “10” and not “01.” So 0x38, 0x75, 0x00, 0x00 in Little Endian is 0x00, 0x00, 0x75, 0x38 in Big Endian, which is 7*16^3 + 5*16^2 + 3*16^1 + 8*16^0, or 30008 in decimal: the number of bytes we want to subtract from the stack pointer. We’re allocating an additional 8 B on the stack for alignment requirements, similar to the compiler; having pushed an even number of 64 b registers, we need to realign our stack pointer.
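We can check the encoding ourselves by decomposing a 32 b value into its in-memory byte order on a little-endian host (the helper name is mine; the rest of the post already assumes an LE machine):

```c
#include <stdint.h>

/* Decompose a 32 b value into little-endian byte order: least
 * significant byte first, exactly as the immediate is encoded. */
static void le_bytes(uint32_t value, uint8_t out[4]) {
  out[0] = value & 0xFF;
  out[1] = (value >> 8) & 0xFF;
  out[2] = (value >> 16) & 0xFF;
  out[3] = (value >> 24) & 0xFF;
}
```

Feeding it 30008 (0x7538) reproduces the 0x38, 0x75, 0x00, 0x00 sequence baked into the subq instruction above.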

Next in the prologue, we set up and call memset:

  // address of beginning of tape
  0x48, 0x8D, 0x3C, 0x24, // leaq (%rsp), %rdi
  // fill with 0's
  0xBE, 0x00, 0x00, 0x00, 0x00, // movl $0, %esi
  // length 30,000 B
  0x48, 0xC7, 0xC2, 0x30, 0x75, 0x00, 0x00, // movq $30000, %rdx
  // memset
  0x41, 0xFF, 0xD4, // callq *%r12

After invoking memset, %rdi, %rsi, & %rdx will contain garbage values since they are “call clobbered” registers. At this point we no longer need memset, so we now use %r12 as our data pointer. %rsp will point to the top (technically the bottom) of the stack, which is the beginning of our memset’ed tape. That’s the end of our prologue.

  0x49, 0x89, 0xE4 // movq %rsp, %r12
};

We can then push our prologue into our dynamic array implementation:

vector_push(&instruction_stream, prologue, sizeof(prologue));

Now we iterate over our Brainfuck program and switch on the operations again. For pointer increment and decrement, we just nudge %r12.

case '>':
  {
    char opcodes [] = {
      0x49, 0xFF, 0xC4 // inc %r12
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;
case '<':
  {
    char opcodes [] = {
      0x49, 0xFF, 0xCC // dec %r12
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;

That extra fun block in the switch statement is there because in C/C++ we can’t declare variables directly after a case label; opening a new scope gives us somewhere to define opcodes.

Pointer deref then increment/decrement are equally uninspiring:

case '+':
  {
    char opcodes [] = {
      0x41, 0xFE, 0x04, 0x24 // incb (%r12)
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;
case '-':
  {
    char opcodes [] = {
      0x41, 0xFE, 0x0C, 0x24 // decb (%r12)
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;

I/O might be interesting, but in x86-64 we have an opcode for calling the function at the end of a pointer. %r13 contains the address of putchar while %r14 contains the address of getchar.

case '.':
  {
    char opcodes [] = {
      0x41, 0x0F, 0xB6, 0x3C, 0x24, // movzbl (%r12), %edi
      0x41, 0xFF, 0xD5 // callq *%r13
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;
case ',':
  {
    char opcodes [] = {
      0x41, 0xFF, 0xD6, // callq *%r14
      0x41, 0x88, 0x04, 0x24 // movb %al, (%r12)
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;

Now, with our looping constructs, we get to the fun part. With the compiler, we deferred the concept of “relocation” to the assembler: we simply emitted labels that the assembler turned into relative offsets (jumps by values relative to the last byte of the jump instruction). We’ve found ourselves in a Catch-22 though: how many bytes forward do we jump to a matching close bracket that we haven’t seen yet?

Normally, an assembler might have a data structure known as a “relocation table.” It keeps track of the first byte after a label and jumps, rewriting jumps-to-labels (which aren’t kept around in the resulting binary executable) into relative jumps. Spidermonkey, Firefox’s JavaScript virtual machine, has two classes for this, MacroAssembler and Label. Spidermonkey embeds a linked list in the opcodes it generates for jumps whose label it hasn’t yet seen. Once it finds the label, it walks the linked list (which itself is embedded in the emitted instruction stream), patching up these locations as it goes.

For Brainfuck, we don’t have to do anything quite as fancy, since each label ends up having only one jump site. Instead, we can use a stack of integers that are offsets into our dynamic array, and do the relocation once we know exactly where we’re jumping to.

case '[':
  {
    char opcodes [] = {
      0x41, 0x80, 0x3C, 0x24, 0x00, // cmpb $0, (%r12)
      // Needs to be patched up
      0x0F, 0x84, 0x00, 0x00, 0x00, 0x00 // je <32b relative offset, 2's complement, LE>
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  stack_push(&relocation_table, instruction_stream.size); // create a label after
  break;

First we push the compare and jump opcodes, but for now we leave the relative offset blank (four zero bytes). We will come back and patch it up later. Then, we push the current length of dynamic array, which just so happens to be the offset into the instruction stream of the next instruction.

All of the relocation magic happens in the case for the closing bracket.

case ']':
  {
    char opcodes [] = {
      0x41, 0x80, 0x3C, 0x24, 0x00, // cmpb $0, (%r12)
      // Needs to be patched up
      0x0F, 0x85, 0x00, 0x00, 0x00, 0x00 // jne <32b relative offset, 2's complement, LE>
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  // ...

First, we push our comparison and jump instructions into the dynamic array. We already know the relative offset we need to jump back to at this point, and thus don’t need to push four empty bytes, but doing so makes the following math a little simpler, as we’re not done yet with this case.

  // ...
  stack_pop(&relocation_table, &relocation_site);
  relative_offset = instruction_stream.size - relocation_site;
  // ...

We pop the matching offset into the dynamic array (from the matching open bracket), and calculate the difference from the current size of the instruction stream to the matching offset to get our relative offset. What’s interesting is that this offset is equal in magnitude for the forward and backwards jumps that we now need to patch up. We simply go back in our instruction stream 4 B, and write that relative offset negated as a 32 b LE number (patching our backwards jump), then go back to the site of our forward jump minus 4 B and write that relative offset as a 32 b LE number (patching our forwards jump).

  // ...
  vector_write32LE(&instruction_stream, instruction_stream.size - 4, -relative_offset);
  vector_write32LE(&instruction_stream, relocation_site - 4, relative_offset);
  break;
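vector_write32LE is the one helper doing the actual patching; a plausible implementation (my sketch, reusing a struct like the vector assumed earlier) overwrites four bytes at a given offset, least significant byte first:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct vector {
  char* data;
  size_t size;
};

/* Overwrite 4 bytes at `offset` with `value`, least significant byte
 * first, patching a previously emitted 32 b placeholder. Casting to
 * uint32_t gives us the two's complement bit pattern of negative
 * offsets (backward jumps). */
static void vector_write32LE(struct vector* v, size_t offset, int32_t value) {
  uint32_t u = (uint32_t)value;
  assert(offset + 4 <= v->size);
  v->data[offset + 0] = (char)(u & 0xFF);
  v->data[offset + 1] = (char)((u >> 8) & 0xFF);
  v->data[offset + 2] = (char)((u >> 16) & 0xFF);
  v->data[offset + 3] = (char)((u >> 24) & 0xFF);
}
```

Writing, say, -22 produces 0xEA 0xFF 0xFF 0xFF in memory, the little-endian two's complement immediate the jne instruction expects.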

Thus, when writing a JIT, one must worry about manual relocation. From the Intel 64 and IA-32 Architectures Software Developer’s Manual Volume 2 (2A, 2B & 2C): Instruction Set Reference, A-Z “A relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level, it is encoded as a signed, 8-bit or 32-bit immediate value, which is added to the instruction pointer.”

The last thing we push onto our instruction stream is the cleanup code in the epilogue.

char epilogue [] = {
  0x48, 0x81, 0xC4, 0x38, 0x75, 0x00, 0x00, // addq $30008, %rsp
  // restore callee saved registers
  0x41, 0x5E, // popq %r14
  0x41, 0x5D, // popq %r13
  0x41, 0x5C, // popq %r12
  0x5d, // pop rbp
  0xC3 // ret
};
vector_push(&instruction_stream, epilogue, sizeof(epilogue));

A dynamic array of bytes isn’t really useful, so we need to create executable memory the size of the current instruction stream and copy all of the machine opcodes into it, cast it to a function pointer, call it, and finally clean up:

void* mem = mmap(NULL, instruction_stream.size, PROT_WRITE | PROT_EXEC,
                 MAP_ANON | MAP_PRIVATE, -1, 0);
memcpy(mem, instruction_stream.data, instruction_stream.size);
void (*jitted_func) (fn_memset, fn_putchar, fn_getchar) = mem;
jitted_func(memset, putchar, getchar);
munmap(mem, instruction_stream.size);
vector_destroy(&instruction_stream);

Note: we could have used the instruction stream rewinding technique to move the address of memset, putchar, and getchar as 64 b immediate values into %r12-%r14, which would have simplified our JIT’d function’s type signature.

Compile that, and we now have a function that will JIT compile and execute Brainfuck in roughly 141 SLOC. And, we can make changes to our Brainfuck program and not have to recompile it like we did with the Brainfuck compiler.

Hopefully it’s becoming apparent how similarly an interpreter, compiler, and JIT behave. In the interpreter, we immediately execute some operation. In the compiler, we emit the equivalent text based assembly instructions corresponding to what the higher level language might get translated to in the interpreter. In the JIT, we emit the binary opcodes into executable memory and manually perform relocation, where the binary opcodes are equivalent to the text based assembly we might emit in the compiler. A production ready JIT would probably have macros for each operation the JIT would perform, so the code would look more like the compiler rather than raw arrays of bytes (though the preprocessor would translate those macros into such). The entire process is basically disassembling C code with gobjdump -S -M suffix a.out, and punching in hex like one would a Gameshark.

Compare pointer incrementing from the three:

Interpreter:

case '>': ++ptr; break;

Compiler:

case '>':
  puts(" inc %r12");
  break;

JIT:

case '>':
  {
    char opcodes [] = {
      0x49, 0xFF, 0xC4 // inc %r12
    };
    vector_push(&instruction_stream, opcodes, sizeof(opcodes));
  }
  break;

Or compare the full sources of the interpreter, the compiler, and the JIT. Each, at ~100 lines of code, should be fairly easy to digest.

Let’s now examine the performance of these three. One of the longer running Brainfuck programs I can find is one that prints the Mandelbrot set as ASCII art to stdout.

Running the UNIX command time on the interpreter, compiled result, and the JIT, we should expect numbers similar to:

$ time ./interpreter ../samples/mandelbrot.b
43.54s user 0.03s system 99% cpu 43.581 total

$ ./compiler ../samples/mandelbrot.b > temp.s; ../assemble.sh temp.s; time ./a.out
3.24s user 0.01s system 99% cpu 3.254 total

$ time ./jit ../samples/mandelbrot.b
3.27s user 0.01s system 99% cpu 3.282 total

The interpreter is an order of magnitude slower than the compiled result or the JIT. Then again, the interpreter isn’t able to jump back and forth as efficiently as the compiler or JIT, since it scans back and forth for matching brackets (O(n)), while the other two can jump to where they need to go in a few instructions (O(1)). A production interpreter would probably translate the higher level language to a byte code, and thus be able to calculate the offsets used for jumps directly, rather than scanning back and forth.
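That byte-code trick can be sketched without a full byte-code compiler: one pre-pass over the program builds a table mapping each bracket to its match, turning every jump into an O(1) lookup. This is my own sketch of the optimization, not code from the post:

```c
#include <string.h>
#include <stddef.h>

/* One pass over the program fills jump_table, where jump_table[i] is
 * the index of the bracket matching the bracket at i (untouched
 * elsewhere). Returns 0 on success, -1 on unbalanced brackets.
 * Nesting depth is capped at 256 for brevity. */
static int build_jump_table(const char* prog, size_t* jump_table) {
  size_t stack[256];
  size_t depth = 0;
  size_t n = strlen(prog);
  for (size_t i = 0; i < n; ++i) {
    if (prog[i] == '[') {
      if (depth == 256) return -1;
      stack[depth++] = i;
    } else if (prog[i] == ']') {
      if (depth == 0) return -1;
      size_t open = stack[--depth];
      jump_table[open] = i;
      jump_table[i] = open;
    }
  }
  return depth == 0 ? 0 : -1;
}
```

The interpreter's [ and ] cases then collapse to `i = jump_table[i]` when the comparison says to jump, and as a bonus the pre-pass rejects unbalanced programs up front.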

The interpreter bounces back and forth between looking up an operation and then doing something based on it. The compiler and JIT perform the translation first, then the execution, without interleaving the two.

The compiled result is the fastest, as expected, since it doesn’t have the overhead the JIT does of having to read the input file or build up the instructions to execute at runtime. The compiler has read and translated the input file ahead of time.

What if we take into account the time it takes to compile the source code, and run it?

$ time (./compiler ../samples/mandelbrot.b > temp.s; ../assemble.sh temp.s; ./a.out)
3.27s user 0.08s system 99% cpu 3.353 total

Including the time it takes to compile the code then run it, the compiled results are now slightly slower than the JIT (though I bet the multiple processes we start up are suspect), but with the JIT we pay the price to compile each and every time we run our code. With the compiler, we pay that tax once. When compilation time is cheap, as is the case with our Brainfuck compiler & JIT, it makes sense to prefer the JIT; it allows us to quickly make changes to our code and re-run it. When compilation is expensive, we might only want to pay the compilation tax once, especially if we plan on running the program repeatedly.

JITs are neat, but compared to compilers they can be more complex to implement; they also repeatedly re-parse input files and re-build instruction streams at runtime. Where they can shine is bridging the gap for dynamically typed languages, where the runtime itself is much more dynamic and thus harder (if not impossible) to optimize ahead of time. Being able to jump into JIT’d native code from an interpreter and back gives you the best of both (interpreted and compiled) worlds.

Categories: Mozilla-nl planet

Nathan Froyd: white space as unused advertising space

Mozilla planet - Mon, 25/05/2015 - 17:24

I picked up Matthew Crawford’s The World Beyond Your Head this weekend. The introduction, subtitled “Attention as a Cultural Problem”, opens with these words:

The idea of writing this book gained strength one day when I swiped my bank card to pay for groceries. I watched the screen intently, waiting for it to prompt me to do the next step. During the following seconds it became clear that some genius had realized that a person in this situation is a captive audience. During those intervals between swiping my card, confirming the amount, and entering my PIN, I was shown advertisements. The intervals themselves, which I had previously assumed were a mere artifact of the communication technology, now seemed to be something more deliberately calibrated. These haltings now served somebody’s interest.

I have had a similar experience: the gas station down the road from me has begun playing loud “news media” clips on the digital display of the gas pump while your car is being refueled, cleverly exploiting the driver as a captive audience. Despite this gas station being somewhat closer to my house and cheaper than the alternatives, I have not been back since I discovered this practice.

Crawford continues, describing how a recent airline trip bombarded him with advertisements in “unused” (“unmonetized”?) spaces: on the fold-down tray table in his airplane seat, the moving handrail on the escalator in the airport, on the key card (!) for his hotel room. The logic of filling up unused space reaches even to airport security:

But in the last few years, I have found I have to be careful at the far end of [going through airport security], because the bottoms of the gray trays that you place your items in for X-ray screening are now papered with advertisements, and their visual clutter makes it very easy to miss a pinky-sized flash memory stick against a picture of fanned-out L’Oréal lipstick colors…

Somehow L’Oréal has the Transportation Security Administration on its side. Who made the decision to pimp out the security trays with these advertisements? The answer, of course, is that Nobody decided on behalf of the public. Someone made a suggestion, and Nobody responded in the only way that seemed reasonable: here is an “inefficient” use of space that could instead be used to “inform” the public of “opportunities.” Justifications of this flavor are so much a part of the taken-for-granted field of public discourse that they may override our immediate experience and render it unintelligible to us. Our annoyance dissipates into vague impotence because we have no public language in which to articulate it, and we search instead for a diagnosis of ourselves: Why am I so angry? It may be time to adjust the meds.

Reading the introduction seemed especially pertinent to me in light of last week’s announcement about Suggested Tiles. The snippets in about:home featuring Mozilla properties or efforts, or even co-opting tiles on about:newtab for similar purposes feels qualitatively different than using those same tiles for advertisements from third parties bound only to Mozilla through the exchange of money. I have at least consented to the former, I think, by downloading Firefox and participating in that ecosystem, similar to how Chrome might ask you to sign into your Google account when the browser opens. The same logic is decidedly not true in the advertising case.

People’s attention is a scarce resource. We should be treating it as such in Firefox, not by “informing” them of “opportunities” from third parties unrelated to Mozilla’s mission, even if we act with the utmost concern for their privacy. Like the gas station near my house, people are not going to come to Firefox because it shows them advertisements in “inefficiently” used space. At best, they will tolerate the advertisements (maybe even taking steps to turn them off); at worst, they’ll use a different web browser and they won’t come back.


Mozilla Decides To Ditch $25 Firefox Smartphone, To Focus on Quality, Rather ... - Trak.in (blog)

News gathered via Google - Mon, 25/05/2015 - 14:44


Mozilla in 2013 decided to get their hands wet in the Smartphone OS market. After a number of collaborations with many manufacturers based out of Europe and other countries, Mozilla has now decided to go another way! Mozilla had partnered with Intex ...
Mozilla to abandon $25 Firefox phones, may explore Android app compatibility ...Times of India
Mozilla Changes Firefox OS Strategy Due To Android's PopularitySTGIST
Mozilla Plans To Release Firefox-based Mobile DevicesPioneer News
Digital Trends -Tech2
all 110 news articles »
