
Mozilla Nederland
The Dutch Mozilla community

Ben Hearsum: Upcoming changes to Mac package layout, signing

Mozilla planet - di, 12/08/2014 - 19:05

Apple recently announced changes to how OS X applications must be packaged and signed in order for them to function correctly on OS X 10.9.5 and 10.10. The tl;dr version of this is “only mach-O binaries may live in .app/Contents/MacOS, and signing must be done on 10.9 or later”. Without any changes, future versions of Firefox will cease to function out-of-the-box on OS X 10.9.5 and 10.10. We do not have a release date for either of these OS X versions yet.

Changes required:
* Move all non-mach-O files out of .app/Contents/MacOS. Most of these will move to .app/Contents/Resources, but files that could legitimately change at runtime (e.g. everything in defaults/) will move to .app/MozResources (which can be modified without breaking the signature): https://bugzilla.mozilla.org/showdependencytree.cgi?id=1046906&hide_resolved=1. This work is in progress, but no patches are ready yet. (A quick check for stray non-Mach-O files is sketched after this list.)
* Add new features to the client side update code to allow partner repacks to continue to work. (https://bugzilla.mozilla.org/show_bug.cgi?id=1048921)
* Create and use 10.9 signing servers for these new-style apps. We still need to use our existing 10.6 signing servers for any builds without these changes. (https://bugzilla.mozilla.org/show_bug.cgi?id=1046749 and https://bugzilla.mozilla.org/show_bug.cgi?id=1049595)
* Update signing server code to support new v2 signatures.
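
To make the first change above concrete, one way to audit a bundle is to check the magic number at the start of each regular file directly under Contents/MacOS. The sketch below is not from the original post and the bundle path is hypothetical; it simply flags files that are not Mach-O (or fat/universal) binaries and would therefore have to move to Contents/Resources or MozResources:

use std::fs::{self, File};
use std::io::Read;

// Mach-O and fat/universal magic numbers, in both byte orders.
const MAGICS: [u32; 6] = [
    0xfeedface, 0xcefaedfe, // 32-bit Mach-O
    0xfeedfacf, 0xcffaedfe, // 64-bit Mach-O
    0xcafebabe, 0xbebafeca, // fat/universal binaries
];

fn main() -> std::io::Result<()> {
    // Hypothetical path; point this at the bundle you want to audit.
    let machos = "Firefox.app/Contents/MacOS";
    for entry in fs::read_dir(machos)? {
        let path = entry?.path();
        if !path.is_file() {
            continue;
        }
        // Read the first four bytes and compare against the known magics.
        let mut magic = [0u8; 4];
        let is_macho = File::open(&path)?.read_exact(&mut magic).is_ok()
            && MAGICS.contains(&u32::from_be_bytes(magic));
        if !is_macho {
            // These are the files that would need to be relocated.
            println!("not Mach-O: {}", path.display());
        }
    }
    Ok(())
}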

Timeline:
We are intending to ship the required changes with Gecko 34, which ships on November 25th, 2014. The changes required are very invasive, and we don’t feel that they can be safely backported to any earlier version quickly enough without major risk of regressions. We are still looking at whether or not we’ll backport to ESR 31. To this end, we’ve asked that Apple whitelist Firefox and Thunderbird versions that will not have the necessary changes in them. We’re still working with them to confirm whether or not this can happen.

This has been cross posted a few places – please send all follow-ups to the mozilla.dev.platform newsgroup.

Categorieën: Mozilla-nl planet

Gervase Markham: Absence

Mozilla planet - di, 12/08/2014 - 15:15

I will be away and without email from Thu 14th August to Friday 22nd August, and then mostly away from email for the following week as well (until Friday 29th August).

Categorieën: Mozilla-nl planet

Mozilla ID launches - Pro-Linux

Nieuws verzameld via Google - di, 12/08/2014 - 12:26

Mozilla ID launches
Pro-Linux
With Mozilla ID, the Mozilla Foundation has taken on a new project whose goal is a new logo and the creation of a living Mozilla brand. Since the founding of the Mozilla project on 31 March 1998, Mozilla used for quite some time a ...
Mozilla gets a new visual identity – soeren-hentzschel.at

Categorieën: Mozilla-nl planet

Benjamin Kerensa: UbuConLA: Firefox OS on show in Cartagena

Mozilla planet - di, 12/08/2014 - 11:30

If you are attending UbuConLA, I would strongly encourage you to check out the talks on Firefox OS and Webmaker. In addition to the talks, there will also be a Firefox OS workshop where attendees can get more hands-on.

When the organizers of UbuConLA reached out to me several months ago, I knew we really had to have a Mozilla presence at this event so that Ubuntu Users who are already using Firefox as their browser of choice could learn about other initiatives like Firefox OS and Webmaker.

People in Latin America have always had a very strong ethos in terms of their support and use of Free Software, and we have an amazingly vibrant community there in Colombia.

So if you will be anywhere near Universidad Tecnológica De Bolívar in Cartagena, Colombia, please go see the talks and learn why Firefox OS is the mobile platform that makes the open web a first class citizen.

Learn how you can build apps and test them in Firefox on Ubuntu! A big thanks to Guillermo Movia for helping us get some speakers lined up here! I really look forward to seeing some awesome Firefox OS apps getting published as a result of our presence at UbuConLA as I am sure the developers will love what Firefox OS has to offer.

 

Feliz Conferencia!

Categorieën: Mozilla-nl planet

Fredy Rouge: OpenBadges at Duolingo test center

Mozilla planet - di, 12/08/2014 - 09:06

Duolingo is starting a new certification program:

I think it would be a good idea if someone at MoFo (paid staff) wrote to or called these people to propose the integration of http://openbadges.org/ in their certification program.

I don’t really have friends at MoFo/OpenBadges, so if you think this is a good idea and you know anyone at OpenBadges, please forward this idea.


Filed under: Statut Tagged: duolingo, english, Mozilla, OpenBadgets
Categorieën: Mozilla-nl planet

Byron Jones: happy bmo push day!

Mozilla planet - di, 12/08/2014 - 09:00

the following changes have been pushed to bugzilla.mozilla.org:

  • [1049929] Product Support / Corp Support to Business Support
  • [1033897] Firefox OS MCTS Waiver Request Submission Form
  • [1041964] Indicate that a comment is required when selecting a value which has an auto-comment configured
  • [498890] Bugzilla::User::Setting doesn’t need to sort DB results
  • [993926] Bugzilla::User::Setting::get_all_settings() should use memcached
  • [1048053] convert bug 651803 dupes to INVALID bugs in “Invalid Bugs” product

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categorieën: Mozilla-nl planet

Downloading Mozilla Firefox for your Android Devices and PC - StreetWise Tech

Nieuws verzameld via Google - di, 12/08/2014 - 06:19

Downloading Mozilla Firefox for your Android Devices and PC
StreetWise Tech
Released back in September 2002, the well-known Mozilla Firefox has since been praised as one of the most popular open source web browsers available. Currently the third most popular, it has free versions on a ton of different operating systems ...

Google News
Categorieën: Mozilla-nl planet

Mozilla gets a new visual identity - soeren-hentzschel.at

Nieuws verzameld via Google - ma, 11/08/2014 - 23:58

Mozilla gets a new visual identity
soeren-hentzschel.at
The dinosaur has long since been retired as Mozilla’s logo. What remains is a wordmark and a colour palette consisting of just a few colours. In Mozilla’s view, this does not represent well enough what Mozilla stands for ...

Google News
Categorieën: Mozilla-nl planet

Hannah Kane: Maker Party Engagement Week 4

Mozilla planet - ma, 11/08/2014 - 23:22

We’re almost at the halfway point!

Here’s some fodder for this week’s Peace Room meetings.

tl;dr potential topics of discussion:

  • big increase in user accounts this week caused by change to snippet strategy
    • From Adam: We’re directing all snippet traffic straight to webmaker.org/signup while we develop a tailored landing page experience with built-in account creation. This page is really converting well for an audience as broad and cold as the snippet, and I believe we can increase this rate further with bespoke pages and optimization.

      Fun fact: this approach is generating a typical month’s worth of new webmaker users every three days.

  • what do we want from promotional partners?
  • what are we doing to engage active Mozillians?

——–

Overall stats:

  • Contributors: 5441 (we’ve passed the halfway point!)
  • Webmaker accounts: 106.3K (really big jump this week—11.6K new accounts this week as compared to 2.6K last week) (At one point we thought that 150K Webmaker accounts would be the magic number for hitting 10K Contributors. Should we revisit that assumption?)
  • Events: 1199 (up 10% from last week; this is down from the previous week which saw a 26% jump)
  • Hosts: 450 (up 14% from last week, same as the prior week)
  • Expected attendees: 61,910 (up 13% from last week, down a little bit from last week’s 16% increase)
  • Cities: 260 (up 8% from 241 last week)
  • Traffic: here are the last three weeks. You can see we’re maintaining the higher levels that started with last week’s increase to our snippet allotment.

traffic

  • The Webmaker user account conversion rate also went up this week:


  • Do we know what caused the improved conversion rate?

——————————————————————–

Engagement Strategy #1: PARTNER OUTREACH

EVENT PARTNERS: This week we started implementing our phone-based “hand holding” strategy. We’re tracking findings from partner calls on a spreadsheet and capturing learnings on an etherpad.

Notes:

  • as I understand it, we need to populate the Potential Contributors column with numbers (not words) to inform the expected Contributors trend line
  • same for the Potential Accounts column
  • are we using the Potential Events column to inform a trend line on any dashboard?
  • oh, and let’s agree on a format convention for the date field, so that we can sort by date

PROMOTIONAL PARTNERS: It still looks like we’re only getting handfuls of referrals through the specific partner URLs. I’d like to clarify what exactly our goals are for promotional partners, so that we can figure out whether to focus more attention on tracking results.

——————————————————————–

Engagement Strategy #2: ACTIVE MOZILLIANS

I haven’t heard anything about engaging Reps or FSAs this week. Have we done anything on this front?

——————————————————————–

Engagement Strategy #3: OWNED MEDIA

Snippet:

The snippet continues to perform well in terms of driving traffic. Last week we sent the first of the drip campaign emails and saw the following results after the first two days:

  • Sent to 75,964
  • Unique opens 13187
  • Open rate 17%
  • Unique clicks 4004
  • Open to click rate 30%
  • New accounts 554
  • Email to account conversion 0.73%
  • Click to conversion 13.84%

The snippet working group met and agreed to build the following two iterations:

  • Survey without email > 2 x tailored account signup pages > ongoing journey
  • Immediate account signup page > ongoing journey

——————————————————————–

Engagement Strategy #4: EARNED MEDIA

Press this week:

We revised our strategy with Turner this week. See previous email on that topic.

Brand awareness

Here’s this week’s traffic coming from searches for “webmaker” and “maker party” (blue line) vs. the week before (orange line). There’s been a 28% increase (though the overall numbers are quite small).

SOCIAL (not one of our key strategies): #MakerParty trendline: Back down a bit this week.

See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd


Categorieën: Mozilla-nl planet

Sean Martell: What is a Living Brand?

Mozilla planet - ma, 11/08/2014 - 19:22

Today, we’re starting the Mozilla ID project, which will be an exploration into how to make the Mozilla identity system as bold and dynamic as the Mozilla project itself. The project will look into tackling three of our brand elements – typography, color, and the logo. Of these three, the biggest challenge will be creating a new logo, since we don’t currently have an official mark. Mozilla’s previous logo was the ever-amazing dino head that we all love, which has now been set as a key branding element for our community-facing properties. Its replacement should embody everything that Mozilla is, and our goal is to bake as much of our nature into the visual as we can while keeping it clean and modern. In order to do this, we’re embracing the idea of creating a living brand.

A living brand you say? Tell me more.

Image from DesignBoom

I’m pleased to announce you already know what a living brand is, you just may not know it under that term. If you’ve ever seen the MTV logo – designed in 1981 by Manhattan Design – you’ve witnessed a living brand. The iconic M and TV shapes are the base elements for their brand and building on that with style, color, illustrations and animations creates the dynamic identity system that brings it alive. Their system allows designers to explore unlimited variants of the logo, while maintaining brand consistency with the underlying recognizable shapes. As you can tell through this example, a living brand can unlock so much potential for a logo, opening up so many possibilities for change and customization. It’s because of this that we feel a living brand is perfect for Mozilla – we’ll be able to represent who we are through an open visual system of customization and creative expression.

You may be wondering how this is so open if Mozilla Creative will be doing all of the variants for this new brand? Here’s the exciting part. We’re going to be helping define the visual system, yes, but we’re exploring dynamic creation of the visual itself through code and data visualization. We’re also going to be creating the visual output using HTML5 and Web technologies, baking the building blocks of the Web we love and protect into our core brand logo.

OMG exciting, right? Wait, there’s still more!

In order to have this “organized infinity” allow a strong level of brand recognition, we plan to have a constant mark as part of the logo, similar to how MTV did it with the base shapes. Here’s the fun part and one of several ways you can get involved – we’ll be live streaming the process with a newly minted YouTube channel where you can follow along as we explore everything from wordmark choices to building out those base logo shapes and data viz styles. Yay! Open design process!

So there it is. Our new fun project. Stay tuned to various channels coming out of Creative – this blog, my Twitter account, the Mozilla Creative blog and Twitter account – and we’ll update you shortly on how you’ll be able to take part in the process. For now, feel free to jump in to #mologo on IRC to say hi and discuss all things Mozilla brand!

It’s a magical time for design, Mozilla. Let’s go exploring!

Categorieën: Mozilla-nl planet

Gervase Markham: Accessing Vidyo Meetings Using Free Software: Help Needed

Mozilla planet - ma, 11/08/2014 - 17:43

For a long time now, Mozilla has been a heavy user of the Vidyo video-conferencing system. Like Skype, it’s a “pretty much just works” solution where, sadly, the free software and open standards solutions don’t yet cut it in terms of usability. We hope WebRTC might change this. Anyway, in the mean time, we use it, which means that Mozilla staff have had to use a proprietary client, and those without a Vidyo login of their own have had to use a Flash applet. Ick. (I use a dedicated Android tablet for Vidyo, so I don’t have to install either.)

However, this sad situation may now have changed. In this bug, it seems that SIP and H.263/H.264 gateways have been enabled on our Vidyo setup, which should enable people to call in using standards-compliant free software clients. However, I can’t get video to work properly, using Linphone. Is there anyone out there in the Mozilla world who can read the bug and figure out how to do it?

Categorieën: Mozilla-nl planet

Gervase Markham: It’s Not All About Efficiency

Mozilla planet - ma, 11/08/2014 - 17:30

Delegation is not merely a way to spread the workload around; it is also a political and social tool. Consider all the effects when you ask someone to do something. The most obvious effect is that, if he accepts, he does the task and you don’t. But another effect is that he is made aware that you trusted him to handle the task. Furthermore, if you made the request in a public forum, then he knows that others in the group have been made aware of that trust too. He may also feel some pressure to accept, which means you must ask in a way that allows him to decline gracefully if he doesn’t really want the job. If the task requires coordination with others in the project, you are effectively proposing that he become more involved, form bonds that might not otherwise have been formed, and perhaps become a source of authority in some subdomain of the project. The added involvement may be daunting, or it may lead him to become engaged in other ways as well, from an increased feeling of overall commitment.

Because of all these effects, it often makes sense to ask someone else to do something even when you know you could do it faster or better yourself.

– Karl Fogel, Producing Open Source Software

Categorieën: Mozilla-nl planet

Just Browsing: “Because We Can” is Not a Good Reason

Mozilla planet - ma, 11/08/2014 - 17:17

The two business books that have most influenced me are Geoffrey Moore’s Crossing the Chasm and Andy Grove’s Only the Paranoid Survive. Grove’s book explains that, for long-term success, established businesses must periodically navigate “strategic inflection points”, moments when a paradigm shift forces them to adopt a new strategy or fade into irrelevance. Moore’s book could be seen as a prequel, outlining strategies for nascent companies to break through and become established themselves.

The key idea of Crossing the Chasm is that technology startups must focus ruthlessly in order to make the jump from early adopters (who will use new products just because they are cool and different) into the mainstream. Moore presents a detailed strategy for marketing discontinuous hi-tech products, but to my mind his broad message is relevant to any company founder. You have a better chance of succeeding if you restrict the scope of your ambitions to the absolute minimum, create a viable business and then grow it from there.

This seems obvious: to compete with companies who have far more resources, a newcomer needs to target a niche where it can fight on an even footing with the big boys (and defeat them with its snazzy new technology). Taking on too much means that financial investment, engineering talent and, worst of all, management attention are diluted by spreading them across multiple projects.

So why do founders consistently jeopardize their prospects by trying to do too much? Let me count the ways.

In my experience the most common issue is an inability to pass up a promising opportunity. The same kind of person who starts their own company tends to be a go-getter with a bias towards action, so they never want to waste a good idea. In the very early stages this is great. Creativity is all about trying as many ideas as possible and seeing what sticks. But once you’ve committed to something that you believe in, taking more bets doesn’t increase your chances of success, it radically decreases them.

Another mistake is not recognizing quickly enough that a project has failed. Failure is rarely total. Every product will have a core group of passionate users or a flashy demo or some unique technology that should be worth something, dammit! The temptation is to let the project drag on even as you move on. Better to take a deep breath and kill it off so you can concentrate on your new challenges, rather than letting it weigh you down for months or years… until you inevitably cancel it anyway.

Sometimes lines of business need to be abandoned even if they are successful. Let’s say you start a small but prosperous company selling specialized accounting software to Lithuanian nail salons. You add a cash-flow forecasting feature and realize after a while that it is better than anything else on the market. Now you have a product that you can sell to any business in any country. But you might as well keep selling your highly specialized accounting package in the meantime, right? After all, it’s still contributing to your top-line revenue. Wrong! You’ve found a much bigger opportunity now and you should dump your older business as soon as financially possible.

Last, but certainly not least, there is the common temptation to try to pack too much into a new product. I’ve talked to many enthusiastic entrepreneurs who are convinced that their product will be complete garbage unless it includes every minute detail of their vast strategic vision. What they don’t realize is that it is going to take much longer than they think to develop something far simpler than what they have in mind. This is where all the hype about the Lean Startup and Minimum Viable Products is spot on. They force you to make tough choices about what you really need before going to market. In the early stages you should be hacking away big chunks of your product spec with a metaphorical machete, not agonizing over every “essential” feature you have to let go.

The common thread is that ambitious, hard-charging individuals, the kind who start companies, have a tough time seeing the downside of plugging away endlessly at old projects, milking every last drop out of old lines of business and taking on every interesting new challenge that comes their way. But if you don’t have a coherent, disciplined strategic view of what you are trying to achieve, if you aren’t willing to strip away every activity that doesn’t contribute to this vision, then you probably aren’t working on the right things.

Categorieën: Mozilla-nl planet

Roberto A. Vitillo: Dashboard generator for custom Telemetry jobs

Mozilla planet - ma, 11/08/2014 - 17:07

tldr: Next time you are in need of a dashboard similar to the one used to monitor main-thread IO, please consider using my dashboard generator which takes care of displaying periodically generated data.

So you wrote your custom analysis for Telemetry, your map-reduce job is finally giving you the desired data, and you want to set it up so that it runs periodically. You will need some sort of dashboard to monitor the weekly runs, but since you don’t really care how it’s done, what do you do? You copy-paste the code of one of our current dashboards, apply a little tweak here and there, and off you go.

That basically describes all of the recent dashboards, like the one for main-thread IO (mea culpa). Writing dashboards is painful when the only thing you care about is the data. Once you finally have what you were looking for, the way you present it is often an afterthought at best. But maintaining N dashboards quickly becomes unpleasant.

But what exactly makes writing and maintaining dashboards so painful? It’s simply that the more controls you have, the more different kinds of events you have to handle, and the more easily things get out of hand. You start with something small and beautiful that just displays some CSV and, presto, you end up with what should properly have been described as a state machine but instead is a mess of intertwined event handlers.

What I was looking for was something along the lines of Shiny for R, but in JavaScript and with the option of a client-only interface. It turns out that React does more or less what I want. It’s not necessarily meant for data analysis, so there aren’t any plotting facilities, but everything is there to roll your own. What makes Shiny and React so useful is that they embrace reactive programming. Once you define a state and a set of dependencies, i.e. a data flow graph in practical terms, changes that affect the state are automatically propagated to the right components. Even though this can be seen as overkill for small dashboards, it makes it extremely easy to extend them when the set of possible states expands, which is almost always what happens.
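
To make that propagation idea concrete, here is a minimal sketch (not React or Shiny code, and with purely illustrative names): one piece of state holds a list of dependent views, and every change to that state re-runs each view.

// One observable value plus the views that depend on it.
struct Observable<T> {
    value: T,
    views: Vec<Box<dyn Fn(&T)>>,
}

impl<T> Observable<T> {
    fn new(value: T) -> Self {
        Observable { value, views: Vec::new() }
    }

    // Register a dependent view and render it once with the current state.
    fn subscribe(&mut self, view: Box<dyn Fn(&T)>) {
        view(&self.value);
        self.views.push(view);
    }

    // Change the state and propagate it to every dependent view.
    fn set(&mut self, value: T) {
        self.value = value;
        for view in &self.views {
            view(&self.value);
        }
    }
}

fn main() {
    let mut week = Observable::new("2014-07-28".to_string());
    week.subscribe(Box::new(|w| println!("ranking table: week of {}", w)));
    week.subscribe(Box::new(|w| println!("summary panel: week of {}", w)));
    // Moving back or forward in time updates every view automatically.
    week.set("2014-08-04".to_string());
}

React generalizes this to a tree of components and only re-renders what a state change actually affects, but the underlying data flow idea is the same.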

To make things easier for developers I wrote a dashboard generator, iacumus, for use-cases similar to the ones we currently have. It can be used in simple scenarios when:

  • the data is collected in csv files on a weekly basis, usually using build-ids;
  • the dashboard should compare the current week against the previous one and mark differences in rankings;
  • it should be possible to go back and forward in time;
  • the dashboard should provide some filtering and sorting criteria.

Iacumus is customizable through a configuration file that is specified through a GET parameter. Since it’s hosted on github, you just have to provide the data and don’t even have to spend time deploying the dashboard somewhere, assuming the machine serving the configuration file supports CORS. Here is what the end result looks like using the data for the Add-on startup correlation dashboard. Note that Chrome currently doesn’t handle our gzipped datasets properly and is unable to display anything, in case you wonder…

My next immediate goal is to simplify writing map-reduce jobs for the above-mentioned use cases, or at the very least write down some guidelines. For instance, some of our dashboards are based on Firefox’s version numbers and not on build-ids, which are really what you want when you want to compare Nightly on a weekly basis.

Another interesting thought would be to automatically detect differences in the dashboards and send alerts. That might not be as easy with the current data, since a quick look at the dashboards makes it clear that the rankings fluctuate quite a bit. We would have to collect daily reports and account for the variance of the rankings in those, as just using a few weekly datapoints is not reliable enough to account for the deviation.


Categorieën: Mozilla-nl planet

Matt Brubeck: Let's build a browser engine! Part 2: Parsing HTML

Mozilla planet - ma, 11/08/2014 - 17:00

This is the second in a series of articles on building a toy browser rendering engine:

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

<html>
    <body>
        <h1>Title</h1>
        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>
    </body>
</html>

The following syntax is allowed:

  • Balanced tags: <p>...</p>
  • Attributes with quoted values: id="main"
  • Text nodes: <em>world</em>

Everything else is unsupported, including:

  • Comments
  • Doctype declarations
  • Escaped characters (like &amp;) and CDATA sections
  • Self-closing tags: <br/> or <br> with no closing tag
  • Error handling (e.g. unbalanced or improperly nested tags)
  • Namespaces and other XHTML syntax: <html:body>
  • Character encoding detection

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...
}

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

/// Return the current character, and advance to the next character.
fn consume_char(&mut self) -> char {
    let range = self.input.as_slice().char_range_at(self.pos);
    self.pos = range.next;
    range.ch
}

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

/// Consume characters until `test` returns false.
fn consume_while(&mut self, test: |char| -> bool) -> String {
    let mut result = String::new();
    while !self.eof() && test(self.next_char()) {
        result.push_char(self.consume_char());
    }
    result
}

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

/// Consume and discard zero or more whitespace characters.
fn consume_whitespace(&mut self) {
    self.consume_while(|c| c.is_whitespace());
}

/// Parse a tag or attribute name.
fn parse_tag_name(&mut self) -> String {
    self.consume_while(|c| match c {
        'a'..'z' | 'A'..'Z' | '0'..'9' => true,
        _ => false
    })
}

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

/// Parse a single node.
fn parse_node(&mut self) -> dom::Node {
    match self.next_char() {
        '<' => self.parse_element(),
        _   => self.parse_text()
    }
}

/// Parse a text node.
fn parse_text(&mut self) -> dom::Node {
    dom::text(self.consume_while(|c| c != '<'))
}

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

/// Parse a single element, including its open tag, contents, and closing tag.
fn parse_element(&mut self) -> dom::Node {
    // Opening tag.
    assert!(self.consume_char() == '<');
    let tag_name = self.parse_tag_name();
    let attrs = self.parse_attributes();
    assert!(self.consume_char() == '>');

    // Contents.
    let children = self.parse_nodes();

    // Closing tag.
    assert!(self.consume_char() == '<');
    assert!(self.consume_char() == '/');
    assert!(self.parse_tag_name() == tag_name);
    assert!(self.consume_char() == '>');

    dom::elem(tag_name, attrs, children)
}

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

/// Parse a single name="value" pair.
fn parse_attr(&mut self) -> (String, String) {
    let name = self.parse_tag_name();
    assert!(self.consume_char() == '=');
    let value = self.parse_attr_value();
    (name, value)
}

/// Parse a quoted value.
fn parse_attr_value(&mut self) -> String {
    let open_quote = self.consume_char();
    assert!(open_quote == '"' || open_quote == '\'');
    let value = self.consume_while(|c| c != open_quote);
    assert!(self.consume_char() == open_quote);
    value
}

/// Parse a list of name="value" pairs, separated by whitespace.
fn parse_attributes(&mut self) -> dom::AttrMap {
    let mut attributes = HashMap::new();
    loop {
        self.consume_whitespace();
        if self.next_char() == '>' {
            break;
        }
        let (name, value) = self.parse_attr();
        attributes.insert(name, value);
    }
    attributes
}

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

/// Parse a sequence of sibling nodes.
fn parse_nodes(&mut self) -> Vec<dom::Node> {
    let mut nodes = vec!();
    loop {
        self.consume_whitespace();
        if self.eof() || self.starts_with("</") {
            break;
        }
        nodes.push(self.parse_node());
    }
    nodes
}

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments (a sketch follows this list). Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.
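
For exercise 2, a minimal starting point for comment support might look like the method below. It is a sketch rather than part of robinson: it only uses helpers already defined above (starts_with, consume_char, eof), and a real change would also need parse_node or parse_nodes to call it whenever the input starts with "<!--".

/// Skip over a comment such as <!-- ... -->. (Sketch; not in the original parser.)
fn consume_comment(&mut self) {
    // Consume the "<!--" opener, one character at a time.
    assert!(self.starts_with("<!--"));
    self.consume_char(); self.consume_char();
    self.consume_char(); self.consume_char();
    // Discard everything up to the closing "-->".
    while !self.eof() && !self.starts_with("-->") {
        self.consume_char();
    }
    // Consume the "-->" closer if the input did not end first.
    if !self.eof() {
        self.consume_char(); self.consume_char(); self.consume_char();
    }
}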

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
root.children.push(body);
body.children.push(text("Hello, world!"));

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

Categorieën: Mozilla-nl planet

Christian Heilmann: Presenter tip: animated GIFs are not as cool as we think

Mozilla planet - ma, 11/08/2014 - 15:42

Disclaimer: I have no right to tell you what to do and how to present – how dare I? You can do whatever you want. I am not “hating” on anything – and I don’t like the term. I am also guilty of the things I will talk about here, and will be again in the future. So, bear with me: as someone who currently spends most of his life presenting, being at conferences and coaching people to become presenters, I think it is time for an intervention.


The hardest part of putting together a talk for developers is finding the funny gifs that accurately represent your topic.
The Tweet that started this and its thread

If you are a technical presenter and you consider adding lots of animated GIFs to your slides, stop, and reconsider. Consider other ways to spend your time instead. For example:

  • Writing a really clean code example and keeping it in a documented code repository for people to use
  • Researching how very successful people use the thing you want the audience to care about
  • Finding a real life example where a certain way of working made a real difference and how it could be applied to an abstract coding idea
  • Researching real numbers to back up your argument or disprove common “truths”

Don’t fall for the “oh, but it is cool and everybody else does it” trap. Why? because when everybody does it there is nothing cool or new about it.

Animated GIFs are ubiquitous on the web right now and we all love them. They are short videos that work in any environment, they are funny and – being very pixelated – have a “punk” feel to them.

This, to me, was the reason presenters used them in technical presentations in the first place. They were a disruption, they were fresh, they were different.

We all got bored to tears by corporate presentations that had more bullets than the showdown in a Western movie. We all got fed up with amazingly brushed up presentations by visual aficionados that had just one too many inspiring butterfly or beautiful sunset.


We wanted something gritty, something closer to the metal – just as we are. Let’s be different, let’s disrupt, let’s show a seemingly unconnected animation full of pixels.

This is great and still there are many good reasons to use an animated GIF in our presentations:

  • They are an eye-catcher – animated things are what we look at as humans. The subconscious check of whether something that moves is a saber-toothed tiger trying to eat me is deeply ingrained in us. This can make an animated GIF a good first slide in a new section of your talk: you seemingly do something unexpected, but what you want to achieve is to get the audience to reset and focus on the next topic you’d like to cover.
  • They can be a good emphasis of what you are saying. When Soledad Penades shows a lady drinking under the table (6:05) while talking about her insecurities as someone people look up to, it makes a point. When Jake Archibald explains that navigator.onLine will be true even if the network cable is plugged into some soil (26:00), it is a funny, exciting and simple thing to do and adds to the point he makes.
  • It is an in-crowd thing to do – the irreverence of an animated, meme-ish GIF tells the audience that you are one of them, not a professional, slick and tamed corporate speaker.

But is it? Isn’t a trick that everybody uses way past being disruptive? Are we all unique and different when we all use the same content? How many more times do we have to endure the “this escalated quickly” GIF taken from a 10 year old movie? Let’s not even talk about the issue that we expect the audience to get the reference and why it would be funny.

We’re running the danger here of becoming predictable and boring. Especially when you see speakers who use an animated GIF and know it wasn’t needed and then try to shoe-horn it somehow into their narration. It is not a rite of passage. You should use the right presentation technique to achieve a certain response. A GIF that is in your slides just to be there is like an unused global variable in your code – distracting, bad practice and in general causing confusion.

The reasons why we use animated GIFs (or videos for that matter) in slides are also their main problem:

  • They do distract the audience – as a “whoa, something’s happening” reminder to the audience, that is good. When you have to compete with the blinking thing behind you it is bad. This is especially true when you choose a very “out there” GIF and you spend too much time talking over it. A fast animation or a very short loop can get annoying for the audience and instead of seeing you as a cool presenter they get headaches and think “please move on to the next slide” without listening to you. I made that mistake with my rainbow vomiting dwarf at HTML5Devconf in 2013 and was called out on Twitter.
  • They are too easy to add – many a time we are tempted just to go for the funny cat pounding a strawberry because it is cool and it means we are different as a presenter and surprising.

Well, it isn’t surprising any longer and it can be seen as a cheap way out for us as creators of a presentation. Filler material is filler material, no matter how quirky.

You don’t make a boring topic more interesting by adding animated images. You also don’t make a boring lecture more interesting by sitting on a fart cushion. Sure, it will wake people up and maybe get a giggle but it doesn’t give you a more focused audience. We stopped using 3D transforms in between slides and fiery text as they are seen as a sign of a bad presenter trying to make up for a lack of stage presence or lack of content with shiny things. Don’t be that person.

When it comes to technical presentations there is one important thing to remember: your slides do not matter and are not your presentation. You are.

Your slides are either:

  • wallpaper for your talking parts
  • emphasis of what you are currently covering or
  • a code example.

If a slide doesn’t cover any of these cases – remove it. Wallpaper doesn’t blink. It is there to be in the background and make the person in front of it stand out more. You already have to compete with a lot of other speakers, audience fatigue, technical problems, sound issues, the state of your body and bad lighting. Don’t add to the distractions you have to overcome by adding shiny trinkets of your own making.

You don’t make boring content more interesting by wrapping it in a shiny box. Instead, don’t talk about the boring parts. Make them interesting by approaching them differently, show a URL and a screenshot of the boring resources and tell people what they mean in the context of the topic you talk about. If you’re bored about something you can bet the audience is, too. How you come across is how the audience will react. And insincerity is the worst thing you can project. Being afraid or being shy or just being informative is totally fine. Don’t try too hard to please a current fashion – be yourself and be excited about what you present and the rest falls into place.

So, by all means, use animated GIFs when they fit – give humorous and irreverent presentations. But only do it when this really is you and the rest of your stage persona fits. There are masterful people out there doing this right – Jenn Schiffer comes to mind. If you go for this – go all in. Don’t let the fun parts of your talk steal your thunder. As a presenter, you are entertainer, educator and explainer. It is a mix, and as all mixes go, they only work when they feel rounded and in the right rhythm.

Categorieën: Mozilla-nl planet

T-DOSE

Mozilla-NL nieuws - ma, 11/08/2014 - 12:36
Date: Saturday, 25 October 2014, 09:30 to Sunday, 26 October 2014, 17:30. Who: Public. Tags: Community

T-DOSE is a free and yearly event held in The Netherlands to promote the use and development of Open Source Software. During this event Open Source projects, developers and visitors can exchange ideas and knowledge. This year’s event will be held on 25 and 26 October 2014 at the Fontys University of Applied Science in Eindhoven.

Flickr tags: T-DOSE
Categorieën: Mozilla-nl planet

Nicholas Nethercote: Some good reading on sexual harassment and being a decent person

Mozilla planet - ma, 11/08/2014 - 06:04

Last week I attended a sexual harassment prevention training seminar. This was the first of several seminars that Mozilla is holding as part of its commendable Diversity and Inclusion Strategy. The content was basically “how to not get sued for sexual harassment in the workplace”. That’s a low bar, but also a reasonable place to start, and the speaker was both informative and entertaining. I’m looking forward to the next seminar on Unconscious Bias and Inclusion, which sounds like it will cover more subtle issues.

With the topic of sexual harassment in mind, I stumbled across a Metafilter discussion from last year about an essay by Genevieve Valentine in which she describes and discusses a number of incidents of sexual harassment that she has experienced throughout her life. I found the essay interesting, but the Metafilter discussion thread even more so. It’s a long thread (594 comments) but mostly high quality. It focuses initially on one kind of harassment that some men perform on public transport, but quickly broadens to be about (a) the full gamut of harassing behaviours that many women face regularly, (b) the responses women make towards these behaviours, and (c) the reactions, both helpful and unhelpful, that people can and do have towards those responses. Examples abound, ranging from the disconcerting to the horrifying.

There are, of course, many other resources on the web where one can learn about such topics. Nonetheless, the many stories that viscerally punctuate this particular thread (and the responses to those stories) helped my understanding of this topic — in particular, how bystanders can intervene when a woman is being harassed — more so than some drier, more theoretical presentations have. It was well worth my time.

Categorieën: Mozilla-nl planet

Jonas Finnemann Jensen: Using Aggregates from Telemetry Dashboard in Node.js

Mozilla planet - ma, 11/08/2014 - 05:28

When I was working on the aggregation code for telemetry histograms as displayed on the telemetry dashboard, I also wrote a Javascript library (telemetry.js) to access the aggregated histograms presented in the dashboard. The idea was to separate concerns and simplify access to the aggregated histogram data, but also to allow others to write custom dashboards presenting this data in different ways. Since then two custom dashboards have appeared:

Both of these dashboards run a cronjob that downloads the aggregated histogram data using telemetry.js and then aggregates or analyses it in an interesting way before publishing the results on the custom dashboard. However, telemetry.js was written to be included from telemetry.mozilla.org/v1/telemetry.js, so that we could update the storage format, use a different data service, move to a bucket in another region, etc. I still want to maintain the ability to modify telemetry.js without breaking all the deployments, so I decided to write a node.js module called telemetry-js-node that loads telemetry.js from telemetry.mozilla.org/v1/telemetry.js. As evident from the example below, this module is straightforward to use, and exhibits full compatibility with telemetry.js, for better and worse.

// Include telemetry.js
var Telemetry = require('telemetry-js-node');

// Initialize telemetry.js just as the documentation says to
Telemetry.init(function() {
  // Get all versions
  var versions = Telemetry.versions();
  // Pick a version
  var version = versions[0];
  // Load measures for version
  Telemetry.measures(version, function(measures) {
    // Print measures available
    console.log("Measures available for " + version);
    // List measures
    Object.keys(measures).forEach(function(measure) {
      console.log(measure);
    });
  });
});

Whilst there certainly are some valid concerns (and risks) with loading Javascript code over http, this hack allows us to offer a stable API and minimize maintenance for people consuming the telemetry histogram aggregates. And as we’re reusing the existing code, the extensive documentation for telemetry is still applicable. See the following links for further details.

Disclaimer: I know it’s not smart to load  Javascript code into node.js over http. It’s mostly a security issue as you can’t use telemetry.js without internet access anyway. But considering that most people will run this as an isolated cron job (using docker, lxc, heroku or an isolated EC2 instance), this seems like an acceptable solution.

By the way, if you make a custom telemetry dashboard, whether it’s using telemetry.js in the browser or Node.js, please file a pull request against telemetry-dashboard on github to have a link for your dashboard included on telemetry.mozilla.org.

Categorieën: Mozilla-nl planet

Jordan Lund: This Week In Releng - Aug 4th, 2014

Mozilla planet - ma, 11/08/2014 - 03:09

Major Highlights:

  • Kim enabled c3.xlarge slaves for selected b2g tests - bug 1031083
  • Catlee added pvtbuilds to list of things that pass through proxxy
  • Coop implemented the ability to enable/disable a slave directly from slave health

Completed work (resolution is 'FIXED'):

In progress work (unresolved and not assigned to nobody):

Categorieën: Mozilla-nl planet
