Mozilla Nederland – The Dutch Mozilla community

Tantek Çelik: Dublin Core Application Profiles — A Brief Dialogue

Mozilla planet - sn, 21/03/2015 - 01:41

IndieWebCamp Cambridge 2015 is over. Having finished their ice cream and sorbet while sitting on a couch at Toscanini’s watching it snow, the topics of sameAs, reuse, and general semantics lead to a mention of Dublin Core Application Profiles.

  1. A: Dublin Core Application Profiles could be useful for a conceptual basis for metadata interoperation.
  2. T: (Yahoos for dublin core application profiles, clicks first result)
  3. T: Dublin Core Application Profile Guidelines (SUPERSEDED, SEE Guidelines for Dublin Core Application Profiles)
  4. T: Kind of like how The Judean People’s Front was superseded by The People’s Front of Judea?
  5. A: (nervous laugh)
  6. T: Guidelines for Dublin Core Application Profiles
  7. T: Replaces: http://dublincore.org/documents/2008/11/03/profile-guidelines/
  8. T: Hmm. (clicks back)
  9. T: Dublin Core Application Profile Guidelines
  10. T: Is Replaced By: Not applicable, wait, isn’t that supposed to be an inverse relationship?
  11. A: I’m used to this shit.
  12. T: (nods, clicks forward, starts scrolling, reading)
  13. T: We decide that the Library of Congress Subject Headings (LCSH) meet our needs. - I’m not sure the rest of the world would agree.
  14. A: No surprises there.
  15. T: The person has a name, but we want to record the forename and family name separately rather than as a single string. DCMI Metadata Terms has no such properties, so we will take the properties foaf:firstName and foaf:family_name
  16. T: Wait what? Not "given-name" and "family-name"? Nor "first-name" and "last-name" but "firstName" and "family_name"?!?
  17. A: Clearly it wasn’t proofread.
  18. T: But it’s in the following table too. foaf:firstName / foaf:family_name
  19. A: At least it’s internally consistent.
  20. A: Oh, this is really depressing.
  21. A: Did they even read the FOAF spec or did they just hear a rumour?
  22. T: (opens text editor)
Categorieën: Mozilla-nl planet

Air Mozilla: Webdev Beer and Tell: March 2015

Mozilla planet - fr, 20/03/2015 - 22:00

 March 2015 Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Categorieën: Mozilla-nl planet

Kim Moir: Scaling Yosemite

Mozilla planet - fr, 20/03/2015 - 19:50
We migrated most of our Mac OS X 10.8 (Mountain Lion) test machines to 10.10.2 (Yosemite) this quarter.

This project had two major constraints:
1) Use the existing hardware pool (~100 r5 mac minis)
2) Keep wait times sane.[1] (The machines are constantly running tests most of the day due to the distributed nature of the Mozilla community and this had to continue during the migration.)

So basically upgrade all the machines without letting people notice what you're doing!

Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0
Why didn't we just buy more minis and add them to the existing pool of test machines?
  1. We run performance tests and thus need to have all the machines running the same hardware within a pool so performance comparisons are valid.  If we buy new hardware, we need to replace the entire pool at once.  Machines with different hardware specifications = useless performance test comparisons.
  2. We tried to purchase some used machines with the same hardware specs as our existing machines.  However, we couldn't find a source for them.  As Apple stops production of old mini hardware each time they announce a new one, they are difficult and expensive to source.
Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0
Given that Yosemite was released last October, why are we only upgrading our test pool now?  We wait until the population of users running a new platform[2] surpasses that of the old one before switching.

Mountain Lion -> Yosemite is an easy upgrade on your laptop.  It's not as simple when you're updating production machines that run tests at scale.

The first step was to pull a few machines out of production and verify that the Puppet configuration was working.  In Puppet, you can restrict commands to run only on certain operating system versions, so we implemented several commands to accommodate changes in Yosemite.  For instance, we changed the default scrollbar behaviour, disabled new services that interfere with test runs, configured the new Apple security permissions that debug tests require, etc.

Once the Puppet configuration was stable, I updated our configs so that people could run tests on Try and allocated a few machines to this pool. We opened bugs for tests that failed on Yosemite but passed on other platforms.  This was a very iterative process: run tests on try, look at failures, file bugs, fix test manifests. Once we had the opt (functional) tests in a green state on try, we could start the migration.

Migration strategy
  • Disable selected Mountain Lion machines from the production pool
  • Reimage as Yosemite, update DNS and let them puppetize
  • Land patches to disable Mountain Lion tests and enable corresponding Yosemite tests on selected branches
  • Enable Yosemite machines to take production jobs
  • Reconfig so the buildbot masters enable the new Yosemite builders and schedule jobs appropriately
  • Repeat this process in batches
    • Enable Yosemite opt and performance tests on trunk (gecko >= 39) (50 machines)
    • Enable Yosemite debug (25 more machines)
    • Enable Yosemite on mozilla-aurora (15 more machines)
We currently have 14 machines left on Mountain Lion for mozilla-beta and mozilla-release branches.

As I mentioned earlier, the two constraints of this project were to use the existing hardware pool, which constantly runs tests in production, and to keep the existing wait times sane.  We encountered two major problems that impeded that goal.

It's a compliment when people say things like "I didn't realize that you updated a platform" because it means the upgrade did not cause large scale fires for all to see.  So it was nice to hear that from one of my colleagues this week.

Thanks to philor, RyanVM and jmaher for opening bugs with respect to failing tests and greening them up.  Thanks to coop for many code reviews. Thanks dividehex for reimaging all the machines in batches and to arr for her valiant attempts to source new-to-us minis!

References
[1] Wait times represent the time from when a job is added to the scheduler database until it actually starts running. We usually try to keep this under 15 minutes, but it really depends on how many machines we have in the pool.
[2] We run tests for our products on a matrix of operating systems and operating system versions. The terminology for operating system x version in many release engineering shops is a platform. To add to this, the list of platforms we support varies across branches. For instance, if we're going to deprecate a platform, we'll let this change ride the trains to release.

Further reading
Bug 1121175: [Tracking] Fix failing tests on Mac OSX 10.10 
Bug 1121199: Green up 10.10 tests currently failing on try 
Bug 1126493: rollout 10.10 tests in a way that doesn't impact wait times
Bug 1144206: investigate what is causing frequent talos failures on 10.10
Bug 1125998: Debug tests initially took 1.5-2x longer to complete on Yosemite


Why don't you just run these tests in the cloud?
  1. The Apple EULA severely restricts virtualization on Mac hardware. 
  2. I don't know of any major cloud vendors that offer the Mac as a platform.  Those that claim they do are actually renting racks of Macs on a dedicated per host basis.  This does not have the inherent scaling and associated cost saving of cloud computing.  In addition, the APIs to manage the machines at scale aren't there.
  3. We manage ~350 Mac minis.  We have more experience scaling Apple hardware than many vendors. Not many places run CI at Mozilla scale :-) Hopefully this will change and we'll be able to scale testing on Mac products like we do for Android and Linux in a cloud.
Categorieën: Mozilla-nl planet

Emma Irwin: P2PU Course in a Box & Mozilla Community Education

Mozilla planet - fr, 20/03/2015 - 19:14

Last year I created my first course on the P2PU platform, titled ‘Hacking Open Source Participation’, and through that fantastic experience stumbled across a newer P2PU project called Course in a Box. Built on the Jekyll blogging software, Course in a Box makes it easy to create online educational content powered by GitHub Pages.

As awesome as this project is, there were a number of challenges I needed to solve before adopting it for Mozilla’s Community Education Platform:

 Hierarchy

Jekyll is a blog-aware, static site generator. It uses template and layout files + markdown + CSS to display posts. Course in a Box comes with a top-level category for content called modules, and within those modules is the content – which works beautifully for a single course.

The challenge is that we need to write education and training materials on a regular basis, and creating multiple Course in a Box(es) would be a maintenance nightmare.  What I really needed was a way to build multiple courses under one or more topics versus the ‘one course’ model.  To do that, we needed to build out a hierarchy of content.

What I did

Visualized the menu moving from a list of course modules

To a list of course topics.

So Marketpulse, DevRel (for example) are course topics.  Topics are followed by courses, which then contain modules.

On the technical side, I added a new variable called submodules to the courses.yml data file.

Submodules are prefixed with the topic they belong ‘under’, for example: reps_mentor_training is a module in the topic reps.  This is also how module folders are named:

Using this method of prefixing modules with topics, it was super-simple to create a dropdown menu.
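
The grouping behind that dropdown is easy to picture. Below is a small illustrative Python sketch of the naming convention only; the actual menu is generated from courses.yml by the Jekyll/Liquid templates, and the module name devrel_intro is a made-up example (reps and reps_mentor_training come from the post):

    # Illustrative sketch only -- the real dropdown is built by Liquid templates.
    # Module folders are prefixed with the topic they belong under.
    modules = ["reps", "reps_mentor_training", "devrel", "devrel_intro"]  # devrel_intro is hypothetical

    menu = {}
    for name in modules:
        topic = name.split("_", 1)[0]   # the prefix identifies the topic
        menu.setdefault(topic, [])
        if name != topic:               # a bare topic entry can still carry landing-page content
            menu[topic].append(name)

    print(menu)  # {'reps': ['reps_mentor_training'], 'devrel': ['devrel_intro']}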

As far as Jekyll is concerned, these are all still ‘modules’, which means that even top level topics can have content associated.  This works great for a ‘landing page’ type of introduction to a topic.

Curriculum Modularity

As mentioned, Jekyll is a blogging platform, so there’s no depth or usability designed into its content architecture, and this is a problem for our goal of writing modular curriculum.  I wanted to make it possible to reuse curriculum not only across our instance of Course in a Box, but across other Mozilla instances as well.

What I did

I created a separate repository for community curriculum and made this a git submodule  in the _includes folder of Course in a Box.

With this submodule and Jekyll’s include() function, I was able to easily reference our modular content from a post:

{% include community_curriculum/market_pulse/FFOS/en/introduction.md %}

The only drawback is that Jekyll expects all content referenced with include() to be in a specific folder – and so having content mixed in with the design files is – gah!  But I can live with it.

And of course we can do this for multiple repositories if we need. By using a submodule we can stick to certain versions/releases of curriculum if needed. Additionally, this makes it easier for contributors to focus on ‘just the content’ (and not get lost in Jekyll code) when they are forking and helping improve curriculum.

Finally

I’m thinking about the bigger picture of curriculum-sharing, in large part thanks to conversations with the amazing Laura Hilliger about how we can both share and remix curriculum across more than one instance of Course in a Box.  The challenge is with remixed curriculum, which is essentially a new version – and whether it should ‘live’ in a different place than the original repository fork.

My current thinking is that each Course in a Box instance should have its own curriculum repository, included as a git submodule. This repo will contain all curriculum unique to that instance, including remixed versions of content from other repositories.  (IMHO) Remixed content should not live in the original fork, as you risk becoming increasingly out of sync with the original.

So that’s where I am right now, welcoming feedback & suggestions on our Mozilla Community Education platform (with gratitude to P2PU for making it possible).

Categorieën: Mozilla-nl planet

Air Mozilla: Webmaker Demos March 20 2015

Mozilla planet - fr, 20/03/2015 - 18:00

Webmaker Demos March 20 2015 Webmaker Demos March 13 2015

Categorieën: Mozilla-nl planet

Doug Belshaw: Web Literacy Map v1.5

Mozilla planet - fr, 20/03/2015 - 17:31

I’m delighted to announce that, as a result of a process that started back in late August 2014, the Mozilla community has defined the skills and competencies that make up v1.5 of the Web Literacy Map.

Cheers - DiCaprio.gif

Visual design work will be forthcoming with the launch of teach.webmaker.org, but I wanted to share the list of skills and competencies as soon as possible:

EXPLORING

Reading the Web

Navigation

Using software tools to browse the web

  • Accessing the web using the common features of a browser
  • Using hyperlinks to access a range of resources on the web
  • Reading, evaluating, and manipulating URLs
  • Recognizing the common visual cues in web services
  • Exploring browser add-ons and extensions to provide additional functionality
Web Mechanics

Understanding the web ecosystem and Internet stack

  • Using and understanding the differences between URLs, IP addresses and search terms
  • Identifying where data is in the network of devices that makes up the Internet
  • Exporting, moving, and backing up data from web services
  • Explaining the role algorithms play in creating and managing content on the web
  • Creating or modifying an algorithm to serve content from around the web
Search

Locating information, people and resources via the web

  • Developing questions to aid a search
  • Using and revising keywords to make web searches more efficient
  • Evaluating search results to determine if the information is relevant
  • Finding real-time or time-sensitive information using a range of search techniques
  • Discovering information and resources by asking people within social networks
Credibility

Critically evaluating information found on the web

  • Comparing and contrasting information from a number of sources
  • Making judgments based on technical and design characteristics
  • Discriminating between ‘original’ and derivative web content
  • Identifying and investigating the author or publisher of web resources
  • Evaluating how purpose and perspectives shape web resources
Security

Keeping systems, identities, and content safe

  • Recommending how to avoid online scams and 'phishing’
  • Managing and maintaining account security
  • Encrypting data and communications using software and add-ons
  • Changing the default behavior of websites, add-ons and extensions to make web browsing more secure
BUILDING

Writing the web

Composing for the web

Creating and curating content for the web

  • Inserting hyperlinks into a web page
  • Identifying and using HTML tags
  • Embedding multimedia content into a web page
  • Creating web resources in ways appropriate to the medium/genre
  • Setting up and controlling a space to publish on the Web
Remixing

Modifying existing web resources to create something new

  • Identifying remixable content
  • Combining multimedia resources to create something new on the web
  • Shifting context and meaning by creating derivative content
  • Citing and referencing original content
Designing for the web

Enhancing visual aesthetics and user experiences

  • Using CSS properties to change the style and layout of a Web page
  • Demonstrating the difference between inline, embedded and external CSS
  • Improving user experiences through feedback and iteration
  • Creating device-agnostic web resources
Coding / Scripting

Creating interactive experiences on the web

  • Reading and explaining the structure of code
  • Identifying and applying common coding patterns and concepts
  • Adding comments to code for clarification and attribution
  • Applying a script framework
  • Querying a web service using an API
Accessibility

Communicating in a universally-recognisable way

  • Using empathy and awareness to inform the design of web content that is accessible to all users
  • Designing for different cultures which may have different interpretations of design elements
  • Comparing and exploring how different interfaces impact diverse users
  • Improving the accessibility of a web page through the design of its color scheme, structure/hierarchy and markup
  • Comparing and contrasting how different interfaces impact diverse web users
CONNECTING

Participating on the web

Sharing

Providing access to web resources

  • Creating and using a system to distribute web resources to others
  • Contributing and finding content for the benefit of others
  • Creating, curating, and circulating web resources to elicit peer feedback
  • Understanding the needs of audiences in order to make relevant contributions to a community
  • Identifying when it is safe to contribute content in a variety of situations on the web
Collaborating

Creating web resources with others

  • Choosing a Web tool to use for a particular contribution/ collaboration
  • Co-creating Web resources
  • Configuring notifications to keep up-to-date with community spaces and interactions
  • Working towards a shared goal using synchronous and asynchronous tools
  • Developing and communicating a set of shared expectations and outcomes
Community Participation

Getting involved in web communities and understanding their practices

  • Engaging in web communities at varying levels of activity
  • Respecting community norms when expressing opinions in web discussions
  • Making sense of different terminology used within online communities
  • Participating in both synchronous and asynchronous discussions
Privacy

Examining the consequences of sharing data online

  • Debating privacy as a value and right in a networked world
  • Explaining ways in which unsolicited third parties can track users across the web
  • Controlling (meta)data shared with online services
  • Identifying rights retained and removed through user agreements
  • Managing and shaping online identities
Open Practices

Helping to keep the web democratic and universally accessible

  • Distinguishing between open and closed licensing
  • Making web resources available under an open license
  • Contributing to an Open Source project
  • Advocating for an open web

Thanks goes to the dedicated Mozilla contributors who steadfastly worked on this over the last few months. They’re listed here. We salute you!

Any glaring errors? Typos? Let us know! You can file an issue on GitHub.

Questions? Comments? Try and put them in the GitHub repo, but you can also grab me on Twitter (@dajbelshaw) or by email (doug@mozillafoundation.org).

Categorieën: Mozilla-nl planet

Michael Kaply: CCK2 2.0.21 released

Mozilla planet - fr, 20/03/2015 - 16:04

I've released a new version of the CCK2. New features include:

  • Setting a lightweight theme
  • Clearing preferences
  • Setting user preference values (versus default or locking)
  • More control over the CA trust string
  • Security devices are loaded at startup and fail gracefully (so multiple platforms can be specified)
  • Redesign of security devices dialog
  • Distribution info on about dialog is no longer bolded
  • Proxy information can be set in the preference page (if you want user values, not default/locked)
  • Better migration of bookmarks between versions
  • Better errors for cert download failures

Bugs fixed include:

  • International characters not working properly
  • CA trust string not being used
  • Unable to set the plugin.disable_full_page_plugin_for_types preference
  • Bookmarks not deleted when migrating from CCK Wizard

If you find bugs, please report them at cck2.freshdesk.com.

Priority support is given to folks with support subscriptions. If the CCK2 is important to your company, please consider purchasing one.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Reps Weekly Call – March 19th 2015

Mozilla planet - fr, 20/03/2015 - 13:26

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

reps-ivorycoast

Summary
  • FOSSASIA 2015 Updates
  • Maker Party Jaipur
  • Update on Council + Peers meetup
  • Education

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Categorieën: Mozilla-nl planet

Adobe, Mozilla, Microsoft fall at Pwn2Own 2015 - bit-tech.net

News collected via Google - fr, 20/03/2015 - 12:43

Adobe, Mozilla, Microsoft fall at Pwn2Own 2015
bit-tech.net
According to coverage of the first day of the event published by security specialist Kaspersky's Threatpost blog, the first day's targets have all fallen: Adobe Flash, Adobe Reader, Mozilla Firefox and Internet Explorer, netting the researchers ...
Hackers prove security still a myth on Windows PCs, bag $320,000 - The Register
All major browsers hacked at Pwn2Own contest - PCWorld
Adobe Falls at Pwn2Own as HP Awards $317,500 on Contest's First Day - eWeek
Load The Game - Infosecurity Magazine
all 13 news articles »
Categorieën: Mozilla-nl planet

Gregory Szorc: New High Scores for hg.mozilla.org

Mozilla planet - fr, 20/03/2015 - 10:55

It's been a rough week.

The very short summary of events this week is that both the Firefox and Firefox OS release automation have been performing a denial of service attack against hg.mozilla.org.

On the face of it, this is nothing new. The release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.

Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of release automation was making more requests than it should have and was doing so in a way that both increased the chances of an outage occurring and increased the chances of a prolonged outage. This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.

  • On UTC day March 19, hg.mozilla.org transferred 7.4 TB of data. This is a significant increase from the ~4 TB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)

  • During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Not Available errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.

  • During the 0800 UTC hour of March 19, the cluster transferred 776 GB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!
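
For reference, the headline bandwidth number is simple arithmetic. Here is a quick Python sketch of the conversion (assuming decimal gigabytes and ignoring TCP/HTTP overhead, just as the post does):

    # 776 GB transferred in one hour, expressed as an average bit rate.
    bytes_transferred = 776e9        # decimal gigabytes
    seconds = 60 * 60
    gbps = bytes_transferred * 8 / seconds / 1e9
    print(round(gbps, 3))            # -> 1.724, in line with the "at least 1.725 Gbps" above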

We encountered many periods where hg.mozilla.org was operating more than twice its normal and expected operating capacity and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to do hosted services (you rarely want to be barely provisioned because you fall over at the slightest change and you don't want to be grossly over-provisioned because you are wasting money on idle resources).

Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in under such extreme load and that people are probably making under-informed general assumptions like Mercurial can't scale. The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl, 17 years old today

Mozilla planet - fr, 20/03/2015 - 08:04

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

Birthdaycake

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget in the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought that it was fun to partly make it a real English word “curl” but also that you could pronounce it “see URL” as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!
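
The day count checks out. A quick Python check (the first curl release was on 1998-03-20, exactly 17 years before this post):

    # Sanity check of the "6209 days" figure.
    from datetime import date
    print((date(2015, 3, 20) - date(1998, 3, 20)).days)   # -> 6209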

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998 when the Open Source Initiative was started just the month before curl was born, preceded just a few days earlier by the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to make curl into what it is now have grown at a steady pace all through the years and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company and today there’s nothing left in Sweden using that name as it was sold and most employees later fled away to other places. After Frontec I joined Contactor for many years until I started working for my own company, Haxx (which we started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend a part of my time on curl and still get paid for it!

The Netscape announcement which was made 2 months before curl was born later became Mozilla and the Firefox browser. Where I work now…

Future

I’m not one of those who spend time gazing toward the horizon dreaming of future grandness and making up plans on how to go there. I work on stuff right now to work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them or whether they will actually ship or if they perhaps will be replaced by other things in that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.

1000 million users

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside the ego-boost this level of success brings is beyond imagination. Thinking about that my code has ended up in so many places, and is driving so many little pieces of modern network technology is truly mind-boggling. When I specifically sit down or get a reason to think about it at least.

Most of the days however, I tear my hair when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whining. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…

Categorieën: Mozilla-nl planet

Ian Bicking: A Product Journal: The Evolutionary Prototype

Mozilla planet - fr, 20/03/2015 - 06:00

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

I came upon a new (for me) term recently: evolutionary prototyping. This is in contrast to the rapid or throwaway prototype.

Another term for the rapid prototype: the “close-ended prototype.” The prototype with a sunset, unlike the evolutionary prototype which is expected to become the final product, even if every individual piece of work will only end up as disposable scaffolding for the final product.

The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it.

The first version of the product, written primarily late at night, was definitely a throwaway prototype. All imperative jQuery UI and lots of copy-and-paste code. It served its purpose. I was able to extend that code reasonably well – and I played with many ideas during that initial stage – but it was unreasonable to ask anyone else to touch it, and even I hated the code when I had stepped away from it for a couple weeks. So most of the code is being rewritten for the next phase.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.

Thinking about this, it’s a lot like the Minimal Viable Product approach. Of which I am skeptical. And maybe I’m skeptical because I see MVP as reductive, encouraging the aggressive stripping down of a product, and in the process encouraging design based on conventional wisdom instead of critical engagement. When people push me in that direction I get cagey and defensive (not a great response on my part, just acknowledging it). The framing of the evolutionary prototype feels more humble to me. I don’t want to focus on the question “how can we most quickly get this into users’ hands?” but instead “what do we know we should build, so we can collect a fuller list of questions we want to answer?”

Categorieën: Mozilla-nl planet

Niko Matsakis: The danger of negative thinking

Mozilla planet - to, 19/03/2015 - 22:59

One of the aspects of language design that I find the most interesting is trying to take time into account. That is, when designing a type system in particular, we tend to think of the program as a fixed, immutable artifact. But of course real programs evolve over time, and when designing a language it’s important to consider what impact the type rules will have on the ability of people to change their programs. Naturally as we approach the 1.0 release of Rust this is very much on my mind, since we’ll be making firmer commitments to compatibility than we ever have before.

Anyway, with that introduction, I recently realized that our current trait system contains a forward compatibility hazard concerned with negative reasoning. Negative reasoning is basically the ability to decide if a trait is not implemented for a given type. The most obvious example of negative reasoning is negative trait bounds, which have been proposed in a rather nicely written RFC. However, what’s perhaps less widely recognized is that the trait system as currently implemented already has some amount of negative reasoning, in the form of the coherence system.

This blog post covers why negative reasoning can be problematic, with a focus on the pitfalls in the current coherence system. This post only covers the problem. I’ve been working on prototyping possible solutions and I’ll be covering those in the next few blog posts.

A goal

Let me start out with an implicit premise of this post. I think it’s important that we be able to add impls of existing traits to existing types without breaking downstream code (that is, causing it to stop compiling, or causing it to do radically different things). Let me give you a concrete example. libstd defines the Range<T> type. Right now, this type is not Copy for various good reasons. However, we might like to make it Copy in the future. It feels like that should be legal. However, as I’ll show you below, this could in fact cause existing code not to compile. I think this is a problem.

(In the next few posts when I start covering solutions, we’ll see that it may be that one cannot always add impls of any kind for all traits to all types. If so, I can live with it, but I think we should try to make it possible to add as many kinds of impls as possible.)

Negative reasoning in coherence today, the simple case

“Coherence” refers to a set of rules that Rust uses to enforce the idea that there is at most one impl of any trait for any given set of input types. Let me introduce an example crate hierarchy that I’m going to be coming back to throughout the post:

    libstd
      |
      +-> lib1 --+
      |          |
      +-> lib2 --+
                 |
                 v
                app

This diagram shows four crates: libstd, two libraries (creatively titled lib1 and lib2), and an application app. app uses both of the libraries (and, transitively, libstd). The libraries are otherwise defined independently from one another. We say that libstd is a parent of the other crates, and that lib[12] are cousins.

OK, so, imagine that lib1 defines a type Carton but doesn’t implement any traits for it. This is a kind of smart pointer, like Box.

    // In lib1
    struct Carton<T> { }

Now imagine that the app crate defines a type AppType that uses the Debug trait.

    // In app
    struct AppType { }
    impl Debug for AppType { }

At some point, app has a Carton<AppType> that it is passing around, and it tries to use the Debug trait on that:

    // In app
    fn foo(c: Carton<AppType>) {
        println!("foo({:?})", c); // Error
        ...
    }

Uh oh, now we encounter a problem because there is no impl of Debug for Carton<AppType>. But app can solve this by adding such an impl:

    // In app
    impl Debug for Carton<AppType> { ... }

You might expect this to be illegal per the orphan rules, but in fact it is not, and this is no accident. We want people to be able to define impls on references and boxes to their types. That is, since Carton is a smart pointer, we want impls like the one above to work, just like you should be able to do an impl on &AppType or Box<AppType>.

OK, so, what’s the problem? The problem is that now maybe lib1 notices that Carton should define Debug, and it adds a blanket impl for all types:

    // In lib1
    impl<T:Debug> Debug for Carton<T> { }

This seems like a harmless change, but now if app tries to recompile, it will encounter a coherence violation.

What went wrong? Well, if you think about it, even a simple impl like

    impl Debug for Carton<AppType> { }

contains an implicit negative assertion that no ancestor crate defines an impl that could apply to Carton<AppType>. This is fine at any given moment in time, but as the ancestor crates evolve, they may add impls that violate this negative assertion.

Negative reasoning in coherence today, the more complex case

The previous example was relatively simple in that it only involved a single trait (Debug). But the current coherence rules also allow us to concoct examples that employ multiple traits. For example, suppose that app decided to work around the absence of Debug by defining its own debug protocol. This uses Debug when available, but allows app to add new impls if needed.

    // In lib1 (note: no `Debug` impl yet)
    struct Carton<T> { }

    // In app, before `lib1` added an impl of `Debug` for `Carton`
    trait AppDebug { }
    impl<T:Debug> AppDebug for T { } // Impl A

    struct AppType { }
    impl Debug for AppType { }
    impl AppDebug for Carton<AppType> { } // Impl B

This is all perfectly legal. In particular, implementing AppDebug for Carton<AppType> is legal because there is no impl of Debug for Carton, and hence impls A and B are not in conflict. But now if lib1 should add the impl of Debug for Carton<T> that it added before, we get a conflict again:

    // Added to lib1
    impl<T:Debug> Debug for Carton<T> { }

In this case though the conflict isn’t that there are two impls of Debug. Instead, adding an impl of Debug caused there to be two impls of AppDebug that are applicable to Carton<AppType>, whereas before there was only one.

Negative reasoning from OIBIT and RFC 586

The conflicts I showed before have one thing in common: the problem is that when we add an impl in the supercrate, they cause there to be too many impls in downstream crates. This is an important observation, because it can potentially be solved by specialization or some other form of conflict resolution – basically a way to decide between those duplicate impls (see below for details).

I don’t believe it is possible today to have the problem where adding an impl in one crate causes there to be too few impls in downstream crates, at least not without enabling some feature-gates. However, you can achieve this easily with OIBIT and RFC 586. This suggests to me that we want to tweak the design of OIBIT – which has been accepted, but is still feature-gated – and we do not want to accept RFC 586.

I’ll start by showing what I mean using RFC 586, because it’s more obvious. Consider this example of a trait Release that is implemented for all types that do not implement Debug:

    // In app
    trait Release { }
    impl<T:!Debug> Release for T { }

Clearly, if lib1 adds an impl of Debug for Carton, we have a problem in app, because whereas before Carton<i32> implemented Release, it now does not.

Unfortunately, we can create this same scenario using OIBIT:

    trait Release for .. { }
    impl<T:Debug> !Release for T { }

In practice, these sorts of impls are both feature-gated and buggy (e.g. #23072), and there’s a good reason for that. When I looked into fixing the bugs, I realized that this would entail implementing essentially the full version of negative bounds, which made me nervous. It turns out we don’t need conditional negative impls for most of the uses of OIBIT that we have in mind, and I think that we should forbid them before we remove the feature-gate.

Orphan rules for negative reasoning

One thing I tried in researching this post is to apply a sort of orphan condition to negative reasoning. To see what I tried, let me walk you through how the overlap check works today. Consider the following impls:

    trait AppDebug { ... }
    impl<T:Debug> AppDebug for T { }
    impl AppDebug for Carton<AppType> { }

(Assume that there is no impl of Debug for Carton.) The overlap checker would check these impls as follows. First, it would create fresh type variables for T and unify, so that T=Carton<AppType>. Because T:Debug must hold for the first impl to be applicable, and T=Carton<AppType>, that implies that if both impls are to be applicable, then Carton<AppType>: Debug must hold. But by searching the impls in scope, we can see that it does not hold – and thanks to the coherence orphan rules, we know that nobody else can make it hold either. So we conclude that the impls do not overlap.

It’s true that Carton<AppType>: Debug doesn’t hold now – but this reasoning doesn’t take into account time. Because Carton is defined in the lib1 crate, and not the app crate, it’s not under “local control”. It’s plausible that lib1 can add an impl of Debug for Carton<T> for all T or something like that. This is the central hazard I’ve been talking about.

To avoid this hazard, I modified the checker so that it could only rely on negative bounds if either the trait is local or else the type is a struct/enum defined locally. The idea being that the current crate is in full control of the set of impls for either of those two cases. This turns out to work somewhat OK, but it breaks a few patterns we use in the standard library. The most notable is IntoIterator:

    // libcore
    trait IntoIterator { }
    impl<T:Iterator> IntoIterator for T { }

    // libcollections
    impl<'a,T> IntoIterator for &'a Vec<T> { }

In particular, the final impl there is illegal, because it relies on the fact that &Vec<T>: Iterator does not hold, and the type &Vec is not a struct defined in the local crate (it’s a reference to a struct). In particular, the coherence checker here is pointing out that in principle we could add an impl like impl<T:Something> Iterator for &T, which would (maybe) conflict. This pattern is one we definitely want to support, so we’d have to find some way to allow this. (See below for some further thoughts.)

Limiting OIBIT

As an aside, I mentioned that OIBIT as specified today is equivalent to negative bounds. To fix this, we should add the constraint that negative OIBIT impls cannot add additional where-clauses beyond those implied by the types involved. (There isn’t much urgency on this because negative impls are feature-gated.) Therefore, one cannot write an impl like this one, because it would be adding a constraint T:Debug:

    trait Release for .. { }
    impl<T:Debug> !Release for T { }

However, this would be legal:

    struct Foo<T:Debug> { }
    trait Release for .. { }
    impl<T:Debug> !Release for Foo<T> { }

The reason that this is ok is because the type Foo<T> isn’t even valid if T:Debug doesn’t hold. We could also just skip such “well-formedness” checking in negative impls and then say that there should be no where-clauses at all.

Either way, the important point is that when checking a negative impl, the only thing we have to do is try and unify the types. We could even go farther, and have negative impls use a distinct syntax of some kind.

Still to come.

OK, so this post laid out the problem. I have another post or two in the works exploring possible solutions that I see. I am currently doing a bit of prototyping that should inform the next post. Stay tuned.

Categorieën: Mozilla-nl planet

Avi Halachmi: Firefox e10s Performance on Talos

Mozilla planet - to, 19/03/2015 - 19:57

Electrolysis, or e10s, is a Firefox project whose goal is to spread the work of browsing the web over multiple processes. The main initial goal is to separate the UI from web content and reduce negative effects one could have over the other.

e10s is already enabled by default on Firefox Nightly builds, and tabs which run on a different process than the UI are marked with an underline at the tab’s title.

While currently the e10s team’s main focus is correctness more than performance (one bug list and another), we can start collecting performance data and understand roughly where we stand.

jmaher, wlach, and I worked to make Talos run well in e10s Firefox and provide meaningful results. The Talos harness and tests now run well on Windows and Linux, while OS X should be handled shortly (bug 1124728). Session restore tests are still not working with e10s (bug 1098357).

Talos e10s tests run by default on m-c pushes, though Treeherder still hides the e10s results (they can be unhidden from the top right corner of the Treeherder job page).

To compare e10s Talos results with non-e10s we use compare.py, a script which is available in the Talos repository. We’ve improved it recently to make such comparisons more useful. It’s also possible to use the compare-talos web tool.

Here are some numbers on Windows 7 and Ubuntu 32 comparing e10s to non-e10s Talos results of a recent build using compare.py (the output below has been made more readable but the numbers have not been modified).

At the beginning of each line:

  • A plus + means that e10s is better.
  • A minus - means that e10s is worse.

The change % value simply compares the numbers on both sides. For most tests the raw numbers are lower-is-better, and therefore a negative percentage means that e10s is better. Tests where higher-is-better are marked with an asterisk * near the percentage value (and for these values a positive percentage means that e10s is better).
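
To illustrate how those percentages are read (this is just the underlying arithmetic, not the actual compare.py code):

    # Percentage change of the e10s result relative to the non-e10s result.
    def change_pct(non_e10s, e10s):
        return (e10s - non_e10s) / non_e10s * 100.0

    # ts_paint on Windows 7 below: lower is better, so a large negative
    # change means a big win for e10s.
    print(round(change_pct(797.7, 416.3), 1))   # -> -47.8, matching the table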

Descriptions of all Talos tests and what their numbers mean.

    $ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Win7 --master-revision 42afc7ef5ccb

         Windows 7        [ non-e10s ]              [ e10s ]
                          [ results ]   change %    [ results ]
    -    tresize                 15.1   [  +1.7%]        15.4
    -    kraken                1529.3   [  +3.9%]      1589.3
    +    v8_7                 17798.4   [  +1.6%]*    18080.1
    +    dromaeo_css           5815.2   [  +3.7%]*     6033.2
    -    dromaeo_dom           1310.6   [  -0.5%]*     1304.5
    +    a11yr                  178.7   [  -0.2%]       178.5
    ++   ts_paint               797.7   [ -47.8%]       416.3
    +    tpaint                 155.3   [  -4.2%]       148.8
    ++   tsvgr_opacity          228.2   [ -56.5%]        99.2
    -    tp5o                   225.4   [  +5.3%]       237.3
    +    tart                     8.6   [  -1.0%]         8.5
    +    tcanvasmark           5696.9   [  +0.6%]*     5732.0
    ++   tsvgx                  199.1   [ -24.7%]       149.8
    +    tscrollx                 3.0   [  -0.2%]         3.0
    ---  glterrain                5.1   [+268.9%]        18.9
    +    cart                    53.5   [  -1.2%]        52.8
    ++   tp5o_scroll              3.4   [ -13.0%]         3.0

    $ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Linux --master-revision 42afc7ef5ccb

         Ubuntu 32        [ non-e10s ]              [ e10s ]
                          [ results ]   change      [ results ]
    ++   tresize                 17.2   [ -25.1%]        12.9
    -    kraken                1571.8   [  +2.2%]      1606.6
    +    v8_7                 19309.3   [  +0.5%]*    19399.8
    +    dromaeo_css           5646.3   [  +3.9%]*     5866.8
    +    dromaeo_dom           1129.1   [  +3.9%]*     1173.0
    -    a11yr                  241.5   [  +5.0%]       253.5
    ++   ts_paint               876.3   [ -50.6%]       432.6
    -    tpaint                 197.4   [  +5.2%]       207.6
    ++   tsvgr_opacity          218.3   [ -60.6%]        86.0
    --   tp5o                   269.2   [ +21.8%]       328.0
    --   tart                     6.2   [ +13.9%]         7.1
    --   tcanvasmark           8153.4   [ -15.6%]*     6877.7
    --   tsvgx                  580.8   [ +10.2%]       639.7
    ++   tscrollx                 9.1   [ -16.5%]         7.6
    +    glterrain               22.6   [  -1.4%]        22.3
    -    cart                    42.0   [  +6.5%]        44.7
    ++   tp5o_scroll              8.8   [ -12.4%]         7.7

For the most part, the Talos scores are comparable with a few improvements and a few regressions - most of them relatively small. Windows e10s results fare a bit better than Linux results.

Overall, that’s a great starting point for e10s!

A noticeable improvement on both platforms is tp5o-scroll. This test scrolls the top-50 Alexa pages and measures how fast it can iterate with vsync disabled (ASAP mode).

A noticeable regression on Windows is WebGL (glterrain) - Firefox with e10s performs roughly 3x slower than non-e10s Firefox - bug 1028859 (bug 1144906 should also help for Windows).

A supposedly notable improvement is the tsvg-opacity test; however, this test is sometimes too sensitive to underlying platform changes (regardless of e10s), and we should probably keep an eye on it (yet again, e.g. bug 1027481).

We don’t have bugs filed yet for most Talos e10s regressions since we don’t have systems in place to alert us of them, and it’s still not trivial for developers to obtain e10s test results (e10s doesn’t run on try-server yet, and on m-c it also doesn’t run on every batch of pushes). See bug 1144120.

Snappiness is something that both the performance team and the e10s team care deeply about, and so we’ll be working closely together when it comes time to focus on making multi-process Firefox zippy.

Thanks to vladan and mconley for their valuable comments.

Categorieën: Mozilla-nl planet

Air Mozilla: Participation at Mozilla

Mozilla planet - to, 19/03/2015 - 18:00

Participation at Mozilla The Participation Forum

Categorieën: Mozilla-nl planet

Air Mozilla: Reps weekly

Mozilla planet - to, 19/03/2015 - 17:00

Reps weekly Weekly Mozilla Reps call

Categorieën: Mozilla-nl planet

Mike Conley: The Joy of Coding (Episode 6): Plugins!

Mozilla planet - to, 19/03/2015 - 16:13

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken

Notes

Categorieën: Mozilla-nl planet
