Mozilla Nederland: The Dutch Mozilla community

Christian Heilmann: Progressive Enhancement is not about JavaScript availability.

Mozilla planet - wo, 18/02/2015 - 16:05

I have been telling people for years that in order to create great web experiences and keep your sanity as a developer you should embrace Progressive Enhancement.

Image: Escalators are great. When there is an issue, they become stairs and still work.

A lot of people do the same, others question the principle and advocate for graceful degradation and yet others don’t want anything to do with people who don’t have the newest browsers, fast connections and great hardware to run them on.

People have been constantly questioning the value of progressive enhancement. That’s good. Lately there have been some excellent thought pieces on this, for example Doug Avery’s “Progressive Enhancement Benefits”.

One thing that keeps cropping up as a knee-jerk reaction to the proposal of progressive enhancement is boiling it down to whether JavaScript is available or not. And that is not progressive enhancement. It is a part of it, but it forgets the basics.

Progressive enhancement is about building robust products and being paranoid about availability. It is about asking “if” a lot. That starts even before you think about your interface.

Having your data in a very portable format and having an API to retrieve parts of it instead of the whole set is a great idea. This allows you to build various interfaces for different use cases and form factors. It also allows you to offer this API to other people so they can come up with solutions and ideas you never thought of yourself. Or you might not, as offering an API means a lot of work and you might disappoint people as Michael Mahemoff debated eloquently the other day.

In any case, this is one level of progressive enhancement. The data is there and it is in a logical structure. That’s a given, that works. Now you can go and enhance the experience and build something on top of it.

This here is a great example: you might read this post here seeing my HTML, CSS and JavaScript interface. Or you might read it in an RSS reader. Or as a cached version in some content aggregator. Fine – you are welcome. I don’t make assumptions as to what your favourite way of reading this is. I enhanced progressively as I publish full articles in my feed. I could publish only a teaser and make you go to my blog and look at my ads instead. I chose to make it easy for you as I want you to read this.

Progressive enhancement in its basic form means not making assumptions, but starting with the most basic thing and checking at every step of the way whether we are still OK to proceed. This means you never leave anything broken behind. Every step on the way results in something usable – not necessarily enjoyable or following a current “must have” format, but usable. It is checking the depth of a lake before jumping in head-first.

Markup progressive enhancement

Web technologies and standards have this concept at their very core. Take for example the img element in HTML:

<img src="threelayers.png" alt="Three layers of separation - HTML(structure), CSS(presentation) and JavaScript(behaviour)">

By adding an alt attribute with a sensible description you now know what this image is supposed to tell you. If it can be loaded and displayed, then you get a beautiful experience. If not, browsers display this text. People who can’t see the image at all also get this text explanation. Search engines get something to index. Everybody wins.

<h1>My experience in camp</h1>

This is a heading. It is read out as that to assistive technology, and screen readers for example allow users to jump from heading to heading without having to listen to the text in between. By applying CSS, we can turn this into an image, we can rotate it, we can colour it. If the CSS cannot be loaded, we still get a heading, and the browser renders the text larger and bold as it has a default style sheet associated with it.
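To make this concrete, here is a small sketch of my own (not from the original article): everything below is optional decoration, and a browser that applies none of it still renders a perfectly good heading.

h1 {
  /* decorative extras; any declaration the browser doesn't understand is skipped */
  color: #c0392b;
  transform: rotate(-2deg); /* older browsers simply ignore this line */
  text-shadow: 1px 1px 2px #999;
}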

Visual progressive enhancement

CSS has the same model. If the CSS parser encounters something it doesn’t understand, it skips to the next instruction. It doesn’t get stuck, it just moves on. That way progressive enhancement in CSS itself has been around for ages, too:

.fancybutton {
  color: #fff;
  background: #333;
  background: linear-gradient(
    to bottom,
    #ccc 0%, #333 47%, #000 100%
  );
}

If the browser doesn’t understand linear gradients, then the button is white text on a dark grey background. Sadly, what you are more likely to see in the wild is this:

.fancybutton {
  color: #fff;
  background: -webkit-gradient(
    linear, left top, left bottom,
    color-stop(0%, #ccc),
    color-stop(47%, #333),
    color-stop(100%, #000)
  );
}

Which, if the browser doesn’t understand webkit gradients, results in a white button with white text: in other words, invisible. Only because the developer was too lazy to first define a background colour the browser could fall back on. Instead, this code assumes that the user has a webkit browser. This is not progressive enhancement. This is breaking the web. So much so, that other browsers had to consider supporting webkit-specific CSS, thus bloating browsers and making the web less standardised as a browser-prefixed, experimental feature becomes a necessity.

Progressive enhancement in redirection and interactivity

<a href="http://here.com/catalog"> See catalog </a>

This is a link pointing to a data endpoint. It is keyboard accessible, I can click, tap or touch it, I can right-click and choose “save as”, I can bookmark it, I can drag it into an email and send it to a friend. When I touch it with my mouse, the cursor changes indicating that this is an interactive element.

That’s a lot of great stuff I get for free and I know it works. If there is an issue with the endpoint, the browser tells me. It shows me when the resource takes too long to load, it shows me an error when it can’t be found. I can try again.

I can even use JavaScript, apply a handler on that link, choose to override the default behaviour with preventDefault() and show the result in the current page without reloading it.

Should my JavaScript fail to execute for any of the reasons that can happen to it (slow connection, resources blocked at the firewall level, ad blockers, flaky connection), no biggie: the link stays clickable and the browser redirects to the page. In your backend you probably want to check for a special header that you send when requesting the content with JS, as opposed to a full browser navigation, and return different views accordingly.
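As a rough sketch of that pattern (my example, not from the post; showCatalog() is a hypothetical rendering function):

var link = document.querySelector('a[href$="/catalog"]');
if (link && window.XMLHttpRequest && link.addEventListener) {
  link.addEventListener('click', function(event) {
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', link.href, true);
    // the special header the backend can check for to return a partial view
    xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    xhr.onload = function() { showCatalog(xhr.responseText); };
    // on any failure, fall back to the default full-page navigation
    xhr.onerror = function() { window.location.href = link.href; };
    xhr.send();
  }, false);
}

Compare that to the common anti-patterns below: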

<a href="javascript:void()"> Catalog</a> or <a href="#">Catalog</a> or <a onclick="catalog()">Catalog</a>

This offers none of that working model. When JS doesn’t work, you get nothing. You still have a link that looks enticing, the cursor changed, you promised the user something. And you failed at delivering it. It is your fault, your mistake. And a simple one to avoid.

XHTML had to die, HTML5 took its place

When XHTML was the cool thing, the big outcry was that it breaks the web. XHTML meant we delivered HTML as XML. This meant that any HTML syntax error – an unclosed tag, an unencoded ampersand, a non-closed quote – meant the end user got an error message instead of the thing they came for. Even worse, the error message was a cryptic one.

HTML5 parsers are forgiving. Errors happen silently and the browser tries to fix them for you. This was considered necessary to stop the web from breaking. It was considered bad form to punish our users for our mistakes.

If you don’t progressively enhance your solutions, you do the same thing. Any small error will result in an interface that is stuck. It is up to you to include error handling, timeout handling, user interaction like right-click -> open in new tab and many other things.
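For instance, here is a sketch of mine (not from the article) of planning for the timeout case with a plain XMLHttpRequest; showRetryMessage() and render() are hypothetical UI helpers:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/catalog', true);
xhr.timeout = 5000; // don't let a flaky connection hang the interface forever
xhr.ontimeout = function() { showRetryMessage(); };
xhr.onerror = function() { showRetryMessage(); };
xhr.onload = function() { render(xhr.responseText); };
xhr.send();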

This is what progressive enhancement protects us and our users from. Instead of creating a solution and hoping things work out, we create solutions that have a safety-belt. Things can not break horribly, because we planned for them.

Why don’t we do that? Because it is more work up front. However, this is just intelligent design. You measure twice, and cut once. You plan for a door to be wide enough for a wheelchair and a person. You have a set of stairs to reach the next floor when the lift is broken. Or – even better – you have an escalator that, when broken, just becomes a set of stairs.

Of course I want us to build beautiful, interactive and exciting experiences. However, a lot of the criticism of progressive enhancement doesn’t take into consideration that nothing stops you from doing that. You just have to think more about the journey to reach the final product. And that means more work for the developer. But it is very important work, and every time I did this, I ended up with a smaller, more robust and more beautiful end product.

By applying progressive enhancement to our product plan we deliver a lot of different products along the way. Each works for a different environment, and yet each is the same code base. Each works for a certain environment without us having to specifically test for it. All by turning our assumptions into an if statement. In the long run you save time and effort that way, as you do not have to maintain various products for different environments.

We continuously sacrifice robustness of our products for developer convenience. We’re not the ones using our products. It doesn’t make sense to save time and effort for us when the final product fails to deliver because of a single error.


Det stormar på Mozilla (Stormy times at Mozilla) - IDG.se

News gathered via Google - Wed, 18/02/2015 - 15:03

IDG.se
Mozilla has long had a stabilising effect, both on the spread of technical web standards and on attitudes to personal privacy for web users and openness on the web. Moreover, its most important product, the Firefox browser, is a good example ...

Stuart Colville: Taming SlimerJS

Mozilla planet - wo, 18/02/2015 - 14:04

We've been using CasperJS for E2E testing on Marketplace projects for quite some time. Run alongside unit tests, E2E testing allows us to provide coverage for specific UI details and to make sure that user flows operate as expected. It can also be used for writing regression tests for complex interactions.

By default CasperJS uses PhantomJS (headless webkit) as the browser engine that loads the site under test.

Given that the audience for the Firefox Marketplace is (by a huge majority) using Gecko browsers, we've wanted to add SlimerJS as an engine to our CasperJS test runs for a while.

In theory it's as simple as defining an environment var to say which Firefox you want to use, and then setting a flag to tell casper to use slimer as the engine. However, in practice retrofitting Slimer into our existing Casper/Phantom setup was much more involved and I'm now the proud owner of several Yak hair jumpers as a result of the process to get it working.
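In its simplest form that looks something like this (a sketch; the Firefox path will differ per machine):

# point SlimerJS at a Firefox binary, then tell CasperJS to use it as the engine
export SLIMERJSLAUNCHER=$(which firefox)
casperjs test --engine=slimerjs tests/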

In this article I'll list out some of the gotchas we faced, along with how we solved them, in case it helps anyone else trying to do the same thing.

Require Paths and globals

There's a handy list of the things you need to know about modules in the slimer docs.

Here's the list they include:

  • global variables declared in the main script are not accessible in modules loaded with require
  • Modules are completely impervious. They are executed in a truly javascript sandbox
  • Modules must be files, not folders. node_modules folders are not searched specially (SlimerJS provides require.paths).

We had used this pattern throughout our CasperJS tests at the top of each test file:

var helpers = require('../helpers');

The problem that you instantly face using Slimer is that requires need an absolute path. I tried a number of ways to work around it without success. In the end the best solution was to create a helpers-shim to iron out the differences. This is a file that gets injected into the test file. To do this we used the includes function via grunt-casper.
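The relevant part of our Gruntfile ended up looking roughly like this (a sketch only; the exact option names depend on your grunt-casper version, and the paths here are hypothetical):

casper: {
  uitest: {
    options: {
      test: true,
      // injected ahead of each test file, giving us one place
      // to smooth over the engine differences
      includes: ['tests/helpers-shim.js']
    },
    src: ['tests/ui/**/*.js']
  }
}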

An additional problem was that modules are run in a sandbox and have their own context. In order to provide the helpers module with the casper object it was necessary to add it to require.globals.

Our shim looks like this:

if (require.globals) {
    // SlimerJS
    require.globals.casper = casper;
    casper.echo('Running under SlimerJS', 'WARN');
    var v = slimer.version;
    casper.echo('Version: ' + v.major + '.' + v.minor + '.' + v.patch);
    var helpers = require(require('fs').absolute('tests/helpers'));
    casper.isSlimer = true;
} else {
    // PhantomJS
    casper.echo('Running under PhantomJS', 'WARN');
    var helpers = require('../helpers');
}

One happy benefit of this is that every test now has helpers on it by default. Which saves a bunch of repetition \o/.

Similarly, in our helpers file we had requires for config. We needed to work around those with more branching. Here's what we needed at the top of helpers.js. You'll see we deal with some config merging manually to get the config we need, thus avoiding the need to work around requires in the config file too!

if (require.globals) {
    // SlimerJS setup required to work around require path issues.
    var fs = require('fs');
    require.paths.push(fs.absolute(fs.workingDirectory + '/config/'));
    var _ = require(fs.absolute('node_modules/underscore/underscore'));
    var config = require('default');
    var testConf = require('test');
    _.extend(config, testConf || {});
} else {
    // PhantomJS setup.
    var config = require('../config');
    var _ = require('underscore');
}

Event order differences

Previously for phantom tests we used the load.finished event to carry out modifications to the page before the JS loaded.

In Slimer this event fires later than it does in PhantomJS. As a result, the JS in the document was already executing before the setup occurred, so the code that needed those changes ran without them.

To fix this I tried a lot of the other events to find something that would do the job for both Slimer and Phantom. In the end I used a hack: do it when the resource for the main JS file was seen as received.

Using that hook I fired a custom event in Casper like so:

casper.on('resource.received', function(requestData) {
    if (requestData.url.indexOf('main.min.js') > -1) {
        casper.emit('mainjs.received');
    }
});

Hopefully in time the events will have more parity between the two engines so hacks like these aren't necessary.

Passing objects between the client and the test

We had some tests that set up spies via SinonJS, passing the spy objects from the client context (browser) via casper.evaluate into the test context (node). These spies were then introspected to see if they'd been called.

Under Phantom this worked fine. In Slimer it seemed that the movement of objects between contexts made the tests break. In the end these tests were refactored so all the evaluation of the spies occurs in the client context. This resolved the problem.
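The refactored tests took roughly this shape (an illustrative sketch, assuming sinon is loaded in the page and a hypothetical window.app.save to spy on):

casper.then(function() {
  var callCount = casper.evaluate(function() {
    // create, exercise and inspect the spy entirely in the client context,
    // returning only a primitive value to the test context
    var spy = sinon.spy(window.app, 'save');
    window.app.save();
    var count = spy.callCount;
    spy.restore();
    return count;
  });
  casper.test.assertEquals(callCount, 1, 'save() was called once');
});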

Adding an env check to the Gruntfile

As we use grunt it's handy to add a check to make sure the SLIMERJSLAUNCHER env var for slimer is set.

if (!process.env.SLIMERJSLAUNCHER) {
    grunt.warn('You need to set the env var SLIMERJSLAUNCHER to point ' +
               'at the version of Firefox you want to use\n See http://' +
               'docs.slimerjs.org/current/installation.html#configuring-s' +
               'limerjs for more details\n\n');
}

Sending an enter key

Using the following under slimer didn't work for us:

this.sendKeys('.pinbox', casper.page.event.key.Enter);

So instead we use a helper function to do a synthetic event with jQuery like so:

function sendEnterKey(selector) {
    casper.evaluate(function(selector) {
        /*global $ */
        var e = $.Event('keypress');
        e.which = 13;
        e.keyCode = 13;
        $(selector).trigger(e);
    }, selector || 'body');
}

casper.back() caused tests to hang

For unknown reasons, using casper.back() caused tests to hang. To fix this I had to do a hacky casper.open() on the URL that back would have gone to. There's an open issue about this here: casper.back() not working (script appears to hang)
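Our workaround boiled down to something like this (a simplified sketch):

var previousUrl;
casper.then(function() {
  previousUrl = this.getCurrentUrl();
});
// ... navigate elsewhere, run assertions ...
casper.then(function() {
  // instead of this.back(), which hangs under SlimerJS:
  this.open(previousUrl);
});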

The travis.yml

Getting this running under travis was quite simple. Here's what our file looks like (Note: Using env vars for versions makes doing updates much easier in the future):

language: node_js
node_js:
  - "0.10"
addons:
  firefox: "18.0"
install:
  - "npm install"
before_script:
  - export SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
  - export SLIMERVERSION=0.9.5
  - export CASPERVERSION=1.1-beta3
  - git clone -q git://github.com/n1k0/casperjs.git
  - pushd casperjs
  - git checkout -q tags/$CASPERVERSION
  - popd
  - export PATH=$PATH:`pwd`/casperjs/bin
  - sh -e /etc/init.d/xvfb start
  - wget http://download.slimerjs.org/releases/latest-slimerjs-stable/slimerjs-$SLIMERVERSION.zip
  - unzip slimerjs-$SLIMERVERSION.zip
  - mv slimerjs-$SLIMERVERSION ./slimerjs
  - phantomjs --version; casperjs --version; slimerjs --version
script:
  - "npm test"
  - "npm run-script uitest"

Yes, that is Firefox 18! Gecko 18 is the oldest version we still support, as FFOS 1.x was based on Gecko 18. As such it's a good canary for making sure we don't use features that are too new for that Gecko version.

In time I'd like to look at using the env setting in travis and expanding our testing to cover multiple gecko versions for even greater coverage.


More change for Mozilla as top Firefox exec departs - CNET

News gathered via Google - Wed, 18/02/2015 - 13:28

CNET
"I don't actually know what my next thing will be," said Nightingale, who joined Mozilla in 2007. "I want to take some time to catch up on what's happened in the world around me. I want to take some time with my kid before she finishes her too-fast ...

and more »


Daniel Stenberg: HTTP/2 talk on Packet Pushers

Mozilla planet - wo, 18/02/2015 - 12:57

I talked with Greg Ferro on Skype on January 15th. Greg runs the highly technical and nerdy network-oriented podcast Packet Pushers. We talked about HTTP/2 for well over an hour and we went through a lot of stuff about the new version of the most widely used protocol on the Internet.

Listen or download it.

Very suitably published today, the very day the IESG approved HTTP/2.


Will Kahn-Greene: Input status: February 18th, 2015

Mozilla planet - wo, 18/02/2015 - 11:00
Development

High-level summary:

  • Some minor fixes
  • Upgraded to Django 1.7

Thank you to contributors!:

  • Adam Okoye: 1
  • L Guruprasad: 5 (now over 25 commits!)
  • Ricky Rosario: 8

Landed and deployed:

  • 57a540f Rename test_browser.py to something more appropriate
  • 6c360d9 bug 1129579 Fix user agent parser for firefox for android
  • 0fa7c28 bug 1093341 Fix gengo warning emails
  • 39f3d25 bug 1053384 Make the filters visible even when there are no results (L. Guruprasad)
  • 9d009d7 bug 1130009 Add pyflakes and mccabe to requirements/dev.txt with hashes (L. Guruprasad)
  • 5b5f9b9 bug 1129085 Infer version for Firefox Dev
  • b0e0447 bug 1130474 Add sample data for heartbeat
  • 91de653 Update django-celery to master tip. (Ricky Rosario)
  • 6eda058 Update django-nose to v1.3 (Ricky Rosario)
  • f2ba0d0 Fix docs: remove stale note about test_utils. (Ricky Rosario)
  • 3b7811f bug 1116848 Change thank you page view (Adam Okoye)
  • 8d8ee31 bug 1053384 Fix selected sad/happy filters not showing up on 0 results (L. Guruprasad)
  • fea60dc bug 1118765 Upgrade django to 1.7.4 (Ricky Rosario)
  • 7aa9750 bug 1118765 Replace south with the new django 1.7 migrations. (Ricky Rosario)
  • dcd6acb bug 1118765 Update db docs for django 1.7 (new migration system) (Ricky Rosario)
  • c55ae2c bug 1118765 Fake the migrations for the first deploy of 1.7 (Ricky Rosario)
  • 1288d5b bug 1118765 Fix wsgi file
  • c9a326d bug 1118765 Run migrations for real during deploy. (Ricky Rosario)
  • f2398c2 Add "migrate --list" to let us know migration status
  • bf8bf4c Split up peep line into multiple commands
  • 0710080 Add a "version" to the jingo requirement so it updates
  • 0d1ca43 bug 1131664 Quell Django 1.6 warning
  • 7545259 bug 1131391 update to pep8 1.6.1 (L. Guruprasad)
  • 0fa0aab bug 1130762 Alerts app, models and modelfactories
  • be95d8e bug 1130469 Add filter for hb test rows and distinguish them by color (L. Guruprasad)
  • f3abd8e Add help_text for Survey model fields
  • f6ba2a2 Migration for help_text fields in Survey
  • f8cd339 bug 1133734 Fix waffle cookie thing
  • c8a6805 bug 1133895 Upgrade grappelli to 2.6.3

Current head: 11aa7a4

Rough plan for the next two weeks
  1. Adam is working on the new Thank You page
  2. I'm working on the Alerts API
  3. I'm working on the implementation work for the Gradient Sentiment project

That's it!


Mozilla vil godkende alle udvidelser til Firefox (Mozilla will approve all extensions for Firefox) - Version2

News gathered via Google - Wed, 18/02/2015 - 09:27

Version2
It would appear that Mozilla is tired of malware turning up in the store for add-ons to the company's popular Firefox browser. Mozilla is now closing off the free and unregulated access and taking firm control of the approval of new add-ons ...

Mozilla Addons: Kritik an geplanten digitalen Signaturen (Mozilla add-ons: criticism of planned digital signatures) - Golem.de

News gathered via Google - Wed, 18/02/2015 - 08:13

Golem.de
In future, all add-ons for Firefox are to be digitally signed following a review by Mozilla. Without the signature, Firefox will refuse to install and run the add-ons. This is meant to curb the spread of manipulated and malicious extensions ...

Related coverage: "Signature requirement for add-ons: Mozilla promises security for Firefox" (SPIEGEL ONLINE), "Mozilla promises more security for Firefox extensions" (RP ONLINE), "Mozilla will review extensions" (Engadget German); also PC Games Hardware, Mac & i, MacTechNews.de
all 11 news articles »

Ian Bicking: A Product Journal: Building for a Demo

Mozilla planet - wo, 18/02/2015 - 07:00

I’ve been trying to work through a post on technology choices, as I had it in my mind that we should rewrite substantial portions of the product. We’ve just upped the team size to two, adding Donovan Preston, and it’s an opportunity to share in some of these decisions. And get rid of code that was desperately expedient. The server is only 400ish lines, with some significant copy-and-paste, so we’re not losing any big investment.

Now I wonder if part of the danger of a rewrite isn’t the effort, but that it’s an excuse to go heads-down and starve your situational awareness.

In other news there has been a major resignation at Mozilla. I’d read into it largely what Johnathan implies in his post: things seem to be on a good track, so he’s comfortable leaving. But the VP of Firefox can’t leave without some significant organizational impact. Now is an important time for me to be situationally aware, and for the product itself to show situational awareness. The technical underpinnings aren’t that relevant at this moment.

So instead, if only for a few days, I want to move back into expedient demoable product mode. Now is the time to explain the product to other people in Mozilla.

The choices this implies feel weird at times. What is most important? Security bugs? Hardly! It needs to demonstrate some things to different stakeholders:

  1. There are some technical parts that require demonstration. Can we freeze the DOM and produce something usable? Only an existence proof is really convincing. Can we do a login system? Of course! So I build out the DOM freezing and fix bugs in it, but I’m preparing to build a login system where you type in your email address. I’m sure you wouldn’t lie so we’ll just believe you are who you say you are.

  2. But I want to get to the interesting questions. Do we require a login for this system? If not, what can an anonymous user do? I don’t have an answer, but I want to engage people in the question. I think one of the best outcomes of a demo is having people think about these questions, offer up solutions and criticisms. If the demo makes everyone really impressed with how smart I am that is very self-gratifying, but it does not engage people with the product, and I want to build engagement. To ask a good question I do need to build enough of the context to clarify the question. I at least need fake logins.

  3. I’ve been getting design/user experience help from Bram Pitoyo too, and now we have a number of interesting mockups. More than we can implement in short order. I’m trying to figure out how to integrate these mockups into the demo itself — as simple as “also look at this idea we have”. We should maintain a similar style (colors, basic layout), so that someone can look at a mockup and use all the context that I’ve introduced from the live demo.

  4. So far I’ve put no effort into onboarding. A person who picks up the tool may have no idea how it is supposed to be used. Or maybe they would figure it out: I haven’t even thought it through. Since I know how it works, and I’m doing the demo, that’s okay. My in-person narration is the onboarding experience. But even if I’m trying to explain the product internally, I should recognize I’m cutting myself off from an organic growth of interest.

  5. There are other stakeholders I keep forgetting about. I need to speak to the Mozilla Mission. I think I have a good story to tell there, but it’s not the conventional wisdom of what it means to embody the mission. I see this as a tool of direct outward-facing individual empowerment, not the mediated power of federation, not the opting-out power of privacy, not the committee-mediated and developer driven power of standards.

  6. Another stakeholder: people who care about the Firefox brand and marketing our products. Right now the tool lacks any branding, and it would be inappropriate to deploy this as a branded product right now. But I can demo a branded product. There may also be room to experiment with a call to action, and to start a discussion about what that would mean. I shouldn’t be afraid to do it really badly, because that starts the conversation, and I’d rather attract the people who think deeply about these things than try to solve them myself.

So I’m off now on another iteration of really scrappy coding, along with some strategic fakery.


Byron Jones: happy bmo push day!

Mozilla planet - wo, 18/02/2015 - 06:12

the following changes have been pushed to bugzilla.mozilla.org:

  • [1131622] update the description of the “Any other issue” option on the itrequest form
  • [1124810] Searching for ‘—‘ in Simple Search causes a SQL error
  • [1111343] Wrapping of table-header when sorting with “Mozilla” skin
  • [1130590] Changes to the new data compliance bug form
  • [1108816] Project kickoff form, changes to privacy review
  • [1120048] Strange formatting on inline URLs
  • [1118987] create a new bug form for discourse issues (based on form.reps.it)

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Meeting Notes: Thunderbird: 2015-02-17

Thunderbird - wo, 18/02/2015 - 05:00

Thunderbird meeting notes 2015-02-17. NOON PST. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

fallen, wsmwk, rkent, aceman, paenglab, makemyday, magnus, jorgk

Current status and discussions
  • 36.0 beta is out
Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31
  • ldap crasher
  • certificate crasher
  • Lightning integration
  • AB all-account search bug 170270
  • maildir UI
  • video chat: The initial set of patches, with IB UI, may land this week (they’re up for final review). We’re considering also landing a set of matching strings for TB so uplifting a port of the UI becomes possible. I’m not sure the feature will be ready to ship in TB38 as it has not undergone much real-world testing yet, but you never know, there may not be any nasty surprises ;)

Release Issues

Upcoming
  • Thunderbird 38 moves to Earlybird ~ February 24, 2015
    • string freeze
Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles
Round Table

paenglab
  • I’ve requested for bug 1096006 “Add AccountManager to the prefs in tab” for Tracking_TB38.
    • Is this bug desired for TB 38? It would be needed to enable PrefsInTab.
    • If yes, I have a string only patch to land before string freeze.
  • I’ve also requested for Hiro’s bug 1087233 “Create about:downloads to migrate to Downloads.jsm” for Tracking_TB38.
    • I’ve needinfoed him to ask if he has time to finish, but no answer until now.
    • It has also strings in it. I could make a strings only patch if needed.
sshagarwal
  • Plan to land AB fix bug 170270 for TB 38.
  • Bundled chat desktop notifications bug 1127802 waiting for final review.
  • Discussing schema design and appropriate db backend for next gen address book with mconley. We plan to get an approximate idea of the number of contacts in the users’ address books on an average bug 1132588 as a required minimum performance measure.
wsmwk
  • 36.0 beta QA organized
  • triage topcrashes
  • working on HWA question bug 1131879 Disable hardware acceleration (HWA)
aceman
  • having an active week with fixing smaller backend bugs (landing right now), polishing for the release. Proud to fix long-standing dataloss bug 840418.
Question Time

Other
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.
Action Items
  • organize 36 beta postmortem meeting (wsmwk)
  • lightning integration meeting (rkent/fallen)
Retrieved from “https://wiki.mozilla.org/index.php?title=Thunderbird/StatusMeetings/2015-02-17&oldid=1056531”


Mozilla's Firefox Open-Source 'bug' Won't Get Fixed - InternetNews.com (blog)

News gathered via Google - Wed, 18/02/2015 - 00:00

Mozilla's Firefox Open-Source 'bug' Won't Get Fixed
InternetNews.com (blog)
Every so often, a bug shows up in an open-source application that just makes me go ..huH? That's the case with Mozilla Firefox bug 949446, which has the ominous title of: "Source Code Disclosure of every possible project". For proprietary software ...


Benjamin Smedberg: Gratitude Comes in Threes

Mozilla planet - ti, 17/02/2015 - 23:37

Today Johnathan Nightingale announced his departure from Mozilla. There are three special people at Mozilla who shaped me into the person I am today, and Johnathan Nightingale is one of them:

Photo: Shaver, Johnathan, Beltzner

Mike Shaver taught me how to be an engineer. I was a full-time musician who happened to be pretty good at writing code and volunteering for Mozilla. There were many people at Mozilla who helped teach me the fine points of programming, and techniques for being a good programmer, but it was shaver who taught me the art of software engineering: to focus on simplicity, to keep the ultimate goal always in mind, when to compromise in order to ship, and when to spend the time to make something impossibly great. Shaver was never my manager, but I credit him with a lot of my engineering success. Shaver left Mozilla a while back to do great things at Facebook, and I still miss him.

Mike Beltzner taught me to care about users. Beltzner was never my manager either, but his single-minded and sometimes pugnacious focus on users and the user experience taught me how to care about users and how to engineer products that people might actually want to use. It’s easy for an engineer to get caught up in the most perfect technology and forget why we’re building any of this at all. Or to get caught up trying to change the world, and forget that you can’t change the world without a great product. Beltzner left Mozilla a while back and is now doing great things at Pinterest.

Perhaps it is just today talking, but I will miss Johnathan Nightingale most of all. He taught me many things, but mostly how to be a leader. I have had the privilege of reporting to Johnathan for several years now. He taught me the nuances of leadership and management; how to support and grow my team and still be comfortable applying my own expertise and leadership. He has been a great and empowering leader, both for me personally and for Firefox as a whole. He also taught me how to edit my own writing and others, and especially never to bury the lede. Now Johnathan will also be leaving Mozilla, and undoubtedly doing great things on his next adventure.

It doesn’t seem coincidental that this triumvirate were all Torontonians. Early Toronto Mozillians, including my three mentors, built a culture of teaching, leading, and mentoring, and Mozilla is better because of it. My new boss isn’t in Toronto, so it’s likely that I will be traveling there less. But I still hold a special place in my heart for it and hope that Mozilla Toronto will continue to serve as a model of mentoring and leadership for Mozilla.

Now I’m a senior leader at Mozilla. Now it’s my job to mentor, teach, and empower Mozilla’s leaders. I hope that I can be nearly as good at it as these wonderful Mozillians have been for me.


Nathan Froyd: multiple return values in C++

Mozilla planet - ti, 17/02/2015 - 23:01

I’d like to think that I know a fair amount about C++, but I keep discovering new things on a weekly or daily basis.  One of my recent sources of new information is the presentations from CppCon 2014.  And the most recent presentation I’ve looked at is Herb Sutter’s Back to the Basics: Essentials of Modern C++ Style.

In the presentation, Herb mentions a feature of tuple that enables returning multiple values from a function.  Of course, one can already return a pair<T1, T2> of values, but accessing the fields of a pair is suboptimal and not very readable:

pair<...> p = f(...);
if (p.second) {
  // do something with p.first
}

The designers of tuple must have listened, because the standard library provides the function std::tie, which lets you destructure a tuple:

typename container::iterator position;
bool already_existed;
std::tie(position, already_existed) = mMap.insert(...);

It’s not quite as convenient as destructuring multiple values in other languages, since you need to declare the variables prior to std::tie‘ing them, but at least you can assign them sensible names. And since pair implicitly converts to tuple, you can use tie with functions in the standard library that return pairs, like the insertion functions of associative containers.
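To make that concrete, here is a minimal, self-contained sketch of my own (not from the post or the talk) using tie with a map insertion:

#include <iostream>
#include <map>
#include <string>
#include <tuple>

int main() {
  std::map<std::string, int> counts;
  counts["firefox"] = 1;

  std::map<std::string, int>::iterator position;
  bool inserted;

  // insert() returns pair<iterator, bool>; tie() destructures it into
  // two sensibly named variables (pair converts to tuple on assignment).
  std::tie(position, inserted) = counts.insert(std::make_pair("firefox", 2));

  if (!inserted) {
    std::cout << position->first << " already present with value "
              << position->second << "\n";
  }
  return 0;
}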

Sadly, we’re somewhat limited in our ability to use shiny new concepts from the standard library because of our C++ standard library situation on Android (we use stlport there, and it doesn’t feature useful things like <tuple>, <functional>, or thread_local support). We could, of course, polyfill some of these (and other) headers, and indeed we’ve done some of that in MFBT already. But using our own implementations limits our ability to share code with other projects, and it also takes up time to produce the polyfills and make them appropriately high quality. I’ve seen several people complain about this, and I think it’s something I’d like to fix in the next several months.


Johnathan Nightingale: Home for a Rest

Mozilla planet - ti, 17/02/2015 - 21:59

Earlier today, I sent this note to the global mozilla employees list. It was not an easy send button to push.

===

One of the many, many things Mozilla has taught me over the years is not to bury the lede, so here goes:

March 31 will be my last day at Mozilla.

2014 was an incredible year, and it ended so much better than it started. I’m really proud of what we all accomplished, and I’m so hopeful for Mozilla’s future. But it took a lot out of me. I need to take a break. And as the dust settled on 2014 I realized, for the first time in a while, that I could take one.

You can live the Mozilla mission, feel it in your bones, and still worry about the future; I’ve had those moments over the last 8 years. Maybe you have, too. But Mozilla today is stronger than I’ve seen it in a long time. Our new strategy in search gives us a solid foundation and room to breathe, to experiment, and to make things better for our users and the web. We’re executing better than we ever have, and we’re seeing the shift in our internal numbers, while we wait for the rest of the world to catch up. January’s desktop download numbers are the best they’ve been in years. Accounts are being counted in tens of millions. We’re approaching 100MM downloads on Android. Dev Edition is blowing away targets faster than we can set them; Firefox on iOS doesn’t even exist yet, and already you can debug it with our devtools. Firefox today has a fierce momentum.

None of which will stop the trolls, of course. When this news gets out, I imagine someone will say something stupid. That it’s a Sign Of Doom. Predictable, and dead wrong; it misunderstands us completely. When things looked really rough, at the beginning of 2014, say, and people wanted to write about rats and sinking ships, that’s when I, and all of you, stayed.

You stayed or, in Chris’ case, you came back. And I’ve gotta say, having Chris in the seat is one of the things that gives me the most confidence. I didn’t know what Mozilla would feel like with Chris at the helm, but my CEO in 2014 was a person who pushed me and my team to do better, smarter work, to measure our results, and to keep the human beings who use our stuff at the center of every conversation. In fact, the whole senior team blows me away with their talent and their dedication.

You all do. And it makes me feel like a chump to be packing up in the midst of it all; but it’s time. And no, I haven’t been poached by facebook. I don’t actually know what my next thing will be. I want to take some time to catch up on what’s happened in the world around me. I want to take some time with my kid before she finishes her too-fast sprint to adulthood. I want to plant deeper roots in Toronto tech, which is incredibly exciting right now and may be a place where I can help. And I want a nap.

You are the very best I’ve met. It’s been a privilege to call myself your colleague, and to hand out a business card with the Firefox logo. I’m so immensely grateful for my time at Mozilla, and got so much more done here than I could have hoped. I’m talking with Chris and others about how I can meaningfully stay involved after March as an advisor, alumnus, and cheerleader. Once a Mozillian, always.

Excelsior!

Johnathan


Christie Koehler: Fun with git submodules

Mozilla planet - ti, 17/02/2015 - 21:59

Git submodules are amazingly useful. Because they provide a way for you to connect external, separate git repositories they can be used to organize your vim scripts, your dotfiles, or even a whole mediawiki deployment.

As incredibly useful as git submodules are, they can also be a bit confusing to use. The goal of this article is to walk you through the most common git submodule tasks: adding, removing and updating. We’ll also review briefly how to make changes to code you have checked out as a submodule.

I’ve created some practice repositories. Fork submodule-practice if you’d like to follow along. We’ll use these test repositories as submodules:

  • furry-octo-nemesis
  • psychic-avenger
  • scaling-octo-wallhack

I’ve used version 2.3.0 of the git client for these examples. If you’re seeing something different, check your version with git --version.

Initializing a repository with submodules

First, let’s clone our practice repository:

[skade ;) ~/Work]
christie$ git clone git@github.com:christi3k/submodule-practice.git
Cloning into 'submodule-practice'...
remote: Counting objects: 63, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 63 (delta 9), reused 0 (delta 0), pack-reused 47
Receiving objects: 100% (63/63), 6.99 KiB | 0 bytes/s, done.
Resolving deltas: 100% (25/25), done.
Checking connectivity... done.

And then cd into the working directory:

christie$ cd submodule-practice/

Currently, this project has two submodules: furry-octo-nemesis and psychic-avenger.

When we run ls we see directories for these submodules:

[skade ;) ~/Work/submodule-practice (master)]
christie$ ll
▕ drwxrwxr-x▏christie:christie│4 min │  4K│furry-octo-nemesis
▕ drwxrwxr-x▏christie:christie│4 min │  4K│psychic-avenger
▕ -rw-rw-r--▏christie:christie│4 min │ 29B│README.md
▕ -rw-rw-r--▏christie:christie│4 min │110B│README.mediawiki

But if we run ls for either submodule directory we see they are empty. This is because the submodules have not yet been initialized or updated.

[skade ;) ~/Work/submodule-practice (master)]
christie$ git submodule init
Submodule 'furry-octo-nemesis' (git@github.com:christi3k/furry-octo-nemesis.git) registered for path 'furry-octo-nemesis'
Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) registered for path 'psychic-avenger'

git submodule init copies the submodule names, urls and other details from .gitmodules to .git/config, which is where git looks for config details it should apply to your working copy.

git submodule init does not update or otherwise alter information in .git/config. If you have changed .gitmodules for any submodule already initialized, you’ll need to deinit and init the submodule again for changes to be reflected in .git/config.

You can initialize specific submodules by specifying their name:

git submodule init psychic-avenger

At this point you could customize git submodule URLs for use in your local checkout by editing them in .git/config before proceeding to git submodule update.
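For example, to point a single submodule at a local mirror without editing .git/config by hand (the mirror URL here is made up for illustration):

christie$ git config submodule.psychic-avenger.url git@mymirror:forks/psychic-avenger.git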

Now let’s actually checkout the submodules with submodule update:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git submodule update --recursive
Cloning into 'furry-octo-nemesis'...
remote: Counting objects: 6, done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 6
Receiving objects: 100% (6/6), done.
Checking connectivity... done.
Submodule path 'furry-octo-nemesis': checked out '1c4b231fa0bcfd5ce8b8a2773c6616689032d353'
Cloning into 'psychic-avenger'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 25 (delta 1), reused 0 (delta 0), pack-reused 15
Receiving objects: 100% (25/25), done.
Resolving deltas: 100% (3/3), done.
Checking connectivity... done.
Submodule path 'psychic-avenger': checked out '169c5c56154f58fd745352c4f30aa0d4a1d7a88e'

Note: The --recursive flag tells git to recurse into submodule directories and run update on any submodules those submodules include. It’s not needed for this example, but it does no harm so I’ve included it here, since it’s common for projects to have nested submodules.

Now when we run ls on either directory, we see they now contain our submodule’s files:

[skade ;) ~/Work/submodule-practice (master)]
christie$ ls furry-octo-nemesis/
▕ -rw-rw-r--▏42 sec │ 52B│README.md
[skade ;) ~/Work/submodule-practice (master)]
christie$ ls psychic-avenger/
▕ -rw-rw-r--▏46 sec │133B│README.md
▕ -rw-rw-r--▏46 sec │  0B│other.txt

Note: It’s possible to run init and update in one command with git submodule update --init --recursive

Adding a git submodule

We’ll start in the working directory of submodule-practice.

To add a submodule, use:

git submodule add <git-url>

Let’s try adding sample project scaling-octo-wallhack as a submodule.

[2495][skade ;) ~/Work/submodule-practice (master)]
christie$ git submodule add git@github.com:christi3k/scaling-octo-wallhack.git
Cloning into 'scaling-octo-wallhack'...
remote: Counting objects: 19, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 19 (delta 1), reused 0 (delta 0), pack-reused 9
Receiving objects: 100% (19/19), done.
Resolving deltas: 100% (3/3), done.
Checking connectivity... done.

Note: If you want the submodule to be cloned into a directory other than ‘scaling-octo-wallhack’ then you need to specify a directory to clone into, as you would when cloning any other project. For example, this command will clone scaling-octo-wallhack to the subdirectory submodules:

christie$ git submodule add git@github.com:christi3k/scaling-octo-wallhack.git submodules/scaling-octo-wallhack

Let’s see what git status tells us:

[skade ;) ~/Work/submodule-practice (master +)]
christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   .gitmodules
	new file:   scaling-octo-wallhack

And running ls we see that there are files in scaling-octo-wallhack directory:

[skade ;) ~/Work/submodule-practice (master +)]
christie$ ll scaling-octo-wallhack/
▕ -rw-rw-r--▏christie:christie│< min │180B│README.md
▕ -rw-rw-r--▏christie:christie│< min │  0B│cutting-edge-changes.txt
▕ -rw-rw-r--▏christie:christie│< min │  0B│file1.txt
▕ -rw-rw-r--▏christie:christie│< min │  0B│file2.txt

Specifying a branch

When you add a git submodule, git makes some assumptions for you. It sets up a remote repository for the submodule called ‘origin’ and it checks out the ‘master’ branch for you. In many cases you may not want to use the master branch. Luckily, this is easy to change.

There are two methods to specify which branch of the submodule should be checked out by your project.

Method 1: Specify a branch in .gitmodules

Here’s what the modified section of .gitmodules looks like for scaling-octo-wallhack:

[submodule "scaling-octo-wallhack"] path = scaling-octo-wallhack url = git@github.com:christi3k/scaling-octo-wallhack.git branch = REL_1

Be sure to save .gitmodules and then run git submodule update --remote:

[skade ;( ~/Work/submodule-practice (master *+)]
christie$ git submodule update --remote
Submodule path 'psychic-avenger': checked out 'fba086dbb321109e5cd2d9d1bc3b59478dacf6ee'
Submodule path 'scaling-octo-wallhack': checked out '88d66d5ecc58d2ab82fec4fea06ffbfd2c55fd7d'

Method 2: Checkout specific branch in submodule directory

In the submodule directory, checkout the branch you want with git checkout origin/<branch>:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))]
christie$ git checkout origin/REL_1
Previous HEAD position was b49591a... Cutting-edge changes.
HEAD is now at 88d66d5... Prep Release 1.

Either method will yield the following from git status:

[skade ;) ~/Work/submodule-practice (master *+)]
christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   .gitmodules
	new file:   scaling-octo-wallhack

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   scaling-octo-wallhack (new commits)

Now let’s stage and commit the changes:

[skade ;) ~/Work/submodule-practice (master *+)]
christie$ git add scaling-octo-wallhack
[skade ;) ~/Work/submodule-practice (master +)]
christie$ git commit -m "Add scaling-octo-wallhack submodule, REL_1."
[master 4a97a6f] Add scaling-octo-wallhack submodule, REL_1.
 2 files changed, 4 insertions(+)
 create mode 160000 scaling-octo-wallhack

And don’t forget to push them to our remote repository so they are available for others:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git push -n origin master
To git@github.com:christi3k/submodule-practice.git
   7e6d09e..4a97a6f  master -> master

Looks good, do it for real now:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git push origin master
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 439 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To git@github.com:christi3k/submodule-practice.git
   7e6d09e..4a97a6f  master -> master

Removing a git submodule

Removing a submodule is a bit trickier than adding one.

Deinitialize

First, deinit the submodule with git submodule deinit <name>:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git submodule deinit psychic-avenger
Cleared directory 'psychic-avenger'
Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) unregistered for path 'psychic-avenger'

This command removes the submodule’s config entries in .git/config and it removes the files from the submodule’s working directory. This command will delete untracked files, even when they are listed in .gitignore.

Note: You can also use this command if you simply want to prevent having a local checkout of the submodule in your working tree, without actually removing the submodule from your main project.

Let’s check our work:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean

This shows no changes because git submodule deinit only makes changes to our local working copy.

Running ls we also see the directories are still present:

[skade ;) ~/Work/submodule-practice (master)]
christie$ ll
▕ drwxrwxr-x▏christie:christie│4 day │  4K│furry-octo-nemesis
▕ drwxrwxr-x▏christie:christie│16 sec│  4K│psychic-avenger
▕ -rw-rw-r--▏christie:christie│4 day │ 29B│README.md
▕ -rw-rw-r--▏christie:christie│4 day │110B│README.mediawiki

Remove with git rm

To actually remove the submodule from your project’s repository, use git rm:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git rm psychic-avenger
rm 'psychic-avenger'

Let’s check our work:

[skade ;) ~/Work/submodule-practice (master +)]
christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   .gitmodules
	deleted:    psychic-avenger

These changes have been staged by git automatically, so to see what has changed about .gitmodules, use the --cached flag or its alias --staged:

[skade ;) ~/Work/submodule-practice (master +)]
christie$ git diff --cached
diff --git a/.gitmodules b/.gitmodules
index dec1204..e531507 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,6 +1,3 @@
 [submodule "furry-octo-nemesis"]
 	path = furry-octo-nemesis
 	url = git@github.com:christi3k/furry-octo-nemesis.git
-[submodule "psychic-avenger"]
-	path = psychic-avenger
-	url = git@github.com:christi3k/psychic-avenger.git
diff --git a/psychic-avenger b/psychic-avenger
deleted file mode 160000
index fdd4b36..0000000
--- a/psychic-avenger
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit fdd4b366458757940d7692b61e22f4d1b21c825a

So we see that in .gitmodules, lines related to psychic-avenger have been removed and that the psychic-avenger directory and commit hash has also been removed.

And a directory listing shows the files are no longer in our working directory:

christie$ ll
▕ drwxrwxr-x▏christie:christie│4 day │  4K│furry-octo-nemesis
▕ -rw-rw-r--▏christie:christie│4 day │ 29B│README.md
▕ -rw-rw-r--▏christie:christie│4 day │110B│README.mediawiki

Removing all reference to the submodule (optional)

For whatever reason, git does not remove all trace of the submodule even after these commands. To completely remove all reference, you need to also delete the .git/modules entry to really have it be gone:

[skade ;) ~/Work/submodule-practice (master)]
christie$ rm -rf .git/modules/psychic-avenger

Note: This is probably optional for most use cases. The only time you might run into trouble if you leave this reference is if you later add a submodule of the same name. In that case, git will complain and ask you to pick a different name or to simply checkout the submodule from the remote source it already knows about.

Commit changes

Now we commit our changes:

[skade ;) ~/Work/submodule-practice (master +)]
christie$ git commit -m "Remove psychic-avenger submodule."
[master 7833c1c] Remove psychic-avenger submodule.
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Looks good, let’s push our changes:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git push -n origin master
To git@github.com:christi3k/submodule-practice.git
   d89b5cb..7833c1c  master -> master

Looks good, let’s do it for real:

[skade ;) ~/Work/submodule-practice (master)]
christie$ git push origin master
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 402 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/submodule-practice.git
   d89b5cb..7833c1c  master -> master

Updating submodules within your project

The simplest use case for updating submodules within your project is when you simply want to pull in the most recent version of that submodule or want to change to a different branch.

There are two methods for updating submodules.

Method 1: Specify a branch in .gitmodules and use git submodule update --remote

Using this method, you first need to ensure that the branch you want to use is specified for each submodule in .gitmodules.

Let’s take a look at the .gitmodules file for our sample project:

[submodule "furry-octo-nemesis"] path = furry-octo-nemesis url = git@github.com:christi3k/furry-octo-nemesis.git [submodule "psychic-avenger"] path = psychic-avenger url = git@github.com:christi3k/psychic-avenger.git branch = RELEASE_E [submodule "scaling-octo-wallhack"] path = scaling-octo-wallhack url = git@github.com:christi3k/scaling-octo-wallhack.git

The submodule psychic-avenger is set to checkout branch RELEASE_E and both furry-octo-nemesis and scaling-octo-wallhack will checkout master because no branch is specified.

Edit .gitmodules

To change the branch that is checked out, update the value of branch:

[submodule "scaling-octo-wallhack"] path = scaling-octo-wallhack url = git@github.com:christi3k/scaling-octo-wallhack.git brach = REL_2

Now scaling-octo-wallhack is set to checkout the REL_2 branch.

Update with git submodule update --remote

[skade ;) ~/Work/submodule-practice (master *)]
christie$ git submodule update --remote
Submodule path 'scaling-octo-wallhack': checked out 'e845f5431119b527b7cde1ad138a373c5b2d4ec1'

And if we cd into scaling-octo-wallhack and run git branch -vva, we confirm we’ve checked out the REL_2 branch:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((e845f54...))]
christie$ git branch -vva
* (detached from e845f54) e845f54 Release 2.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    e845f54 Release 2.
  remotes/origin/master   b49591a Cutting-edge changes.

Method 2: git fetch and git checkout within submodule

First, change into the directory of the submodule you wish to update.

Fetch from the remote repository

Then run git fetch origin to grab any new commits:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))]
christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 1), reused 3 (delta 1), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/scaling-octo-wallhack
   e845f54..1cc1044  REL_2      -> origin/REL_2

Here we can see that the last commit for the REL_2 branch changed from e845f54 to 1cc1044.

Running git branch -vva confirms this, and shows that we haven’t changed which commit is checked out yet:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((88d66d5...))]
christie$ git branch -vva
* (detached from 88d66d5) 88d66d5 Prep Release 1.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master   b49591a Cutting-edge changes.

Checkout branch

So now we can re-checkout the REL_2 remote branch:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((88d66d5...))] christie$ git checkout origin/REL_2
Previous HEAD position was 88d66d5... Prep Release 1.
HEAD is now at 1cc1044... Hotfix for Release 2 branch.

Let’s check our work with git branch -vva:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((1cc1044...))] christie$ git branch -vva
* (detached from origin/REL_2)  1cc1044 Hotfix for Release 2 branch.
  master                        b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD           -> origin/master
  remotes/origin/REL_1          88d66d5 Prep Release 1.
  remotes/origin/REL_2          1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master         b49591a Cutting-edge changes.

Committing the changes

Moving back to our main project directory, let’s check our work with git status && git diff:

[skade ;) ~/Work/submodule-practice (master *)] christie$ git status && git diff
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   scaling-octo-wallhack (new commits)

no changes added to commit (use "git add" and/or "git commit -a")
diff --git a/scaling-octo-wallhack b/scaling-octo-wallhack
index 88d66d5..1cc1044 160000
--- a/scaling-octo-wallhack
+++ b/scaling-octo-wallhack
@@ -1 +1 @@
-Subproject commit 88d66d5ecc58d2ab82fec4fea06ffbfd2c55fd7d
+Subproject commit 1cc104418a6a24b9a3cc227df4ebaf707ea23b49

Notice that there are no changes to .gitmodules with this method. Instead, we’ve simply changed the commit hash that the super project is pointing to for this submodule.

Now let’s add, commit and push our changes:

[skade ;) ~/Work/submodule-practice (master *)] christie$ git add scaling-octo-wallhack
[skade ;) ~/Work/submodule-practice (master +)] christie$ git commit -m "Updating to current REL_2."
[master 5ddbe87] Updating to current REL_2.
 1 file changed, 1 insertion(+), 1 deletion(-)
[skade ;) ~/Work/submodule-practice (master)] christie$ git push -n origin master
To git@github.com:christi3k/submodule-practice.git
   4a97a6f..5ddbe87  master -> master
[skade ;) ~/Work/submodule-practice (master)] christie$ git push origin master
Counting objects: 2, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 261 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/submodule-practice.git
   4a97a6f..5ddbe87  master -> master

What’s the difference between git submodule update and git submodule update --remote?

Note: git submodule update --remote looks at the value you have in .gitmodules for branch. If there isn’t a value there, it assumes master. git submodule update, on the other hand, looks at the commit your repository has recorded for the submodule and checks that commit out. Both check out to a detached HEAD state by default unless you specify --merge or --rebase.
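For example, with the psychic-avenger submodule from this post (a sketch, run from the super project root):

# checks out whatever commit the super project has recorded for the submodule
git submodule update psychic-avenger

# fetches and checks out the tip of the branch named in .gitmodules (RELEASE_E here)
git submodule update --remote psychic-avenger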

These two commands have the ability to step on each other. If you have checked out a specific commit in the submodule directory, it’s possible for it to be different from the commit that would be checked out by git submodule update --remote based on the branch value in .gitmodules.
Likewise, simply looking at the branch value in .gitmodules does not guarantee that’s the branch you have checked out for the submodule. When in doubt, cd to the submodule directory and run git branch -vva. git branch -vva is your friend!

When a submodule has been removed

When a submodule has been removed from a repository, what’s the best way to update your working directory to reflect this change?

The answer is that it depends on whether or not you have local, untracked files in the submodule directory that you want to keep.

Method 1: deinit and then fetch and merge

Use this method if you want to completely remove the submodule directory even if you have local, untracked files in it.

Note: In the following examples, we’re working in another checkout of our submodule-practice.

First, use git submodule deinit to deinitialize the submodule:

[skade ;) ~/Work/submodule-elsewhere (master *)] christie$ git submodule deinit psychic-avenger
error: the following file has local modifications:
    psychic-avenger
(use --cached to keep the file, or -f to force removal)
Submodule work tree 'psychic-avenger' contains local modifications; use '-f' to discard them

We have untracked changes, so we need to use -f to remove them:

[skade ;( ~/Work/submodule-elsewhere (master *)] christie$ git submodule deinit -f psychic-avenger
Cleared directory 'psychic-avenger'
Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) unregistered for path 'psychic-avenger'

Now fetch changes from the remote repository and merge them:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/submodule-practice
   666af5d..6038c72  master     -> origin/master
[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git merge origin/master
Updating 666af5d..6038c72
Fast-forward
 .gitmodules     | 3 ---
 psychic-avenger | 1 -
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Running ls on our project directory shows that all of psychic-avenger’s files have been removed:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ ll
▕ drwxrwxr-x▏christie:christie│3 hour│  4K│furry-octo-nemesis
▕ drwxrwxr-x▏christie:christie│5 min │  4K│scaling-octo-wallhack
▕ -rw-rw-r--▏christie:christie│3 hour│ 29B│README.md
▕ -rw-rw-r--▏christie:christie│3 hour│110B│README.mediawiki

Method 2: fetch and merge and clean up as needed

Use this method if you have local, untracked (and/or ignored) changes that you want to keep, or if you want to remove files manually.

First, fetch changes from the remote repository and merge them with your local branch:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 1), reused 3 (delta 1), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/submodule-practice
   d89b5cb..7833c1c  master     -> origin/master
[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git merge origin/master
Updating d89b5cb..7833c1c
warning: unable to rmdir psychic-avenger: Directory not empty
Fast-forward
 .gitmodules     | 3 ---
 psychic-avenger | 1 -
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Note the warning “unable to rmdir…” and let’s check our work:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
  (use "git add <file>..." to include in what will be committed)

	psychic-avenger/

nothing added to commit but untracked files present (use "git add" to track)

No uncommitted or staged changes, but the directory that was our submodule psychic-avenger is now untracked. Running ls shows that there are still files in the directory, too:

[skade ;( ~/Work/submodule-elsewhere (master)] christie$ ll psychic-avenger/
▕ -rw-rw-r--▏christie:christie│30 min│192B│README.md

Now you can clean up files as you like. In this example we’ll delete the entire psychic-avenger directory:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ rm -rf psychic-avenger

Working on projects checked out as submodules

Working on projects checked out as submodules is rather straightforward, particularly if you are comfortable with git branching and make liberal use of git branch -vva.

Let’s pretend that scaling-octo-wallhack is an extension that I’m developing for my project submodule-practice. I want to work on it while it’s checked out as a submodule because doing so makes it easy to test the extension within my larger project.

Create a working branch

First switch to the branch that you want to use as the base for your work. I’m going to use the local tracking branch master, which I’ll first ensure is up to date with the remote origin/master:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((1cc1044...))] christie$ git branch -vva
* (detached from origin/REL_2)  1cc1044 Hotfix for Release 2 branch.
  master                        b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD           -> origin/master
  remotes/origin/REL_1          88d66d5 Prep Release 1.
  remotes/origin/REL_2          1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master         b49591a Cutting-edge changes.
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))] christie$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

If master had not been up-to-date with origin/master, I would have merged.
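Had a merge been needed, it would have been a single command from within the submodule:

git merge origin/master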

Next, let’s create a tracking branch for this awesome feature we’re going to work on:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (master)] christie$ git checkout -b awesome-feature
Switched to a new branch 'awesome-feature'
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git branch -vva
* awesome-feature         b49591a Cutting-edge changes.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master   b49591a Cutting-edge changes.

Do some work, add and commit changes

Now we’ll do some work on the feature, then add and commit that work:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ touch awesome_feature.txt
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git add awesome_feature.txt
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature +)] christie$ git commit -m "first round of work on awesome feature"
[awesome-feature 005994b] first round of work on awesome feature
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 awesome_feature.txt

Push to remote repository

Now we’ll push that to our remote repository so others can contribute:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git push -n origin awesome-feature
To git@github.com:christi3k/scaling-octo-wallhack.git
 * [new branch]      awesome-feature -> awesome-feature
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git push origin awesome-feature
Counting objects: 2, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 265 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/scaling-octo-wallhack.git
 * [new branch]      awesome-feature -> awesome-feature

Switch back to a remote branch (detached checkout)

If we’d like to switch back to a remote branch, we can:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git checkout origin/REL_2
Note: checking out 'origin/REL_2'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 1cc1044... Hotfix for Release 2 branch.

Using this new branch to collaborate

To try this awesome feature in another checkout, use git fetch:

[skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((1cc1044...))] christie$ git fetch origin
remote: Counting objects: 2, done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 2 (delta 1), reused 2 (delta 1), pack-reused 0
Unpacking objects: 100% (2/2), done.
From github.com:christi3k/scaling-octo-wallhack
 * [new branch]      awesome-feature -> origin/awesome-feature

If you just want to try the feature, check out origin/awesome-feature:

[2724][skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((1cc1044...))] christie$ git checkout origin/awesome-feature
Previous HEAD position was 1cc1044... Hotfix for Release 2 branch.
HEAD is now at 005994b... first round of work on awesome feature

If you plan to work on the feature, create a tracking branch:

[2725][skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((005994b...))] christie$ git checkout -b awesome-feature
Switched to a new branch 'awesome-feature'

Acknowledgements

Thanks to GPHemsley for helping me figure out git submodules within the context of our MozillaWiki work. I couldn’t have written this post without those conversations or the notes I took during them.

Categorieën: Mozilla-nl planet

Pascal Finette: My Yesterbox Gmail Setup

Mozilla planet - ti, 17/02/2015 - 19:21

In my potentially never-ending quest to get on top of the ever-growing email onslaught, I came across Tony Hsieh's Yesterbox method/manifesto. It's a deceptively simple but effective way to deal with your inbox: You only answer the emails from yesterday (plus the very few emails which require immediate attention). That way you get a chance to be on top of your email (as the number of emails from yesterday is finite) instead of being caught in an endless game of whack-a-mole. Plus people will get a guaranteed response from you in less than 48 hours - whereas in the past I often skipped more complex emails for days as I was constantly dealing with new incoming mail.

For a while I toyed around with different setups, until I settled on the following Gmail configuration, which works beautifully for me:

The left box shows you your incoming email (which allows for quick scanning and identifying those pesky emails which require immediate attention), the top right box is your Yesterbox and thus the email list I focus on. And the lower right box shows emails I starred - typically I use this for important emails I need to keep an eye on (say for example I am waiting for an answer to an email).

It's a simple but incredibly effective setup - here's how you set this up in your Gmail account:

  1. Activate the Gmail Labs feature "Multiple Inboxes" in Settings/Labs.

  2. After activating Multiple Inboxes and reloading the Settings page in Gmail, you will have a new section fittingly called "Multiple Inboxes". Here you add two inboxes with custom searches: one will be your Yesterbox with a search for "in:inbox older_than:24h", the other will be your Starred inbox with a custom search for "is:starred" (both searches are recapped below). Set the extra panels to show on the right side, increase the number of mails displayed to 50 (or whatever works for you), and you're done.

  3. There is no step three! :)
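For quick reference, here are the two custom searches again:

in:inbox older_than:24h   (the Yesterbox panel)
is:starred                (the Starred panel)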

Enjoy and let me know if this works for you (or if you have an even better setup).

Categorieën: Mozilla-nl planet

Is Mozilla building a walled garden around Firefox? - Liliputing

Nieuws verzameld via Google - ti, 17/02/2015 - 19:12

Is Mozilla building a walled garden around Firefox?
Liliputing
Last week over on the Mozilla blog, the Foundation announced a major change that's coming to a future release of Firefox. In the name of security, they're going to start requiring that all add-ons be digitally signed. Extensions that are submitted to ...

Categorieën: Mozilla-nl planet

Mike Conley: On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

Thunderbird - ti, 17/02/2015 - 17:47

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument;
let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.
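A rough sketch of what that looks like (the message names and the frame script URL here are hypothetical, not from any real add-on):

// Parent process side:
let mm = gBrowser.selectedBrowser.messageManager;
mm.loadFrameScript("chrome://my-addon/content/frame-script.js", false);
mm.addMessageListener("MyAddon:BodyText", function(msg) {
  // Arrives asynchronously, so the main process never blocks waiting for it.
  console.log(msg.data.text);
});
mm.sendAsyncMessage("MyAddon:GetBodyText");

// frame-script.js, running in the content process:
addMessageListener("MyAddon:GetBodyText", function() {
  // "content" is the content window; only this string crosses the process boundary.
  sendAsyncMessage("MyAddon:BodyText", { text: content.document.body.textContent });
});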

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.
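To make that concrete: a parent-side read like the one below is a synchronous IPC round-trip, and will trigger the warning if it happens while the content process isn’t blocked (illustrative only, reusing the accessor from earlier):

// The parent blocks on this line until the content process gets around to replying.
let title = gBrowser.selectedBrowser.contentDocumentAsCPOW.title;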

Measuring

We need to measure and understand CPOW usage in Firefox, as well as in add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing “unsafe CPOW usage” in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possible quick fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger Counter, anytime “unsafe CPOW usage” is put into the Browser Console. This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just by opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook.
  4. Typing into an editable HTML element after the Findbar has been opened.

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.

  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation.

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here 

Categorieën: Mozilla-nl planet
