Mozilla Nederland
The Dutch Mozilla community

Will Kahn-Greene: Input status: February 18th, 2015

Mozilla planet - Wed, 18/02/2015 - 11:00
Development

High-level summary:

  • Some minor fixes
  • Upgraded to Django 1.7

Thank you to contributors!:

  • Adam Okoye: 1
  • L Guruprasad: 5 (now over 25 commits!)
  • Ricky Rosario: 8

Landed and deployed:

  • 57a540f Rename test_browser.py to something more appropriate
  • 6c360d9 bug 1129579 Fix user agent parser for firefox for android
  • 0fa7c28 bug 1093341 Fix gengo warning emails
  • 39f3d25 bug 1053384 Make the filters visible even when there are no results (L. Guruprasad)
  • 9d009d7 bug 1130009 Add pyflakes and mccabe to requirements/dev.txt with hashes (L. Guruprasad)
  • 5b5f9b9 bug 1129085 Infer version for Firefox Dev
  • b0e0447 bug 1130474 Add sample data for heartbeat
  • 91de653 Update django-celery to master tip. (Ricky Rosario)
  • 6eda058 Update django-nose to v1.3 (Ricky Rosario)
  • f2ba0d0 Fix docs: remove stale note about test_utils. (Ricky Rosario)
  • 3b7811f bug 1116848 Change thank you page view (Adam Okoye)
  • 8d8ee31 bug 1053384 Fix selected sad/happy filters not showing up on 0 results (L. Guruprasad)
  • fea60dc bug 1118765 Upgrade django to 1.7.4 (Ricky Rosario)
  • 7aa9750 bug 1118765 Replace south with the new django 1.7 migrations. (Ricky Rosario)
  • dcd6acb bug 1118765 Update db docs for django 1.7 (new migration system) (Ricky Rosario)
  • c55ae2c bug 1118765 Fake the migrations for the first deploy of 1.7 (Ricky Rosario)
  • 1288d5b bug 1118765 Fix wsgi file
  • c9a326d bug 1118765 Run migrations for real during deploy. (Ricky Rosario)
  • f2398c2 Add "migrate --list" to let us know migration status
  • bf8bf4c Split up peep line into multiple commands
  • 0710080 Add a "version" to the jingo requirement so it updates
  • 0d1ca43 bug 1131664 Quell Django 1.6 warning
  • 7545259 bug 1131391 update to pep8 1.6.1 (L. Guruprasad)
  • 0fa0aab bug 1130762 Alerts app, models and modelfactories
  • be95d8e bug 1130469 Add filter for hb test rows and distinguish them by color (L. Guruprasad)
  • f3abd8e Add help_text for Survey model fields
  • f6ba2a2 Migration for help_text fields in Survey
  • f8cd339 bug 1133734 Fix waffle cookie thing
  • c8a6805 bug 1133895 Upgrade grappelli to 2.6.3

Current head: 11aa7a4

Rough plan for the next two weeks
  1. Adam is working on the new Thank You page
  2. I'm working on the Alerts API
  3. I'm working on the implementation work for the Gradient Sentiment project

That's it!

Categories: Mozilla-nl planet

Mozilla will approve all extensions for Firefox - Version2

News collected via Google - Wed, 18/02/2015 - 09:27

Mozilla will approve all extensions for Firefox
Version2
It would seem that Mozilla is tired of the malware that keeps turning up in the store for add-ons to the company's popular Firefox browser. Mozilla is now closing off the free and unregulated access and taking firm control over the approval of new add-ons ...

Google News

Mozilla Add-ons: Criticism of planned digital signatures - Golem.de

News collected via Google - Wed, 18/02/2015 - 08:13


Mozilla Add-ons: Criticism of planned digital signatures
Golem.de
In the future, all add-ons for Firefox are to be digitally signed by Mozilla after a review. Without the signature, Firefox will refuse to install or use the add-ons. This is intended to curb the spread of manipulated and malicious extensions ...
Signature requirement for add-ons: Mozilla promises security for Firefox - SPIEGEL ONLINE
Mozilla promises more security for Firefox extensions - RP ONLINE
Mozilla wants to review extensions - Engadget German
PC Games Hardware - Mac & i - MacTechNews.de
all 11 news articles »

Ian Bicking: A Product Journal: Building for a Demo

Mozilla planet - Wed, 18/02/2015 - 07:00

I’ve been trying to work through a post on technology choices, as I had it in my mind that we should rewrite substantial portions of the product. We’ve just upped the team size to two, adding Donovan Preston, and it’s an opportunity to share in some of these decisions. And get rid of code that was desperately expedient. The server is only 400ish lines, with some significant copy-and-paste, so we’re not losing any big investment.

Now I wonder if part of the danger of a rewrite isn’t the effort, but that it’s an excuse to go heads-down and starve your situational awareness.

In other news there has been a major resignation at Mozilla. I’d read into it largely what Johnathan implies in his post: things seem to be on a good track, so he’s comfortable leaving. But the VP of Firefox can’t leave without some significant organizational impact. Now is an important time for me to be situationally aware, and for the product itself to show situational awareness. The technical underpinnings aren’t that relevant at this moment.

So instead, if only for a few days, I want to move back into expedient demoable product mode. Now is the time to explain the product to other people in Mozilla.

The choices this implies feel weird at times. What is most important? Security bugs? Hardly! It needs to demonstrate some things to different stakeholders:

  1. There are some technical parts that require demonstration. Can we freeze the DOM and produce something usable? Only an existence proof is really convincing. Can we do a login system? Of course! So I build out the DOM freezing and fix bugs in it, but I’m preparing to build a login system where you type in your email address. I’m sure you wouldn’t lie so we’ll just believe you are who you say you are.

  2. But I want to get to the interesting questions. Do we require a login for this system? If not, what can an anonymous user do? I don’t have an answer, but I want to engage people in the question. I think one of the best outcomes of a demo is having people think about these questions, offer up solutions and criticisms. If the demo makes everyone really impressed with how smart I am that is very self-gratifying, but it does not engage people with the product, and I want to build engagement. To ask a good question I do need to build enough of the context to clarify the question. I at least need fake logins.

  3. I’ve been getting design/user experience help from Bram Pitoyo too, and now we have a number of interesting mockups. More than we can implement in short order. I’m trying to figure out how to integrate these mockups into the demo itself — as simple as “also look at this idea we have”. We should maintain a similar style (colors, basic layout), so that someone can look at a mockup and use all the context that I’ve introduced from the live demo.

  4. So far I’ve put no effort into onboarding. A person who picks up the tool may have no idea how it is supposed to be used. Or maybe they would figure it out: I haven’t even thought it through. Since I know how it works, and I’m doing the demo, that’s okay. My in-person narration is the onboarding experience. But even if I’m trying to explain the product internally, I should recognize I’m cutting myself off from an organic growth of interest.

  5. There are other stakeholders I keep forgetting about. I need to speak to the Mozilla Mission. I think I have a good story to tell there, but it’s not the conventional wisdom of what it means to embody the mission. I see this as a tool of direct outward-facing individual empowerment, not the mediated power of federation, not the opting-out power of privacy, not the committee-mediated and developer driven power of standards.

  6. Another stakeholder: people who care about the Firefox brand and marketing our products. Right now the tool lacks any branding, and it would be inappropriate to deploy this as a branded product right now. But I can demo a branded product. There may also be room to experiment with a call to action, and to start a discussion about what that would mean. I shouldn’t be afraid to do it really badly, because that starts the conversation, and I’d rather attract the people who think deeply about these things than try to solve them myself.

So I’m off now on another iteration of really scrappy coding, along with some strategic fakery.


Byron Jones: happy bmo push day!

Mozilla planet - Wed, 18/02/2015 - 06:12

the following changes have been pushed to bugzilla.mozilla.org:

  • [1131622] update the description of the “Any other issue” option on the itrequest form
  • [1124810] Searching for ‘—‘ in Simple Search causes a SQL error
  • [1111343] Wrapping of table-header when sorting with “Mozilla” skin
  • [1130590] Changes to the new data compliance bug form
  • [1108816] Project kickoff form, changes to privacy review
  • [1120048] Strange formatting on inline URLs
  • [1118987] create a new bug form for discourse issues (based on form.reps.it)

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Meeting Notes: Thunderbird: 2015-02-17

Thunderbird - Wed, 18/02/2015 - 05:00

Thunderbird meeting notes 2015-02-17. NOON PST. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

fallen, wsmwk, rkent, aceman, paenglab, makemyday, magnus, jorgk

Current status and discussions
  • 36.0 beta is out
Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31
  • ldap crasher
  • certificate crasher
  • Lightning integration
  • AB all-account search bug 170270
  • maildir UI
  • video chat The initial set of patches, with IB UI, may land this week (they’re up for final review). We’re considering also landing a set of matching strings for TB so uplifting a port of the UI becomes possible. I’m not sure the feature will be ready to ship in TB38 as it has not undergone much real world testing yet, but you never know, there may not be any nasty surprises ;)

Release Issues

Upcoming
  • Thunderbird 38 moves to Earlybird ~ February 24, 2015
    • string freeze
Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know (irc: rolandtanglao on #tb-support-crew, or rtanglao AT mozilla.com) or simply start editing the articles.
Round Table
Paenglab
  • I’ve requested Tracking_TB38 for bug 1096006 “Add AccountManager to the prefs in tab”.
    • Is this bug desired for TB 38? It would be needed to enable PrefsInTab.
    • If yes, I have a string only patch to land before string freeze.
  • I’ve also requested Tracking_TB38 for Hiro’s bug 1087233 “Create about:downloads to migrate to Downloads.jsm”.
    • I’ve needinfoed him to ask if he has time to finish it, but there’s been no answer so far.
    • It also has strings in it. I could make a strings-only patch if needed.
sshagarwal
  • Plan to land AB fix bug 170270 for TB 38.
  • Bundled chat desktop notifications bug 1127802 waiting for final review.
  • Discussing schema design and an appropriate db backend for the next-gen address book with mconley. We plan to get an approximate idea of the average number of contacts in users’ address books (bug 1132588) as a required minimum performance measure.
wsmwk
  • 36.0 beta QA organized
  • triage topcrashes
  • working on HWA question bug 1131879 Disable hardware acceleration (HWA)
aceman
  • having an active week with fixing smaller backend bugs (landing right now), polishing for the release. Proud to fix long-standing dataloss bug 840418.
Question Time
Other
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.
Action Items
  • organize 36 beta postmortem meeting (wsmwk)
  • lightning integration meeting (rkent/fallen)
Retrieved from “https://wiki.mozilla.org/index.php?title=Thunderbird/StatusMeetings/2015-02-17&oldid=1056531”


Benjamin Smedberg: Gratitude Comes in Threes

Mozilla planet - Tue, 17/02/2015 - 23:37

Today Johnathan Nightingale announced his departure from Mozilla. There are three special people at Mozilla who shaped me into the person I am today, and Johnathan Nightingale is one of them:

 Shaver, Johnathan, Beltzner

Mike Shaver taught me how to be an engineer. I was a full-time musician who happened to be pretty good at writing code and volunteering for Mozilla. There were many people at Mozilla who helped teach me the fine points of programming, and techniques for being a good programmer, but it was shaver who taught me the art of software engineering: to focus on simplicity, to keep the ultimate goal always in mind, when to compromise in order to ship, and when to spend the time to make something impossibly great. Shaver was never my manager, but I credit him with a lot of my engineering success. Shaver left Mozilla a while back to do great things at Facebook, and I still miss him.

Mike Beltzner taught me to care about users. Beltzner was never my manager either, but his single-minded and sometimes pugnacious focus on users and the user experience taught me how to care about users and how to engineer products that people might actually want to use. It’s easy for an engineer to get caught up in the most perfect technology and forget why we’re building any of this at all. Or to get caught up trying to change the world, and forget that you can’t change the world without a great product. Beltzner left Mozilla a while back and is now doing great things at Pinterest.

Perhaps it is just today talking, but I will miss Johnathan Nightingale most of all. He taught me many things, but mostly how to be a leader. I have had the privilege of reporting to Johnathan for several years now. He taught me the nuances of leadership and management; how to support and grow my team and still be comfortable applying my own expertise and leadership. He has been a great and empowering leader, both for me personally and for Firefox as a whole. He also taught me how to edit my own writing and others, and especially never to bury the lede. Now Johnathan will also be leaving Mozilla, and undoubtedly doing great things on his next adventure.

It doesn’t seem coincidental that this triumvirate were all Torontonians. Early Toronto Mozillians, including my three mentors, built a culture of teaching, leading, and mentoring, and Mozilla is better because of it. My new boss isn’t in Toronto, so it’s likely that I will be traveling there less. But I still hold a special place in my heart for it and hope that Mozilla Toronto will continue to serve as a model of mentoring and leadership for Mozilla.

Now I’m a senior leader at Mozilla. Now it’s my job to mentor, teach, and empower Mozilla’s leaders. I hope that I can be nearly as good at it as these wonderful Mozillians have been for me.


Nathan Froyd: multiple return values in C++

Mozilla planet - Tue, 17/02/2015 - 23:01

I’d like to think that I know a fair amount about C++, but I keep discovering new things on a weekly or daily basis.  One of my recent sources of new information is the presentations from CppCon 2014.  And the most recent presentation I’ve looked at is Herb Sutter’s Back to the Basics: Essentials of Modern C++ Style.

In the presentation, Herb mentions a feature of tuple that enables returning multiple values from a function.  Of course, one can already return a pair<T1, T2> of values, but accessing the fields of a pair is suboptimal and not very readable:

pair<...> p = f(...);
if (p.second) {
  // do something with p.first
}

The designers of tuple must have anticipated this, because they provided the function std::tie, which lets you destructure a tuple:

typename container::iterator position;
bool already_existed;
std::tie(position, already_existed) = mMap.insert(...);

It’s not quite as convenient as destructuring multiple values in other languages, since you need to declare the variables prior to std::tie‘ing them, but at least you can assign them sensible names. And since pair implicitly converts to tuple, you can use tie with functions in the standard library that return pairs, like the insertion functions of associative containers.

Sadly, we’re somewhat limited in our ability to use shiny new concepts from the standard library because of our C++ standard library situation on Android (we use stlport there, and it doesn’t feature useful things like <tuple>, <function>, or <thread_local>). We could, of course, polyfill some of these (and other) headers, and indeed we’ve done some of that in MFBT already. But using our own implementations limits our ability to share code with other projects, and it also takes up time to produce the polyfills and make them appropriately high quality. I’ve seen several people complain about this, and I think it’s something I’d like to fix in the next several months.


Johnathan Nightingale: Home for a Rest

Mozilla planet - Tue, 17/02/2015 - 21:59

Earlier today, I sent this note to the global mozilla employees list. It was not an easy send button to push.

===

One of the many, many things Mozilla has taught me over the years is not to bury the lede, so here goes:

March 31 will be my last day at Mozilla.

2014 was an incredible year, and it ended so much better than it started. I’m really proud of what we all accomplished, and I’m so hopeful for Mozilla’s future. But it took a lot out of me. I need to take a break. And as the dust settled on 2014 I realized, for the first time in a while, that I could take one.

You can live the Mozilla mission, feel it in your bones, and still worry about the future; I’ve had those moments over the last 8 years. Maybe you have, too. But Mozilla today is stronger than I’ve seen it in a long time. Our new strategy in search gives us a solid foundation and room to breathe, to experiment, and to make things better for our users and the web. We’re executing better than we ever have, and we’re seeing the shift in our internal numbers, while we wait for the rest of the world to catch up. January’s desktop download numbers are the best they’ve been in years. Accounts are being counted in tens of millions. We’re approaching 100MM downloads on Android. Dev Edition is blowing away targets faster than we can set them; Firefox on iOS doesn’t even exist yet, and already you can debug it with our devtools. Firefox today has a fierce momentum.

None of which will stop the trolls, of course. When this news gets out, I imagine someone will say something stupid. That it’s a Sign Of Doom. Predictable, and dead wrong; it misunderstands us completely. When things looked really rough, at the beginning of 2014, say, and people wanted to write about rats and sinking ships, that’s when I, and all of you, stayed.

You stayed or, in Chris’ case, you came back. And I’ve gotta say, having Chris in the seat is one of the things that gives me the most confidence. I didn’t know what Mozilla would feel like with Chris at the helm, but my CEO in 2014 was a person who pushed me and my team to do better, smarter work, to measure our results, and to keep the human beings who use our stuff at the center of every conversation. In fact, the whole senior team blows me away with their talent and their dedication.

You all do. And it makes me feel like a chump to be packing up in the midst of it all; but it’s time. And no, I haven’t been poached by facebook. I don’t actually know what my next thing will be. I want to take some time to catch up on what’s happened in the world around me. I want to take some time with my kid before she finishes her too-fast sprint to adulthood. I want to plant deeper roots in Toronto tech, which is incredibly exciting right now and may be a place where I can help. And I want a nap.

You are the very best I’ve met. It’s been a privilege to call myself your colleague, and to hand out a business card with the Firefox logo. I’m so immensely grateful for my time at Mozilla, and got so much more done here than I could have hoped. I’m talking with Chris and others about how I can meaningfully stay involved after March as an advisor, alumnus, and cheerleader. Once a Mozillian, always.

Excelsior!

Johnathan


Christie Koehler: Fun with git submodules

Mozilla planet - Tue, 17/02/2015 - 21:59

Git submodules are amazingly useful. Because they provide a way for you to connect external, separate git repositories, they can be used to organize your vim scripts, your dotfiles, or even a whole mediawiki deployment.

As incredibly useful as git submodules are, they can also be a bit confusing to use. The goal of this article is to walk you through the most common git submodule tasks: adding, removing and updating. We’ll also briefly review how to make changes to code you have checked out as a submodule.

I’ve created some practice repositories. Fork submodule-practice if you’d like to follow along. We’ll use these test repositories as submodules:

I’ve used version 2.3.0 of the git client for these examples. If you’re seeing something different, check your version with git --version.

Initializing a repository with submodules

First, let’s clone our practice repository:

[skade ;) ~/Work] christie$ git clone git@github.com:christi3k/submodule-practice.git Cloning into 'submodule-practice'... remote: Counting objects: 63, done. remote: Compressing objects: 100% (16/16), done. remote: Total 63 (delta 9), reused 0 (delta 0), pack-reused 47 Receiving objects: 100% (63/63), 6.99 KiB | 0 bytes/s, done. Resolving deltas: 100% (25/25), done. Checking connectivity... done.

And then cd into the working directory:

christie$ cd submodule-practice/

Currently, this project has two submodules: furry-octo-nemesis and psychic-avenger.

When we run ls we see directories for these submodules:

[skade ;) ~/Work/submodule-practice (master)] christie$ ll ▕ drwxrwxr-x▏christie:christie│4 min │ 4K│furry-octo-nemesis ▕ drwxrwxr-x▏christie:christie│4 min │ 4K│psychic-avenger ▕ -rw-rw-r--▏christie:christie│4 min │ 29B│README.md ▕ -rw-rw-r--▏christie:christie│4 min │ 110B│README.mediawiki

But if we run ls for either submodule directory we see they are empty. This is because the submodules have not yet been initialized or updated.

[skade ;) ~/Work/submodule-practice (master)] christie$ git submodule init Submodule 'furry-octo-nemesis' (git@github.com:christi3k/furry-octo-nemesis.git) registered for path 'furry-octo-nemesis' Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) registered for path 'psychic-avenger'

git submodule init copies the submodule names, urls and other details from .gitmodules to .git/config, which is where git looks for config details it should apply to your working copy.

git submodule init does not update or otherwise alter information already present in .git/config. If you have changed .gitmodules for a submodule that is already initialized, you’ll need to deinit and init the submodule again for the changes to be reflected in .git/config.
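To illustrate what init copies, here is roughly what the corresponding entries look like (using the psychic-avenger submodule from this project; the exact layout can vary slightly between git versions):

```ini
# .gitmodules — tracked in the repository, shared with everyone
[submodule "psychic-avenger"]
	path = psychic-avenger
	url = git@github.com:christi3k/psychic-avenger.git

# .git/config — local only; `git submodule init` adds this section
[submodule "psychic-avenger"]
	url = git@github.com:christi3k/psychic-avenger.git
```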

You can initialize specific submodules by specifying their names:

git submodule init psychic-avenger

At this point you can customize the submodule URLs used in your local checkout by editing them in .git/config before proceeding to git submodule update.

Now let’s actually checkout the submodules with submodule update:

skade ;) ~/Work/submodule-practice (master)] christie$ git submodule update --recursive Cloning into 'furry-octo-nemesis'... remote: Counting objects: 6, done. remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 6 Receiving objects: 100% (6/6), done. Checking connectivity... done. Submodule path 'furry-octo-nemesis': checked out '1c4b231fa0bcfd5ce8b8a2773c6616689032d353' Cloning into 'psychic-avenger'... remote: Counting objects: 25, done. remote: Compressing objects: 100% (9/9), done. remote: Total 25 (delta 1), reused 0 (delta 0), pack-reused 15 Receiving objects: 100% (25/25), done. Resolving deltas: 100% (3/3), done. Checking connectivity... done. Submodule path 'psychic-avenger': checked out '169c5c56154f58fd745352c4f30aa0d4a1d7a88e'

Note: The --recursive flag tells git to recurse into submodule directories and run update on any submodules those submodules include. It’s not needed for this example, but it does no harm, so I’ve included it here since it’s common for projects to have nested submodules.

Now when we run ls on either directory, we see that it contains the submodule’s files:

[skade ;) ~/Work/submodule-practice (master)] christie$ ls furry-octo-nemesis/ ▕ -rw-rw-r--▏42 sec │ 52B│README.md [skade ;) ~/Work/submodule-practice (master)] christie$ ls psychic-avenger/ ▕ -rw-rw-r--▏46 sec │ 133B│README.md ▕ -rw-rw-r--▏46 sec │ 0B│other.txt

Note: It’s possible to run init and update in one command with git submodule update --init --recursive

Adding a git submodule

We’ll start in the working directory of submodule-practice.

To add a submodule, use:

git submodule add <git-url>

Let’s try adding sample project scaling-octo-wallhack as a submodule.

[2495][skade ;) ~/Work/submodule-practice (master)] christie$ git submodule add git@github.com:christi3k/scaling-octo-wallhack.git Cloning into 'scaling-octo-wallhack'... remote: Counting objects: 19, done. remote: Compressing objects: 100% (8/8), done. remote: Total 19 (delta 1), reused 0 (delta 0), pack-reused 9 Receiving objects: 100% (19/19), done. Resolving deltas: 100% (3/3), done. Checking connectivity... done.

Note: If you want the submodule to be cloned into a directory other than ‘scaling-octo-wallhack’ then you need to specify a directory to clone into, as you would when cloning any other project. For example, this command will clone scaling-octo-wallhack into the subdirectory submodules/scaling-octo-wallhack:

christie$ git submodule add git@github.com:christi3k/scaling-octo-wallhack.git submodules/scaling-octo-wallhack

Let’s see what git status tells us:

[skade ;) ~/Work/submodule-practice (master +)] christie$ git status On branch master Your branch is up-to-date with 'origin/master'. Changes to be committed: (use "git reset HEAD <file>..." to unstage) modified: .gitmodules new file: scaling-octo-wallhack

And running ls we see that there are files in scaling-octo-wallhack directory:

[skade ;) ~/Work/submodule-practice (master +)] christie$ ll scaling-octo-wallhack/ ▕ -rw-rw-r--▏christie:christie│< min │ 180B│README.md ▕ -rw-rw-r--▏christie:christie│< min │ 0B│cutting-edge-changes.txt ▕ -rw-rw-r--▏christie:christie│< min │ 0B│file1.txt ▕ -rw-rw-r--▏christie:christie│< min │ 0B│file2.txt

Specifying a branch

When you add a git submodule, git makes some assumptions for you. It sets up a remote repository for the submodule called ‘origin’ and it checks out the ‘master’ branch for you. In many cases you may not want to use the master branch. Luckily, this is easy to change.

There are two methods to specify which branch of the submodule should be checked out by your project.

Method 1: Specify a branch in .gitmodules

Here’s what the modified section of .gitmodules looks like for scaling-octo-wallhack:

[submodule "scaling-octo-wallhack"] path = scaling-octo-wallhack url = git@github.com:christi3k/scaling-octo-wallhack.git branch = REL_1

Be sure to save .gitmodules and then run git submodule update --remote:

[skade ;( ~/Work/submodule-practice (master *+)] christie$ git submodule update --remote Submodule path 'psychic-avenger': checked out 'fba086dbb321109e5cd2d9d1bc3b59478dacf6ee' Submodule path 'scaling-octo-wallhack': checked out '88d66d5ecc58d2ab82fec4fea06ffbfd2c55fd7d'

Method 2: Checkout specific branch in submodule directory

In the submodule directory, check out the branch you want with git checkout origin/<branch>:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))] christie$ git checkout origin/REL_1 Previous HEAD position was b49591a... Cutting-edge changes. HEAD is now at 88d66d5... Prep Release 1.

Either method will yield the following from git status:

[skade ;) ~/Work/submodule-practice (master *+)] christie$ git status On branch master Your branch is up-to-date with 'origin/master'. Changes to be committed: (use "git reset HEAD <file>..." to unstage) modified: .gitmodules new file: scaling-octo-wallhack Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: scaling-octo-wallhack (new commits)

Now let’s stage and commit the changes:

[skade ;) ~/Work/submodule-practice (master *+)] christie$ git add scaling-octo-wallhack [skade ;) ~/Work/submodule-practice (master +)] christie$ git commit -m "Add scaling-octo-wallhack submodule, REL_1." [master 4a97a6f] Add scaling-octo-wallhack submodule, REL_1. 2 files changed, 4 insertions(+) create mode 160000 scaling-octo-wallhack

And don’t forget to push them to our remote repository so they are available for others:

[skade ;) ~/Work/submodule-practice (master)] christie$ git push -n origin master To git@github.com:christi3k/submodule-practice.git 7e6d09e..4a97a6f master -> master

Looks good, do it for real now:

[skade ;) ~/Work/submodule-practice (master)] christie$ git push origin master Counting objects: 3, done. Delta compression using up to 4 threads. Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 439 bytes | 0 bytes/s, done. Total 3 (delta 2), reused 0 (delta 0) To git@github.com:christi3k/submodule-practice.git 7e6d09e..4a97a6f master -> master

Removing a git submodule

Removing a submodule is a bit trickier than adding one.

Deinitialize

First, deinit the submodule with git submodule deinit <name>:

[skade ;) ~/Work/submodule-practice (master)] christie$ git submodule deinit psychic-avenger Cleared directory 'psychic-avenger' Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) unregistered for path 'psychic-avenger'

This command removes the submodule’s config entries from .git/config and removes the files from the submodule’s working directory (it does not touch .gitmodules; that happens with git rm below). This command will delete untracked files, even when they are listed in .gitignore.

Note: You can also use this command if you simply want to prevent having a local checkout of the submodule in your working tree, without actually removing the submodule from your main project.

Let’s check our work:

[skade ;) ~/Work/submodule-practice (master)] christie$ git status On branch master Your branch is up-to-date with 'origin/master'. nothing to commit, working directory clean

This shows no changes because git submodule deinit only makes changes to our local working copy.

Running ls we also see the directories are still present:

[skade ;) ~/Work/submodule-practice (master)] christie$ ll ▕ drwxrwxr-x▏christie:christie│4 day │ 4K│furry-octo-nemesis ▕ drwxrwxr-x▏christie:christie│16 sec │ 4K│psychic-avenger ▕ -rw-rw-r--▏christie:christie│4 day │ 29B│README.md ▕ -rw-rw-r--▏christie:christie│4 day │ 110B│README.mediawiki

Remove with git rm

To actually remove the submodule from your project’s repository, use git rm:

[skade ;) ~/Work/submodule-practice (master)] christie$ git rm psychic-avenger rm 'psychic-avenger'

Let’s check our work:

[skade ;) ~/Work/submodule-practice (master +)] christie$ git status On branch master Your branch is up-to-date with 'origin/master'. Changes to be committed: (use "git reset HEAD ..." to unstage) modified: .gitmodules deleted: psychic-avenger

These changes have been staged by git automatically, so to see what has changed in .gitmodules, use the --cached flag or its alias --staged:

[skade ;) ~/Work/submodule-practice (master +)] christie$ git diff --cached diff --git a/.gitmodules b/.gitmodules index dec1204..e531507 100644 --- a/.gitmodules +++ b/.gitmodules @@ -1,6 +1,3 @@ [submodule "furry-octo-nemesis"] path = furry-octo-nemesis url = git@github.com:christi3k/furry-octo-nemesis.git -[submodule "psychic-avenger"] - path = psychic-avenger - url = git@github.com:christi3k/psychic-avenger.git diff --git a/psychic-avenger b/psychic-avenger deleted file mode 160000 index fdd4b36..0000000 --- a/psychic-avenger +++ /dev/null @@ -1 +0,0 @@ -Subproject commit fdd4b366458757940d7692b61e22f4d1b21c825a

So we see that in .gitmodules the lines related to psychic-avenger have been removed, and that the psychic-avenger directory and its commit hash have also been removed.

And a directory listing shows the files are no longer in our working directory:

christie$ ll
▕ drwxrwxr-x▏christie:christie│4 day │ 4K│furry-octo-nemesis
▕ -rw-rw-r--▏christie:christie│4 day │ 29B│README.md
▕ -rw-rw-r--▏christie:christie│4 day │ 110B│README.mediawiki

Removing all references to the submodule (optional)

Even after these commands, git does not remove every trace of the submodule. To remove it completely, you also need to delete its entry under .git/modules:

[skade ;) ~/Work/submodule-practice (master)] christie$ rm -rf .git/modules/psychic-avenger

Note: This is probably optional for most use cases. The only time this leftover reference causes trouble is if you later add a submodule with the same name. In that case, git will complain and ask you to pick a different name or to simply check out the submodule from the remote source it already knows about.
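Putting the pieces together, the whole removal sequence can be exercised end-to-end in a throwaway repository. This is just a sketch: the repository names, paths, and identity settings below are made up for the demo and are not from the post.

```shell
#!/bin/sh
# Hypothetical demo: remove a submodule completely, in a scratch repo.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

tmp=$(mktemp -d) && cd "$tmp"

# A small repository to use as the submodule.
git init -q sub
git -C sub commit -q --allow-empty -m "initial commit"

# The superproject, with the submodule added.
git init -q main && cd main
git commit -q --allow-empty -m "initial commit"
git -c protocol.file.allow=always submodule --quiet add ../sub sub
git commit -q -m "Add sub submodule"

# The removal steps from the post: deinit, git rm, then purge .git/modules.
git submodule deinit -f sub
git rm -q sub
rm -rf .git/modules/sub
git commit -q -m "Remove sub submodule"

# Neither the working tree nor .git/modules references the submodule now.
test ! -e sub && test ! -d .git/modules/sub && echo "sub fully removed"
```

The `-c protocol.file.allow=always` is only needed because the demo submodule lives on the local filesystem; newer git versions restrict the file protocol for submodules by default.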

Commit changes

Now we commit our changes:

[skade ;) ~/Work/submodule-practice (master +)] christie$ git commit -m "Remove psychic-avenger submodule."
[master 7833c1c] Remove psychic-avenger submodule.
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Looks good, let’s push our changes:

[skade ;) ~/Work/submodule-practice (master)] christie$ git push -n origin master
To git@github.com:christi3k/submodule-practice.git
   d89b5cb..7833c1c  master -> master

Looks good, let’s do it for real:

[skade ;) ~/Work/submodule-practice (master)] christie$ git push origin master
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 402 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/submodule-practice.git
   d89b5cb..7833c1c  master -> master

Updating submodules within your project

The simplest use case for updating submodules within your project is pulling in the most recent version of a submodule or switching it to a different branch.

There are two methods for updating submodules.

Method 1: Specify a branch in .gitmodules and use git submodule update --remote

Using this method, you first need to ensure that the branch you want to use is specified for each submodule in .gitmodules.

Let’s take a look at the .gitmodules file for our sample project:

[submodule "furry-octo-nemesis"]
	path = furry-octo-nemesis
	url = git@github.com:christi3k/furry-octo-nemesis.git
[submodule "psychic-avenger"]
	path = psychic-avenger
	url = git@github.com:christi3k/psychic-avenger.git
	branch = RELEASE_E
[submodule "scaling-octo-wallhack"]
	path = scaling-octo-wallhack
	url = git@github.com:christi3k/scaling-octo-wallhack.git

The submodule psychic-avenger is set to check out branch RELEASE_E, while furry-octo-nemesis and scaling-octo-wallhack will check out master because no branch is specified.

Edit .gitmodules

To change the branch that is checked out, update the value of branch:

[submodule "scaling-octo-wallhack"]
	path = scaling-octo-wallhack
	url = git@github.com:christi3k/scaling-octo-wallhack.git
	branch = REL_2

Now scaling-octo-wallhack is set to check out the REL_2 branch.
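If you'd rather not hand-edit the file, git config -f .gitmodules can make the same change. A minimal sketch in a scratch directory (the submodule name "demo" and its URL are invented for illustration):

```shell
# Create a scratch directory with a bare-bones .gitmodules file.
tmp=$(mktemp -d) && cd "$tmp"
git config -f .gitmodules submodule.demo.path demo
git config -f .gitmodules submodule.demo.url git@example.com:demo.git

# Point the submodule at a branch, exactly as editing "branch = REL_2" would.
git config -f .gitmodules submodule.demo.branch REL_2

# Read the value back to confirm.
git config -f .gitmodules --get submodule.demo.branch
```

In a real superproject you would follow this with git submodule update --remote and commit the modified .gitmodules.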

Update with git submodule update --remote

[skade ;) ~/Work/submodule-practice (master *)] christie$ git submodule update --remote
Submodule path 'scaling-octo-wallhack': checked out 'e845f5431119b527b7cde1ad138a373c5b2d4ec1'

And if we cd into scaling-octo-wallhack and run git branch -vva, we confirm we’ve checked out the REL_2 branch:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((e845f54...))] christie$ git branch -vva
* (detached from e845f54) e845f54 Release 2.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    e845f54 Release 2.
  remotes/origin/master   b49591a Cutting-edge changes.

Method 2: git fetch and git checkout within the submodule

First, change into the directory of the submodule you wish to update.

Fetch from the remote repository

Then run git fetch origin to grab any new commits:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))] christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 1), reused 3 (delta 1), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/scaling-octo-wallhack
   e845f54..1cc1044  REL_2      -> origin/REL_2

Here we can see that the last commit on the REL_2 branch changed from e845f54 to 1cc1044.

Running git branch -vva confirms this, and also that we haven’t changed which commit is checked out yet:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((88d66d5...))] christie$ git branch -vva
* (detached from 88d66d5) 88d66d5 Prep Release 1.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master   b49591a Cutting-edge changes.

Checkout branch

So now we can re-checkout the REL_2 remote branch:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((88d66d5...))] christie$ git checkout origin/REL_2
Previous HEAD position was 88d66d5... Prep Release 1.
HEAD is now at 1cc1044... Hotfix for Release 2 branch.

Let’s check our work with branch -vva:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((1cc1044...))] christie$ git branch -vva
* (detached from origin/REL_2) 1cc1044 Hotfix for Release 2 branch.
  master                       b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD          -> origin/master
  remotes/origin/REL_1         88d66d5 Prep Release 1.
  remotes/origin/REL_2         1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master        b49591a Cutting-edge changes.

Committing the changes

Moving back to our main project directory, let’s check our work with git status && git diff:

[skade ;) ~/Work/submodule-practice (master *)] christie$ git status && git diff
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   scaling-octo-wallhack (new commits)

no changes added to commit (use "git add" and/or "git commit -a")
diff --git a/scaling-octo-wallhack b/scaling-octo-wallhack
index 88d66d5..1cc1044 160000
--- a/scaling-octo-wallhack
+++ b/scaling-octo-wallhack
@@ -1 +1 @@
-Subproject commit 88d66d5ecc58d2ab82fec4fea06ffbfd2c55fd7d
+Subproject commit 1cc104418a6a24b9a3cc227df4ebaf707ea23b49

Notice that there are no changes to .gitmodules with this method. Instead, we’ve simply changed the commit hash that the super project is pointing to for this submodule.

Now let’s add, commit and push our changes:

[skade ;) ~/Work/submodule-practice (master *)] christie$ git add scaling-octo-wallhack
[skade ;) ~/Work/submodule-practice (master +)] christie$ git commit -m "Updating to current REL_2."
[master 5ddbe87] Updating to current REL_2.
 1 file changed, 1 insertion(+), 1 deletion(-)
[skade ;) ~/Work/submodule-practice (master)] christie$ git push -n origin master
To git@github.com:christi3k/submodule-practice.git
   4a97a6f..5ddbe87  master -> master
[skade ;) ~/Work/submodule-practice (master)] christie$ git push origin master
Counting objects: 2, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 261 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/submodule-practice.git
   4a97a6f..5ddbe87  master -> master

What’s the difference between git submodule update and git submodule update --remote?

Note: git submodule update --remote looks at the value you have in .gitmodules for branch. If there isn’t a value there, it assumes master. git submodule update looks at the commit your repository has recorded for the submodule and checks that commit out. Both check out a detached HEAD by default unless you specify --merge or --rebase.

These two commands can step on each other. If you have checked out a specific commit in the submodule directory, it may differ from the commit that git submodule update --remote would check out based on the branch value in .gitmodules.
Likewise, the branch value in .gitmodules does not guarantee that’s the branch you currently have checked out in the submodule. When in doubt, cd to the submodule directory and run git branch -vva. git branch -vva is your friend!
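The difference can be seen concretely in a throwaway pair of repositories. This is a hedged sketch (all names and identity settings below are invented for the demo): the superproject records one submodule commit, the submodule's branch then moves ahead, and the two update commands land on different commits.

```shell
#!/bin/sh
# Hypothetical demo: git submodule update vs. git submodule update --remote.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

tmp=$(mktemp -d) && cd "$tmp"

# Submodule repository with a "master" branch and one commit.
git init -q sub
git -C sub symbolic-ref HEAD refs/heads/master
git -C sub commit -q --allow-empty -m "first"

# Superproject records that commit and tracks master via .gitmodules.
git init -q main && cd main
git -c protocol.file.allow=always submodule --quiet add -b master ../sub sub
git commit -q -m "Add sub submodule"
recorded=$(git -C sub rev-parse HEAD)

# The submodule's upstream branch moves ahead.
git -C ../sub commit -q --allow-empty -m "second"

# Plain update: checks out the commit the superproject recorded.
git submodule --quiet update
test "$(git -C sub rev-parse HEAD)" = "$recorded"

# --remote: fetches and checks out the tip of the tracked branch instead.
git -c protocol.file.allow=always submodule --quiet update --remote
test "$(git -C sub rev-parse HEAD)" = "$(git -C ../sub rev-parse master)"
```

Both checkouts leave the submodule with a detached HEAD; only the commit they detach onto differs.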

When a submodule has been removed

When a submodule has been removed from a repository, what’s the best way to update your working directory to reflect this change?

The answer is that it depends on whether or not you have local, untracked files in the submodule directory that you want to keep.

Method 1: deinit and then fetch and merge

Use this method if you want to completely remove the submodule directory even if you have local, untracked files in it.

Note: In the following examples, we’re working in another checkout of our submodule-practice.

First, use git submodule deinit to deinitialize the submodule:

[skade ;) ~/Work/submodule-elsewhere (master *)] christie$ git submodule deinit psychic-avenger
error: the following file has local modifications:
    psychic-avenger
(use --cached to keep the file, or -f to force removal)
Submodule work tree 'psychic-avenger' contains local modifications; use '-f' to discard them

We have untracked changes, so we need to use -f to remove them:

[skade ;( ~/Work/submodule-elsewhere (master *)] christie$ git submodule deinit -f psychic-avenger
Cleared directory 'psychic-avenger'
Submodule 'psychic-avenger' (git@github.com:christi3k/psychic-avenger.git) unregistered for path 'psychic-avenger'

Now fetch changes from the remote repository and merge them:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/submodule-practice
   666af5d..6038c72  master     -> origin/master
[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git merge origin/master
Updating 666af5d..6038c72
Fast-forward
 .gitmodules     | 3 ---
 psychic-avenger | 1 -
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Running ls on our project directory shows that all of psychic-avenger’s files have been removed:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ ll
▕ drwxrwxr-x▏christie:christie│3 hour│ 4K│furry-octo-nemesis
▕ drwxrwxr-x▏christie:christie│5 min │ 4K│scaling-octo-wallhack
▕ -rw-rw-r--▏christie:christie│3 hour│ 29B│README.md
▕ -rw-rw-r--▏christie:christie│3 hour│ 110B│README.mediawiki

Method 2: fetch and merge, then clean up as needed

Use this method if you have local, untracked (and/or ignored) changes that you want to keep, or if you want to remove files manually.

First, fetch changes from the remote repository and merge them with your local branch:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git fetch origin
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 1), reused 3 (delta 1), pack-reused 0
Unpacking objects: 100% (3/3), done.
From github.com:christi3k/submodule-practice
   d89b5cb..7833c1c  master     -> origin/master
[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git merge origin/master
Updating d89b5cb..7833c1c
warning: unable to rmdir psychic-avenger: Directory not empty
Fast-forward
 .gitmodules     | 3 ---
 psychic-avenger | 1 -
 2 files changed, 4 deletions(-)
 delete mode 160000 psychic-avenger

Note the warning “unable to rmdir…” and let’s check our work:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
  (use "git add <file>..." to include in what will be committed)

	psychic-avenger/

nothing added to commit but untracked files present (use "git add" to track)

No uncommitted or staged changes, but the directory that was our submodule psychic-avenger is now untracked. Running ls shows that there are still files in the directory, too:

[skade ;( ~/Work/submodule-elsewhere (master)] christie$ ll psychic-avenger/
▕ -rw-rw-r--▏christie:christie│30 min │ 192B│README.md

Now you can clean up files as you like. In this example we’ll delete the entire psychic-avenger directory:

[skade ;) ~/Work/submodule-elsewhere (master)] christie$ rm -rf psychic-avenger

Working on projects checked out as submodules

Working on projects checked out as submodules is rather straightforward, particularly if you are comfortable with git branching and make liberal use of git branch -vva.

Let’s pretend that scaling-octo-wallhack is an extension that I’m developing for my project submodule-practice. I want to work on it while it’s checked out as a submodule because doing so makes it easy to test the extension within my larger project.

Create a working branch

First, switch to the branch that you want to use as the base for your work. I’m going to use the local tracking branch master, which I’ll first ensure is up to date with the remote origin/master:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((1cc1044...))] christie$ git branch -vva
* (detached from origin/REL_2) 1cc1044 Hotfix for Release 2 branch.
  master                       b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD          -> origin/master
  remotes/origin/REL_1         88d66d5 Prep Release 1.
  remotes/origin/REL_2         1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master        b49591a Cutting-edge changes.
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack ((b49591a...))] christie$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

If master had not been up-to-date with origin/master, I would have merged.

Next, let’s create a tracking branch for this awesome feature we’re going to work on:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (master)] christie$ git checkout -b awesome-feature
Switched to a new branch 'awesome-feature'
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git branch -vva
* awesome-feature         b49591a Cutting-edge changes.
  master                  b49591a [origin/master] Cutting-edge changes.
  remotes/origin/HEAD     -> origin/master
  remotes/origin/REL_1    88d66d5 Prep Release 1.
  remotes/origin/REL_2    1cc1044 Hotfix for Release 2 branch.
  remotes/origin/master   b49591a Cutting-edge changes.

Do some work, add and commit changes

Now we’ll do some work on the feature, then add and commit that work:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ touch awesome_feature.txt
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git add awesome_feature.txt
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature +)] christie$ git commit -m "first round of work on awesome feature"
[awesome-feature 005994b] first round of work on awesome feature
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 awesome_feature.txt

Push to remote repository

Now we’ll push that to our remote repository so others can contribute:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git push -n origin awesome-feature
To git@github.com:christi3k/scaling-octo-wallhack.git
 * [new branch]      awesome-feature -> awesome-feature
[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git push origin awesome-feature
Counting objects: 2, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 265 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:christi3k/scaling-octo-wallhack.git
 * [new branch]      awesome-feature -> awesome-feature

Switch back to remote branch, detached checkout

If we’d like to switch back to a remote branch, we can:

[skade ;) ~/Work/submodule-practice/scaling-octo-wallhack (awesome-feature)] christie$ git checkout origin/REL_2
Note: checking out 'origin/REL_2'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 1cc1044... Hotfix for Release 2 branch.

Using this new branch to collaborate

To try this awesome feature in another checkout, use git fetch:

[skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((1cc1044...))] christie$ git fetch origin
remote: Counting objects: 2, done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 2 (delta 1), reused 2 (delta 1), pack-reused 0
Unpacking objects: 100% (2/2), done.
From github.com:christi3k/scaling-octo-wallhack
 * [new branch]      awesome-feature -> origin/awesome-feature

If you just want to try the feature, check out origin/awesome-feature:

[skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((1cc1044...))] christie$ git checkout origin/awesome-feature
Previous HEAD position was 1cc1044... Hotfix for Release 2 branch.
HEAD is now at 005994b... first round of work on awesome feature

If you plan to work on the feature, create a tracking branch:

[skade ;) ~/Work/submodule-elsewhere/scaling-octo-wallhack ((005994b...))] christie$ git checkout -b awesome-feature
Switched to a new branch 'awesome-feature'

Acknowledgements

Thanks to GPHemsley for helping me figure out git submodules within the context of our MozillaWiki work. I couldn’t have written this post without those conversations or the notes I took during them.

Categorieën: Mozilla-nl planet

Pascal Finette: My Yesterbox Gmail Setup

Mozilla planet - di, 17/02/2015 - 19:21

In my potentially never-ending quest to get on top of the ever-growing email onslaught, I came across Tony Hsieh's Yesterbox method/manifesto. It's a deceptively simple but effective way to deal with your inbox: You only answer the emails from yesterday (plus the very few emails which require immediate attention). That way you get a chance to be on top of your email (as the number of emails from yesterday is finite) instead of being caught in an endless game of whack-a-mole. Plus people will get a guaranteed response from you in less than 48 hours - whereas in the past I often skipped more complex emails for days as I was constantly dealing with new incoming mail.

For a while I toyed around with different setups, until I settled on the following Gmail configuration which works beautifully for me:

The left box shows you your incoming email (which allows for quick scanning and identifying those pesky emails which require immediate attention), the top right box is your Yesterbox and thus the email list I focus on. And the lower right box shows emails I starred - typically I use this for important emails I need to keep an eye on (say for example I am waiting for an answer to an email).

It's a simple but incredibly effective setup - here's how you set this up in your Gmail account:

  1. Activate the Gmail Labs feature "Multiple Inboxes" in Settings/Labs

  2. After activating Multiple Inboxes and reloading the Settings page in Gmail you will have a new section fittingly called "Multiple Inboxes". Here you add two inboxes with custom searches: One will be your Yesterbox with a search for "in:inbox older_than:24h", the other one will be your Starred inbox with a custom search for "is:starred". Set the extra panels to show on the right side and increase the number of mails to be displayed to 50 (or whatever works for you) and you're done.

  3. There is no step three! :)

Enjoy and let me know if this works for you (or if you have an even better setup).


Mike Conley: On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

Thunderbird - di, 17/02/2015 - 17:47

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument; let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW; let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.

Measuring

We need to measure and understand CPOW usage in Firefox, as well as in add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing unsafe CPOW usage in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possibly quick-fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger Counter, anytime “unsafe CPOW usage” is put into the Browser Console. This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just by opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook
  4. Typing into an editable HTML element after the Findbar has been opened.

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.

  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation. 

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here. 


Mike Conley: On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

Mozilla planet - di, 17/02/2015 - 17:45

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument; let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.
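The difference between the two styles can be sketched with a toy simulation. This is a hypothetical illustration in plain Python, not Firefox code: the “content process” here is just a worker thread, and every name in it is invented for the sketch.

```python
import queue
import threading

# The "content process": a worker thread that answers requests from a queue.
requests = queue.Queue()

def content_process():
    while True:
        msg, reply = requests.get()
        if msg is None:            # shutdown sentinel
            break
        # Do the minimal work on the content side, send back a small result.
        reply.put(len(msg))

worker = threading.Thread(target=content_process)
worker.start()

def cpow_style_read(msg):
    """CPOW-style: the parent blocks until the content side responds."""
    reply = queue.Queue()
    requests.put((msg, reply))
    return reply.get()             # parent is stuck for the whole round trip

def frame_script_style_read(msg, callback):
    """Frame-script style: send the message, handle the answer later."""
    reply = queue.Queue()
    requests.put((msg, reply))
    threading.Thread(target=lambda: callback(reply.get())).start()

print(cpow_style_read("<body>hello</body>"))   # parent blocked; prints 18

done = threading.Event()
results = []
frame_script_style_read("<body>hello</body>",
                        lambda n: (results.append(n), done.set()))
# ... the parent is free to keep processing its own events here ...
done.wait()
requests.put((None, None))         # shut the "content process" down
worker.join()
print(results[0])                  # prints 18
```

The point of the second style is that the parent never sits idle waiting on the content side; it hands off a message and picks up the answer when it arrives.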

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.

Measuring

We need to measure and understand CPOW usage in Firefox, as well as add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing unsafe CPOW usage in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possible quick fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger Counter, anytime “unsafe CPOW usage” is put into the Browser Console. This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just by opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook
  4. Typing into an editable HTML element after the Findbar has been opened.

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.

  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here 

Categorieën: Mozilla-nl planet

Air Mozilla: Martes mozilleros

Mozilla planet - di, 17/02/2015 - 17:00

Martes mozilleros A bi-weekly meeting to talk about the state of Mozilla, the community, and its projects.


Gregory Szorc: Lost Productivity Due to Non-Unified Repositories

Mozilla planet - di, 17/02/2015 - 15:50

I'm currently working on annotating moz.build files with metadata that defines things like which bug component and code reviewers map to which files. It's going to enable a lot of awesomeness.

As part of this project, I'm implementing a new moz.build processing mode. Instead of reading moz.build files by traversing DIRS variables from previously-executed moz.build files, we're evaluating moz.build files according to filesystem topology. This has uncovered a few cases where a moz.build file errors because of assumptions that no longer hold. For example, for directories that are only active on Windows, the moz.build file might assume that the build configuration is always Windows.

One such problem was with gfx/angle/src/libGLESv2/moz.build. This file contained code similar to the following:

if CONFIG['IS_WINDOWS']:
    SOURCES += ['foo.cpp']
...
SOURCES['foo.cpp'].flags += ['-DBAR']

This always ran without issue because this moz.build was only included if building for Windows. This assumption is of course invalid when in filesystem traversal mode.
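The failure mode can be sketched in a few lines. This is a hypothetical toy, not the real moz.build sandbox: the idea is just that SOURCES only knows about files that were actually appended, so touching the flags of a file that a platform check skipped blows up.

```python
# Hypothetical stand-ins for the moz.build sandbox objects.
class SourceFile:
    def __init__(self):
        self.flags = []

class Sources:
    def __init__(self):
        self._files = {}

    def __iadd__(self, names):
        for name in names:
            self._files[name] = SourceFile()
        return self

    def __getitem__(self, name):
        return self._files[name]   # KeyError if the file was never added

def evaluate(config):
    SOURCES = Sources()
    if config['IS_WINDOWS']:
        SOURCES += ['foo.cpp']
    # Unconditional flag tweak: fine when the directory is only ever
    # evaluated on Windows, an error in filesystem traversal mode.
    SOURCES['foo.cpp'].flags += ['-DBAR']
    return SOURCES

evaluate({'IS_WINDOWS': True})        # the old DIRS-based traversal: fine
try:
    evaluate({'IS_WINDOWS': False})   # filesystem traversal mode: boom
except KeyError as err:
    print('moz.build evaluation error:', err)
```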

Anyway, as part of updating this trouble file, I lost maybe an hour of productivity. Here's how.

The top of the trouble moz.build file has a comment:

# Please note this file is autogenerated from generate_mozbuild.py,
# so do not modify it directly

OK. So, I need to modify generate_mozbuild.py. First things first: I need to locate it:

$ hg locate generate_mozbuild.py
gfx/skia/generate_mozbuild.py

So I load up this file. I see a main(). I run the script in my shell and get an error. Weird. I look around gfx/skia and see a README_MOZILLA file. I open it. README_MOZILLA contains some instructions. They aren't very good. I hop in #gfx on IRC and ask around. They tell me to do a Subversion clone of Skia and to check out the commit referenced in README_MOZILLA. There is no repo URL in README_MOZILLA. I search Google. I find a Git URL. I notice that README_MOZILLA contains a SHA-1 commit, not a Subversion integer revision. I figure the Git repo is what was meant. I clone the Git repo. I attempt to run the generation script referenced by README_MOZILLA. It fails. I ask again in #gfx. They are baffled at first. I dig around the source code. I see a reference in Skia's upstream code to a path that doesn't exist. I tell the #gfx people. They tell me sub-repos are likely involved and to use gclient to clone the repo. I search for the proper Skia source code docs and type the necessary gclient commands. (Fortunately I've used gclient before, so this wasn't completely alien to me.)

I get the Skia clone in the proper state. I run the generation script and all works. But I don't see it writing the trouble moz.build file I set out to fix. I set some breakpoints. I run the code again. I'm baffled.

Suddenly it hits me: I've been poking around with gfx/skia which is separate from gfx/angle! I look around gfx/angle and see a README.mozilla file. I open it. It reveals the existence of the Git repo https://github.com/mozilla/angle. I open GitHub in my browser. I see a generate_mozbuild.py script.

I now realize there are multiple files named generate_mozbuild.py. Unfortunately, the one I care about - the ANGLE one - is not checked into mozilla-central. So, my search for it with hg locate did not reveal its existence. Between me trying to get the Skia code cloned and generating moz.build files, I probably lost an hour of work. All because a file with a similar name wasn't checked into mozilla-central!

I assumed that the single generate_mozbuild.py I found under source control was the only file of that name and that it must be the file I was interested in.

Maybe I should have known to look at gfx/angle/README.mozilla first. Maybe I should have known that gfx/angle and gfx/skia are completely independent.

But I didn't. My ignorance cost me.

Had the contents of the separate ANGLE repository been checked into mozilla-central, I would have seen the multiple generate_mozbuild.py files and I would likely have found the correct one immediately. But they weren't and I lost an hour of my time.

And I'm not done. Now I have to figure out how the separate ANGLE repo integrates with mozilla-central. I'll have to figure out how to submit the patch I still need to write. The GitHub description of this repo says Talk to vlad, jgilbert, or kamidphish for more info. So now I have to bother them before I can submit my patch. Maybe I'll just submit a pull request and see what happens.

I'm convinced I wouldn't have encountered this problem if a monolithic repository were used. I would have found the separate generate_mozbuild.py file immediately. And, the change process would likely have been known to me since all the code was in a repository I already knew how to submit patches from.

Separate repos are just lots of pain. You can bet I'll link to this post when people propose splitting up mozilla-central into multiple repositories.


Soledad Penades: How to organise a WebGL event

Mozilla planet - di, 17/02/2015 - 15:30

I got asked this:

Going to organize a series of open, and free, events covering WebGL / Web API […]

We ended up opting for an educational workshop format. Knowing you have experience with WebGL, I’d like to ask you if you would support us in setting up the materials […]

In the interest of helping more people that might be wanting to start a WebGL group in their town, I’m posting the answer I gave them:

I think you’re putting too much faith in me

I first learnt maths and then OpenGL and then WebGL. I can’t possibly give you a step by step tutorial that mimics my learning process.

If you have no prior experience with WebGL, I suggest you either look for a (somewhat) local speaker and try to get them to give an introductory talk. Probably people that attend the event will be interested in WebGL already or will get interested after the talk.

Then just get someone from the audience excited about WebGL and have them give the next talk

If you can’t find any speaker, then you’ll need to become one, and for that you’ll need to document yourself. I can’t write a curriculum for you, as it will take way more time than I currently have. WebGL implies many things, from understanding JavaScript to understanding 3D geometry and maths, to how to set the whole system up and running on a browser.

Or you can start by learning to use a library such as three.js and, once you become acquainted with its fundamentals, start digging into “pure WebGL” if you want, for example writing your own custom shaders.

Or another thing you could do is get together a bunch of people interested in WebGL and try to follow along with the tutorials on WebGL or the examples on three.js, so people can discuss aloud what they understand and what they don’t, and help and learn from each other.

I hope this helps you find your way around this incredibly vast subject! Good luck and have fun!

Now you know how to do this. Go and organise events! EASY!

It’s actually not easy.

flattr this!


Gervase Markham: Alice and Bob Are Weird

Mozilla planet - di, 17/02/2015 - 13:24

Suppose Alice and Bob live in a country with 50 states. Alice is currently in state a and Bob is currently in state b. They can communicate with one another and Alice wants to test if she is currently in the same state as Bob. If they are in the same state, Alice should learn that fact and otherwise she should learn nothing else about Bob’s location. Bob should learn nothing about Alice’s location.

They agree on the following scheme:

  • They fix a group G of prime order p and generator g of G

Cryptographic problems. Gotta love ‘em.


Wil Clouser: Marketplace and Payments Systems Diagrams

Mozilla planet - di, 17/02/2015 - 09:00

A couple years ago Krupa filled up a whiteboard with boxes and arrows, diagramming what the AMO systems looked like. There was recently interest in reviving that diagram and seeing what the Marketplace systems would look like in the same style so I sat down and drew the diagrams below, one for the Marketplace and one for Payments.

Marketplace:

Payments:

Honestly, I appreciate the view, but I wince at first glance because of all the duplication. It's supposed to be "services from the perspective of a single service." Meaning, if the box is red, anything connected to it is what that box talks to. Since the webheads talk to nearly everything it made sense to put them in the middle, and the dotted lines simply connect duplicate services. I'm unsure whether that's intuitive though, or if it would be easier to understand if I simply had a single node for each service and drew lines all over the diagram. I might try that next time, unless someone gives me a different idea. :)

Lastly, this is the diagram that came out first when I was trying to draw the two above. It breaks the Marketplace down into layers which I like because we emphasize being API driven frequently, but I'm not sure the significant vertical alignment is clear unless you're already familiar with the project. I think finding a way to use color here would be helpful - maybe as a background for each "column."

Or maybe I'm being too hard on the diagrams. What would you change? Are there other areas you'd like to see drawn out or maybe this same area but through a different lens?


Chris McDonald: Owning Your Stack

Mozilla planet - di, 17/02/2015 - 07:23

A few months back I wrote about reinventing wheels. Going down that course has been interesting and I hope to continue reinventing parts of the personal cloud stack. Personal cloud meaning taking all of the services you have hosted elsewhere and pulling them in. This feeds into the IndieWeb movement as well.

A couple years ago, I deployed my first colocated server with my friends. I got a pretty monstrous setup compared to my needs, but I figured it’d pay for itself over time, and it has. One side effect of having all this space was that I could let my friends have slices of my server, and it was nice sharing those extra resources. Unfortunately, hosting my friends’ slices meant that doing anything to the root system or reorganizing the server was tedious or even off limits.

In owning my own services, I want to restructure my server. Also I want to have interesting routing between containers and keep all the containers down to the single process ideal. In order to move to this world I’ve had to ask my friends to give up their spots on my server. Everyone was really great about this thanking me for hosting for this long and such. I was worried people would complain and I’d have to be more forceful, but instead things were wonderful.

The next step I want to take after deploying my personal cloud will be to start replacing pieces, one by one, with my own custom code. The obvious first one will be the SMTP server, since I’ve already started implementing one in Rust. After that it may be something like my blog, or Redis, or a number of other parts of the cloud. The eventual goal is to have implemented a fair portion of all cloud services so that I can better understand them. I won’t be restricting myself to any one language, and I will be pushing for a container per process, with linking between containers to share services.

Overall, I hope to learn a bunch and have some fun in the process. I recently picked up the domain http://ownstack.club and hope to have something up on it in the near future!



Mark Côté: Pulse update

Mozilla planet - di, 17/02/2015 - 04:10

After languishing for a few years, Pulse got a burst of interest and development in 2014. Since I first heard of it, I’ve found the idea of a central message bus for the goings-on in Mozilla’s various systems rather intriguing, and I’m excited to have been able to grow it over the last year.

Pulse falls into that class of problem that is a result of, to borrow from a past Mozilla leader, our tendency to make our lives difficult, that is, to work in the open. Using RabbitMQ as a generic event stream is nothing special; Mozilla’s use of it as an open system is, I believe, completely unique.

Adapting a system intended for private networks into a public service always results in fascinating problems. Pulse has a decent permission-control system, but it’s not designed for self service. It is also very trusting of its users, who can easily overwhelm the system by just subscribing to streams and never consuming the messages.

The solution to both these problems was to design a management application: PulseGuardian. Via Persona, it handles account management, and it comes with a service that monitors Pulse’s queues. Since we presume users are not malicious, it sends a friendly warning when it notices a queue growing too large, but if ignored it will eventually kill the queue to save the system.
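The monitoring side of that idea fits in a few lines. This is a hypothetical sketch, not PulseGuardian’s actual code: the thresholds and names are invented, and the real service reads queue depths from RabbitMQ’s management interface.

```python
# Invented soft and hard limits on unconsumed messages per queue.
WARN_THRESHOLD = 2_000
DELETE_THRESHOLD = 20_000

def check_queue(name, depth, warn, delete):
    """Inspect one queue's depth and take the appropriate action."""
    if depth > DELETE_THRESHOLD:
        delete(name)               # kill the queue to save the system
        return 'deleted'
    if depth > WARN_THRESHOLD:
        warn(name, depth)          # friendly warning to the owner
        return 'warned'
    return 'ok'

log = []
warn = lambda name, depth: log.append(f'warned {name} at {depth} messages')
delete = lambda name: log.append(f'deleted {name}')

for name, depth in [('good-citizen', 150),
                    ('slow-consumer', 5_000),
                    ('abandoned', 50_000)]:
    print(name, '->', check_queue(name, depth, warn, delete))
```

The escalation from warning to deletion reflects the “presume users are not malicious” stance: a growing queue gets a nudge first, and only an ignored one gets killed.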

If you build it, they will come, or so says some movie I’ve never seen, but in this case it appears to be true. TaskCluster has moved whole-hog over to Pulse for its messaging needs, and the devs wrote a really nice web app for inspecting live messages. MozReview is using it for code-review bots and autolanding commits. Autophone is exploring its use for providing Try support to non-BuildBot-based testing frameworks.

Another step for Pulse beyond the prototype phase is a proper library. The existing mozillapulse Python library works decently, aside from some annoying problems, but it suffers from a lack of extensibility, and, I’m beginning to believe, should be based directly on a lower-level amqp or RabbitMQ-specific Python package and not the strange, overly generic kombu messaging library, in part because of the apparent lack of confirm channels in kombu. We’re looking into taking ideas from TaskCluster’s Pulse usage in bug 1133602.

Recently I presented the State of Pulse to the A-Team. I should do that as a general brownbag at some point, but, until then, you can look at the slides.
