
Mozilla Nederland – The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 21 hours 16 min ago

Erik Vold: Jetpack Pro Tip - Using JPM on Travis

Sat, 18/10/2014 - 02:00

First, enable Travis on your repo.

Then, add the following .travis.yml file to the repo:

This downloads Firefox 36.0a1 (a version that, at the moment, needs to be updated manually), installs jpm, and then runs jpm test -v on your JPM-based Firefox add-on.

Examples: Add-ons, Third Party NPM Modules
Categories: Mozilla-nl planet

Asa Dotzler: Private Browsing Coming Soon to Firefox OS

Fri, 17/10/2014 - 20:38

This week, the team landed code changes for Bug 832700 – Add private browsing to Firefox OS. This was the back end implementation in Gecko and we still have to determine how this will surface in the front end. That work is tracked at Bug 1081731 - Add private browsing to Firefox OS in Gaia.

We also got a couple of nice fixes to one of my favorite new features, the still experimental “app grouping” feature for the Firefox OS home screen. The fixes for Bug 1082627 and Bug 1082629 ensure that the groups align properly and have the right sizes. You can enable this experimental feature in settings -> developer -> homescreen -> app grouping.

There’s lots going on every day in Firefox OS development. I’ll be keeping y’all up to date here and on Twitter.

Categories: Mozilla-nl planet

Frédéric Harper: I’m leaving Mozilla, looking for a new challenge

Fri, 17/10/2014 - 20:20

Copyright: Eva Blue https://flic.kr/p/nDrPAL

January 1st will be my last day as a Senior Technical Evangelist at Mozilla. I truly believe in Mozilla’s mission, and I’ll continue to share my passion for the open web, but this time as a volunteer. From now on, I’ll be searching for a new challenge.

I want to thank my rock star team for everything: Havi Hoffman, Jason Weathersby, Robert Nyman, and Christian Heilmann. I also want to thank Mark Coggins for his strong leadership as my manager. It was a real pleasure to work with you all! Last, but not least, thanks to all Mozillians, and continue the good work: let’s keep in touch!

What’s next

I’m now reflecting on what will be next for me, and I’m open to discussing all opportunities. With ten years as a software developer and four years as a technical evangelist in my backpack, here are some ideas I have in mind, in no particular order:

  • Principal Technical Evangelist for a product/service/technology I believe in;
  • General manager of a startup accelerator program;
  • CTO of a startup.

I have no issue with travelling extensively: I was on the road one-third of last year – speaking in more than twelve countries. Depending on the offer and the country, relocating may be an option too. I like to share my passion on stage – more than 100 talks in the last three years. Also, my book on personal branding for developers will be published by Apress before the end of the year.

I like technology, but I’m not a developer anymore, and I’m not looking to go back to a developer role. I may also be open to a non-technical role, but it needs to target another of my passions, like startups. For the last five years, I’ve been working from home, with no set schedule, just end goals to reach. I can’t deal with micro-management, so I need some freedom to be effective. No matter what comes next, it needs to be an interesting challenge, as I have a serial entrepreneur profile: I like to take ideas and make them a reality.

You can find more about my experience on my LinkedIn profile. If you want to grab a coffee or discuss any opportunities, send me an email.


--
I’m leaving Mozilla, looking for a new challenge is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. I’m joining Mozilla It was a bold move to leave Microsoft without knowing...
  2. I’m leaving Microsoft, looking for a new opportunity For two years, and a half, I was a proud...
  3. One year at Mozilla On July 15th last year, I was starting a new...
Categories: Mozilla-nl planet

David Boswell: Mozillians of the world, unite!

Fri, 17/10/2014 - 16:48

When I got involved with Mozilla in 1999, it was clear that something big was going on. The mozilla.org site had a distinctly “Workers of the world, unite!” feel to it. It caught my attention and made me interested to find out more.

The language on the site had the same revolutionary feel as the design. One of the pages talked about Why Mozilla Matters and it was an impassioned rallying cry for people to get involved with the audacious thing Mozilla was trying to do.

“The mozilla.org project is terribly important for the state of open-source software. [...] And it’s going to be an uphill battle. [...] A successful mozilla.org project could be the lever that moves a dozen previously immobile stones. [...] Maximize the opportunity here or you’ll be kicking yourself for years to come.”

With some minor tweaks, these words are still true today. One change: we call the project just Mozilla now instead of mozilla.org. Our mission today is also broader than creating software: we also educate people about the web, advocate to keep the Internet open, and more.

Photo of a Maker Party in India by Kaustav Das Modak

Another change is that our competition has adopted many of the tactics of working in the open that we pioneered. Google, Apple and Microsoft all have their own open source communities today. So how can we compete with companies that are bigger than us and are borrowing our playbook?

We do something radical and audacious. We build a new playbook. We become pioneers for 21st century participation. We tap into the passion, skills and expertise of people around the world better than anyone else. We build the community that will give Mozilla the long-term impact that Mitchell spoke about at the Summit.

Mozilla just launched the Open Standard site and one of the first articles posted is “Struggle For An Open Internet Grows”. This shows how the challenges of today are not the same challenges we faced 16 years ago, so we need to do new things in new ways to advance our mission.

If the open Internet is blocked or shut down in places, let’s build communities on the ground that turn it back on. If laws threaten the web, let’s make that a public conversation. If we need to innovate to be relevant in the coming Internet of Things, let’s do that.

Building the community that can do this is work we need to start on. What doesn’t serve our community any more? What do we need to do that we aren’t? What works that needs to get scaled up? Mozillians of the world, unite and help answer these questions.


Categories: Mozilla-nl planet

Daniel Stenberg: curl is no POODLE

Fri, 17/10/2014 - 10:28

Once again the internet flooded over with reports and alerts about a vulnerability using a funny name: POODLE. If you have even the slightest interest in this sort of stuff, you’ve already grown tired and bored of everything that’s been written about it, so why on earth do I have to pile on and add to the pain?

This is my way of explaining how POODLE affects or doesn’t affect curl, libcurl and the huge amount of existing applications using libcurl.

Is my application using HTTPS with libcurl or curl vulnerable to POODLE?

No. POODLE really is a browser-attack.

Motivation

The POODLE attack is a combination of several separate pieces that when combined allow attackers to exploit it. The individual pieces are not enough stand-alone.

SSLv3 is getting a lot of heat now, since POODLE must be able to downgrade a connection from TLS to SSLv3 in order to work. The downgrade happens in a fairly crude way – and in libcurl, only builds that use NSS as the TLS backend support this way of downgrading the protocol level.

Then, if an attacker manages to downgrade to SSLv3 (both the client and the server must allow this) and gets to use the sensitive block cipher of that protocol, it must maintain a connection to the server and retry many similar requests in order to work out details of the request – to figure out secrets it shouldn’t be able to. This would typically be done using JavaScript in a browser, and really only HTTPS allows this, so no other SSL-using protocol can be exploited like this.

For the typical curl user or libcurl user, there’s A) no JavaScript and B) the application already knows the request it is making and normally doesn’t inject random stuff from third-party sources that could be allowed to steal secrets. There’s really no room for any outsider here to steal secrets or cookies or whatever.

How will curl change

There’s no immediate need to do anything as curl and libcurl are not vulnerable to POODLE.

Still, SSLv3 is long overdue for retirement and is not really a modern protocol (TLS 1.0, its successor, had its RFC published in 1999), so in order to really avoid the risk that it will be possible to exploit this protocol one way or another, now or later, using curl/libcurl, we will disable SSLv3 by default in the next curl release. For all TLS backends.

Why? Just to be extra super cautious, and because this attack helped us remember that SSLv3 is old and should be allowed to die.

If possible, explicitly requesting SSLv3 will still work, so that users can keep talking to legacy systems in dire need of an upgrade but placed in corners of the world that every sensible human has long since forgotten or just ignored.
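
For applications that drive libcurl directly, the protocol choice goes through the CURLOPT_SSLVERSION option. Below is a minimal sketch of what explicitly selecting a protocol version looks like from Python via pycurl; the URL is a placeholder, and whether an explicit SSLv3 request still succeeds depends on the curl version and TLS backend in use:

import pycurl
from io import BytesIO

buf = BytesIO()
handle = pycurl.Curl()
handle.setopt(pycurl.URL, "https://legacy.example.com/")  # placeholder host
# Require a TLS 1.x handshake (CURLOPT_SSLVERSION = CURL_SSLVERSION_TLSv1).
handle.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_TLSv1)
# Only for a legacy peer that genuinely cannot speak TLS, and only if the
# local curl build still permits it:
# handle.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_SSLv3)
handle.setopt(pycurl.WRITEFUNCTION, buf.write)
handle.perform()
handle.close()

On the command line, the corresponding switches are --tlsv1 and --sslv3 (-3).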

In-depth explanations of POODLE

I especially like the ones provided by PolarSSL and GnuTLS, possibly due to their clear “distance” from browsers.

Categories: Mozilla-nl planet

Daniel Stenberg: curl and POODLE

Fri, 17/10/2014 - 09:29

Once again the internet flooded over with reports and alerts about a vulnerability using a funny name.

Categories: Mozilla-nl planet

Justin Dolske: Sans Flash

Fri, 17/10/2014 - 03:20

I upgraded to a new MacBook about a week ago, and thought I’d use the opportunity to try living without Flash for a while. I had previously done this two years ago (for my last laptop upgrade), and I lasted about a week before breaking down and installing it. In part because I ran into too many sites that needed Flash, but the main reason was that the adoption and experience of HTML5 video wasn’t great. In particular, the HTML5 mode on YouTube was awful — videos often stalled or froze. (I suspect that was an issue on YouTube’s end, but the exact cause didn’t really matter.) So now that the Web has had a few additional years to shift away from Flash, I wanted to see if the experience was any better.

The short answer is that I’m pleased (with a few caveats). The most common Flash usage for me had been the major video sites (YouTube and Vimeo), and they now have HTML5 video support that’s good. YouTube previously had issues where they still required the use of Flash for some popular videos (for ads?), but either they stopped or AdBlock avoids the problem.

I was previously using Flash in click-to-play mode, which I found tedious. On the whole, the experience is better now — instead of clicking a permission prompt, I find myself just happy to not be bothered at all. Most of the random Flash-only videos I encountered (generally news sites) were not worth the time anyway, and on the rare occasion I do want to see one it’s easy to find an equivalent on YouTube. I’m also pleased to have run across very few Flash-only sites this time around. I suspect we can thank the rise of mobile (thanks iPad!) for helping push that shift.

There are a few problem sites, though, which so far I’m just living with.

Ironically, the first site I needed Flash for was our own Air Mozilla. We originally tried HTML5, but streaming at scale is (was?) a hard problem, so we needed a solution that worked. Which meant Flash. It’s unfortunate, but that’s Mozilla pragmatism. In the meantime, I just cheat and use Chrome (!) which comes with a private copy of Flash. Facebook (and specifically the videos people post/share) were the next big thing I noticed, but… I can honestly live without that too. Sorry if I didn’t watch your recent funny video.

I will readily admit that my Web usage is probably atypical. I rarely play online Flash games, which are probably the use case closest to video. And I’m willing to put up with at least a little bit of pain to avoid Flash, which isn’t something it’s fair to expect of most users.

But so far, so good!

[Coincidental aside: Last month I removed the Plugin Finder Service from Firefox. So now Firefox won't even offer to install a plugin (like Flash) that you don't have installed.]


Categories: Mozilla-nl planet

Asa Dotzler: Firefox OS 2.0 Pre-release for Flame

Fri, 17/10/2014 - 00:15

About 4,000 of y’all have a Flame Firefox OS reference phone. This is the developer phone for Firefox OS. If you’re writing apps or contributing directly to the open source Firefox OS project, Flame is the device you should have.

The Flame shipped with Firefox OS 1.3 and we’re getting close to the first major update for the device, Firefox OS 2.0. This will be a significant update with lots of new features and APIs for app developers and for Firefox OS developers. I don’t have a date to share with y’all yet, but it should be days and not weeks.

If you’re like me, you cannot wait to see the new stuff. With the Flame reference phone, you don’t have to wait. You can head over to MDN today and get a 2.0 pre-release base image, give that a whirl, and report any problems to Bugzilla. You can even flash the latest 2.1 and 2.2 nightly builds to see even further into the future.

If you don’t have a Flame yet, and you’re planning on contributing testing or code to Firefox OS or on writing apps for Firefox OS, I encourage you to get one soon. We’re going to be wrapping up sales in about 6 weeks.

Categories: Mozilla-nl planet

Asa Dotzler: I’m Back!

Thu, 16/10/2014 - 23:34

PROTIP: Don’t erase the Android phone with your blog’s two-factor authentication setup to see if you can get Firefox OS running on it unless you are *sure* you have printed out your two-factor back-up codes. Sort of thinking you probably printed them out is not the same thing as being sure :-)

Thank you to fellow Tennessean, long-time Mozillian, and WordPress employee Daryl Houston for helping me get my blog back.

Categories: Mozilla-nl planet

Daniel Stenberg: FOSS them students

Thu, 16/10/2014 - 23:01

On October 16th, I visited DSV at Stockholm University where I had the pleasure of holding a talk and discussion with students (and a few teachers) under the topic Contribute to Open Source. Around 30 persons attended.

Here are the slides I used; as usual they may not be perfectly self-explanatory without the talk, but there was no recording made and I talked in Swedish anyway…

Contribute to Open Source from Daniel Stenberg
Categories: Mozilla-nl planet

Julien Vehent: Mitigating Poodle SSLv3 vulnerability on a Go server

Thu, 16/10/2014 - 22:16

If you run a Go server that supports SSL/TLS, you should update your configuration to disable SSLv3 today. The sample code below sets the minimal accepted version to TLSv1, and reorganizes the default ciphersuite to match Mozilla's Server Side TLS guidelines.

Thank you to @jrconlin for the code cleanup!

package main

import (
    "crypto/rand"
    "crypto/tls"
    "fmt"
)

func main() {
    certificate, err := tls.LoadX509KeyPair("server.pem", "server.key")
    if err != nil {
        panic(err)
    }
    config := tls.Config{
        Certificates:             []tls.Certificate{certificate},
        MinVersion:               tls.VersionTLS10,
        PreferServerCipherSuites: true,
        CipherSuites: []uint16{
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
            tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA},
    }
    config.Rand = rand.Reader

    // tls.Listen already returns a TLS listener, so the connections it accepts
    // can be used directly; wrapping it again with tls.NewListener would layer
    // TLS on top of TLS and confuse clients.
    netlistener, err := tls.Listen("tcp", "127.0.0.1:50443", &config)
    if err != nil {
        panic(err)
    }
    fmt.Println("I am listening...")
    for {
        newconn, err := netlistener.Accept()
        if err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Printf("Got a new connection from %s. Say Hi!\n", newconn.RemoteAddr())
        newconn.Write([]byte("ohai"))
        newconn.Close()
    }
}

Run the server above with $ go run tls_server.go and test the output with cipherscan:

$ ./cipherscan 127.0.0.1:50443
........
Target: 127.0.0.1:50443

prio  ciphersuite                  protocols              pfs_keysize
1     ECDHE-RSA-AES128-GCM-SHA256  TLSv1.2                ECDH,P-256,256bits
2     ECDHE-RSA-AES128-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
3     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
4     AES128-SHA                   TLSv1,TLSv1.1,TLSv1.2
5     AES256-SHA                   TLSv1,TLSv1.1,TLSv1.2
6     ECDHE-RSA-DES-CBC3-SHA       TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
7     DES-CBC3-SHA                 TLSv1,TLSv1.1,TLSv1.2

Certificate: UNTRUSTED, 2048 bit, sha1WithRSAEncryption signature
TLS ticket lifetime hint: None
OCSP stapling: not supported
Server side cipher ordering
Categories: Mozilla-nl planet

Gregory Szorc: The Rabbit Hole of Using Docker in Automated Tests

Thu, 16/10/2014 - 15:45

Warning: This post is long and rambling. There is marginal value in reading beyond the first few paragraphs unless you care about Docker.

I recently wrote about how Mozilla tests version control. In this post, I want to talk about the part of that effort that consumed the most time: adding Docker support to the test harness.

Introducing the Problem and Desired End State

Running Docker containers inside tests just seems like an obvious thing you'd want to do. I mean, wouldn't it be cool if your tests could spin up MySQL, Redis, Cassandra, Nginx, etc inside Docker containers and test things against actual instances of the things running in your data centers? Of course it would! If you ask me, this approach beats mocking because many questions around accuracy of the mocked interface are removed. Furthermore, you can run all tests locally, while on a plane: no data center or staging environment required. How cool is that! And, containers are all isolated so there's no need to pollute your system with extra packages and system services. Seems like wins all around.

When Mozilla started adding customizations to the Review Board code review software in preparation for deploying it at Mozilla as a replacement for Bugzilla's Splinter, it quickly became apparent that we had a significant testing challenge ahead of us. We weren't just standing up Review Board and telling people to use it; we were integrating user authentication with Bugzilla, having Review Board update Bugzilla after key events, and were driving the initiation of code review in Review Board by pushing code to a Mercurial server. That's 3 user-visible services all communicating with each other to expose a unified workflow. It's the kind of thing testing nightmares are made of.

During my early involvement with the project, I recognized the challenge ahead and was quick to insist that we write automated tests for as much as possible. I insisted that all the code (there are multiple extensions to Review Board, a Mercurial hook, and a Mercurial extension) live under one common repository and share testing. That way we could tinker with all the parts easily and test them in concert without having to worry about version sync. We moved all the code to the version-control-tools repository and Review Board was the driving force behind improvements to the test harness in that repository. We had Mercurial .t tests starting Django dev servers hosting Review Board running from per-test SQLite databases and all was nice. Pretty much every scenario involving the interaction between Mercurial and Review Board was tested. If you cared about just these components, life was happy.

A large piece of the integration story was lacking in this testing world: Bugzilla. We had somewhat complex code for having Review Board and Bugzilla talk to each other but no tests for it because nobody had yet hooked Bugzilla up to the tests. As my responsibilities in the project expanded from covering just the Mercurial and Review Board interaction to Bugzilla as well, I again looked at the situation and thought there's a lot of complex interaction here and alpha testing has revealed the presence of many bugs: we need a better testing story. So, I set out to integrate Bugzilla into the test harness.

My goals were for Review Board tests to be able to make requests against a Bugzilla instance configured just like bugzilla.mozilla.org, to allow tests to execute concurrently (don't make developers wait on machines), for tests to run as quickly as possible, to run tests in an environment as similar to production as possible, and to be able to run tests from a plane or train or anywhere without internet connectivity. I was unwilling to budge on these core testing requirements because they represent what's best from test accuracy and developer productivity standpoints: you want your tests to be representative of the real world and you want to enable people to hack on this service anywhere, anytime, and not be held back by tests that take too long to execute. Move fast and don't break things.

Before I go on, it's time for a quick aside on tolerable waiting times. Throughout this article I talk about minimizing the run time of tests. This may sound like premature optimization. I argue it isn't, at least not if you are optimizing for developer productivity. There is a fair bit of academic research in this area. A study on tolerable waiting time: how long are Web users willing to wait gets cited a lot. It says 2 seconds for web content. If you read a few paragraphs in, it references other literature. They disagree on specific thresholds, but one thing is common: the thresholds are typically low - just a few seconds. The latencies I deal with are all longer than what research says leads to badness. When given a choice, I want to optimize workflows for what humans are conditioned to tolerate. If I can't do that, I've failed and the software will be ineffective.

The architecture of Bugzilla created some challenges and eliminated some implementation possibilities. First, I wasn't using just any Bugzilla: I was using Mozilla's branch of Bugzilla, the one that powers bugzilla.mozilla.org. Let's call it BMO. I could try hosting it from local SQLite files and running a local, Perl-based HTTP server (Bugzilla is written in Perl). But my experience with Perl and the takeaways from talking to the BMO admins were that pain would likely be involved. Plus, this would be a departure from test accuracy. So, I would be using MySQL, Apache HTTPd, and mod_perl, just like BMO uses them in production.

Running Apache and MySQL is always a... fun endeavor. It wasn't a strict requirement, but I also highly preferred that the tests didn't pollute the system they ran on. In other words, having tests connect to an already-running MySQL or Apache server felt like the wrong solution. That's just one more thing people must set up and run locally to run the tests. That's just one more thing that could differ from production and cause bad testing results. It felt like a dangerous approach. Plus, there's the requirement to run things concurrently. Could I have multiple tests talking to the same MySQL server concurrently? They'd have to use separate databases so they don't conflict. That's a possibility. Honestly, I didn't entertain the thought of running Apache and MySQL manually for too long. I knew about this thing called Docker and that it theoretically fit my use case perfectly: construct building blocks for your application and then dynamically hook things up. Perfect. I could build Docker containers for all the required services and have each test start a new, independent set of containers for just that test.

So, I set out integrating Docker into the version-control-tools test harness. Specifically, my goal was to enable the running of independent BMO instances during individual tests. It sounded simple enough.

What I didn't know was that integrating a Dockerized BMO into the test harness would take the better part of 2 weeks. And it's still not up to my standards. This post is the story about the trials and tribulations I encountered along the way. I hope it serves as a warning and potentially a guide for others attempting similar feats. If any Docker developers are reading, I hope it gives you ideas on how to improve Docker.

Running Bugzilla inside Docker

First things first: to run BMO inside Docker I needed to make Docker containers for BMO. Fortunately, David Lawrence has prior art here. I really just wanted to take that code, dump it into version-control-tools and call it a day. In hindsight, I probably should have done that. Instead, armed with the knowledge of the Docker best practice of one container per service and David Lawrence's similar wishes to make his code conform to that ideal, I decided to spend some time to fix David's code so that MySQL and Apache were in separate containers, not part of a single container running supervisord. Easy enough, right?

It was relatively easy extracting the MySQL and Apache parts of BMO into separate containers. For MySQL, I started with the official MySQL container from the Docker library and added a custom my.cnf. Simple enough. For Apache, I just copied everything from David's code that wasn't MySQL. I was able to manually hook the containers together using the Docker CLI. It sort of just worked. I was optimistic this project would only take a few hours.

A garbage collection bug in Docker

My first speed bump came as I was iterating on Dockerfiles. All of a sudden I get an error from Docker that it is out of space. Wat? I look at docker images and don't see anything too obvious eating up space. What could be going on? At this point, I'm using boot2docker to host Docker. boot2docker is this nifty tool that allows Windows and OS X users to easily run Docker (Docker requires a Linux host). boot2docker spins up a Linux virtual machine running Docker and tells you how to point your local docker CLI interface at that VM. So, when Docker complains it is out of space, I knew immediately that the VM must be low on space. I SSH into it, run df, and sure enough, the VM is nearly out of space. But I looked at docker images -a and confirmed there's not enough data to fill the disk. What's going on? I can't find the issue right now, but it turns out there is a bug in Docker! When running Docker on aufs filesystems (like boot2docker does), Docker does not always remove data volume containers when deleting a container. It turns out that the MySQL containers from the official Docker library were creating a data-only container to hold persistent MySQL data that outlives the container itself. These containers are apparently light magic. They are containers that are attached to other containers, but they don't really show up in the Docker interfaces. When you delete the host container, these containers are supposed to be garbage collected. Except on aufs, they aren't. My MySQL containers were creating 1+ GB InnoDB data files on start and the associated data containers were sitting around after container deletion, effectively leaking 1+ GB every time I created a MySQL container, quickly filling the boot2docker disk. Derp.

I worked around this problem by forking the official MySQL container. I didn't need persistent MySQL data (the containers only need to live for one invocation - for the lifetime of a single test), so I couldn't care less about persisted data volumes. So, I changed the MySQL container to hold its data locally, not in a data volume container. The solution was simple enough. But it took me a while to identify the problem. Here I was seeing Docker do something extremely stupid. Surely my understanding of Docker was wrong and I was doing something stupid to cause it to leak data. I spent hours digging through the documentation to make sure I was doing things exactly as recommended. It wasn't until I started an Ubuntu VM and tried the same thing there that I realized this looked like a bug in boot2docker. A few Google searches later led me to a comment hiding at the bottom of an existing GitHub issue that pins aufs as the culprit. And here I thought Docker reached 1.0 and wouldn't have bad bugs like this. I certainly wouldn't expect boot2docker to be shipping a VM with a sub-par storage driver (shouldn't it be using devicemapper or btrfs instead?). Whatever.

Wrangling with Mozilla's Branch of Bugzilla

At this point, I've got basic Docker containers for MySQL and Apache+mod_perl+Bugzilla being created. Now, I needed to convert from vanilla Bugzilla to BMO. Should be straightforward. Just change the Git remote URL and branch to check out. I did this and all of a sudden my image build started encountering errors! It turns out that the BMO code base doesn't work on a fresh database! Fortunately, this is a known issue and I've worked around it previously. When I tackled it a few months ago, I spent a handful of hours dissecting this problem. It wasn't pretty. But this time I knew what to do. I even had a Puppet manifest for installing BMO on a fresh machine. So, I just needed to translate that Puppet config into Dockerfile commands. No big deal, right? Well, when I did that Puppet config a few months ago, I based it on Ubuntu because I'm more familiar with Debian-based distros and figured Ubuntu would be the easiest since it tends to have the largest package diversity. Unfortunately, David's Docker work is based on Fedora. So, I spent some time converting the Dockerfile to Ubuntu rather than trying to port things to Fedora. Arguably the wrong decision since Mozilla operates the Red Hat flavor of Linux distributions in production. But I was willing to trade accuracy for time here, having lost time dealing with the aufs bug.

Unfortunately, I under-estimated how long it would take to port the image to Ubuntu. It didn't take so long from a code change perspective. Instead, most of the time was spent waiting for Docker to run the commands to build the image. In the final version, Apt is downloading and installing over 250 packages. And Bugzilla's bootstrap process installs dozens of packages from CPAN. Every time I made a small change, I invalidated Docker's image building cache, causing extreme delays while waiting for Apt and CPAN to do their thing. This experience partially contributed to my displeasure with how Docker currently handles image creation. If Docker images were composed of pre-built pieces instead of stacked commands, my cache hit rate would have been much higher and I would have converted the image in no time. But no, that's not how things work. So I lost numerous hours through this 2 week process waiting for Docker images to perform operations I've already done elsewhere dozens of times before.

Docker Container Orchestration

After porting the Bugzilla image to Ubuntu and getting BMO to bootstrap in a manually managed container (up to this point I'm using the docker CLI to create images, start containers, etc), it was time to automate the process so that tests could run the containers. At this time, I started looking for tools that performed multiple container orchestration. I had multiple containers that needed to be treated as a single logical unit, so I figured I'd use an existing tool to solve this problem for me. Don't reinvent the wheel unless you have to, right? I discovered Fig, which seemed to fit the bill. I read that it is being integrated into Docker itself, so it must be best of breed. Even if it weren't, its future seems more certain than that of other tools. So, I stopped my tools search and used Fig without much consideration for other tools.

Lack of a useful feature in Fig

I quickly whipped up a fig.yml and figured it would just work. Nope! Starting the containers from scratch using Fig resulted in an error. I wasn't sure what the error was at first because Fig didn't tell me. After some investigation, I realized that my bmoweb container (the container holding Apache + BMO code) was failing in its entrypoint command (that's a command that runs when the container starts up, but not the primary command a container runs - that's a bit confusing I know - read the docs). I added some debug statements and quickly realized that Bugzilla was erroring connecting to MySQL. Strange, I thought. Fig is essentially a DSL around manual docker commands, so I checked everything by typing everything into the shell. No error. Again on a new set of containers. No error. I thought maybe my environment variable handling was wrong - that the dynamically allocated IP address and port number of the linked MySQL container being passed to the bmoweb container weren't getting honored. I added some logging to disprove that theory. The wheels inside my brain spun for a little bit. And, aided by some real-time logging, I realized I was dealing with a race condition: Fig was starting the MySQL and bmoweb containers concurrently and bmoweb was attempting to access the MySQL server before MySQL had fully initialized and started listening on its TCP port! That made sense. And I think it's a reasonable optimization for Fig to start containers concurrently to speed up start time. But surely a tool that orchestrates different containers has considered the problem of dependencies and has a mechanism to declare them to prevent these race conditions. I check the online docs and there's nothing to be found. A red panda weeps. So, I change the bmoweb entrypoint script to wait until it can open a TCP socket to MySQL before actually using MySQL and sure enough, the race condition goes away and the bmoweb container starts just fine!
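
The workaround boils down to polling the linked MySQL endpoint until it accepts connections before letting the rest of the entrypoint proceed. A rough sketch of that kind of wait loop follows; wait_for_tcp is a hypothetical helper, the environment variable name in the comment follows Docker's container-linking convention, and the timeout is arbitrary:

import socket
import time

def wait_for_tcp(host, port, timeout=60):
    """Block until a TCP connection to (host, port) succeeds or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=1).close()
            return
        except (socket.error, socket.timeout):
            time.sleep(0.1)
    raise RuntimeError("gave up waiting for %s:%d" % (host, port))

# In the bmoweb entrypoint, before touching the database:
# wait_for_tcp(os.environ["MYSQL_PORT_3306_TCP_ADDR"], 3306)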

OK, I'm real close now. I can feel it.

Bootstrapping Bugzilla

I start playing around with manually starting and stopping containers as part of a toy test. The good news is things appear to work. The bad news is it is extremely slow. It didn't take long for me to realize that the reason for the slowness is Bugzilla's bootstrap on first run. Bugzilla, like many complex applications, has a first run step that sets up database schema, writes out some files on the filesystem, inserts some skeleton data in the database, creates an admin user, etc. Much to my dismay this was taking a long time. Something on the order of 25 to 30 seconds. And that's on a Haswell with plenty of RAM and an SSD. Oy. The way things are currently implemented would result in a 25 to 30 second delay when running every test. Change 1 line and wait say 25s for any kind of output. Are you kidding me?! Unacceptable. It violated my core goal of having tests that are quick to run. Again, humans should not have to wait on machines.

I think about this problem for like half a second and the solution is obvious: take a snapshot of the bootstrapped images and start instances of that snapshot from tests. In other words, you perform the common bootstrap operations once and only once. And, you can probably do that outside the scope of running tests so that the same snapshot can be used across multiple invocations of the test harness. Sounds simple! To the Docker uninitiated, it sounds like the solution would be to move the BMO bootstrapping into the Dockerfile code so it gets executed at image creation time. Yes, that would be ideal. Unfortunately, when building images via Dockerfile, you can't tell Docker to link that image to another container. Without container linking, you can't have MySQL. Without MySQL, you can't do BMO bootstrap. So, BMO bootstrap must be done during container startup. And in Docker land, that means putting it as part of your entrypoint script (where it was conveniently already located for this reason).

Talking Directly to the Docker API

Of course, the tools that I found that help with Docker image building and container orchestration don't seem to have an answer for this "create a snapshot of a bootstrapped container" problem. I'm sure someone has solved this problem. But in my limited searching, I couldn't find anything. And, I figured the problem would be easy enough to solve manually, so I set about creating a script to do it. I'm not a huge fan of shell script for automation. It's hard to debug and simple things can be hard and hard things can be harder. Plus, why solve problems such as parsing output for relevant data when you can talk to an API directly and get native types back? Since the existing test harness automation in version-control-tools was written in Python, I naturally decided to write some Python to create the bootstrapped images. So, I do a PyPI search and discover docker-py, a Python client library to the Docker Remote API, an HTTP API that the Docker daemon runs and is what the docker CLI tool itself uses to interface with Docker. Good, now I have access to the full power of Docker and am not limited by what the docker CLI may not expose. So, I spent some time looking at source and the Docker Remote API documentation to get an understanding of my new abilities and what I'd need to do. I was pleasantly surprised to learn that the docker CLI is pretty similar to the Remote API and the Python API was similar as well, so the learning curve was pretty shallow. Yay for catching a break!

Confusion Over Container Stopping

I wrote some Python for building the BMO images, launching the containers, committing the result, and saving state to disk (so it could be consulted later - preventing a bootstrap by subsequent consumers). This was pleasantly smooth at first, but I encountered some bumps along the way. First, I didn't have a complete grasp on the differences between stop and kill. I was seeing some weird behavior by MySQL on startup and didn't know why. Turns out I was forcefully killing the container after bootstrap via the kill API and this was sending a SIGKILL to MySQL, effectively causing unclean shutdown. After some documentation reading, I realized stop is the better API - it issues SIGTERM, waits for a grace period, then issues SIGKILL. Issuing SIGTERM made MySQL shut down gracefully and this issue stemming from my ignorance was resolved. (If anyone from Docker is reading, I think the help output for docker kill should mention the forcefulness of the command versus stop. Not all of us remember the relative forcefulness of the POSIX signals and having documentation reinforce their cryptic meaning could help people select the proper command.) A few lines of Python later and I was talking directly to the Docker Remote API, doing everything I needed to do to save (commit in Docker parlance) a bootstrapped BMO environment for re-use among multiple tests.
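
In docker-py terms, the distinction between the two calls and the snapshot step looks roughly like the sketch below. This is against the low-level client, which docker-py releases of that era named Client (newer releases call it APIClient); the container and repository names are placeholders:

from docker import Client  # named APIClient in newer docker-py releases

client = Client(base_url="unix://var/run/docker.sock")
container_id = "bmoweb_bootstrap"  # placeholder: the bootstrapping container

# kill() sends SIGKILL immediately (or another signal if requested), so MySQL
# never gets a chance to flush and close its InnoDB files - an unclean shutdown.
# client.kill(container_id)

# stop() sends SIGTERM, waits for a grace period, and only then resorts to
# SIGKILL, which lets mysqld shut down cleanly.
client.stop(container_id, timeout=30)

# With bootstrap finished and the container stopped, commit (snapshot) it so
# later test runs can start from the bootstrapped image directly.
image = client.commit(container_id, repository="bmoweb-bootstrapped")
print(image["Id"])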

It was pretty easy to hook the bootstrapped images up to a single test. Just load the bootstrapped image IDs from the config file and start new containers based on them. That's Docker 101 (except I was using Python to do everything).

Concurrent Execution Confuses Bugzilla

Now that I could start Dockerized BMO from a single test, it was time to make things work concurrently. I hooked Docker up to a few tests and launched them in parallel to see what would happen. The containers appeared to start just fine! Great anticipation on my part to design for concurrency from the beginning, I thought. It appeared I was nearly done. Victory was near. So, I changed some tests to actually interact with BMO running from Docker. (Up until this point I was merely starting containers, not doing anything with them.) Immediately I see errors. Cannot connect to Bugzilla http://... connection refused. Huh? It took a few moments, but I realized the experience I had with MySQL starting and this error were very similar. I changed my start BMO containers code to wait for the HTTP server's TCP socket to start accepting connections before returning control and sure enough, I was able to make HTTP requests against Bugzilla running in Docker! Woo!

Next step, make an authenticated query against Bugzilla running in Docker. HTTP request completes... with an internal server error. What?! I successfully browsed BMO from containers days before and was able to log in just fine - this shouldn't be happening. This problem took me ages to diagnose. I traced every step of provisioning and couldn't figure out what was going on. After resorting to print debugging in nearly every component, including Bugzilla's Perl code itself, I found the culprit: Bugzilla wasn't liking the dynamic nature of the MySQL and HTTP endpoints. You see, when you start Docker containers, network addresses change. The IP address assigned to the container is whatever is available to Docker at the time the container was started. Likewise the IP address and port number of linked services can change. So, your container entrypoint has to deal with this dynamic nature of addresses. For example, if you have a configuration file, you need to update that configuration file on every run with the proper network address info. My Bugzilla entrypoint script was doing this. Or so I thought. It turns out that Bugzilla's bootstrap process has multiple config files. There's an answers file that provides static answers to questions asked when running the bootstrap script (checksetup.pl). checksetup.pl will produce a localconfig file (actually a Perl script) containing all that data. There's also a data/params file containing yet more configuration options. And, the way I was running bootstrap, checksetup.pl refused to update files with new values. I initially had the entrypoint script updating only the answers file and running checksetup.pl, thinking checksetup.pl would update localconfig if the answers change. Nope! checksetup.pl only appears to update localconfig if localconfig is missing a value. So, here my entrypoint script was, successfully calling checksetup.pl with the proper network values, which checksetup.pl was more than happy to use. But when I started the web application, it used the old values from localconfig and data/params and blew up. Derp. So, to have dynamic MySQL hosts and ports and a dynamic self-referential HTTP URL, I needed to manually update localconfig and data/params during the entrypoint script. The entrypoint script now rewrites Perl scripts during container load to reflect appropriate variables. Oy.

Resource Constraints

At some point I got working BMO containers running concurrently from multiple tests. This was a huge milestone. But it only revealed my next problem: resource constraints. The running containers were consuming gobs of memory and I couldn't run more than 2 or 3 tests concurrently before running out of memory. Before, I was able to run 8 tests concurrently no problem. Well crap, I just slowed down the test harness significantly by reducing concurrency. No bueno.

Some quick investigation revealed the culprit was MySQL and Apache being greedier than they needed to be. MySQL was consuming 1GB RSS on start. Apache was something like 350 MB. It had been a while since I ran a MySQL server, so I had to scour the net for settings to put MySQL on a diet. The results were not promising. I knew enough about MySQL to know that the answers I found had similar quality to comments on the php.net function documentation circa 2004 (it was not uncommon to see things like SQL injection in the MySQL pages back then - who knows, maybe that's still the case). Anyway, a little tuning later and I was able to get MySQL using a few hundred MB RAM and I reduced the Apache worker pool to something reasonable (maybe 2) to free up enough memory to be able to run tests with the desired concurrency again. If using Docker as part of testing ever takes off, I imagine there will be two flavors of every container: low memory and regular. I'm not running a production service here: I'll happily trade memory for high-end performance as long as it doesn't impact my tests too much.

Caching, Invalidating, and Garbage Collecting Bootstrapped Images

As part of iterating on making BMO bootstrap work, I encountered another problem: knowing when to perform a bootstrap. As mentioned earlier, bootstrap was slow: 25 to 30 seconds. While I had reduced the cost of bootstrap to at most once per test suite execution (as opposed to once per test), there was still the potential for a painful 25-30s delay when running tests. Unacceptable! Furthermore, when I changed how bootstrap worked, I needed a way to invalidate the previous bootstrapped image. Otherwise, we may use an outdated bootstrapped image that doesn't represent the environment it needs to and test execution would fail. How should I do this?

Docker has considered this problem and they have a solution: build context. When you do a docker build, Docker takes all the files from the directory containing the Dockerfile and makes them available to the environment doing the building. If you ADD one of these files in your Dockerfile, the image ID will change if the file changes, invalidating the cache used by Docker to build images. So, if I ADDed the scripts that perform BMO bootstrap to my Docker images, Docker would automagically invalidate the built images and force a bootstrap for me. Nice! Unfortunately, docker build doesn't allow you to add files outside of the current directory to the build context. Internet sleuthing reveals the solution here is to copy things to a temporary directory and run docker build from that. Seriously? Fortunately, I was using the Docker API directly via Python. And that API simply takes an archive of files. And since you can create archives dynamically inside Python using e.g. tarfile, it wasn't too difficult to build proper custom context archives that contained my extra data that could be used to invalidate bootstrapped images. I threw some simple ADD directives into my Dockerfiles and now I got bootstrapped image invalidation!
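
Building the context archive in memory is just the tarfile module plus docker-py's build() call with a file object. The sketch below assumes docker-py's custom_context support and uses hypothetical file paths:

import io
import tarfile

def build_context(dockerfile_path, extra_files):
    """Return an in-memory tar archive usable as a custom Docker build context.

    extra_files maps archive names to local paths. ADDing the bootstrap
    scripts this way makes their content part of Docker's cache key, so
    editing them invalidates the cached (bootstrapped) image.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(dockerfile_path, arcname="Dockerfile")
        for arcname, path in extra_files.items():
            tar.add(path, arcname=arcname)
    buf.seek(0)
    return buf

# context = build_context("testing/docker/bmoweb/Dockerfile",
#                         {"bootstrap.py": "testing/bootstrap.py"})
# for line in client.build(fileobj=context, custom_context=True,
#                          tag="bmoweb-base", stream=True):
#     print(line)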

To avoid having to perform bootstrap on every test run, I needed a mapping between the base images and the bootstrapped result. I ended up storing this in a simple JSON file. I realize now I could have queried Docker for images having the base image as its parent since there is supposed to be a 1:1 relationship between them. I may do this as a follow-up.

With the look-up table in place, ensuring bootstrapped images were current involved doing a couple docker builds, finding the bootstrapped images from those base images, and doing the bootstrap if necessary. If everything is up-to-date, docker build finishes quickly and we have less than 1s of delay. Very acceptable. If things aren't current, well, there's not much you can do there if accuracy is important. I was happy with where I was.

Once I started producing bootstrapped images every time the code impacting the generation of that image changed, I ran into a new problem: garbage collection. All those old bootstrapped images were piling up inside of Docker! I needed a way to prune them. Docker has support for associating a repository and a tag with images. Great, I thought, I'll just associate all images with a well-defined repository, leave the tag blank (because it isn't really relevant), and garbage collection will iterate over all images in to-be-pruned repositories and delete all but the most recent one. Of course, this simple solution did not work. As far as I can tell, Docker doesn't really let you have multiple untagged images. You can set a repository with no tag and Docker will automagically assign the latest tag to that image. But the second you create a new image in that repository, the original image loses that repository association. I'm not sure if this is by design or a bug, but it feels wrong to me. I want the ability to associate tags with images (and containers) so I can easily find all entities in a logical set. It seemed to me that repository facilitated that need (albeit with the restriction of only associating 1 identifier per image). My solution here was to assign type 1 UUIDs to the tag field for each image. This forced Docker to retain the repository association when new images were created. I chose type 1 UUIDs so I can later extract the time component embedded within and do time-based garbage collection e.g. delete all images created more than a week ago.
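
For reference, this is the property the type 1 UUID trick buys: the creation time can be recovered from the tag itself, so pruning by age needs no extra bookkeeping. A sketch with hypothetical helpers (the repository name and one-week cutoff are illustrative):

import time
import uuid

# 100-nanosecond intervals between the UUID epoch (1582-10-15) and the Unix epoch.
UUID_TICKS_TO_UNIX_EPOCH = 0x01B21DD213814000

def fresh_tag():
    """Generate a type 1 UUID to use as an image tag."""
    return str(uuid.uuid1())

def tag_age_seconds(tag):
    """Seconds elapsed since the type 1 UUID used as the tag was generated."""
    u = uuid.UUID(tag)
    created = (u.time - UUID_TICKS_TO_UNIX_EPOCH) / 1e7
    return time.time() - created

# Garbage collection sketch: drop bootstrapped images older than a week.
# for image in client.images(name="bmoweb-bootstrapped"):
#     repo, _, tag = image["RepoTags"][0].partition(":")
#     if tag_age_seconds(tag) > 7 * 86400:
#         client.remove_image(image["Id"])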

Making Things Work in Jenkins/Ubuntu

At about this point, I figured things were working well enough on my boot2docker machine that it was time to update the Jenkins virtual machine / Vagrant configuration to run Docker. So, I hacked up the provisioner to install the docker.io package and tried to run things. First, I had to update code that talks to Docker to know where Docker is in an Ubuntu VM. Before, I was keying things off DOCKER_HOST, which I guess is used by the docker CLI and boot2docker reminds you to set. Easy enough. When I finally got things talking to Docker, my scripts threw a cryptic error when talking to Docker. Huh? This worked in boot2docker! When in doubt, always check your package versions. Sure enough, Ubuntu was installing an old Docker version. I added the Docker Apt repo to the Vagrant provisioner and tried again. Bingo - working Docker in an Ubuntu VM!

Choice of storage engines

I started building the BMO Docker images and quickly noticed something: building images was horribly slow. Specifically, the part where new images are committed was taking seemingly forever. 5 to 8 seconds or something. Ugh. This wouldn't really bother me, except that due to subsequent issues I found myself changing images enough as part of debugging that image building latency became a huge time sink. I felt I was spending more time waiting for layers to commit than making progress. So, I decided to do something about it. I remembered glancing at an overview of storage options in Docker the week or two prior. I instinctively pinned the difference on different storage drivers between boot2docker and Ubuntu. Sure enough, boot2docker was using aufs and Ubuntu was using devicemapper. OK, now I identified a potential culprit. Time to test the theory. A few paragraphs into that blog post, I see a sorted list of storage driver priorities. I see aufs first, btrfs second, and devicemapper third. I know aufs has kernel inclusion issues (plus a nasty data leakage bug). I don't want that. devicemapper is slow. I figured the list is ordered for a reason and just attempted to use btrfs without really reading the article. Sure enough, btrfs is much faster at committing images than devicemapper. And, it isn't aufs. While images inside btrfs are building, I glance over the article and come to the conclusion that btrfs is in fact good enough for me.

So now I'm running Docker on btrfs on Ubuntu and Docker on aufs in boot2docker. Hopefully that will be the last noticeable difference between host environments. After all, Docker is supposed to abstract all this away, right? I wish.

The Mystery of Inconsistent State

It was here that I experienced the most baffling, mind bending puzzle yet. As I was trying to get things working on the Jenkins/Ubuntu VM - things that had already been proved out in boot2docker - I was running into inexplicable issues during creation of the bootstrapped BMO containers. It seemed that my bootstrapped containers were somehow missing data. It appeared as if bootstrap had completed but data written during bootstrap failed to write. You start the committed/bootstrapped image and bootstrap had obviously completed partially, but it appeared to have never finished. Same Docker version. Same images. Same build scripts. Only the host environment was different. Ummmm, Bueller?

This problem had me totally and completely flabbergasted. My brain turned to mush exhausting possibilities. My initial instinct was this was a filesystem buffering problem. Different storage driver (btrfs vs aufs) means different semantics in how data is flushed, right? I once again started littering code with print statements to record the presence or non-presence of files and content therein. MySQL wasn't seeing all its data, so I double and triple check I'm shutting down MySQL correctly. Who knows, maybe one of the options I used to trim the fat from MySQL removed some of the safety from writing data and unclean shutdown is causing MySQL to lose data?

While I was investigating this problem, I noticed an additional oddity: I was having trouble getting reliable debug output from running containers (docker logs -f). It seemed as if I was losing log events. I could tell from the state of a container that something happened, but I was seeing no evidence from docker logs -f that that thing actually happened. Weird! On a hunch, I threw some sys.stdout.flush() calls in my Python scripts, and sure enough my missing output started arriving! Pipe buffering strikes again. So, now we have dirty hacks in all the Python scripts related to Docker to unbuffer stdout to prevent data loss. Don't ask how much time was wasted tracking down bad theories due to stdout output being buffered.
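
The hack itself is nothing fancier than a flushing write helper along these lines (a trivial sketch; the function name is made up):

import sys

def log(message):
    """Write a line and flush immediately so docker logs -f sees it in real time."""
    sys.stdout.write(message + "\n")
    sys.stdout.flush()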

Getting back to the problem at hand, I still had Docker containers seemingly losing data. And it was only happening when Ubuntu/btrfs was the host environment for Docker. I eventually exhausted all leads in my "filesystem wasn't flushed" theory. At some point, I compared the logs of docker logs -f between boot2docker and Ubuntu and eventually noticed that the bmoweb container in Ubuntu wasn't printing as much. This wasn't obvious at first because the output from bootstrap on Ubuntu looked fine. Besides, the script that waits for bootstrap to complete waits for the Apache HTTP TCP socket to come alive before it gracefully stops the container and snapshots the bootstrapped result: bootstrap must be completing, ignore what docker logs -f says.

Eventually I hit an impasse and resort to context dumping everything on IRC. Ben Kero is around and he picks up on something almost immediately. He simply says ... systemd?. I knew almost instantly what he was referring to and knew the theory fit the facts. Do you know what I was doing wrong?

I still don't know what and quite frankly I don't care, but something in my Ubuntu host environment had a trigger on the TCP port the HTTP server would be listening on. Remember, I was detecting bootstrap completion by waiting until a TCP socket could be opened to the HTTP server. As soon as that connection was established, we stopped the containers gracefully and took a snapshot of the bootstrapped result. Except on Ubuntu something was accepting that socket open, giving a false positive to my wait code, and triggering early shutdown. Docker issued the signal to stop the container gracefully, but it wasn't finished bootstrapping yet, so it forcefully killed the container, resulting in bootstrap being in a remarkably-consistent-across-runs inconsistent state. Changing the code from wait on TCP socket to wait for valid HTTP response fixed the problem. And just for good measure, I changed the code waiting on the MySQL server to also try to establish an actual connection to the MySQL application layer, not merely a TCP socket.
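
The readiness check that finally worked waits for the application layer rather than for the TCP layer. A sketch of the idea (Python 3 urllib shown; the URL is whatever address and port the container was assigned):

import time
import urllib.error
import urllib.request

def wait_for_http(url, timeout=120):
    """Wait until url answers with an actual HTTP response, not merely an open socket.

    Something else accepting the TCP connection (as happened on the Ubuntu host)
    will not fool this check, because the HTTP layer has to answer.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5).close()
            return
        except urllib.error.HTTPError:
            return  # the web application answered, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)
    raise RuntimeError("no HTTP response from %s within %d seconds" % (url, timeout))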

After solving this mystery, I thought there's no way I could be so blind as to not see the container receiving the stop signal during bootstrap. So, I changed things back to prove to myself I wasn't on crack. No matter how hard I tried, I could not get the logs to show that the signal was received. I think what was happening was that my script was starting the container and issuing the graceful stop so quickly that it wasn't captured by log clients. Sure enough, adding some sleeps in the proper places made it possible to catch the events in action. In hindsight, I suppose I could have used docker events to shed some light on this as well. If Docker persisted logs/output from containers and allowed me to scroll back in time, I think this would have saved me. Although, there's a chance my entrypoint script wouldn't have informed me about the received signal. Perhaps checksetup.pl was ignoring it? What I really need is a unified event + log stream from Docker containers so I can debug exactly what's going on.

Everything is Working, Right?

After solving the inconsistent bootstrap state problem, things were looking pretty good. I had BMO bootstrapping and running from tests on both boot2docker and Ubuntu hosts. Tests were seeing completely independent environments and there were no race conditions. I was nearly done.

So, I started porting more and more tests to Docker. I started running tests more and more. Things worked. Most of the time. But I'm still frustrated by periodic apparent bugs in Docker. For example, our containers periodically fail to shut down. Our images periodically fail to delete.

During container shutdown and deletion at the end of tests, we periodically see error messages like the following:

docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot destroy container f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: Driver aufs failed to remove root filesystem f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: rename /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255 /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255-removing: device or resource busy") 500 Server Error: Internal Server Error ("Cannot destroy container 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: Unable to remove filesystem for 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: remove /mnt/sda1/var/lib/docker/containers/7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: directory not empty")

Due to the way we're executing tests (Mercurial's .t test format), this causes the test's output to change and the test to fail. Sadness.

I think these errors are limited to boot2docker/aufs. But we haven't executed enough test runs in the Jenkins/Ubuntu/btrfs VM yet to be sure. This definitely smells like a bug in Docker and it is very annoying.
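
One possible mitigation (a sketch only, under the assumption that the failure is transient; this is not what the test harness currently does) is to retry the removal a few times before giving up:

import time

from docker import Client, errors  # docker-py's pre-2.0 API

client = Client(base_url='unix://var/run/docker.sock')

def remove_container_with_retries(container, attempts=5, delay=2.0):
    # Retry removal to paper over transient 'device or resource busy' /
    # 'directory not empty' errors from the storage driver.
    for attempt in range(attempts):
        try:
            client.remove_container(container, force=True)
            return
        except errors.APIError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)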

Conclusion

After much wrangling and going deeper down a rabbit hole than I ever thought possible, I finally managed to get BMO running inside Docker as part of our test infrastructure. We're now building tests for complicated components that touch Mercurial, Review Board, and Bugzilla, and people are generally happy with how things work.

There are still a handful of bugs, workarounds, and components that aren't as optimal as I would like them to be. But you can't always have perfection.

My takeaway from this ordeal is that Docker still has a number of bugs and user experience issues to resolve. I really want to support Docker and to see it succeed. But every time I try doing something non-trivial with Docker, I get bit hard. Yes, some of the issues I experienced were due to my own ignorance. But at the same time, if one of Docker's mantras is about simplicity and usability, then should there be such gaping cracks for people like me to fall through?

In the end, the promise of Docker fits my use case almost perfectly. I know the architecture is a good fit for testing. We will likely stick with Docker, especially now that I've spent the time to make it work. I just really wish this project had taken a few days, not a few weeks.

Categorieën: Mozilla-nl planet

Robert Nyman: Getting started with & understanding the power of Vim

do, 16/10/2014 - 15:22

Being a developer and having used a lot of code editors over the years, I think it's a very interesting area, both when it comes to efficiency and because it's the program we spend many, many hours in. At the moment, I'm back with Vim (more specifically, MacVim).

For the last few years I've been using Sublime Text extensively, and before that, TextMate. I've really liked Sublime Text: it supports most of what I want to do and I'm happy with it.

At the same time, my belief is that you need to keep on challenging yourself. Try and learn new things, get another perspective, learn about needs and possibilities you didn't even know you had. Or, at the very least, go back to what you used before, now more aware of how much you like and appreciate it.

Vim redux

A few years ago I tried out Vim (MacVim) to see what it was like. A lot of great developers use it, and a few friends swore by how amazing it was. So, naturally I had to try it.

I tried it for a while, using the Janus distribution. I ended up in a situation where I didn't have enough control; or rather, I didn't understand how it all worked and didn't take the time to learn. So I used Vim for a while, then got fed up and aggravated that I couldn't get things done quickly. While I learned a lot about Vim along the way, at that time and under those circumstances, the cost was too big to continue.

But now I’m back again, and so far I’m happy about it. :-)

Let’s be completely honest, though: the learning curve is fairly steep and there are a lot of annoying moments in the beginning, in particular since it is very different from what most people have used before.

Getting started

My recommendation to get started, and really grasp Vim, is to download a clean version, and probably something with a graphical user interface/application wrapper for your operating system. As mainly a Mac OS X user, my choice has been MacVim.

In your home folder, you will get (or create) a folder and a file (there could be more, but this is the start):

.vim folder
Contains your plugins and more
.vimrc file
A file with all kinds of configurations, presets and customizations. For a Vim user, the .vimrc file is the key to success (for my version, see below)
Editing Modes

One of the key things about Vim is that it offers a number of different modes, depending on what you want to do. The core ones are:

normal
This is the default mode in Vim, for navigating and manipulating text. Pressing <Esc> at any time takes you back to this mode
insert
Inserting and writing text and code
visual
Any kinds of text selections
command-line
Pressing : takes you to the command line in Vim, from which you can call a plethora of commands

Once you've gotten used to switching between these modes, you will realize how extremely powerful they are and, once you've gained control of them, how dramatically they improve your efficiency. Search/substitute is also very powerful in Vim, and I really do recommend checking out vimregex.com for the low-down on commands and escaping.

Keyboard shortcuts

With the different modes, there's an abundance of keyboard shortcuts, some of them for one mode, some of them spanning across modes (and all of this is customizable as well through your .vimrc file).

Also, Vim is a lot about intent. Not just what you want to do now, but thinking 2, 3 or 4 steps ahead. Where are you going with this entire flow, not just action by action without connections.

For instance, let’s say I have a <h2> element with text in it that I want to replace, like this:

<h2>I am a heading</h2>

My options are (going from most complicated to most efficient):

  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text and then delete it (with the delete key or pressing d), press i to go into Insert mode, then enter the new text
  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text, then press c to go into Insert mode with a change action, i.e. all selected text will be gone and what you type is the new value
  • Press dit in Normal mode, which means “delete in tag”, then press i or c to go into Insert mode and write the new text
  • Press ct< in Normal mode, which means “change to [character]“, then just write the new text
  • Press cit in Normal mode, which means “change in tag”, then just write the new text

Using ct[character] or dt[character], e.g. ct<, will apply the first action ("change" or "delete") up to the specified character ("<" in this case). Other quick ways of changing or deleting things on a row are pressing C or D, which will automatically apply that action to the end of the current line.

There is a ton of options and combinations, and I’ve listed the most common ones below (taken from http://worldtimzone.com/res/vi.html):

Cursor movement

h - move left
j - move down
k - move up
l - move right
w - jump by start of words (punctuation considered words)
W - jump by words (spaces separate words)
e - jump to end of words (punctuation considered words)
E - jump to end of words (no punctuation)
b - jump backward by words (punctuation considered words)
B - jump backward by words (no punctuation)
0 - (zero) start of line
^ - first non-blank character of line
$ - end of line
G - Go To command (prefix with number - 5G goes to line 5)

Note: Prefix a cursor movement command with a number to repeat it. For example, 4j moves down 4 lines.
Insert Mode – Inserting/Appending text

i - start insert mode at cursor
I - insert at the beginning of the line
a - append after the cursor
A - append at the end of the line
o - open (append) blank line below current line (no need to press return)
O - open blank line above current line
ea - append at end of word
Esc - exit insert mode

Editing

r - replace a single character (does not use insert mode)
J - join line below to the current one
cc - change (replace) an entire line
cw - change (replace) to the end of word
c$ - change (replace) to the end of line
s - delete character at cursor and substitute text
S - delete line at cursor and substitute text (same as cc)
xp - transpose two letters (delete and paste, technically)
u - undo
. - repeat last command

Marking text (visual mode)

v - start visual mode, mark lines, then do command (such as y-yank)
V - start Linewise visual mode
o - move to other end of marked area
Ctrl+v - start visual block mode
O - move to Other corner of block
aw - mark a word
ab - a () block (with braces)
aB - a {} block (with brackets)
ib - inner () block
iB - inner {} block
Esc - exit visual mode

Visual commands

> - shift right
< - shift left
y - yank (copy) marked text
d - delete marked text
~ - switch case

Cut and Paste

yy - yank (copy) a line
2yy - yank 2 lines
yw - yank word
y$ - yank to end of line
p - put (paste) the clipboard after cursor
P - put (paste) before cursor
dd - delete (cut) a line
dw - delete (cut) the current word
x - delete (cut) current character

Exiting

:w - write (save) the file, but don't exit
:wq - write (save) and quit
:q - quit (fails if anything has changed)
:q! - quit and throw away changes

Search/Replace

/pattern - search for pattern
?pattern - search backward for pattern
n - repeat search in same direction
N - repeat search in opposite direction
:%s/old/new/g - replace all old with new throughout file
:%s/old/new/gc - replace all old with new throughout file with confirmations

Working with multiple files

:e filename - Edit a file in a new buffer
:bnext (or :bn) - go to next buffer
:bprev (or :bp) - go to previous buffer
:bd - delete a buffer (close a file)
:sp filename - Open a file in a new buffer and split window
ctrl+ws - Split windows
ctrl+ww - switch between windows
ctrl+wq - Quit a window
ctrl+wv - Split windows vertically

Plugins

There are a number of different ways of approaching plugins with Vim, but the simplest and clearest one that I've found, in the form of a plugin itself, is pathogen.vim. With it, you place all other plugins you install in .vim/bundle.

These are the plugins I currently use:

command-t
Mimicking the Command + T functionality in TextMate/Sublime Text, to open any file in the current project. I press , + f to use it (where , is my Leader key)
vim-snipmate
To import snippet support in Vim. For instance, in a JavaScript file, type for then tab to have it completed into a full code snippet. As part of this, some other plugins were needed:

vim-multiple-cursors
I love the multiple selection feature in Sublime Text; Command + D to select the next match(es) in the document that are the same as what is currently selected.
This is a version of this for Vim that works very well. Use Ctrl + n to select any matches, and then act on them with all the powerful commands available in Vim. For instance, after you are done selecting, the simplest thing is to press c to change all those occurrences to what you want.
vim-sensible
A basic plugin to help out with some of the key handling.
vim-surround
surround is a great plugin for surrounding text with anything you wish. Commands start with pressing ys, which stands for "you surround"; then you enter the selection criteria and finally what to surround it with.
Examples:

  • ysiw" – “You surround in word”
  • ysip<C-t> – “You surround in paragraph” and then ask for which tag to surround with
nerdtree
This offers fairly rudimentary tree navigation in Vim. I don't use it much at the moment, though; I rather prefer pressing : to go to the command line in Vim and then just typing e. to open a file tree.
My .vimrc file

Here is my .vimrc file, which is vital for me in adapting Vim to all my needs – keyboard shortcuts, customizations, efficiency flows:

HyperLinkHelper in Vim

Another thing I really like in TextMate and Sublime Text is the HyperlinkHelper, basically wrapping the current selection as a link with what’s in the clipboard set as the href value. So I created this command for Vim, to add in your .vimrc file:

vmap <Space>l c<a href="<C-r>+"><C-r>"</a>

In Visual mode, select text and then press space bar + l to trigger this action.

Scratching the surface

This has only scratched the surface of all the power in Vim, but I hope it has been inspiring and understandable, and that it has motivated you to give it a go, or alternatively taught you something you didn't know.

Any input, thoughts and suggestions are more than welcome!

Categorieën: Mozilla-nl planet

Doug Belshaw: 99% finished: Badge Alliance Digital & Web Literacies working group's Privacy badge pathway

do, 16/10/2014 - 12:23

I’m the co-chair of the Badge Alliance’s working group on Digital & Web Literacies. We’ve just finished our first cycle of meetings and have almost finished the deliverable. Taking the Web Literacy Map (v1.1) as a starting point, we created a document outlining considerations for creating a badged pathway around the Privacy competency.


The document is currently on Google Docs and open for commenting. After the Mozilla Festival next week the plan is to finalise any edits and then use the template we used for the Webmaker whitepaper.

Click here to access the document: http://goo.gl/40byub

Comments? Questions? Get in touch: @dajbelshaw / doug@mozillafoundation.org

Categorieën: Mozilla-nl planet

Byron Jones: happy bmo push day!

do, 16/10/2014 - 10:50

the following changes have been pushed to bugzilla.mozilla.org:

  • [1079476] Allow getting and updating groups via the web services
  • [1079463] Bugzilla::WebService::User missing update method
  • [1080600] CVE ID format change: CVE-\d{4}-\d{4} becomes CVE-\d{4}-\d{4,7} this year
  • [1080554] Create custom entry form for submissions to Mozilla Communities newsletter
  • [1074586] Add “Bugs of Interest” to the dashboard
  • [1062775] Create a form to create/update bounty tracking attachments
  • [1074350] “new to bugzilla” indicator should be removed when a user is added to ‘editbugs’, not ‘canconfirm’
  • [1082887] comments made when setting a flag from the attachment details page are not included in the “flag updated” email

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categorieën: Mozilla-nl planet

Jennie Rose Halperin: New /contribute page

wo, 15/10/2014 - 21:58

In an uncharacteristically short post, I want to let folks know that we just launched our new /contribute page.

I am so proud of our team! Thank you to Jess, Ben, Larissa, Jen, Rebecca, Mike, Pascal, Flod, Holly, Sean, David, Maryellen, Craig, PMac, Matej, and everyone else who had a hand. You all are the absolute most wonderful people to work with and I look forward to seeing what comes next!

I’ll be posting intermittently about new features and challenges on the site, but I first want to give a big virtual hug to all of you who made it happen and all of you who contribute to Mozilla in the future.

Categorieën: Mozilla-nl planet

David Boswell: Investing more in community building

wo, 15/10/2014 - 21:35

I’m very excited to see the new version of Mozilla’s Get Involved page go live. Hundreds of people each week come to this page to learn about how they can volunteer. Improvements to this page will lead to more people making more of an impact on Mozilla’s projects.


This page has a long history: it existed on www.mozilla.org when Mozilla launched in 1998 and it has been redesigned a few times before. There is something different about the effort this time, though.

We’ve spent far more time researching, prototyping, designing, testing, upgrading and building than ever before. This reflects Mozilla’s focus this year on enabling communities that have impact, and that goal has mobilized experts from many teams who have made the experience much better for new volunteers who use this page.

Mozilla’s investment in community in 2014 is showing up in other ways too, including a brand new contribution dashboard, a relaunched contributor newsletter, a pilot onboarding program, the first contributor research effort in three years and much more.

All of these pieces are coming together and will give us a number of options for how we can continue and increase the investment in community in 2015. Look for more thoughts soon on why that is important, what that could look like and how you could help shape it.


Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellows Host

wo, 15/10/2014 - 18:28

{This is the second installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. Amnesty International is a great addition to the program, especially as new technologies have such a profound impact – both positive and negative – on human rights. With its tremendous grassroots advocacy network and decades of experience advocating for fundamental human rights, Amnesty International, its global community and its Ford-Mozilla Fellow are poised to continue having impact on shaping the digital world for good.}

Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellow Host
By Tanya O’Carroll, Project Officer, Technology and Human Rights, Amnesty International

For more than fifty years Amnesty International has campaigned for human rights globally: exposing information that governments will go to extreme measures to hide; connecting individuals who are under attack with solidarity networks that span the globe; fighting for policy changes that often seem impossible at first.

We’ve developed many tools and tactics to help us achieve change.

But the world we operate in is also changing.

Momentous developments in information and communications networks have introduced new opportunities and threats to the very rights we defend.

The Internet has paved the way for unprecedented numbers of people to exercise their rights online, crucially freedom of expression and assembly.

The ability for individuals to publish information and content in real-time has created a new world of possibilities for human rights investigations globally. Today, we all have the potential to act as witnesses to human rights violations that once took place in the dark.

Yet large shadows loom over the free and open Web. Governments are innovating and seeking to exploit new tools to tighten their control, with daunting implications for human rights.

This new environment requires specialist skills to respond. When we challenge the laws and practices that allow governments to censor individuals online or unlawfully interfere with their privacy, it is vital that we understand the mechanics of the Internet itself–and integrate this understanding in our analysis of the problem and solutions.

That’s why we’re so excited to be an official host for the Ford-Mozilla Open Web Fellowship.

We are seeking someone with the expert skill set to help shape our global response to human rights threats in the digital age.

Amnesty International’s work in this area builds on our decades of experience campaigning for fundamental human rights.

Our focus is on the new tools of control – that is, the technical and legislative tools that governments are using to clamp down on opposition, restrict lawful expression and the free flow of information, and unlawfully spy on private communications on a massive scale.

In 2015 we will be actively campaigning for an end to unlawful digital surveillance and for the protection of freedom of expression online in countries across the world.

Amnesty International has had many successes in tackling entrenched human rights violations. We know that as a global movement of more than 3 million members, supporters and activists in more than 150 countries and territories we can also help to protect the ideal of a free and open web. Our success will depend on building the technical skills and capacities that will keep us ahead of government efforts to do just the opposite.

Demonstrating expert leadership, the fellow will contribute their technical skills and experience to high-quality research reports and other public documents, as well as international advocacy and public campaigns.

If you are passionate about stopping the Internet from becoming a weapon that is used for state control at the expense of freedom, apply now to become a Ford-Mozilla Open Web Fellow and join Amnesty International in the fight to take back control.

Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Categorieën: Mozilla-nl planet

Doug Belshaw: Web Literacy Map 2.0 community calls

wo, 15/10/2014 - 15:00

To support the development of Web Literacy Map v2.0, we’re going to host some calls with the Mozilla community.


There is significant overlap between the sub-section of the community interested in the Web Literacy Map and the sub-section involved in the Badge Alliance working group on Digital/Web Literacies. It makes sense, therefore, to use the time between cycles of the Badge Alliance working group to focus on developing the Web Literacy Map.

Calls

We’ll have a series of seven community calls on the following dates. The links take you to the etherpad for that call.

Discussion

You can subscribe to a calendar for these calls at the link below:

Calendar: http://bit.ly/weblitmap2-calls

We’ll be using the #TeachTheWeb forum for asynchronous discussion. I do hope you’ll be able to join us!

Questions? Comments? Direct them to @dajbelshaw / doug@mozillafoundation.org

Categorieën: Mozilla-nl planet

Robert Nyman: How to become efficient at e-mail

wo, 15/10/2014 - 11:21

For many years I’ve constantly been receiving hundreds of e-mails every day. A lot of them work-related, a number of them personal. And I’ve never seen this as an issue, since I have an approach that works for me.

Many people complain that e-mail is broken, but I think it’s a great communication form. I can deal with it when I feel I have the time and can get back to people when it suits me. If I need to concentrate on something else, I won’t let it interrupt my flow – just have notifications off/e-mail closed/don’t read it, and then get to it when you can.

Your mileage might, and will, vary, of course, but here are the main things that have proven to work very well for me.

Deal with it

When you open up your Inbox with all new e-mail, deal with it. Then and there. Because having seen the e-mail, maybe even glanced at some of the contents beyond the subjects as well, I believe it has already reserved a part of your brain. You’ll keep on thinking about it till you actually deal with it.

In some cases, naturally it’s good to ponder your reply, but mostly, just go with your knowledge and act on it. Some things are easiest to deal with directly, some need a follow-up later on (more on that in Flags and Filters below).

Flags

Utilize different flags for the various actions you want to take. Go through your Inbox directly, reply to the e-mails or flag them accordingly. It doesn’t have to be Inbox Zero or similar; the point is just that you know about and are on top of each and every e-mail.

These are the flags/labels I use:

This needs action
Meaning, I need to act on this: that could be replying, checking something out, contact someone else before I know more etc
Watch this
No need for an immediate action, but watch and follow up on this and see what it happens. Good for things when you never got a reply from people and need to remind them
Reference
No need to act, no need to watch it. But it is plausible that this topic and discussion might come up in the future, so file it just for reference.

The rest of it is Throw away. No need to act, watch or file it? Get rid of it.

Filters

Getting e-mails from the same sender/on the same topic on a regular basis? Set up a filter. This way you can have the vast majority of e-mail already sorted for you, bypassing the Inbox directly.

Make them go into predefined folders (or Gmail labels) per sender/topic. That way you can see in the structure that you have unread e-mails from, say, LinkedIn, Mozilla, Netflix, Facebook, British Airways etc. Or e-mails from your manager or the CEO. Or e-mail sent to the team mailing list, company branch or all of the company. And then deal with it when you have the time.

Gmail also has this nice feature of choosing to only show labels in the left hand navigation if they have unread e-mails in them, making it even easier to see where you’ve got new e-mails.

Let me stress that this is immensely useful, drastically reducing which e-mails you need to manually filter and decide an action for.

Acknowledge people

If you have a busy period when replying properly is hard, still make sure to take the time to acknowledge people. Reply, say that you’ve seen their e-mail and that you will get back to them as soon as you have a chance.

They took the time to write to you, and they deserve the common decency of a reply.

Unsubscribe

How many newsletters or information e-mails are you getting that you don’t really care about? Maybe on a monthly basis, so it’s annoying, but not annoying enough? Apply the filter suggestion above to them or, even better, start unsubscribing from crap you don’t want.

Get to know your e-mail client

Whether you use an e-mail client, Gmail or similar, make sure to learn its features. Keyboard shortcuts, filters and any way you can customize it to make you more efficient.

For instance, I’ve set up keyboard shortcuts for the above mentioned flags and for moving e-mails into pre-defined folders. Makes the manual part of dealing with e-mail really fast.

Summing up

E-mail doesn’t have to be bad. On the contrary, it can be extremely powerful and efficient if you just make the effort to streamline the process and use it for you, not against you.

E-mails aren’t a problem, they’re an opportunity.

Categorieën: Mozilla-nl planet
