On Saturday I participated in a protest march for the first time ever --- against the TPPA. I support lowering trade barriers, but the TPPA has a lot of baggage that I strongly dislike, such as committing member nations to a dysfunctional United States-esque intellectual property regime, and sovereignty-eroding "investor-state dispute settlement". The biggest prize for New Zealand would have been free access to foreign dairy markets, but that was mostly not realized, so it seems like a bad deal for us.
Unsurprisingly there were a lot of different sorts of people involved. Many of them espoused themes I don't agree with --- knee-jerk anti-Americanism, "Socialist Aotearoa", general opposition to free-market economics. That made it more fun and interesting :-). I think it's very important that people who disagree about a lot of things can still work together on issues they do agree about.
I made this presentation at Seneca’s FSOSS a few weeks ago; some of these ideas have been rattling around in my brain for a while, but it was the first time I’d ever run through it. I was thoroughly caffeinated at the time so all of my worst verbal tics are on display, right as usual um right pause right um. But if you want to have my perspective on why free and open source software matters, why some institutions and ideas live and others die out, and how I think you should design and build organizations around your idea so that they last a few hundred years, here you go.
There are some mistakes I made, and now that I’m watching it – I meant to say “merchants” rather than “farmers”; there are a handful of others I may come back here to note later. But I’m still reasonably happy with it.
Many of the people who have bought Windows phones seek relief sooner or later. Sometimes this comes about due to peer pressure or a feeling of isolation; in other cases it is frustration with the user interface, or the realization that they can't run cool apps like Lumicall.
Frequently, the user was given the phone as a complimentary upgrade when extending a contract, without perceiving the time, effort and potential cost involved in getting their data out of the phone, especially if they had never owned a smartphone before.
When a Windows phone user does decide to cut their losses, they usually look to a friend or colleague with technical expertise to help them out. Personally, I'm not sure that anybody I would regard as an IT expert has ever had a Windows phone, which means many experts are probably also going to be scratching their heads when somebody asks them for help. Therefore, I've put together this brief guide to help deal with these phones more expediently when they are encountered.
The Windows phones have really bad support for things like CalDAV and WebDAV, so don't get your hopes up about using those protocols to back up the data to an arbitrary server. Searching online, you can find some hacks that involve creating a Google or iCloud account on the phone and then modifying the advanced settings to send the data to an arbitrary server. These techniques vary a lot between specific versions of the Windows Phone OS, so the techniques described below are probably easier.

Identify the Windows Live / Hotmail account
The user may not remember or realize that a Microsoft account was created when they first obtained the phone. It may have been created for them by the phone, a friend or the salesperson in the phone shop.
Look in the settings (Accounts) to find the account ID / email address. If the user hasn't been using this account, they may not recognize it and probably won't know the password for it. It is essential to try and obtain (or reset) the password before going any further, so start with the password recovery process. Microsoft may insist on sending a password reset email to some other email address that the user has previously provided or linked to their phone.

Extracting data from the phone
In many cases, the easiest way to extract the data is to download it from Microsoft live.com rather than extracting it from the phone. Even if the user doesn't realize it, the data is probably all replicated in live.com, so there is no further loss of privacy in logging in there to extract it.

Set up an IMAP mail client
An IMAP client will be used to download the user's emails (from the live.com account they may never have used) and SMS.
Configure the IMAP mail client to connect to the live.com account. Some clients, like Thunderbird, will automatically set up all the server details when you enter the live.com account ID. For manual account setup, the details here may help.

Email backup
If the user was not using the live.com account ID for email correspondence, there may not be a lot of mail in it. There may be some billing receipts or other things that are worth keeping though.
Create a new folder (or set of folders) in the user's preferred email account and drag and drop the messages from the live.com Inbox to the new folder(s).

SMS backup
SMS backup can also be done through live.com. It is slightly more complicated than email backup, but similar.
- In the live.com Outlook email index page, look for the settings button and click Manage Categories.
- Enable the Contacts and Photos categories with a tick in each of them.
- Go back to the main Inbox page and look for the categories section on the bottom left-hand side of the screen, under the folder list. Click the Contacts category.
- The page may now appear blank. That is normal.
- On the top right-hand corner of the page, click the Arrange menu and choose Conversation.
- All the SMS messages should now appear on the screen.
- Under the mail folders list on the left-hand side of the page, click to create a new folder with a name like SMS.
- Select all the SMS messages and look for the option to move them to a folder. Send them to the SMS folder you created.
- Now use the IMAP mail client to locate the SMS folder and copy everything from there to a new folder in the user's preferred mail server or local disk.
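If you prefer to script the copy instead of dragging messages around in a mail client, Python's standard imaplib can pull the SMS folder down into a local mbox file. This is only a sketch: the host name, folder name and plain-password login are assumptions, and accounts with two-factor authentication may need an app password instead.

```python
import imaplib
import mailbox

# Assumption: imap-mail.outlook.com is the IMAP host for live.com accounts.
HOST = "imap-mail.outlook.com"

def backup_folder(account, password, folder="SMS", out="sms-backup.mbox"):
    """Copy every message in `folder` on live.com into a local mbox file."""
    box = mailbox.mbox(out)
    imap = imaplib.IMAP4_SSL(HOST)
    try:
        imap.login(account, password)
        status, _ = imap.select(folder, readonly=True)
        if status != "OK":
            raise RuntimeError("folder %r not found" % folder)
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            box.add(msg_data[0][1])  # raw RFC 822 message bytes
        box.flush()
    finally:
        imap.logout()
        box.close()
```

Run it as `backup_folder('user@live.com', 'password')` and then open sms-backup.mbox in any mbox-aware mail client, or import it into the user's preferred mail server.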
On the top left-hand corner of the live.com email page, there is a chooser to select other live.com applications. Select People.
You should now see a list of all the user's contacts. Look for the option to export them to Outlook and other programs. This will export them as a CSV file.
You can now import the CSV file into another application. GNOME Evolution has an import wizard with an option for Outlook file format. To load the contacts into a WebDAV address book, such as DAViCal, configure the address book in Evolution and then select it as the destination when running the CSV import wizard.
WARNING: beware of using the Mozilla Thunderbird address book with contact data from mobile devices and other sources. It can't handle more than two email addresses per contact, and this can lead to silent data loss if contacts are not fully saved.

Calendar backup
Now go to the live.com application chooser again and select the calendar application. Microsoft provides instructions to extract the calendar, summarised here:
- Look for the Share button at the top somewhere and click it.
- On the left-hand side of the page, click Get a link
- On the right-hand side, choose Show event details to ensure you get a full calendar and then click Create underneath it.
- Look for the link with a webcals prefix. If you are downloading with a tool like wget, change the scheme prefix to https. Fetch the file from this link and save it with an ics extension.
- Inspect the ics calendar file to make sure it looks like real iCalendar data.
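If you'd rather script the download than use a browser, the scheme swap and the sanity check can be done in a few lines of Python. The URL in the example is a placeholder, not a real calendar link.

```python
import urllib.request

def webcals_to_https(url):
    """The webcals:// scheme is just https:// for calendar subscriptions."""
    if url.startswith("webcals://"):
        return "https://" + url[len("webcals://"):]
    return url

def fetch_ics(url, out="calendar.ics"):
    """Download the calendar and check it looks like real iCalendar data."""
    data = urllib.request.urlopen(webcals_to_https(url)).read()
    if not data.lstrip().startswith(b"BEGIN:VCALENDAR"):
        raise ValueError("this does not look like an iCalendar file")
    with open(out, "wb") as f:
        f.write(data)

if __name__ == "__main__":
    # Placeholder - paste the link copied from the live.com calendar page.
    fetch_ics("webcals://example.com/calendar/private/...")
```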
You can now import the ics file into another application. GNOME Evolution has an import wizard with an option for iCalendar file format. To load the calendar entries into a CalDAV server, such as DAViCal, configure the calendar server in Evolution and then select it as the destination when running the import wizard.

Back up the user's photos, videos and other data files
Hopefully you will be able to do this step without going through live.com. Try enabling the MTP or PTP mode in the phone and attach it to the computer using the USB cable. Hopefully the computer will recognize it in at least one of those modes.
Use the computer's file manager or another tool to simply back up the entire directory structure.

Reset the phone to factory defaults
Once the user has their hands on a real phone, it is likely they will never want to look at that Windows phone again. It is time to erase the Windows phone; there is no going back.
Go to Settings, then About, and tap the factory reset option. It is important to do this before obliterating the live.com account, otherwise there are scenarios where you could be locked out of the phone and unable to erase it.
Erasing may take some time. The phone will reboot, display an animation of spinning gears for a few minutes, and then reboot again. Wait for it to completely erase.

Permanently close the Microsoft live.com account
Keeping track of multiple accounts and other services is tedious and frustrating for most people, especially with services that try to force the user to receive email in different places.
You can reduce that fatigue by helping the user permanently close the live.com account so they never have to worry about it again.
Follow the instructions on the Microsoft site.
At some point it will suggest certain actions you should take before closing the account; most can be ignored. One thing you should do is remove the link between the live.com account ID and the phone. Otherwise you may have problems erasing the device, if you haven't already done so. Before completely closing the account, also verify that the factory reset of the phone completed successfully.

Dispose of the Windows phone safely
If you can identify any faults with the phone, the user may be able to return it under the terms of the warranty. Some phone companies may allow the user to exchange it for something more desirable when it fails under warranty.
It may be tempting to sell the phone to a complete stranger on eBay, or to install a custom ROM on it. In practice, neither option may be worth the time and effort involved. You may be tempted to put it beyond use so nobody else will suffer with it, but please try to do so in a way that is respectful of the environment.

Putting the data into a new phone
Install the F-Droid app on the new phone.
From F-Droid, install the DAVdroid app. DAVdroid lets you quickly sync the new phone against any arbitrary CalDAV and WebDAV server to populate it with the user's calendar and contact / address book data.
Endpoint security typically comes in two flavors: with or without a local agent. They both do the same thing - reach out to your endpoints and run a bunch of tests - but one will reach out to your systems over SSH, while the other will require a local agent to be deployed on all endpoints. Both approaches often share the same flaw: the servers that operate the security solution have the keys to become root on all your endpoints. These servers become targets of choice: take control of them, and you are root across the infrastructure.
I have evaluated many endpoint security solutions over the past few years, and I too often run into this broken approach to access control. In extreme cases, vendors are even bold enough to sell hosted services that require their customers to grant root access to SaaS platforms operating as black boxes. These vendors are successful, so I imagine they find customers who think sharing root access that way is acceptable. I am not one of them.
For some, trust is a commodity that should be outsourceable. They see trust as something you can write into a contract and not worry about afterward. To some extent, contracting does help with trust. More often than not, however, trust is earned over time, and contracts only seal the trust relationship both parties have already established.
I trust AWS because they have a proven track record of doing things securely. I did not use to, but time and experience have changed my mind. You, however, young startup that freshly released a fancy new security product I am about to evaluate, I do not yet trust you. You will have to earn that trust over time, and I won't make it easy.
This is where my issue with most endpoint security solutions lies: I do not want to trust random security vendors with root accesses to my servers. Mistakes happen, they will get hacked some day, or leak their password in a git commit or a pastebin, and I do not wish my organization to be a collateral damage of their operational deficiencies.
Endpoint security without blindly trusting the security platform is doable. MIG is designed around the concept of non-trustable infrastructure. This is achieved by requiring all actions sent to MIG agents to be signed using keys that are not stored on the MIG servers, but on the laptops of investigators, the same way SSH keys are managed. If the MIG servers get hacked, some data may leak, but no access will be compromised.
Another aspect we built into MIG is the notion that endpoint security can be done without arbitrary remote code execution. Most solutions will happily run code that comes from the trusted central platform, effectively opening a backdoor into the infrastructure. MIG does not allow this. Agents will only run specific investigative tasks that have been pre-compiled into modules. There is no vector for remote code execution, so a leaked investigator's key would not allow an attacker to elevate access to root on endpoints. This approach does limit the capabilities of the platform - we can only investigate what MIG supports - but if remote code execution is really what you need, you should probably be looking at a provisioning tool, or pssh, not an endpoint security solution.
While I do take MIG as an example, I am not advocating it as a better solution to all things. Rather, I am advocating for proper access controls in endpoint security solutions. Any security product that has the potential to compromise your entire infrastructure if taken over is bad and should not be trusted, even if it brings some security benefits. You should not have to compromise on this. Vendors should not ask customers to accept that risk and just trust them to keep their servers secure. Doing endpoint security the safe way is possible; it's just a matter of engineering it right.
The Berlin Mozilla Community would like to invite all of you to the Mozilla Tech Weekend on November 28th and 29th 2015. There will be tech talks on Saturday and workshops on Sunday.
Saarbrücker Str. 24, Haus C, Berlin
Sign up for free at http://www.meetup.com/Berlin-Mozilla-Meetup/events/226461969/
Schedule for Saturday 28th November:
- Servo: Mozilla’s Parallel & Safe Next-Generation Browser Engine
- Data reporting at Mozilla
- Firefox OS: Why we exist
- What’s new in Firefox
After the talks there will be some food and time to get in touch with developers and each other.
On Sunday there will be workshops on similar topics to follow up or get you all set up if you would like to start contributing to Mozilla projects. Sign-up for the workshops will be on-site on Saturday.
The Berlin Mozilla Community
I am currently investigating how we can make mozregression smarter to handle merges, and I will explain how in this post.
While bisecting builds with mozregression on mozilla-central, we often end up with a merge commit. These commits often incorporate many individual changes; consider, for example, this URL for a merge commit. A regression will be hard to find inside such a large range of commits.
How mozregression currently works
Once we reach a one-day range by bisecting mozilla-central or a release branch, we keep the most recent commit tested and use it as the end of a new range to bisect on mozilla-inbound (or another integration branch, depending on the application). The beginning of that mozilla-inbound range is a commit found four days before the date the commit was pushed to mozilla-central, to be sure we won't miss any commit in mozilla-central.
But there are multiple problems. First, the offending commit does not always come from m-i. It could come from any other integration branch (fx-team, b2g-inbound, etc.). Second, bisecting over a four-day range on mozilla-inbound may involve testing a lot of builds, some of which are useless to test.
How can we improve this? As just stated, there are two points that can be improved:
- do not automatically bisect on mozilla-inbound when we have finished a mozilla-central or release branch bisection. Merges can come from fx-team or another integration branch, and this is not really application dependent.
- try to avoid going back four days before the merge when moving to the integration branch; that wastes time, since we are likely to test commits we have already tested.
So, how can this be achieved? Here is my current approach (technical):
1. Once we are done with the nightlies (one build per day) from a bisection of m-c or any release branch, switch to taskcluster to download the builds available in between. This way we reduce the range to two pushes (one good, one bad) instead of a full day. Since we tested them both, only the commits in the most recent push may contain the regression.
2. Read the commit message of the topmost commit in the most recent push. If it does not look like a merge commit, we can't do anything more (this is not a merge, so we are done).
3. We have a merge push. Now try to find the corresponding commits on the branch the merged commits came from.
4. Bisect this new push range using the changesets and the branch found above, reduce that range, and go to step 2.
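The steps above can be sketched as plain Python. This is a toy model, not mozregression's actual code: pushes are just labels, and the is_bad and merged_pushes_of callbacks stand in for running real builds and parsing commit messages.

```python
# Toy model of the proposed flow; branch names and callbacks are made-up
# stand-ins for real builds, pushlogs and commit-message parsing.

def bisect_pushes(pushes, is_bad):
    """Plain bisection over an ordered list of pushes.
    pushes[0] must be known good and pushes[-1] known bad;
    returns the first bad push."""
    lo, hi = 0, len(pushes) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(pushes[mid]):   # in reality: download and test a build
            hi = mid
        else:
            lo = mid
    return pushes[hi]

def find_regression(branch, pushes, is_bad, merged_pushes_of):
    """Reduce to one push; if it is a merge push, recurse into the
    branch the merged commits came from."""
    first_bad = bisect_pushes(pushes, is_bad)
    merge = merged_pushes_of(first_bad)   # None when not a merge push
    if merge is None:
        return branch, first_bad          # done: the offending push
    inner_branch, inner_pushes = merge
    return find_regression(inner_branch, inner_pushes, is_bad,
                           merged_pushes_of)
```

With sample data where a regression lands in an m-i push that is later merged to m-c, find_regression descends from m-c into m-i and returns the offending m-i push.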
Let’s take an example:
mozregression -g 2015-09-20 -b 2015-10-10
We are bisecting firefox on mozilla-central. Let’s say we end up with the range 2015-10-01 – 2015-10-02. This is how the pushlog will look at the end: 4 pushes and more than 250 changesets.
Now mozregression will automatically reduce the range (still on mozilla-central) by asking you good/bad for the remaining pushes. We end up with two pushes – one we know is good because we tested the topmost commit, and one we know is bad for the same reason. Look at the following pushlog, showing what is still untested (except for the merge commit itself) – 96 commits, coming from m-i.
Then mozregression detects that it is a merge push from m-i, so it will automatically let you bisect that range of pushes on m-i. That is, our 96 changesets from m-c are now converted to testable pushes on m-i. And we end up with a smaller range, for example this one, where it is easy to find our regression because it is a single push without any merge.
Note that both methods would have worked for the example above, mainly because we end up in commits that originated from m-i. I tried another bisection, this time trying to find a commit in fx-team – in that case the current mozregression is simply stuck, but the new method handled it well.
Also, the current method would have required around 7 steps after reducing to the one-day range for the example above. The new approach can achieve the same with around 5 steps.
Last but not least, this new flow is much cleaner:
1. Start bisecting from a given branch. Reduce the range to one push on that branch.
2. If we found a merge, find the branch and the new pushes, and go to step 1 to bisect some more with this new data. Otherwise we are done.
Is this applicable?
Well, it relies on two things. The first (and we already rely on this a bit today) is that a merged commit can be found in the branch it comes from, using the changeset. I have to ask VCS gurus whether that is reliable, but in my tests it works well.
The second is that we need to detect a merge commit – and which branch the commits come from. Thanks to the consistency of the sheriffs' commit messages, this is easy.
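For illustration, a merge commit message like “Merge mozilla-inbound to mozilla-central, a=merge” can be matched with a simple regular expression. The pattern below is a guess at the sheriffs' convention, not mozregression's actual detection code.

```python
import re

# Illustrative only: sheriffs' merge messages vary, and the real
# detection logic in mozregression may differ from this pattern.
MERGE_RE = re.compile(r"\bmerge\s+(\S+?)\s+(?:to|into)\s+\S+", re.IGNORECASE)

def merge_source(commit_message):
    """Return the branch a merge commit came from, or None if the
    message does not look like a merge."""
    first_line = (commit_message.splitlines() or [""])[0]
    m = MERGE_RE.search(first_line)
    return m.group(1).rstrip(",") if m else None
```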
Even if it is not applicable everywhere for some reason, it appears to work often. Using this technique would result in a more accurate and helpful bisection, with a speed gain and increased chances of finding the root cause of a regression.
This needs some more thinking and testing to determine the limits (what if this doesn’t work? Should we, or can we, fall back to the old method in that case?), but it is definitely something I will explore further to improve the usefulness of mozregression.