Do you have a netbook (from around 2011) with an AMD processor? Please take a look to see whether it has a Bobcat processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). If you have one and are willing to help us by giving VPN/SSH access, please contact me (hverschore [at] mozilla.com).
Improving stability and decreasing the crash rate is an ongoing effort for all our teams at Mozilla. That is also true for the JS team. We have fuzzers abusing our JS engine, we review each other's code in order to find bugs, we have static analyzers looking at our code, we have best practices, and we look at crash-stats trying to fix the underlying bugs… Lately we have identified a source of crashes in our JIT engine on specific hardware, but we haven't been able to find a solution yet.
Our understanding of the bug is quite limited, but we know it is related to the generated code. We have tried to introduce some workarounds to fix this issue, but none have worked yet, and the turnaround is quite slow: we have to come up with a possible workaround, release it to Nightly, and wait for crash-stats to see whether the issue is fixed.
That is the reason for our call for hardware. We don't have the hardware ourselves, and having access to the right hardware would make it possible to test possible fixes much more quickly until we find a solution. It would help us a lot.
This is the first time our team has tried to leverage our community in order to find specific hardware, and I hope it works out. We have a backup plan, but we are hoping that somebody reading this can make our lives a little bit easier. We would appreciate it a lot if everybody could check whether they still have a laptop/netbook with a Bobcat AMD processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). For example, this processor was used in the Asus Eee variant with AMD. If you do, please contact me at (hverschore [at] mozilla.com) in order to discuss a way to access the laptop for a limited time.
The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.
Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description
In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.
First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.
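The default downgrade rule can be modeled in a few lines. This is a simplified sketch for illustration only, not Firefox's actual implementation:

```python
# A simplified model of the default behaviour described above.
# This is an illustration, not Firefox's actual implementation.
from typing import Optional
from urllib.parse import urlsplit

def default_referrer(from_url: str, to_url: str) -> Optional[str]:
    """Return the Referer value sent by default, or None if suppressed."""
    from_scheme = urlsplit(from_url).scheme
    to_scheme = urlsplit(to_url).scheme
    # HTTPS -> HTTP is a downgrade: the referrer is suppressed entirely.
    if from_scheme == "https" and to_scheme == "http":
        return None
    return from_url

# Suppressed on a downgrade, sent otherwise:
print(default_referrer("https://bank.example/account?id=42", "http://other.example/"))   # None
print(default_referrer("https://bank.example/account?id=42", "https://other.example/"))
```

Note that the full URL, query string included, is what goes out by default on the non-downgrade transitions, which is exactly what the trimming policies discussed further down let users restrict.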
Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.

Legitimate Uses
Because the Referer header has been around for so long, a number of techniques rely on it.
Armed with the Referer information, analytics tools can figure out:
- where website traffic comes from, and
- how users are navigating the site.
Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.
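A server-side origin check of this kind can be sketched roughly as follows. This is a hypothetical example, not code from any particular framework:

```python
# Hypothetical sketch of a Referer-based CSRF check; the function name
# and logic are invented for illustration, not from a real framework.
from urllib.parse import urlsplit

def same_origin(referer: str, expected_origin: str) -> bool:
    """Accept a state-changing request only if the Referer matches our origin."""
    if not referer:
        return False  # no referrer at all: conservatively reject
    parts = urlsplit(referer)
    return "{}://{}".format(parts.scheme, parts.netloc) == expected_origin

# A form posted from our own site passes; a cross-site submission does not.
print(same_origin("https://shop.example/cart/submit", "https://shop.example"))  # True
print(same_origin("https://evil.example/fake-form", "https://shop.example"))    # False
```

Real deployments have to decide what to do when the header is absent (as it is for users who disable referrers entirely), which is one of the compatibility problems discussed later in this post.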
It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (e.g. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer
Unfortunately, this header also creates significant privacy and security concerns.
The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.
These leaks can also expose private, personally identifiable information when it is part of the query string. One of the most high-profile examples is the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users
While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.
In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:
- 0 to never send the header
- 1 to send the header only when clicking on links and similar elements
- 2 (default) to send the header on all requests (e.g. images, links, etc.)
It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:
- 0 (default) to send the full URL
- 1 to send the URL without its query string
- 2 to only send the scheme, host and port
or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.
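The effect of the three trimming values can be modeled with a short sketch. This is an illustration of the behaviour described above, not Firefox code:

```python
# An illustrative model of the trimming policies listed above (not Firefox code).
from urllib.parse import urlsplit, urlunsplit

def trim_referrer(url: str, policy: int) -> str:
    s = urlsplit(url)
    if policy == 1:  # drop the query string
        return urlunsplit((s.scheme, s.netloc, s.path, "", ""))
    if policy == 2:  # keep only scheme, host and port
        return urlunsplit((s.scheme, s.netloc, "/", "", ""))
    return url       # policy 0 (default): full URL

referrer = "https://example.com/search?q=private+query"
print(trim_referrer(referrer, 0))  # https://example.com/search?q=private+query
print(trim_referrer(referrer, 1))  # https://example.com/search
print(trim_referrer(referrer, 2))  # https://example.com/
```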
Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.
Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:
- 0 (default) to send the referrer in all cases
- 1 to send a referrer only when the base domains are the same
- 2 to send a referrer only when the full hostnames match
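The difference between the two non-default values can be sketched as follows. Note that real browsers compute base domains with the public suffix list; this toy version naively takes the last two DNS labels, which is wrong for suffixes like .co.uk:

```python
# A rough model of the cross-origin policies above. Real browsers use the
# public suffix list to compute base domains; this sketch naively takes
# the last two DNS labels, which is wrong for suffixes like .co.uk.
from urllib.parse import urlsplit

def base_domain(host: str) -> str:
    return ".".join(host.split(".")[-2:])

def send_referrer(policy: int, from_url: str, to_url: str) -> bool:
    a = urlsplit(from_url).hostname
    b = urlsplit(to_url).hostname
    if policy == 2:
        return a == b                            # full hostnames must match
    if policy == 1:
        return base_domain(a) == base_domain(b)  # same base domain is enough
    return True                                  # policy 0 (default): always send

print(send_referrer(1, "https://blog.example.com/", "https://shop.example.com/"))  # True
print(send_referrer(2, "https://blog.example.com/", "https://shop.example.com/"))  # False
```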
If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:
- anything that uses the default Django authentication
- Launchpad logins
- AMD driver downloads
- some CDN-hosted images
The first two have been successfully worked around by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.
Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the Smart Referer add-on) or by temporarily using a different browser profile.

My Recommended Settings
As with my cookie recommendations, I recommend strengthening your referrer settings rather than disabling (or spoofing) the header entirely.
While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.
If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2
or to sites which belong to the same organization (i.e. same eTLD/public suffix) using:

network.http.referer.XOriginPolicy = 1
This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.
On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2
I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out.
I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late.
Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier on Apple’s part to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature.
At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked.
I think the need not to have to “sell” the OS presents Apple a really unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be, new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. Though there’s the myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult.
I just wish Apple would seize this opportunity. If we are going to continue to be saddled with yearly Mac OS X releases (for whatever reason), please, Apple, make them smaller, tighter, more solid releases that delight us in how pain-free and bug-free they are.
1 Whenever anyone would confuse me for a real developer after I’d answered some questions, my reply was “I’m not a developer; I only play one on IRC.”2 ↩︎
2 A play on the famous television commercial disclaimer, “I’m not a doctor; I only play one on TV,” attributed variously, perhaps first to Robert Young, television’s Marcus Welby, M.D. from 1969-1976.3 ↩︎
3 The nested footnotes are a tribute to former Mozilla build/release engineer J. Paul Reed (“preed” on IRC), who was quite fond of them. ↩︎
Back in 2013, it came to light that Wget was used to copy the files Private Manning was convicted of leaking. Around that time, the EFF made and distributed stickers saying "wget is not a crime".
Weirdly enough, it was hard to find a high resolution version of that image today but I’m showing you a version of it on the right side here.
In the 2016 movie Jason Bourne, Swedish actress Alicia Vikander is seen working on her laptop at around 1:16:30 into the movie and there’s a single visible sticker on that laptop. Yeps, it is for sure the same EFF sticker. There’s even a very brief glimpse of the top of the red EFF dot below the “crime” word.
Also recall the wget occurrence in The Social Network.
Today Mozilla has published a new update for its browser, this time version 49.0.2.
This release fixes small issues that some users have been encountering, so we recommend updating.
You can get it from our Downloads area for Linux, Mac, Windows and Android, in Spanish and English.
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...
We are happy to let you know that on Friday, October 28th, we are organizing the Firefox 51.0 Aurora Testday. We'll be focusing our testing on the following features: Zoom indicator and Downloads dropmarker.
Check out the detailed instructions via this etherpad.
No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.
Join us and help us make Firefox better!
See you on Friday!
I just found, and read, Clément Delafargue’s post “Why Auto Increment Is A Terrible Idea” (via @CoreRamiro). I agree that an opaque primary key is very nice and clean from an information architecture viewpoint.
However, in practice, a serial (or monotonically increasing) key can be handy to have around. I was reminded of this during a recent situation where we (app developers & ops) needed to be highly confident that a replica was consistent before performing a failover. (None of us had access to the back end to see what the DB thought the replication lag was.)
I played a bit of devil’s advocate interviewing Monica as she has a lot of great opinions and the information to back up her point of view. It was very enjoyable seeing the current state of the web through the eyes of someone talented who just joined the party. It is far too easy for those who have been around for a long time to get stuck in a rut of trying not to break up with the past or considering everything broken as we’ve seen too much damage over the years. Not so Monica. She is very much of the opinion that we can trust developers to do the right thing and that by giving them tools to analyse their work the web of tomorrow will be great.
I’m happy that there are people like her in our market. It is good to pass the torch to those with a lot of dedication rather than those who are happy to use whatever works.
Hello, SUMO Nation!
We had a bit of a break, but we’re back! First, there was the meeting in Toronto with the Lithium team about the migration (which is coming along nicely), and then I took a short holiday. I missed you all, it’s great to be back, time to see what’s up in the world of SUMO!

Welcome, new contributors!
If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week
- All the forum supporters who tirelessly helped users out for the last week.
- All the writers of all languages who worked tirelessly on the KB for the last week.
We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way, you can nominate them for the Buddy of the Month!

SUMO Community meetings
- LATEST ONE: 19th of October – you can read the notes here and see the video at AirMozilla.
- NEXT ONE: happening on the 26th of October!
- If you want to add a discussion topic to the upcoming meeting agenda:
- Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
- Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
- If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.
- The Firefox Release Report for version 49 (including your input) has been published recently! Thanks to everyone who contributed to the document (and to the whole release). You make things happen :-)
- Check the notes from the last meeting in this document. (and don’t forget about our meeting recordings).
- You can also follow the migration through our public Trello board.
- Highlights from the last few days include a walkthrough of the basic style and layout of the site (watch the video for more details) and locking down a list of locales for the first wave of migration (more details in the L10n section).
- There was a test migration performed by the Lithium team… More details as we get them!
- Giorgos (our glorious technical admin for the duration of the migration) has also proposed creating a snapshot of Kitsune’s state as an archive version of the site. More details to follow as we get them, but now you can be sure that none of the previously invested work is going to disappear from the web.
- Take a look at the first iteration of the upcoming post-migration community page – you can find the background and provide feedback for this here. For the front page, take a look here (background and feedback are here) – huge thanks to Joni for working on these two!
- Don’t forget about the main migration thread, with the list of areas that can benefit from your input:
- If you are interested in test-driving the new platform now, please contact Madalina.
- IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
- QUESTIONS? CONCERNS? Use the migration thread to put questions/comments about it for everyone to share and discuss.
- Sierra from the Social team joined us recently for a meeting – you can reach out to her anytime!
- Want to join us? Please email Rachel and/or Madalina to get started supporting Mozilla’s product users on Facebook and Twitter. We need your help! Use the step-by-step guide here. Take a look at some useful videos:
- Got questions about the changes coming up for the toolkit used in Social? Email us!
- You probably haven’t missed it, but just in case… A Flash Player update!
- Don’t forget that the support forum for forum supporters is there for you if you need more help helping others ;-)
- We are 3 weeks before the next release / 1 week after the current release. What does that mean? (Reminder: we are following the process/schedule outlined here.)
- Joni will finalize next release content by the end of this week; no work for localizers for the next release yet
- All existing content is open for editing and localization as usual; please focus on localizing the most recent / popular content
- Migration: please check this spreadsheet to see which locales are going to be migrated in the first wave
- Locale packages that will be migrated are marked as “match” and “needed” in the spreadsheet
- Other locales will be stored as an archive at sumo-archive.mozilla.org – and will be added whenever there are contributors ready to keep working on them
- We are also waiting for confirmation about the mechanics of l10n; we may be launching the first version without an l10n system built in, but all the localized content and UI will be there in all the locales listed in the spreadsheet above
- Remember the MozPizza L10n Hackathon in Brazil? Take a look here!
- for Desktop
- for iOS
- No news, keep biting the apple ;-)
- No news, keep biting the apple ;-)
…Whew, that’s it for now, then! I hope you could catch up with everything… I’m still digging through my post-holiday inbox ;-) Take care, stay safe, and keep rocking the helpful web! WE <3 YOU ALL!
Intel processors since at least the Pentium use a relatively simple branch target buffer (BTB) to speed up these computations when finding the target of a branch instruction. The buffer is essentially a dictionary mapping the virtual addresses of recent branch instructions to their predicted targets: if the branch is taken, the chip has the actual new address right away, and time is saved. To save space and complexity, most processors that implement a BTB only index it by part of the address (or by a hash of the address), which reduces the overhead of maintaining the BTB but also means some addresses will map to the same index into the BTB and cause a collision. If the addresses collide, the processor will recover, but it will take more cycles to do so. This is the key to the side-channel attack.
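The collision mechanism can be shown with a toy sketch. The real BTB index/hash function is not documented; the 30-bit mask here mirrors the Haswell behaviour discussed in this article and is only an example:

```python
# Toy illustration of partial-address BTB indexing. The real index/hash
# function is undocumented; the 30-bit mask is an example matching the
# Haswell discussion in this article.
BTB_BITS = 30
BTB_MASK = (1 << BTB_BITS) - 1

def btb_index(virtual_address: int) -> int:
    return virtual_address & BTB_MASK  # keep only the low 30 bits

kernel_branch = 0xFFFF_FFFF_8100_4A20  # a branch somewhere in kernel space
spy_branch    = 0x0000_0000_0100_4A20  # user-space branch, same low 30 bits

print(hex(btb_index(kernel_branch)))                      # 0x1004a20
print(btb_index(kernel_branch) == btb_index(spy_branch))  # True: they collide
```

Two branches at very different virtual addresses land in the same BTB slot because only the low bits participate in the index, which is exactly what a spy process exploits to detect where a kernel branch lives.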
(For the record, the G3 and the G4 use a BTIC instead, or a branch target instruction cache, where the table actually keeps two of the target instructions so it can be executing them while the rest of the branch target loads. The G4/7450 ("G4e") extends the BTIC to four instructions. This scheme is highly beneficial because these cached instructions essentially extend the processor's general purpose caches with needed instructions that are less likely to be evicted, but is more complex to manage. It is probably for this reason the BTIC was dropped in the G5 since the idea doesn't work well with the G5's instruction dispatch groups; the G5 uses a three-level hybrid predictor which is unlike either of these schemes. Most PowerPC implementations also have a return address stack for optimizing the blr instruction. With all of these unusual features Power ISA processors may be vulnerable to a similar timing attack but certainly not in the same way and probably not as predictably, especially on the G5 and later designs.)
To get around ASLR, an attacker needs to find out where the code block of interest actually got moved to in memory. Certain attributes make kernel ASLR (KASLR) an easier nut to crack. For performance reasons, usually only part of the kernel address is randomized; in open-source operating systems this randomization scheme is often known; and the kernel is always loaded fully into physical memory and doesn't get swapped out. While the location it is loaded to is also randomized, the kernel is mapped into the address space of all processes, so if you can find its address in any process you've also found it in every process. Haswell makes this even easier because all of the bits the Linux kernel randomizes are covered by the low 30 bits of the virtual address Haswell uses in the BTB index, which covers the entire kernel address range and means any kernel branch address can be determined exactly. The attacker finds branch instructions in the kernel code that service a particular system call (for example, by disassembling it), computes all the possible locations those branches could be at (this is feasible due to the smaller search space), creates a "spy" function with a branch instruction positioned to force a BTB collision by mapping to the same BTB index, executes the system call, and then executes the spy function. If the spy process (which times itself) determines its branch took longer than an average branch, it logs a hit; the delta between ordinary execution and a BTB collision is unambiguously high (see Figure 7 in the paper). Once you have the address of that branch, you can deduce the address of the entire kernel code block (because it's generally in the same page of memory due to the typical granularity of the randomization scheme) and try to get at it or abuse it. The entire process can take just milliseconds on a current CPU.
The kernel is often specifically hardened against such attacks, however, and there are more tempting targets, though they need more work. If you want to attack a user process (particularly one running as root, since that will have privileges you can subvert), you have to get your "spy" on the same virtual core as the victim process, because otherwise they won't share a BTB -- in the case of the kernel, the system call always executes on the same virtual core via a context switch, but that's not the case here. This requires manipulating the OS's process scheduler or running lots of spy processes, which slows the attack but is still feasible. Also, since you won't have a kernel system call to execute, you have to get the victim to perform a particular task containing a branch instruction, and that task needs to be repeatable. Once this is done, however, the basic notion is the same. Even though only a limited number of ASLR bits can be recovered this way (remember that in Haswell's case, bits 30 and above are not used in the BTB, and full Linux ASLR uses bits 12 to 40, unlike the kernel), you can dramatically narrow the search space to the point where brute-force guessing may be possible. The whole process is certainly much more streamlined than earlier ASLR attacks, which relied on fragile things like cache timing.
As it happens, software mitigations can blunt or possibly even completely eradicate this exploit. Brute-force guessing addresses in the kernel usually leads to a crash, so anything that forces the attacker to guess the address of a victim routine in the kernel will likely cause the exploit to fail catastrophically. Get a couple of those random address bits outside the 30 bits Haswell uses in the BTB index and bingo, a relatively simple fix. One could also make ASLR more granular, randomizing at the function, basic block or even single instruction level rather than merely randomizing the starting address of segments within the address space, though this is much more complicated. However, hardware is needed to close the gap completely. A proper hardware solution would be to use most or all of the virtual address in the BTB to reduce the possibility of a collision, and/or to add a random salt, varying from process to process, to whatever indexing or hashing function is used for BTB entries so that a collision becomes less predictable. Either needs a change from Intel.
This little fable should serve to remind us that monocultures are bad. This exploit in question is viable and potentially ugly but can be mitigated. That's not the point: the point is that the attack, particularly upon the kernel, is made more feasible by particular details of how Haswell chips handle branching. When everything gets funneled through the same design and engineering optics and ends up with the same implementation, if someone comes up with a simple, weapons-grade exploit for a flaw in that implementation that software can't mask, we're all hosed. This is another reason why we need an auditable, powerful alternative to x86/x86_64 on the desktop. And there's only one system in that class right now.
Okay, okay, I'll stop banging you over the head with this stuff. I've got a couple more bugs under investigation that will be fixed in 45.5.0, and if you're having the issue where TenFourFox is not remembering your search engine of choice, please post your country and operating system here.
Weekly project updates from the Mozilla Connected Devices team.
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Please join us in congratulating Mijanur Rahman Rayhan, Rep of the Month for September 2016!
Mijanur is a Mozilla Rep and Tech Speaker from Sylhet, Bangladesh. With his diverse knowledge he organized hackathons around Connected Devices and held a Web Compatibility event to find differences in different browsers.
Mijanur has proved himself a very active Mozillian through his many activities and his work with different communities. With his patience and consistency in pursuing his goals, he is always ready and prepared. He showed his commitment to the Reps program and his proactive spirit in the last elections by running as a nominee for the Cohort position on the Reps Council.
Be sure to follow his activities as he continues the Activate series with a Rust workshop, Dive Into Rust events, Firefox Test Pilot MozCoffees, a Web Compatibility Sprint, and a privacy and security seminar with the Bangladesh Police!
One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
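The pairing flow sketched above might look roughly like this. All names and values here are invented for illustration; a real hub would establish a TLS connection pinned to the device's public key rather than this toy comparison:

```python
# Toy sketch of the zero-configuration pairing flow described above.
# All names and values are invented; a real hub would establish a TLS
# connection pinned to the device's public key instead of this check.
import hashlib
import hmac
import os

# Contents of the RFID tag the user swipes over the hub:
rfid_tag = {"pubkey_fingerprint": "ab12cd34", "password": "s3cret-from-tag"}

def hub_trusts_device(presented_fingerprint: str) -> bool:
    # The hub only trusts the key it read from the tag (key pinning).
    return hmac.compare_digest(presented_fingerprint, rfid_tag["pubkey_fingerprint"])

def hub_authenticates(challenge: bytes) -> str:
    # The hub proves to the device that it knows the tag's password,
    # without sending the password itself over the wire.
    return hmac.new(rfid_tag["password"].encode(), challenge, hashlib.sha256).hexdigest()

print(hub_trusts_device("ab12cd34"))           # True: matches the tag
print(hub_trusts_device("attacker-key"))       # False: not the pinned key
print(len(hub_authenticates(os.urandom(16))))  # 64 (hex SHA-256 HMAC)
```

The point of the design is that both secrets travel over the short-range RFID swipe, so neither a default password nor any user-visible configuration step is needed.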
You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!
Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:
And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:
And how do you define what label the tablet displays? Easy:
Seriously, can any reader give me one single advantage this system has over a paper label?
Interns, and anybody who decides to start using the project (it is already functional for command-line users), need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader, and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.
If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.
Choice of smart card
For standard PGP use, the OpenPGP card provides a good choice.
For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; CardContact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader
The technical factors to consider are most easily explained with a table:

| | On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
|---|---|---|---|
| Software | Free/open | Mostly free/open, proprietary firmware in reader | Mostly free/open, proprietary firmware in reader |
| Key extraction | Possible | Not generally possible | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Other factors | No hardware | Small, USB key form-factor | Largest form factor |
Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.
Choice of computer to run the clean room environment
There are a wide array of devices to choose from. Here are some principles that come to mind:
- Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
- Even better if there is no wired networking either
- Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
- Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
- No hard disks required
- Having built-in SD card readers or the ability to add them easily
The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.
It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.
Other devices
One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.
Can you help with ideas or donations?
If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.
We’ve spent the past two weeks asking people around the world to think about our four refined design directions for the Mozilla brand identity. The results are in and the data may surprise you.
If you’re just joining this process, you can get oriented here and here. Our objective is to refresh our Mozilla logo and related visual assets & design toolkit that support our mission and make it easier for people who don’t know us to get to know us.
A reminder of the factors we’re taking into account in this phase. Data is our friend, but it is only one of several aspects to consider. In addition to the three quantitative surveys—of Mozillians, developers, and our target consumer audience—qualitative and strategic factors play an equal role. These include comments on this blog, constructive conversations with Mozillians, our 5-year strategic plan for Mozilla, and principles of good brand design.
Here is what we showed, along with a motion study, for each direction:
We asked survey respondents to rate these design directions against seven brand attributes. Five of them—Innovative, Activist, Trustworthy, Inclusive/Welcoming, Opinionated—are qualities we’d like Mozilla to be known for in the future. The other two—Unique, Appealing—are qualities required for any new brand identity to be successful.
Mozillians and developers meld minds.
Members of our Mozilla community and the developers surveyed through MDN (the Mozilla Developer Network) overwhelmingly ranked Protocol 2.0 as the best match to our brand attributes. Among the more than 700 developers and 450 Mozillians surveyed, Protocol scored highest across 6 of 7 measures. People with a solid understanding of Mozilla feel that a design embedded with the language of the internet reinforces our history and legacy as an Internet pioneer. The link’s role in connecting people to online know-how, opportunity and knowledge is worth preserving and fighting for.
But consumers think differently.
We surveyed people making up our target audience, 400 each in the U.S., U.K., Germany, France, India, Brazil, and Mexico. They are 18- to 34-year-old active citizens who make brand choices based on values, are more tech-savvy than average, and do first-hand research before making decisions (among other factors).
We asked them first to rank order the brand attributes most important for a non-profit organization “focused on empowering people and building technology products to keep the internet healthy, open and accessible for everyone.” They selected Trustworthy and Welcoming as their top attributes. And then we also asked them to evaluate each of the four brand identity design systems against each of the seven brand attributes. For this audience, the design system that best fit these attributes was Burst.
Why would this consumer audience choose Burst? Since this wasn’t a qualitative survey, we don’t know for sure, but we surmise that the colorful design, rounded forms, and suggestion of interconnectedness felt appropriate for an unfamiliar nonprofit. It looks like a logo.
Also of note, Burst’s strategic narrative focused on what an open, healthy Internet feels and acts like, while the strategic narratives for the other design systems led with Mozilla’s role in the world. This is a signal that our target consumer audience, while they might not be familiar with Mozilla, may share our vision of what the Internet could and should be.
Why didn’t they rank Protocol more highly across the chosen attributes? We can make an educated guess that these consumers found it one dimensional by comparison, and they may have missed the meaning of the :// embedded in the wordmark.
Although Dino 2.0 and Flame had their fans, neither of these design directions sufficiently communicated our desired brand attributes, as shown by the quantitative survey results as well as through conversations with Mozillians and others in the design community. By exploring them, we learned a lot about how to describe and show certain facets of what Mozilla offers to the world. But we will not be pursuing either direction.
Where we go from here.
Both Protocol and Burst have merits and challenges. Protocol is distinctly Mozilla, clearly about the Internet, and it reinforces our mission that the web stay healthy, accessible, and open. But as consumer testing confirmed, it lacks warmth, humor, and humanity. From a design perspective, the visual system surrounding it is too limited.
By comparison, Burst feels fresh, modern, and colorful, and it has great potential in its 3D digital expression. As a result, it represents the Internet as a place of endless, exciting connections and possibilities, an idea reinforced by the strategic narrative. Remove the word “Mozilla,” though, and are there enough cues to suggest that it belongs to us?
Our path forward is to take the strongest aspects of Burst—its greater warmth and dimensionality, its modern feel—and apply them to Protocol. Not to Frankenstein the two together, but to design a new, final direction that builds from both. We believe we can make Protocol more relatable to a non-technical audience, and build out the visual language surrounding it to make it both harder working and more multidimensional.
Long live the link.
What do we say to Protocol’s critics who have voiced concern that Mozilla is hitching itself to an Internet language in decline? We’re doubling down on our belief in the original intent of the Internet—that people should have the ability to explore, discover and connect in an unfiltered, unfettered, unbiased environment. Our mission is dedicated to keeping that possibility alive and well.
For those who are familiar with the Protocol prompt, using the language of the Internet in our brand identity signals our resolve. For the unfamiliar, Protocol will offer an opportunity to start a conversation about who we are and what we believe. The language of the Internet will continue to be as important to building its future as it was in establishing its origin.
We’ll have initial concepts for a new, dare-we-say final design within a few weeks. To move forward, first we’ll be taking a step back. We’ll explore different graphic styles, fonts, colors, motion, and surrounding elements, making use of the design network established by our agency partner johnson banks. In the meantime, tell us what you think.
The Rust team is happy to announce the latest version of Rust, 1.12.1. Rust is a systems programming language with a focus on reliability, performance, and concurrency.
Wait… one-point-twelve-point… one?
In the release announcement for 1.12 a few weeks ago, we said:
The release of 1.12 might be one of the most significant Rust releases since 1.0.
It was true. One of the biggest changes was turning on a large compiler refactoring, MIR, which re-architects the internals of the compiler. The overall process went like this:
- Initial MIR support landed in nightlies back in Rust 1.6.
- While work was being done, a flag, --enable-orbit, was added so that people working on the compiler could try it out.
- Back in October, the compiler began always attempting to build MIR, even though it was not yet being used.
- A flag was added, -Z orbit, to allow users on nightly to try and use MIR rather than the traditional compilation step (‘trans’).
- After substantial testing over months and months, for Rust 1.12, we enabled MIR by default.
- In Rust 1.13, MIR will be the only option.
A change of this magnitude is huge, and important. So it’s also important to do it right, and do it carefully. This is why this process took so long; we regularly tested the compiler against every crate on crates.io, we asked people to try out -Z orbit on their private code, and after six weeks of beta, no significant problems appeared. So we made the decision to keep it on by default in 1.12.
But large changes still have an element of risk, even though we tried to reduce that risk as much as possible. And so, after release, 1.12 saw a fair number of regressions that we hadn’t detected in our testing. Not all of them are directly MIR related, but when you change the compiler internals so much, it’s bound to ripple outward through everything.
Why make a point release?
Now, given that we have a six-week release cycle, and we’re halfway towards Rust 1.13, you may wonder why we’re choosing to cut a patch version of Rust 1.12 rather than telling users to just wait for the next release. We have previously said something like “point releases should only happen in extreme situations, such as a security vulnerability in the standard library.”
The Rust team cares deeply about the stability of Rust, and about our users’ experience with it. We could have told you all to wait, but we want you to know how seriously we take this stuff. We think it’s worth it to demonstrate our commitment to you by putting in the work of making a point release in this situation.
Furthermore, given that this is not security related, it’s a good time to practice actually cutting a point release. We’ve never done it before, and the release process is semi-automated but still not completely so. Having a point release in the world will also shake out any bugs in dealing with point releases in other tooling as well, like rustup. Making sure that this all goes smoothly and getting some practice going through the motions will be useful if we ever need to cut some sort of emergency point release due to a security advisory or anything else.
This is the first Rust point release since Rust 0.3.1, all the way back in 2012, and marks 72 weeks since Rust 1.0, when we established our six-week release cadence along with a commitment to aggressive stability guarantees. While we’re disappointed that 1.12 had these regressions, we’re really proud of Rust’s stability and will continue expanding our efforts to ensure that it’s a platform you can rely on. We want Rust to be the most reliable programming platform in the world.
A note about testing on beta
One thing that you, as a user of Rust, can do to help us fix these issues sooner: test your code against the beta channel! Every beta release is a release candidate for the next stable release, so for the cost of an extra build in CI, you can help us know if there’s going to be some sort of problem before it hits a stable release! It’s really easy. For example, on Travis, you can use this as your .travis.yml:

```yaml
language: rust
rust:
  - stable
  - beta
```
And you’ll test against both. Furthermore, if you’d like to make it so that any beta failure doesn’t fail your own build, do this:

```yaml
matrix:
  allow_failures:
    - rust: beta
```
The beta build may go red, but your build will stay green.
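Putting the two snippets above together, a complete minimal .travis.yml that builds on both channels but only gates on stable might look like this (a sketch; adapt it to your project’s own configuration):

```yaml
# Build the crate on both the stable and beta toolchains.
language: rust
rust:
  - stable
  - beta
# Report beta failures without turning the overall build red,
# so beta breakage is visible early but never blocks a merge.
matrix:
  allow_failures:
    - rust: beta
```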
There were nine issues fixed in 1.12.1, and all of those fixes have been backported to 1.13 beta as well.
- ICE: ‘rustc’ panicked at ‘assertion failed: concrete_substs.is_normalized_for_trans()’ #36381
- Confusion with double negation and booleans
- rustc 1.12.0 fails with SIGSEGV in release mode (syn crate 0.8.0)
- Rustc 1.12.0 Windows build of ethcore crate fails with LLVM error
- 1.12.0: High memory usage when linking in release mode with debug info
- Corrupted memory after updated to 1.12
- “let NullaryConstructor = something;” causes internal compiler error: “tried to overwrite interned AdtDef”
- Fix ICE: inject bitcast if types mismatch for invokes/calls/stores
- debuginfo: Handle spread_arg case in MIR-trans in a more stable way.
In addition, there were four more regressions that we decided not to include in 1.12.1 for various reasons, but we’ll be working on fixing those as soon as possible as well.
- ICE, possibly related to associated types of associated types?
- Compilation of a crate using a large static map fails on latest i686-pc-windows-gnu Beta
- Regression: “no method found” error when calling same method twice, with HRTB impl
- ICE: fictitious type sizing_type_of
You can see the full diff from 1.12.0 to 1.12.1 here.