Mozilla Nederland - De Nederlandse Mozilla-gemeenschap (the Dutch Mozilla community)

Planet Mozilla - http://planet.mozilla.org/

Air Mozilla: Meeting OpenData France 1

Mon, 12/09/2016 - 14:00

Meeting OpenData France 1.


Niko Matsakis: Thoughts on trusting types and unsafe code

Mon, 12/09/2016 - 11:39

I’ve been thinking about the unsafe code guidelines a lot in the back of my mind. In particular, I’ve been trying to think through what it means to trust types – if you recall from the Tootsie Pop Model (TPM) blog post, one of the key examples that I was wrestling with was the RefCell-Ref example. I want to revisit a variation on that example now, but from a different angle. (This by the way is one of those Niko thinks out loud blog posts, not one of those Niko writes up a proposal blog posts.)

Setup

Let’s start with a little safe function:

fn patsy(v: &usize) -> usize {
    let l = *v;
    collaborator();
    use(l);
}

The question is, should the compiler ever be able to optimize this function as follows:

fn patsy(v: &usize) -> usize {
    collaborator();
    use(*v);
}

By moving the load from v to after the call to collaborator(), we avoid the need for a temporary variable. This might reduce stack size or register pressure. It is also an example of the kind of optimization we are considering doing for MIR (you can think of it as an aggressive form of copy-propagation). In case it’s not clear, I really want the answer to this question to be yes – at least most of the time. More specifically, I am interested in examining when we can do this without doing any interprocedural analysis.

Now, the question of “is this legal?” is not necessarily a yes-or-no question. For example, the Tootsie Pop Model answer was “it depends”. In a safe code context, this transformation was legal. In an unsafe context, it was not.

What could go wrong?

The concern here is that the function collaborator() might invalidate *v in some way. There are two ways that this could potentially happen:

  • unsafe code could mutate *v,
  • unsafe code could invalidate the memory that v refers to.

Here is some unsafe code that does the first thing:

static mut data: usize = 0;

fn instigator() {
    patsy(unsafe { &data });
}

fn collaborator() {
    unsafe { data = 1; }
}

Here is some unsafe code that invalidates *v using an Option (you can also write code that makes it get freed, of course). Here, when we start, data is Some(22), and we take a reference to that 22. But then collaborator() reassigns data to None, and hence the memory that we were referring to is now uninitialized.

static mut data: Option<usize> = Some(22);

fn instigator() {
    patsy(unsafe { data.as_ref().unwrap() })
}

fn collaborator() {
    unsafe { data = None; }
}

So, when we ask whether it is legal to optimize patsy by moving the *v load to after the call to collaborator(), our answer affects whether this unsafe code is legal.

The Tootsie Pop Model

Just for fun, let’s look at how this plays out in the Tootsie Pop model (TPM). As I wrote before, whether this code is legal will ultimately depend on whether patsy is located in an unsafe context. The way I described the model, unsafe contexts are tied to modules, so I’ll stick with that, but there might also be other ways of defining what an unsafe context is.

First let’s imagine that all three functions are in the same module:

mod foo {
    static mut data: Option<usize> = Some(22);
    pub fn instigator() {...}
    fn patsy(v: &usize) -> usize {..}
    fn collaborator() {...}
}

Here, because instigator and collaborator contain unsafe blocks, the module foo is considered to be an unsafe context, and thus patsy is also located within the unsafe context. This means that the unsafe code would be legal and the optimization would not. This is because the TPM does not allow us to trust types within an unsafe context.

However, it’s worth pointing out one other interesting detail. Just because the TPM does not authorize the optimization, that doesn’t mean that it could not be performed. It just means that to perform the optimization would require detailed interprocedural alias analysis. That is, a highly optimizing compiler might analyze instigator, patsy, and collaborator and determine whether or not the writes in collaborator can affect patsy (of course here they can, but in more reasonable code they likely would not). Put another way, the TPM basically tells you here are optimizations you can do without doing anything sophisticated; it doesn’t put an upper limit on what you can do given sufficient extra analysis.

OK, so now here is another recasting where the functions are spread between modules:

mod foo {
    use bar::patsy;
    static mut data: Option<usize> = Some(22);
    pub fn instigator() {...}
    pub fn collaborator() {...}
}

mod bar {
    use foo::collaborator;
    pub fn patsy(v: &usize) -> usize {..}
}

In this case, the module bar does not contain unsafe blocks, and hence it is not an unsafe context. That means that we can optimize patsy. It also means that instigator is illegal:

fn instigator() {
    patsy(unsafe { &data });
}

The problem here is that instigator is calling patsy, which is defined in a safe context (and hence must also be a safe function). That implies that instigator must fulfill all of Rust’s basic permissions for the arguments that patsy expects. In this case, the argument is a &usize, which means that the usize must be accessible and immutable for the entire lifetime of the reference; that lifetime encloses the call to patsy. And yet the data in question can be mutated (by collaborator). So instigator is failing to live up to its obligations.

TPM has interesting implications for the Rust optimizer. Basically, whether or not a given statement can trust the types of its arguments ultimately depends on where it appeared in the original source. This means we have to track some info when inlining unsafe code into safe code (or else ‘taint’ the safe code in some way). This is not unique to TPM, though: Similar capabilities seem to be required for handling e.g. the C99 restrict keyword, and we’ll see that they are also important when trusting types.

What if we fully trusted types everywhere?

Of course, the TPM has the downside that it hinders optimization in the unchecked-get use case. I’ve been pondering various ways to address that. One thing that I find intuitively appealing is the idea of trusting Rust types everywhere. For example, the idea might be that whenever you create a shared reference like &usize, you must ensure that its associated permissions hold. If we took this approach, then we could perform the optimization on patsy, and we could say that instigator is illegal, for the same reasons that it was illegal under TPM when patsy was in a distinct module.

However, trusting types everywhere – even in unsafe code – potentially interacts in a rather nasty way with lifetime inference. Here is another example function to consider, alloc_free:

fn alloc_free() {
    unsafe {
        // allocates and initializes an integer
        let p: *mut i32 = allocate_an_integer();

        // create a safe reference to `*p` and read from it
        let q: &i32 = &*p;
        let r = *q;

        // free `p`
        free(p);

        // use the value we loaded
        use(r); // but could we move the load down to here?
    }
}

What is happening here is that we allocate some memory containing an integer, create a reference that refers to it, read from that reference, and then free the original memory. We then use the value that we read from the reference. The question is: can the compiler copy-propagate that read down to the call to use()?

If this were C code, the answer would pretty clearly be no (I presume, anyway). The compiler would see that free(p) may invalidate q and hence treat it as a kind of barrier.

But if we were to go all in on trusting Rust types, the answer would be (at least currently) yes. Remember that the purpose of this model is to let us do optimizations without doing fancy analysis. Here what happens is that we create a reference q whose lifetime will stretch from the point of creation until the end of its scope:

fn alloc_free() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();
        let q: &i32 = &*p; // --+ lifetime of the reference
        let r = *q;        //   | as defined today
                           //   |
        free(p);           //   |
                           //   |
        use(r); // <------------+
    }
}

If this seems like a bad idea, it is. The idea that writing unsafe Rust code might be even more subtle than writing C seems like a non-starter to me. =)

Now, you might be tempted to think that this problem is an artifact of how Rust lifetimes are currently tied to scoping. After all, q is not used after the let r = *q statement, and if we adopted the non-lexical lifetimes approach, that would mean the lifetime would end there. But really this problem could still occur in an NLL-based system, though you have to work a bit harder:

fn alloc_free2() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();
        let q: &i32 = &*p;    // --------+
        let r = *q;           //         |
        if condition1() {     //         |
            free(p);          //         |
        }                     //         |
        if condition2() {     //         |
            use(r);           //         |
            if condition3() { //         |
                use_again(*q); // <---+
            }
        }
    }
}

Here the problem is that, from the compiler’s point of view, the reference q is live at the point where we call free. This is because it looks like we might need it to call use_again. But in fact the programmer knows that condition1() and condition3() are mutually exclusive, and so she may reason that the lifetime of q ends earlier when condition1() holds than when it doesn’t.

So I think it seems clear from these examples that we can’t really fully trust types everywhere.

Trust types, not lifetimes?

I think that whatever guidelines we wind up with, we will not be able to fully trust lifetimes, at least not around unsafe code. We have to assume that memory may be invalidated early. Put another way, the validity of some unsafe code ought not to be determined by the results of lifetime inference, since mere mortals (including its authors) cannot always predict what it will do.

But there is a more subtle reason that we should not trust lifetimes. The Rust type system is a conservative analysis that guarantees safety – but there are many notions of a reference’s lifetime that go beyond its capabilities. We saw this in the previous section: today we have lexical lifetimes. Tomorrow we may have non-lexical lifetimes. But humans can go beyond that and think about conditional control-flow and other factors that the compiler is not aware of. We should not expect humans to limit themselves to what the Rust type system can express when writing unsafe code!

The idea here is that lifetimes are sometimes significant to the model – in particular, in safe code, the compiler’s lifetimes can be used to aid optimization. But in unsafe code, we are required to assume that the user gets to pick the lifetimes for each reference, but those choices must still be valid choices that would type check. I think that in practice this would roughly amount to “trust lifetimes in safe contexts, but not in unsafe contexts”.

Impact of ignoring lifetimes altogether

This implies that the compiler will have to use the loads that the user wrote to guide it. For example, you might imagine that the compiler can move a load from x down in the control-flow graph, but only if it can see that x was going to be loaded anyway. So if you consider this variant of alloc_free:

fn alloc_free3() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();
        let q: &i32 = &*p;
        let r = *q; // load but do not use
        free(p);
        use(*q); // not `use(r)` but `use(*q)` instead
    }
}

Here we can choose to either eliminate the first load (let r = *q) or else replace use(*q) with use(r). Either is ok: we have evidence that the user believes the lifetime of q to enclose free. (The fact that it doesn’t is their fault.)
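For concreteness, here is a sketch (mine, not from the original post, and written in the same informal pseudocode style as the examples above) of the second option, where use(*q) is replaced with use(r) so only a single load of *q remains:

fn alloc_free3_optimized() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();
        let q: &i32 = &*p;
        let r = *q;  // the one remaining load of `*q`
        free(p);
        use(r);      // `use(*q)` rewritten to reuse the earlier load
    }
}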

But now let’s return to our patsy() function. Can we still optimize that?

fn patsy(v: &usize) -> usize {
    let l = *v;
    collaborator();
    use(l);
}

If we are just ignoring the lifetime of v, then we can’t – at least not on the basis of the type of v. For all we know, the user considers the lifetime of v to end right after let l = *v. That’s not so unreasonable as it might sound; after all, the code looks to have been deliberately written to load *v early. And after all, we are trying to enable more advanced notions of lifetimes than those that the Rust type system supports today.

It’s interesting that if we inlined patsy into its caller, we might learn new information about its arguments that lets us optimize more aggressively. For example, imagine a (benevolent, this time) caller like this:

fn kindly_fn() {
    let x = &1;
    patsy(x);
    use(*x);
}

If we inlined patsy into kindly_fn, we get this:

fn kindly_fn() {
    let x = &1;
    {
        let l = *x;
        collaborator();
        use(l);
    }
    use(*x);
}

Here we can see that *x must be valid after collaborator(), and so we can optimize the function as follows (we are moving the load of *x down, and then applying CSE to eliminate the double load):

fn kindly_fn() {
    let x = &1;
    {
        collaborator();
        let l = *x;
        use(l);
    }
    use(l);
}

There is a certain appeal to “trust types, not lifetimes”, but ultimately I think it is not living up to Rust’s potential: as you can see above, we will still be fairly reliant on inlining to recover needed context for optimizing. Given that the vast majority of Rust is safe code, where these sorts of operations are harmless, this seems like a shame.

Trust lifetimes only in safe code?

An alternative to the TPM is the Asserting-Conflicting Access model (ACA), which was proposed by arielb1 and ubsan. I don’t claim to be precisely representing their model here: I’m trying to (somewhat separately) work through those rules and apply them formally. So what I write here is more inspired by those rules than reflective of them.

That caveat aside, the idea in their model is that lifetimes are significant to the model, but you can’t trust the compiler’s inference in unsafe code. There, we have to assume that the unsafe code author is free to pick any valid lifetime, so long as it would still type check (not borrow check – i.e., it only has to ensure that no data outlives its owning scope). Note the similarities to the Tootsie Pop Model here – we still need to define what an unsafe context is, and when we enter such a context, the compiler will be less aggressive in optimizing (though more aggressive than in the TPM). (This has implications for the unchecked-get example.)

Nonetheless, I have concerns about this formulation because it seems to assume that the logic for unsafe code can be expressed in terms of Rust’s lifetimes – but as I wrote above Rust’s lifetimes are really a conservative approximation. As we improve our type system, they can change and become more precise – and users might have in mind more precise and flow-dependent lifetimes still. In particular, it seems like the ACA would disallow my alloc_free2 example:

fn alloc_free2() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();
        let q: &i32 = &*p;
        let r = *q;             // (1)
        if condition1() {
            free(p);            // (2)
        }
        if condition2() {
            use(r);             // (3)
            if condition3() {
                use_again(*q);  // (4)
            }
        }
    }
}

Intuitively, the problem is that the lifetime of q must enclose the points (1), (2), (3), and (4) that are commented above. But the user knows that condition1() and condition3() are mutually exclusive, so in their mind, the lifetime can end when we reach point (2), since they know that this means that point (4) is unreachable.

In terms of their model, the conflicting access would be (2) and the asserting access would be (1). But I might be misunderstanding how this whole thing works.

Trust lifetimes at safe fn boundaries

Nonetheless, perhaps we can do something similar to the ACA model and say that: we can trust lifetimes in safe code but totally disregard them in unsafe code (however we define that). If we adopted these definitions, would that allow us to optimize patsy()?

fn patsy<'a>(v: &'a usize) -> usize {
    let l = *v;
    collaborator();
    use(l);
}

Presuming patsy() is considered to be safe code, then the answer is yes. This in turn implies that any unsafe callers are obligated to consider patsy() as a black box in terms of what it might do with 'a.

This flows quite naturally from a permissions perspective — giving a reference to a safe fn implies giving it permission to use that reference any time during its execution. I have been (separately) trying to elaborate this notion, but it’ll have to wait for a separate post.

Conclusion

One takeaway from this meandering walk is that, if we want to make it easy to optimize Rust code aggressively, there is something special about the fn boundary. In retrospect, this is really not that surprising: we are trying to enable intraprocedural optimization, and hence the fn boundary is the boundary beyond which we cannot analyze – within the fn body we can see more.

Put another way, if we want to optimize patsy() without doing any interprocedural analysis, it seems clear that we need the caller to guarantee that v will be valid for the entire call to patsy:

fn patsy(v: &usize) -> usize {
    let l = *v;
    collaborator();
    use(l);
}

I think this is an interesting conclusion, even if I’m not quite sure where it leads yet.

Another takeaway is that we have to be very careful trusting lifetimes around unsafe code. Lifetimes of references are a tool designed for use by the borrow checker: we should not use them to limit the clever things that unsafe code authors can do.

Note on comments

Comments are closed on this post. Please post any questions or comments on the internals thread I’m about to start. =)

Also, I’m collecting unsafe-related posts into the unsafe category.


Karl Dubost: Dyslexia, Typo and Web Compatibility

Mon, 12/09/2016 - 09:36

Because we type code, we make mistakes. Today, by chance, my fingers typed viewpoet instead of viewport. It made me smile right away, and I had to find out if I was the only one who had made that typo in actual code. So I started to search for broken code.

  • viewpoet

    Example: <meta name="viewpoet" content="width=devide-width">

  • transitoin

    Example: this.$element.on("transitoinEnd webkitTransitionEnd", function() {

  • gradeint

    Example: background: linear-gradeint($direction, $color-stops);

  • devixe

    Example: <meta name="viewport" content="width=devixe-width, initial-scale=1.0">

A slip of the mind, dyslexia, keys close to each other: there are many reasons to make beautiful typos. As Descartes would say:

I do typos therefore I am.

Otsukare!


Chris McDonald: Started Writing a Game

Mon, 12/09/2016 - 05:23

I started writing a game! Here’s the origin:

commit 54dbc81f294b612f7133c8a3a0b68d164fd0322c
Author: Wraithan (Chris McDonald) <xwraithanx@gmail.com>
Date:   Mon Sep 5 17:11:14 2016

    initial commit

Ok, now that we’re past that, let’s talk about how I got there. I’ve been a gamer all my life; a big portion of the reason I went into software development was to work on video games. I started my own games a couple of times around when I was 18. Then every couple of years I dabble in rendering a Final Fantasy Tactics-style battle map.

I’ve also played in a bunch of AI competitions over the years. I love writing programs to play games. Also, I recently started working at Sparkypants, a game studio, on Dropzone, a modern take on the RTS. I develop a bunch of the supporting services in the game, our matchmaking and such.

Another desire that has led me here is the desire to complete something. I have many started projects but rarely get them to release quality. I think it is time I make my own game, and the games I’ve enjoyed the most have been management and simulation games. I’ll be drawing from a lot of games in that genre: Prison Architect, Rimworld, FortressCraft Evolved, Factorio, and many more. I want to try implementing lots of the features as experiments to decide what I want to dig further into.

The game isn’t open source currently; maybe eventually. I’m building the game mostly from scratch using Rust and the glium abstraction over OpenGL/glutin. I’m going to try to talk about portions of the development on my blog. This may be simple things like new stuff I learn about Rust, celebrating achievements as I feel proud of myself, and whatever else comes up as I go. If you want to discuss stuff, I prefer Twitter over comments, but comments are there if you would rather.

So far I’ve built a very simple deterministic map generator. I had to make sure the map got its own seed so it could decide when to pull numbers out and ensure consistency. It currently requires the map to be generated in a single pass. I plan to change to more of a procedural generator so I can expand the map as needed during gameplay.
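As an illustration of that idea (this is a sketch of mine, not the game’s actual code; the xorshift generator and the '#'/'.' tile scheme are invented for the example), giving the map its own small deterministic RNG means the same seed always reproduces the same map:

// A tiny xorshift64 generator so the map owns its own deterministic RNG.
struct MapRng {
    state: u64,
}

impl MapRng {
    fn new(seed: u64) -> Self {
        MapRng { state: seed.max(1) } // xorshift state must be non-zero
    }

    fn next(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

// Generate a width x height grid of tiles in a single pass.
// The same seed always produces the same map.
fn generate_map(seed: u64, width: usize, height: usize) -> Vec<Vec<char>> {
    let mut rng = MapRng::new(seed);
    let mut map = Vec::with_capacity(height);
    for _ in 0..height {
        let mut row = Vec::with_capacity(width);
        for _ in 0..width {
            // Roughly a quarter of the tiles become walls ('#'), the rest floor ('.').
            row.push(if rng.next() % 4 == 0 { '#' } else { '.' });
        }
        map.push(row);
    }
    map
}

fn main() {
    let map = generate_map(42, 16, 8);
    for row in &map {
        println!("{}", row.iter().collect::<String>());
    }
}

Because the generator never touches a global RNG, moving to the procedural, expand-as-needed approach mentioned above would mostly be a matter of deriving per-chunk seeds from the map seed.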

I started to build a way to visualize the map beyond an ASCII representation. I knew I wanted to be on OpenGL so I could support all the operating systems I work/play in. In Rust, the best OpenGL abstraction I could find was glium, which is pretty awesome. It includes glutin, so I get window creation and interaction as part of it, without having to stitch it together myself.

This post is getting a bit long. I think I’ll wrap it up here. In the near future (maybe later tonight?) I’ll have another post with my thoughts on rendering tiles, coordinate systems, panning and zooming, etc. I’m just a beginner at all of this so I’m very open to ideas and criticisms, but please also be respectful.



The Servo Blog: This Week In Servo 77

Mon, 12/09/2016 - 02:30

In the last week, we landed 78 PRs in the Servo organization’s repositories.

We are excited to announce that Josh Matthews, a member of the Mozilla Firefox DOM team, is now a part of the Servo core team! He has long played a key role in the project, and we look forward to his future work.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions
  • fitzgen improved the SpiderMonkey binding generation process
  • mrobinson added transformation support for rounded rectangles
  • shinglyu implemented separate layout traces for each reflow, to aid with layout debugging
  • mortimer added several WebGL commands
  • vlad updated the SpiderMonkey Windows support, in preparation for another upgrade
  • ms2ger implemented error reporting for workers
  • nox updated our Rust version
  • glennw fixed some reftests where the reference was empty
  • ms2ger removed usage of mem::transmute_copy
  • sam added the dblclick event
  • malisa implemented the DOM Response API
  • kichjang added support to properly generate typedef identities in WebIDL unions
  • nical added tests for Matrix2D and Matrix4D
  • imperio returned video metadata to the build, now using a pure Rust stack!
  • uk992 created an amazing ./mach bootstrap command for Windows MSVC support
  • attila added WebBluetooth support for Android
  • ajeffrey added unit tests of IPC reentrancy
  • aneesh fixed the availability of basic admin utilities on all of our buildbot instances
  • notriddle corrected a layout issue for fixed tables using percentages
  • nox implemented WebIDL namespaces
  • mrobinson supported 3d transforms for dotted borders
  • shinglyu fixed a layout issue for collapsed margins
  • samuknet implemented the dblclick DOM event
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

The amazing intern on the Developer Relations team, Sam, made an awesome video demonstrating some of the impact of great restyle speed in Servo.


Chris McDonald: Assigning Blocks in Rust

Mon, 12/09/2016 - 00:42

So, I’m hanging out after an amazing day at RustConf working on a side project. I’m trying to practice a more “just experiment” attitude while playing with some code. This means not breaking things out into functions or other abstractions unless I have to. One thing that made this uncomfortable was when I’d accidentally shadow a variable or have a hard time seeing logical chunks. Usually a chunk of the code could be distilled into a single return value so a function would make sense, but it also makes my code less flexible during this discovery phase.

While reading some code I noticed a language feature I’d never noticed before. I knew about using braces to create scopes. And from that, I’d occasionally declare a variable above a scope, then use the scope to assign to it while keeping the code to create the value separated. Other languages have this concept so I knew it from there. Here is an example:

fn main() {
    let a = 2;
    let val;
    {
        let a = 1;
        let b = 3;
        val = a + b;
    } // local a and local b are dropped here
    println!("a: {}, val: {}", a, val); // a: 2, val: 4
}

But this feature I noticed let me do something to declare my intent much more exactly:

fn main() {
    let a = 2;
    let val = {
        let a = 1;
        let b = 3;
        a + b
    }; // local a and local b are dropped here
    println!("a: {}, val: {}", a, val); // a: 2, val: 4
}

A scope can have a return value, so you just make the last expression return by omitting the semicolon and you can assign it out. Now I have something that I didn’t have to think about the signature of, but that separates some logic and namespace out. Then later, once I’m done with discovery and ready to abstract a little bit to make things like error handling, testing, or profiling easier, I have blocks that can be combined with a couple of related blocks into a function.
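To illustrate that next step (a sketch of mine, not the author’s code; compute_val is a hypothetical name), the body of such a block can later be lifted into a function almost verbatim once the discovery phase is over:

// The block from the previous example, promoted to a function.
fn compute_val() -> i32 {
    let a = 1;
    let b = 3;
    a + b
}

fn main() {
    let a = 2;
    let val = compute_val();
    println!("a: {}, val: {}", a, val); // a: 2, val: 4
}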



Cameron Kaiser: Ars Technica's notes from the OS 9 underground

Sun, 11/09/2016 - 17:07
Richard Moss has published his excellent and very comprehensive look at the state of the Mac OS 9 userbase in Ars Technica. I think it's a detailed and very evenhanded summary of why there are more people than you'd think using a (now long obsolete) operating system that still maintains more utility than you'd believe.

Naturally much of my E-mail interview with him could not be used in the article (I expected that) and I think he's done a fabulous job balancing those various parts of the OS 9 retrocomputing community. Still, there are some things I'd like to see entered into posterity publicly from that interview and with his permission I'm posting that exchange here.

Before doing so, though, just a note to Classilla users. I do have some work done on a 9.3.4 which fixes some JavaScript bugs, has additional stelae for some other site-specific workarounds and (controversially, because this will impact performance) automatically fixes screen update problems with many sites using CSS overflow. (They still don't lay out properly, but they will at least scroll mostly correctly.) I will try to push that out as a means of keeping the fossil fed. TenFourFox remains my highest priority because it's the browser I personally dogfood 90% of the time, but I haven't forgotten my roots.

The interview follows. Please pardon the hand-conversion to HTML; I wrote this in plain text originally, as you would expect no less from me. This was originally written in January 2016, and, for the record, on a 1GHz iMac G4.

***

Q. What's your motivation for working on Classilla (and TenFourFox, but I'm mostly interested in Classilla for this piece)?

A. One of the worst things that dooms otherwise functional hardware to apparent obsolescence is when "they can't get on the Internet." That's complete baloney, of course, since they're just as capable of opening a TCP socket and sending and receiving data now as they were then (things like IPv6 on OS 9 notwithstanding, of course). Resources like Gopherspace (a topic for another day) and older websites still work fine on old Macs, even ones based on the Motorola 680x0 series.

So, realistically, the problem isn't "the Internet" per se; some people just want to use modern websites on old hardware. I really intensely dislike the idea that the ability to run Facebook is the sole determining factor of whether a computer is obsolete to some people, but that's the world we live in now. That said, it's almost uniformly a software issue. I don't see there being any real issues as far as hardware capability, because lots of people dig out their old P3 and P4 systems and run browsers on them for light tasks, and older G4 and G3 systems (and even arguably some 603 and 604s) are more than comparable.

Since there are lots of x86 systems, there are lots of people who want to do this, and some clueful people who can still get it to work (especially since operating system and toolchain support is still easy to come by). This doesn't really exist for architectures out of the mainstream like PowerPC, let alone for a now almost byzantine operating system like Mac OS 9, but I have enough technical knowledge and I'm certifiably insane and by dumb luck I got it to work. I like these computers and I like the classic Mac OS, and I want them to keep working and be "useful." Ergo, Classilla.

TenFourFox is a little more complicated, but the same reason generally applies. It's a bit more pointed there because my Quad G5 really is still my daily driver, so I have a lot invested in keeping it functional. I'll discuss this more in detail at the end.

Q. How many people use Classilla?

A. Hard to say exactly since unlike TenFourFox there is no automatic checkin mechanism. Going from manual checkins and a couple back-of-the-napkin computations from download counts, I estimate probably a few thousand. There's no way to know how many of those people use it exclusively, though, which I suspect is a much smaller minority.

Compare this with TenFourFox, which has much more reliable numbers; the figure, which actually has been slowly growing since there are no other good choices for 10.4 and less and less for 10.5, has been a steady 25,000+ users with about 8,000 checkins occurring on a daily basis. That 30% or so are almost certainly daily drivers.

Q. Has it been much of a challenge to build a modern web browser for OS 9? The problems stem from more than just a lack of memory and processing speed, right? What are there deeper issues that you've had to contend with?

A. Classilla hails as a direct descendant of the old Mozilla Suite (along with SeaMonkey/Iceweasel, it's the only direct descendant still updated in any meaningful sense), so the limitations mostly come from its provenance. I don't think anyone who worked on the Mac OS 9 compatible Mozilla will dispute the build system is an impressive example of barely controlled disaster. It's actually an MacPerl script that sends AppleEvents to CodeWarrior and MPW Toolserver to get things done (primarily the former, but one particularly problematic section of the build requires the latter), and as an example of its fragility, if I switch the KVM while it's trying to build stubs, it hangs up and I usually have to restart the build. There's a lot of hacking to make it behave and I rarely run the builder completely from the beginning unless I'm testing something. The build system is so intimidating few people have been able to replicate it on their computers, which has deterred all but the most motivated (or masochistic) contributors. That was a problem for Mozilla too back in the day, I might add, and they were only too glad to dump OS 9 and move to OS X with Mozilla 1.3.

Second, there's no Makefiles, only CodeWarrior project files (previously it actually generated them on the fly from XML templates, but I put a stop to that since it was just as iffy and no longer necessary). Porting code en masse usually requires much more manual work for that reason, like adding new files to targets by hand and so on, such as when I try to import newer versions of libpng or pieces of code from NSS. This is a big reason why I've never even tried to take entire chunks of code like layout/ and content/ even from later versions of the Suite; trying to get all the source files set up for compilation in CodeWarrior would be a huge mess, and wouldn't buy me much more than what it supports now. With the piecemeal hacks in core, it's already nearly to Mozilla 1.7-level as it is (Suite ended with 1.7.13).

Third is operating system support. Mozilla helpfully swallows up much of the ugly guts in the Netscape Portable Runtime, and NSPR is extremely portable, a testament to its good design. But that doesn't mean there weren't bugs and Mac OS 9 is really bad at anything that requires multithreading or multiprocessing, so some of these bugs (like a notorious race condition in socket handling where the socket state can change while the CPU is busy with something else and miss it) are really hard to fix properly. Especially for Open Transport networking, where complex things are sometimes easy but simple things are always hard, some folks (including Mozilla) adopted abstraction layers like GUSI and then put NSPR on top of the abstraction layer, meaning bugs could be at any level or even through subtleties of their interplay.

Last of all is the toolchain. CodeWarrior is pretty damn good as a C++ compiler and my hat is off to Metrowerks for the job they did. It had a very forward-thinking feature set for the time, including just about all of C++03 and even some things that later made it into C++11. It's definitely better than MPW C++ was and light-years ahead of the execrable classic Mac gcc port, although admittedly rms' heart was never in it. Plus, it has an outstanding IDE even by modern standards and the exceptional integrated debugger has saved my pasty white butt more times than I care to admit. (Debugging in Macsbug is much like walking in a minefield on a foggy morning with bare feet: you can't see much, it's too easy to lose your footing and you'll blow up everything to smithereens if you do.) So that's all good news and has made a lot of code much more forward-portable than I could ever have hoped for, but nothing's ever going to be upgraded and no bugs will ever be fixed. We can't even fix them ourselves, since it's closed source. And because it isn't C++11 compliant, we can forget about pulling in substantially more recent versions of the JavaScript interpreter or realistically anything else much past Gecko 2.

Some of the efficiencies possible with later code aren't needed by Classilla to render the page, but they certainly do make it slower. OS 9 is very quick on later hardware and I do my development work on a Power Mac G4 MDD with a Sonnet dual 1.8GHz 7447A upgrade card, so it screams. But that's still not enough to get layout to complete on some sites in a timely fashion even if Classilla eventually is able to do it, and we've got no JIT at all in Classilla.

Mind you, I find these challenges stimulating. I like the feeling of getting something to do tasks it wasn't originally designed to do, sort of like a utilitarian form of the demoscene. Constraints like these require a lot of work and may make certain things impossible, so it requires a certain amount of willingness to be innovative and sometimes do things that might be otherwise unacceptable in the name of keeping the port alive. Making the browser into a series of patches upon patches is surely asking for trouble, but there's something liberating about that level of desperation, anything from amazingly bad hacks to simply accepting a certain level of wrong behaviour in one module because it fixes something else in another to ruthlessly saying some things just won't be supported, so there.

Q. Do you get much feedback from people about the browser? What sorts of things do they say? Do you hear from the hold-outs who try to do all of their computing on OS 9 machines?

A. I do get some. Forcing Classilla to prefer mobile sites actually dramatically improved its functionality, at least for some period of time until sites started assuming everyone was on some sufficiently recent version of iOS or Android. That wasn't a unanimously popular decision, but it worked pretty well, at least for the time. I even ate my own dogfood and took nothing but an OS 9 laptop with me on the road periodically (one time I took it to Leo Laporte's show back in the old studio, much to his amazement). It was enough for E-mail, some basic Google Maps and a bit of social media.

Nowadays I think people are reasonable about their expectations. The site doesn't have to look right or support more than basic functionality, but they'd like it to do at least that. I get occasional reports from one user who for reasons of his particular disability cannot use OS X, and so Classilla is pretty much his only means of accessing the Web. Other people don't use it habitually, but have some Mac somewhere that serves some specific purpose that only works in OS 9, and they'd like a browser there for accessing files or internal sites while they work. Overall, I'd say the response is generally positive that there's something that gives them some improvement, and that's heartening. Open source ingrates are the worst.

The chief problem is that there's only one of me, and I'm scrambling to get TenFourFox 45 working thanks to the never-ending Mozilla rapid release treadmill, so Classilla only gets bits and pieces of my time these days. That depresses me, since I enjoy the challenge of working on it.

Q. What's your personal take on the OS 9 web browsing experience?

A. It's ... doable, if you're willing to be tolerant of the appearance of pages and use a combination of solutions. There are some other browsers that can service this purpose in a limited sense. For example, the previous version of iCab on classic Mac is Acid2 compliant, so a lot of sites look better, but its InScript JavaScript interpreter is glacial and its DOM support is overall worse than Classilla's. Internet Explorer 5.1 (and the 5.5 beta, if you can find it) is very fast on those sites it works on, assuming you can find any. At least when it crashes, it does that fast too! Sometimes you can even get Netscape 4.8 to be happy with them or at least the visual issues look better when you don't even try to render CSS. Most times they won't look right, but you can see what's going on, like using Lynx.

Unfortunately, none of those browsers have up-to-date certificate stores or ciphers and some sites can only be accessed in Classilla for that reason, so any layout or speed advantages they have are negated. Classilla has some other tricks to help on those sites it cannot natively render well itself. You can try turning off the CSS entirely; you could try juggling the user agent. If you have some knowledge of JavaScript, you can tell Classilla's Byblos rewrite module to drop or rewrite problematic portions of the page with little snippets called stelae, basically a super-low-level Greasemonkey that works at the HTML and CSS level (a number of default ones are now shipped as standard portions of the browser).

Things that don't work at all generally require DOM features Classilla does not (yet) expose, or aspects of JavaScript it doesn't understand (I backported Firefox 3's JavaScript to it, but that just gives you the syntax, not necessarily everything else). This aspect is much harder to deal with, though some inventive users have done it with limited success on certain sites.

You can cheat, of course. I have Virtual PC 6 on my OS 9 systems, and it is possible (with some fiddling in lilo) to get it to boot some LiveCDs successfully -- older versions of Knoppix, for example, can usually be coaxed to start up and run Firefox and that actually works. Windows XP, for what that's worth, works fine too (I would be surprised if Vista or anything subsequent does, however). The downside to this is the overhead is a killer on laptops and consumes lots of CPU time, and Linux has zero host integration, but if you're able to stand it, you can get away with it. I reserved this for only problematic sites that I had to access, however, because it would bring my 867MHz TiBook to its knees. The MDD puts up with this a lot better but it's still not snappy.

If all this sounds like a lot of work, it is. But at least that makes it possible to get the majority of Web sites functional to some level in OS 9 (and in Classilla), at least one way or another, depending on how you define "functional." To that end I've started focusing now on getting specific sites to work to some personally acceptable level rather than abstract wide-ranging changes in the codebase. If I can't make the core render it correctly, I'll make some exceptions for it with a stele and ship that with the browser. And this helps, but it's necessarily centric to what I myself do with my Mac OS 9 machines, so it might not help you.

Overall, you should expect to do a lot of work yourself to make OS 9 acceptable with the modern Web and you should accept the results are at best imperfect. I think that's what ultimately drove Andrew Cunningham up the wall.

I'm going to answer these questions together:

Q1. How viable do you think OS 9 is as a primary operating system for someone today? How viable is it for you?
[...]
Q2. What do you like about using older versions of Mac OS (in this case, I'm talking in broad terms - so feel free to discuss OS X Tiger and PPC hardware as well)? Why not just follow the relentless march of technology? (It's worth mentioning here that I actually much prefer the look and feel of classic MacOS and pre-10.6 OS X, but for a lot of my own everyday computing I need to use newer, faster machines and operating systems.)

A. I'm used to a command line and doing everything in text. My Mac OS 9 laptop has Classilla and MacSSH on it. I connect to my home server for E-mail and most other tasks like Texapp for command-line posting to App.net, and if I need a graphical browser, I've got one. That covers about 2/3rds of my typical use case for a computer. In that sense, Mac OS 9 is, at least, no worse than anything else for me. I could use some sort of Linux, but then I wouldn't be able to easily run much of my old software (see below). If I had to spend my time in OS 9 even now, with a copy of Word and Photoshop and a few other things, I think I could get nearly all of my work done, personally. There is emulation for the rest. :)

I will say I think OS 9 is a pleasure to use relative to OS X. Part of this is its rather appalling internals, which in this case is necessity made virtue; I've heard it said OS 9 is just a loose jumble of libraries stacked under a file browser and that's really not too far off. The kernel, if you can call it that, is nearly non-existent -- there's a nanokernel, but it's better considered as a primitive hypervisor. There is at best token support for memory protection and some multiprocessing, but none of it is easy and most of it comes with severe compromises. But because there isn't much to its guts, there's very little between you and the hardware. I admit to having an i7 MBA running El Crapitan, and OS 9 still feels faster. Things happen nearly instantaneously, something I've never said about any version of OS X, and certain classes of users still swear by its exceptionally low latency for applications such as audio production. Furthermore, while a few annoyances of the OS X Finder have gradually improved, it's still not a patch on the spatial nature of the original one, and I actually do like Platinum (de gustibus non disputandum, of course). The whole user experience feels cleaner to me even if the guts are a dog's breakfast.

It's for that reason that, at least on my Power Macs, I've said Tiger forever. Classic is the best reason to own a Power Mac. It's very compatible and highly integrated, to the point where I can have Framemaker open and TenFourFox open and cut and paste between them. There's no Rhapsody full-screen blue box or SheepShaver window that separates me from making Classic apps first-class citizens, and I've never had a Classic app take down Tiger. Games don't run so well, but that's another reason to keep the MDD around, though I play most of my OS 9 games on a Power Mac 7300 on the other desk. I've used Macs partially since the late 1980s and exclusively since the mid-late 1990s (the first I owned personally was a used IIsi), and I have a substantial investment in classic Mac software, so I want to be able to have my cake and eat it too. Some of my preference for 10.4 is also aesthetic: Tiger still has the older Mac gamma, which my eyes are accustomed to after years of Mac usage, and it isn't the dreary matte grey that 10.5 and up became infested with. These and other reasons are why I've never even considered running something like Linux/PPC on my old Macs.

Eventually it's going to be the architecture that dooms this G5. This Quad is still sufficient for most tasks, but the design is over ten years old, and it shows. Argue the PowerPC vs x86 argument all you like, but even us PPC holdouts will concede the desktop battle was lost years ago. We've still got a working gcc and we've done lots of hacking on the linker, but Mozilla now wants to start building Rust into Gecko (and Servo is, of course, all Rust), and there's no toolchain for that on OS X/ppc, so TenFourFox's life is limited. For that matter, so is general Power Mac software development: other than freaks like me who still put -arch ppc -arch i386 in our Makefiles, Universal now means i386/x86_64, and that's not going to last much longer either. The little-endian web (thanks to asm.js) even threatens that last bastion of platform agnosticism. These days the Power Mac community lives on Pluto, looking at a very tiny dot of light millions of miles away where the rest of the planets are.

So, after TenFourFox 45, TenFourFox will become another Classilla: a fork off Gecko for a long-abandoned platform, with later things backported to it to improve its functionality. Unlike Classilla it won't have the burden of six years of being unmaintained and the toolchain and build system will be in substantially better shape, but I'll still be able to take the lessons I've learned maintaining Classilla and apply it to TenFourFox, and that means Classilla will still live on in spirit even when we get to that day when the modern web bypasses it completely.

I miss the heterogeneity of computing when there were lots of CPUs and lots of architectures and ultimately lots of choices. I think that was a real source of innovation back then and much of what we take for granted in modern systems would not have been possible were it not for that competition. Working in OS 9 reminds me that we'll never get that diversity back, and I think that's to our detriment, but as long as I can keep that light on it'll never be completely obsolete.

***


Karl Dubost: [worklog] Edition 035. PNG can be invalid and other bugs.

Sun, 11/09/2016 - 16:55

Discovered a new thing to eat at a local market. Tune of the week: Pastoral Symphony - Beethoven.

Webcompat Life

Progress this week:

314 open issues
----------------------
needsinfo        10
needsdiagnosis   98
needscontact     19
contactready     31
sitewait        149
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • When your design is only working in Blink and probably not because the developer meant to make it work for Blink, but probably because it was only tested with this rendering engine.
  • A problem with PNG: not being sure of the origin of the issue, I opened a bug on Core/ImageLib. The PNG file works in WebKit and Blink, but not in Gecko.
  • When a Gecko/Firefox rendering issue is in fact a Blink/Chrome bug, probably inherited from Webkit/Safari. This is a symptom of a pattern where one rendering engine is trusted and not checked against specs and other browsers.
  • Going through a lot of the yahoo.co.jp Web sites, around 100 of them, to assess the ones which are working and those which are failing. I should probably write something about it.
Webcompat.com development

Reading List

Follow Your Nose

TODO
  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!


Cameron Kaiser: TenFourFox 45.4.0 available (plus: priorities for feature parity and down with Dropbox)

Sun, 11/09/2016 - 00:02
With the peerless level of self-proclaimed courageousness necessary to remove a perfectly good headphone jack for a perfectly stupid reason (because apparently no one at Apple charges their phone and listens to music at the same time), TenFourFox 45.4.0 is released (downloads, hashes, release notes). This will be the final release in the beta cycle and assuming no critical problems will become the public official release sometime Monday Pacific time. Localizations will be frozen today and uploaded to SourceForge sometime this evening or tomorrow, so get them in ASAP.

The major change in this release is additional tweaking to the MediaSource implementation and I'm now more comfortable with its functioning on G4 systems through a combination of some additional later patches I backported and adjusting our own hacks to not only aggressively report the dropped frames but also force rebuffering if needed. The G4 systems now no longer seize and freeze (and, occasionally, fatally assert) on some streams, and the audio never becomes unsynchronized, though there is some stuttering if the system is too overworked trying to keep the video and audio together. That said, I'm going to keep MediaSource off for 45.4 so that there will be as little unnecessary support churn as possible while you test it (if you haven't already done so, turn media.mediasource.enabled to true in about:config; do not touch the other options). In 45.5, assuming there are no fatal problems with it (I don't consider performance a fatal flaw, just an important one), it will be the default, and it will be surfaced as an option in the upcoming TenFourFox-specific preference pane.

However, to make the most of MediaSource we're going to need AltiVec support for VP9 (we only have it for VP3 and VP8). While upper-spec G5 systems can just power through the decoding process (though this might make hi-def video finally reasonable on the last generation machines), even high-spec G4 systems have impaired decoding due to CPU and bus bandwidth limitations and the low-end G4 systems are nearly hopeless at all but the very lowest bitrates. Officially I still have a minimum 1.25GHz recommendation but I'm painfully aware that even those G4s barely squeak by. We're the only ones who ever used libvpx's moldy VMX code for VP8 and kept it alive, and they don't have anything at all for VP9 (just x86 and ARM, though interestingly it looks like MIPS support is in progress). Fortunately, the code was clearly written to make it easier for porters to hand-vectorize and I can do this work incrementally instead of having to convert all the codec pieces simultaneously.

Interestingly, even though our code now faithfully and fully reports every single dropped frame, YouTube doesn't seem to do anything with this information right now (if you right-click and select "Stats for nerds" you'll see our count dutifully increase as frames are skipped). It does downshift for network congestion, so I'm trying to think of a way to fool it and make dropped frames look like a network throughput problem instead. Doing so would technically be a violation of the spec but you can't shame that which has no shame and I have no shame. Our machines get no love from Google anyway so I'm perfectly okay with abusing their software.

I have the conversion to platform codec of our minimp3 decoder written as a first draft, but I haven't yet set that up or tested it, so this version still uses the old codec wrapper and still has the track-shifting problem with Amazon Music. That is probably the highest priority for 45.5 since it is an obvious regression from 38. On the security side, this release also disables RTCPeerConnection to eliminate the WebRTC IP "leak" (since I've basically given up on WebRTC for Power Macs). You can still reenable it from about:config as usual.

The top three priorities for the next couple versions (with links to the open Github issues) are, highest first, fixing Amazon Music, AltiVec VP9 codepaths and the "little endian typed array" portion of IonPower-NVLE to fix site compatibility with little-endian generated asm.js. Much of this work will proceed in parallel and the idea is to have a beta 45.5 for you to test them in a couple weeks. Other high priority items on my list to backport include allowing WebKit/Blink to eat the web (er, supporting certain WebKit-prefixed properties) to make us act the same as regular Firefox, support for ChaCha20+Poly1305, WebP images, expanded WebCrypto support, the "NV" portion of IonPower-NVLE and certain other smaller-scope HTML/CSS features. I'll be opening tracking issues for these as they enter my worklist, but I have not yet determined how I will version the browser to reflect these backported new features. For now we'll continue with 45.x.y while we remain on 45ESR and see where we end up.

As we look into the future, though, it's always instructive to compare it with the past. With the anticipation that even Google Code's Archive will be flushed down the Mountain View memory hole (the wiki looks like it's already gone, but you can get most of our old wikidocs from Github), I've archived 4.0b7, 4.0.3, 8.0, 10.0.11, 17.0.11 and Intel 17.0.2 on SourceForge along with their corresponding source changesets. These Google Code-only versions were selected as they were either terminal (quasi-)ESR releases or have historical or technical relevance (4.0b7 was our first beta release of TenFourFox ever "way back" in 2010, 8.0 was the last release that was pure tracejit which some people prefer, and of course Intel 17.0.2 was our one and so far only release on Intel Macs). There is no documentation or release notes; they're just there for your archival entertainment and foolhardiness. Remember that old versions run an excellent chance of corrupting your profile, so start them up with one you can throw away.

Finally, a good reason to dump Dropbox (besides the jerking around they give those of you trying to keep the PowerPC client working) is their rather shameful secret unauthorized abuse of your Mac's accessibility framework by forging an entry in the privacy database. (Such permissions allow it to control other applications on your Mac as if it were you at the user interface. The security implications of that should be immediately obvious, but if they're not, see this discussion.) The fact this is possible at all is a bug Apple absolutely must fix and apparently has in macOS Sierra, but exploiting it in this fashion is absolutely appalling behaviour on Dropbox's part because it won't even let you turn it off. To their credit they're taking their lumps on Hacker News and TechCrunch, but accepting their explanation of "only asking for privileges we use" requires a level of trust that frankly they don't seem worthy of and saying they never store your administrator password is a bit disingenuous when they use administrative access to create a whole folder of setuid binaries -- they don't need your password at that point to control the system. Moreover, what if there were an exploitable security bug in their software?

Mind you, I don't have a problem with apps requesting that access if I understand why and the request isn't obfuscated. As a comparison, GOG.com has a number of classic DOS games I love that were ported for DOSBox and work well on my MacBook Air. These require that same accessibility access for proper control of input methods. Though I hope they come up with a different workaround eventually, the GOG installer does explain why and does use the proper APIs for requesting that privilege, and you can either refuse on the spot or disable it later if you decide you're not comfortable with it. That's how it's supposed to work, but that's not what Dropbox did, and they intentionally hid it and the other inappropriate system-level things they were sneaking through. Whether out of a genuine concern for user experience or just trying to get around what they felt was an unnecessary security precaution, it's not acceptable and it's potentially exploitable, and they need to answer for that.

Watch for 45.4 going final in a couple days, and hopefully a 45.5 beta in a couple weeks.

Categorieën: Mozilla-nl planet

Eitan Isaacson: I Built A Smart Clock

za, 10/09/2016 - 19:46
Problem Statement:
In today’s fast-paced world, while juggling work and personal life, sometimes we need to know what time it is.
Solution:
A chronometer you can hang on the wall.

Wake up, people. Clocks are the future. Embrace progress.

Hello, my name is Eitan. I am a clock maker.

Over the past year I have spent nights and weekends designing gear trains and circuit boards, soldering, and writing software. The result is a clock. Not just any clock: it is a smart clock that can tell you what time it is on demand. It is internet-connected, so you can remotely monitor what time it is in your home.

It is powered by three stepper motors, three hall effect sensors, and a miniature computer. It also ticks.

Final clock. Translucent body showing all components including gears and Raspberry Pi. The future of time telling.

Why? Because in my hubris I thought it would be an easy weekend project, and then things got out of hand.

An early attempt at flight. The propeller falls off.

Gears

My first gear boxes were pretty elaborate, with different gear ratios for each motor/hand. I probably spent the most time in the design/print cycle trying to come up with a reliable solution that would transfer the torque from three separate motors to a single axis. I ended up with something much simpler: a 2:1 gear ratio for all hands.

An example of an early, super elaborate and huge gear box. My final design.

Limit Switches

Another challenge I struggled with was how the software would know where each hand is at any given time. A stepper motor is nice because it has some predictability. Each one of its steps is equal, so if you know that the step size is 6 degrees, it will take 60 steps to complete a rotation. In our case, this isn't good enough, because:

  1. The motor is not 100% guaranteed to complete each step. If there is too much resistance on it, it will fail. I struggled to design a perfect gear box that won’t ever make things hard for the motors, but there will be a bad tooth every once in a while that will jam the motor for a step or two.
  2. The motor I chose for this project is the 28BYJ-48, mainly because it is cheap and you can get a pack of five from Amazon, with drivers, for only $12. Its big drawback is that there is no consensus on what the precise step size is. Internally the motor has 32 steps per revolution (11.25 degrees per step), but it has a set of reduction gears embedded in it that make the effective steps per revolution something like 2037.886. Not a nice number and, more importantly, not good for a clock, which will drift if that count is even slightly off (see the sketch after this list).
  3. When the clock is first turned on, it has no way to know the initial position of each hand.
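
To get a feel for how bad rounding that number would be, here is a back-of-the-envelope sketch in C. The gear tooth counts are the figures commonly quoted for the 28BYJ-48 rather than anything I measured, and the drift estimate ignores the clock's own 2:1 gearing:

/* Why ~2037.886 steps per revolution is awkward: a minimal sketch.
 * The tooth counts below are the commonly quoted ones for the 28BYJ-48.
 * Build with: cc -o steps steps.c
 */
#include <stdio.h>

int main(void) {
    /* Internal rotor: 32 full steps per revolution (11.25 degrees each). */
    const double internal_steps = 32.0;

    /* Commonly quoted reduction train: 32/9 * 22/11 * 26/9 * 31/10. */
    const double gear_ratio = (32.0 * 22.0 * 26.0 * 31.0) /
                              (9.0 * 11.0 * 9.0 * 10.0);

    const double steps_per_rev = internal_steps * gear_ratio;
    const double rounded = 2038.0;           /* the tempting whole number */
    const double err_per_rev = rounded - steps_per_rev;

    printf("gear ratio       : %.5f : 1\n", gear_ratio);
    printf("steps per rev    : %.4f\n", steps_per_rev);
    printf("error if rounded : %.4f steps per revolution\n", err_per_rev);

    /* A second hand turns once per minute, so after a day the error is
     * err_per_rev * 1440 steps (ignoring the extra 2:1 gearing). */
    printf("drift after a day: %.1f steps (~%.1f degrees)\n",
           err_per_rev * 1440.0,
           err_per_rev * 1440.0 * 360.0 / steps_per_rev);
    return 0;
}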

I decided to solve all this with limit switches. If the clock hands somehow closed a circuit at the top of each rotation, we would at least know where "12 o'clock" is and would have something to work with. I thought about using buttons, metal contacts and the like. But I didn't like the idea of introducing more wear on a delicate mechanical system: how many times will the second hand brush past a contact before screwing it up?

So, I went with latching hall effect sensors. The basic concept is magnets. The sensor will open or close a circuit depending on which pole of a magnet is nearby. I glued tiny magnets at opposite ends of the gears, and by checking the state of the sensor circuit after every motor step you can tell whether the hand just reached 6 o'clock or 12 o'clock.
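
Roughly, that per-step check can look like the sketch below. It assumes the wiringPi library and a made-up GPIO pin (the clock's real code may use neither), stubs out the actual coil sequencing, and assumes the 12 o'clock magnet is the one that drives the sensor output high:

/* Minimal sketch of "check the sensor after every step".
 * Assumes wiringPi and a hypothetical BCM pin; the real clock may differ.
 * Build with: gcc -o hand hand.c -lwiringPi
 */
#include <stdio.h>
#include <wiringPi.h>

#define HALL_PIN 17   /* hypothetical BCM pin wired to the latching sensor */

/* Placeholder: pulse the motor driver one step (coil sequencing omitted). */
static void step_motor_once(void) {
    delayMicroseconds(2000);
}

int main(void) {
    if (wiringPiSetupGpio() == -1) {   /* BCM pin numbering */
        fprintf(stderr, "GPIO init failed\n");
        return 1;
    }
    pinMode(HALL_PIN, INPUT);

    int last = digitalRead(HALL_PIN);
    long position = 0;                 /* steps since the last known 12 o'clock */

    for (;;) {
        step_motor_once();
        position++;

        int now = digitalRead(HALL_PIN);
        if (now != last) {
            /* A latching sensor flips when the opposite pole passes, so each
             * edge is one of the two magnets: 12 o'clock or 6 o'clock. */
            printf("edge at step %ld: hand at %s\n",
                   position, now ? "12 o'clock" : "6 o'clock");
            if (now) position = 0;     /* re-zero on the 12 o'clock magnet */
            last = now;
        }
    }
}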

Two magnets on opposite sides of the gear, and the hall effect sensor clamped just above the gear to detect polarity changes.

Circuits

Electricity is magic to me; I really don't understand it. I learned just enough to get this project going. I started by reading Adafruit tutorials and breadboarding. After assembling a forest of wires, I finally got a "working" clock. I wanted the final clock to be more elegant than this:

Breadboard, Raspberry Pi, motor drivers and gearbox all taped to a piece of cardboard.

With some graph paper, I sketched out how I could arrange all of the circuits on a perf board and went to work soldering. After inhaling a lot of toxic fumes and leaving some of my skin on the iron, I got this:

Bottom of perfboard showing a lot of bare wires and soldering.

I was happy with the result, but it could have been prettier. Plus, I would never put myself through that again; I wanted to make this process as easy as possible. So I decided to splurge a bit and redesigned the board using Fritzing:

Screenshot of Fritzing design app.

…and ordered PCB prints from their online service. After a month or so I got these in the mail:

Custom PCBs printed in Berlin!

Soldering on components was a breeze…

PCB with jumper headers and resistors.

Software

Software is my comfort zone: you don't get burnt or electrocuted, or spend a whole day 3D printing just to find out your design is shit. My plan was to compensate for all the hardware imperfections in software. Have it be self-tuning, smart and terrific.

I chose to have NodeJS drive the clock, mostly because I had recently gotten comfortable with it, but also because it makes it easy to give this project a slick web interface.

The web interface, with a form to set the time on the clock.

Doing actual GPIO calls to move the motors didn't work well in JS, and Python wasn't cutting it either: I needed to move three motors simultaneously at intervals below 20 milliseconds, and the CPU would grind to a halt. So I ended up writing a small C program that does all the GPIO bits. You start it with an RPM as an argument, and it will do the rest. It doesn't make sense to spawn a new process on each clock tick, so instead I have the JS code send a signal to the process each second.
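
Here is a minimal sketch of that arrangement, not the clock's actual code: the choice of SIGUSR1 as the tick signal is an assumption, and the stepping itself is stubbed out.

/* Minimal sketch of "JS sends the C process a signal every second".
 * SIGUSR1 is an assumption; the real stepping code is stubbed out.
 */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig) {
    (void)sig;
    ticks++;                         /* just count; do the work in main */
}

/* Placeholder for the real GPIO stepping of the second-hand motor. */
static void advance_second_hand(void) {
    printf("tick %d: stepping second hand\n", (int)ticks);
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigaction(SIGUSR1, &sa, NULL);

    printf("pid %d waiting for ticks (kill -USR1 <pid>)\n", (int)getpid());

    int done = 0;
    for (;;) {
        pause();                     /* sleep until a signal arrives */
        while (done < ticks) {       /* catch up if a tick was missed */
            advance_second_hand();
            done++;
        }
    }
}

On the Node side, a one-second setInterval that calls process.kill(pid, 'SIGUSR1') is enough to produce the ticks.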

With the help of the Network Time Protocol and some fancy high school algebra I was able to make the clock precise to the second. Just choose a timezone, and it will do the rest. It should even handle the switch to and from daylight saving time (I haven't actually tested that; I'm waiting for DST to naturally end).
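
The algebra is nothing exotic. Here is a sketch of the idea rather than the clock's actual code: it turns local time into a target step count for each hand, with STEPS_PER_HAND_REV as a hypothetical constant that would have to fold in the motor's ~2037.886 steps per revolution and the 2:1 gearing.

/* Sketch of the time-to-hand-position math; not the clock's actual code.
 * STEPS_PER_HAND_REV is hypothetical and would fold in the motor's
 * ~2037.886 steps per revolution plus the clock's own gearing.
 */
#include <stdio.h>
#include <time.h>

#define STEPS_PER_HAND_REV 4076.0   /* hypothetical: steps per full hand turn */

int main(void) {
    time_t now = time(NULL);         /* NTP keeps the system clock honest */
    struct tm *local = localtime(&now);

    /* Fractional position of each hand: 0.0 at 12 o'clock, 1.0 after a turn. */
    double sec_frac  = local->tm_sec / 60.0;
    double min_frac  = (local->tm_min + sec_frac) / 60.0;
    double hour_frac = ((local->tm_hour % 12) + min_frac) / 12.0;

    /* Target step counts, measured from the 12 o'clock reference that the
     * hall effect sensors give us. */
    long sec_steps  = (long)(sec_frac  * STEPS_PER_HAND_REV + 0.5);
    long min_steps  = (long)(min_frac  * STEPS_PER_HAND_REV + 0.5);
    long hour_steps = (long)(hour_frac * STEPS_PER_HAND_REV + 0.5);

    printf("%02d:%02d:%02d -> hour %ld, minute %ld, second %ld steps past 12\n",
           local->tm_hour, local->tm_min, local->tm_sec,
           hour_steps, min_steps, sec_steps);
    return 0;
}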

Mounting

I went to TAP Plastics, my go-to retailer for all things plastic. I bought a 12″ diameter 3/8″ thick acrylic disc. Drilled some holes, and got to mounting this whole caboodle. This project is starting to take shape! With the help of a few colorful wires for extra flourish:

All the components mounted on the acrylic disc.

In Conclusion

The slick marketing from Apple and Samsung will have you believe that your life isn’t complete without their latest smart watch. They are wrong.

Your life isn’t complete because you haven’t built your own smart clock. Prime your soldering iron and get to work!


Categorieën: Mozilla-nl planet
