Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 4 months, 4 days ago

Hacks.Mozilla.Org: Experimenting with WebAssembly and Computer Vision

Tue, 12/09/2017 - 17:17

This past summer, four time-crunched engineers with no prior WebAssembly experience began experimenting. The result after six weeks of exploration was WebSight: a real-time face detection demo based on OpenCV.

By compiling OpenCV to WebAssembly, the team was able to reuse a well-tested C/C++ library directly in the browser and achieve performance an order of magnitude faster than a similar JavaScript library.

I asked the team members—Brian Feldman, Debra Do, Yervant Bastikian, and Mark Romano—to write about their experience.

Note: The report that follows was written by the team members mentioned above.

WebAssembly (“wasm”) made a splash this year with its MVP release, and eager to get in on the action, we set out to build an application that made use of this new technology.

We’d seen projects like WebDSP compile their own C++ video filters to WebAssembly, an area where JavaScript has historically floundered due to the computational demands of some algorithms. This got us interested in pushing the limits of wasm, too. We wanted to use an existing, specialized, and time-tested C++ library, and after much deliberation, we landed on OpenCV, a popular open-source computer vision library.

Computer vision is highly demanding on the CPU, and thus lends itself well to wasm. Building on some incredible work put forward by the UC Irvine SysArch group and GitHub user njor, we were able to update outdated asm.js builds of OpenCV to compile with modern versions of Emscripten, exposing much of OpenCV’s core functionality in JavaScript-callable form.

Working with these Emscripten builds went much differently than we expected. As Web developers, we’re used to writing code and being able to iterate and test very quickly. Introducing a large C++ library with 10-15 minute build times was a foreign experience, especially when our normal working environments are Webpack, Nodemon, and hot reloading everywhere. Once compiled, we approached the wasm build as a bit of a black box: the module started as an immutable beast of an object, and though we understood it more and more throughout the process, it never became ‘transparent’.

The effort spent compiling the wasm file and incorporating it into our JavaScript was worthwhile: it outperformed JavaScript with ease, and was significantly quicker than WebAssembly’s predecessor, asm.js.

We compared these formats using a face detection algorithm. The architecture of the functions that drove these algorithms was the same; the only difference was the implementation language of each. Using web workers, we passed video stream data into the algorithms, which returned the coordinates of a rectangle that would frame any faces in the image, and calculated an FPS measure. While the range of FPS depends on the user’s machine and the browser being used (Firefox takes the cake!), we noted that the FPS of the wasm-powered algorithm was consistently twice as high as the FPS of the asm.js implementation, and twenty times higher than that of the JS implementation, solidifying the benefits of WebAssembly.

Building in cutting-edge technology can be a pain, but the reward was worth the temporary discomfort. Being able to use native, portable C/C++ code in the browser, without third-party plugins, is a breakthrough. Our project, WebSight, successfully demonstrated the use of OpenCV as a WebAssembly module for face and eye detection. We’re really excited about the future of WebAssembly, especially the eventual addition of garbage collection, which will make it easier to efficiently run other high-level languages in the browser.

You can view the demo’s GitHub repository at github.com/Web-Sight/WebSight.

Categories: Mozilla-nl planet

Air Mozilla: Martes Mozilleros, 12 Sep 2017

Tue, 12/09/2017 - 17:00

Martes Mozilleros: bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects...

Categories: Mozilla-nl planet


Mozilla Open Innovation Team: Mozilla running into CHAOSS to Help Measure and Improve Open Source Community Health

Tue, 12/09/2017 - 16:39

This week the Linux Foundation announced project CHAOSS, a collaborative initiative focused on creating the analytics and metrics to help define the health of open source communities, and developing tools for analyzing and improving the contributor experience in modern software development.

credit: Chaoss project

Besides Mozilla, initial members contributing to the project include Bitergia, Eclipse Foundation, Jono Bacon Consulting, Laval University (Canada), Linaro, OpenStack, Polytechnique Montreal (Canada), Red Hat, Sauce Labs, Software Sustainability Institute, Symphony Software Foundation, University of Missouri, University of Mons (Belgium), University of Nebraska at Omaha, and University of Victoria.

With combined expertise from academic researchers and industry practitioners, the CHAOSS metrics committee aims to “define a neutral, implementation-agnostic set of reference metrics to be used to describe communities in a common way.” The analytical work will be complemented by the CHAOSS software committee, “formed to provide a framework for establishing an open source GPLv3 reference implementation of the CHAOSS metrics.”

Mozilla’s Open Innovation strategist Don Marti will be part of the CHAOSS project’s governance board, which is responsible for the overall oversight of the Project and coordination of efforts of the technical committees.

As a member of CHAOSS, Mozilla is committed to supporting research that will help maintainers pick the right open source metrics to focus on — metrics that will help open source projects make great software and provide a rewarding experience for contributors.

If you want to learn more about how to participate in the project have a look at the CHAOSS community website: https://chaoss.community.

Mozilla running into CHAOSS to Help Measure and Improve Open Source Community Health was originally published in Mozilla Open Innovation on Medium.

Categories: Mozilla-nl planet

Chris H-C: Two Days, or How Long Until The Data Is In

Tue, 12/09/2017 - 15:29

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).
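In code, the split is trivial; here is a minimal sketch with made-up timestamps (the field names are illustrative, not the real Telemetry schema):

```python
from datetime import datetime

def client_delays(event_time, ping_created_time, received_time):
    """Split total client delay into its two pieces.

    recording delay:  user action -> ping assembled for transport
    submission delay: ping assembled -> received at Mozilla
    """
    recording = ping_created_time - event_time
    submission = received_time - ping_created_time
    return recording, submission

# A user opens a tab at 23:00; the ping is assembled at shutdown (23:30);
# the machine then sleeps overnight and sends the ping at 08:00 next day.
rec, sub = client_delays(
    datetime(2017, 9, 5, 23, 0),
    datetime(2017, 9, 5, 23, 30),
    datetime(2017, 9, 6, 8, 0),
)
# rec is 30 minutes of recording delay; sub is 8.5 hours of submission delay.
```

The overnight sleep in this invented example is exactly the pattern that pushes pings into the next day.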

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you can’t tell on the day itself. All the tabs people open late at night won’t even be in pings, and anyone who puts their computer to sleep won’t send their pings until they wake their computer in the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

[Figure: Client "main" Ping Delay for Latest Version]

(Remember what I said about Labour Day? That’s the exceptional case on beta 56.)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.
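The “95% within two days” figure is just a percentile over per-ping lags. A hypothetical sketch of the measurement, using invented lag data rather than real Telemetry pings:

```python
import math

def days_until_fraction(lags, fraction=0.95):
    """Smallest delay d (in days) such that at least `fraction` of pings
    for a given activity day have arrived within d days.

    `lags` holds one entry per ping: received day minus activity day.
    """
    ordered = sorted(lags)
    idx = math.ceil(fraction * len(ordered)) - 1
    return ordered[idx]

# Invented day: 6 pings arrive same-day, 3 the next day, 1 two days later.
lags = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2]
print(days_until_fraction(lags))  # -> 2
```

With this made-up distribution, 90% of pings are in after one day, but covering the 95th percentile takes the full two days.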

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

Categories: Mozilla-nl planet

Daniel Stenberg: The backdoor threat

Tue, 12/09/2017 - 08:13

— “Have you ever detected anyone trying to add a backdoor to curl?”

— “Have you ever been pressured by an organization or a person to add suspicious code to curl that you wouldn’t otherwise accept?”

— “If a crime syndicate would kidnap your family to force you to comply, what backdoor would you be able to insert into curl that is the least likely to get detected?” (The less grim version of this question would instead offer huge amounts of money.)

I’ve been asked these questions and variations of them when I’ve stood up in front of audiences around the world and talked about curl and how it is one of the most widely used software components in the world, counting way over three billion instances.

Back door (noun)
— a feature or defect of a computer system that allows surreptitious unauthorized access to data.

So how is it?

No. I’ve never seen a deliberate attempt to add a flaw, a vulnerability or a backdoor into curl. I’ve seen bad patches and I’ve seen patches that introduced bugs that years later were reported as security problems, but I did not spot any deliberate attempt to do bad in any of them. But if it had been done with skill, would I even have noticed it was deliberate?

If I had cooperated in adding a backdoor or been threatened to, then I wouldn’t tell you anyway and I’d thus say no to questions about it.

How to be sure

There is only one way to be sure: review the code you download and intend to use. Or get it from a trusted source that did the review for you.

If you have a version you trust, you really only have to review the changes done since then.

Possibly there’s some degree of safety in numbers, and as thousands of applications and systems use curl and libcurl and at least some of them do reviews and extensive testing, one of those could discover mischievous activities if there are any and report them publicly.

Infected machines or owned users

The servers that host the curl releases could be targeted by attackers and the tarballs for download could be replaced by something that carries evil code. There’s no such thing as a fail-safe machine, especially not if someone really wants to and tries to target us. The safeguard there is the GPG signature with which I sign all official releases. No malicious user can (re-)produce them. They have to be made by me (since I package the curl releases). That comes back to trusting me again. There’s of course no safeguard against me being forced to sign evil code with a knife to my throat…

If one of the curl project members with git push rights would get her account hacked and her SSH key password brute-forced, a very skilled hacker could possibly sneak in something, short-term. Although my hope is that since we review and comment on each other’s code to a very high degree, that would be really hard. And the hacked person herself would most likely react.

Downloading from somewhere

I think the highest risk scenario is when users download pre-built curl or libcurl binaries from various places on the internet that aren’t the official curl website. How can you know for sure what you’re getting then, when you can’t review the code or the changes made? You just put your trust in a remote person or organization to do what’s right for you.

Trusting other organizations can be totally fine, as when you download using a Linux distro’s package management system: there you can expect a certain level of checking and vouching to have happened, and there will be digital signatures and more involved to minimize the risk of external malicious interference.

Pledging there’s no backdoor

Some people argue that projects could or should pledge for every release that there’s no deliberate backdoor planted so that if the day comes in the future when a three-letter secret organization forces us to insert a backdoor, the lack of such a pledge for the subsequent release would function as an alarm signal to people that something is wrong.

That takes us back to trusting a single person again. A truly evil adversary can of course force such a pledge to be uttered no matter what, even if that then probably is more mafia level evilness and not mere three-letter organization shadiness anymore.

I would be a bit stressed out to have to do that pledge every single release, as if I ever forgot or messed it up, it should lead to a lot of people getting up in arms, and how would such a mistake be fixed? It’s a little too irrevocable for me. And we do quite frequent releases so the risk of mistakes is not insignificant.

Also, if I would pledge that, is that then a promise regarding all my code only, or is that meant to be a pledge for the entire code base as done by all committers? It doesn’t scale very well…

Additionally, I’m a Swede living in Sweden. The American organizations cannot legally force me to backdoor anything, and the Swedish versions of those secret organizations don’t have the legal rights to do so either (caveat: I’m not a lawyer). So, the real threat is not by legal means.

What backdoor would be likely?

It would be very hard to add code, unnoticed, that sends off data to somewhere else. Too much code that would be too obvious.

A backdoor similarly couldn’t really be made to split off data from the transfer pipe and store it locally for other systems to read, as that too is probably too much code that is too different than the current code and would be detected instantly.

No, I’m convinced the most likely backdoor code in curl is a deliberate but hard-to-detect security vulnerability that lets the attacker exploit the program using libcurl/curl via some specific usage pattern. When triggered, it could trick the program into sending off memory contents or perhaps overwriting the local stack or the heap. Quite possibly only one step out of several necessary for a successful attack, much like how a single-byte overwrite can lead to root access.

Any past security problems on purpose?

We’ve had almost 70 security vulnerabilities reported through the project’s almost twenty years of existence. Since most of them were triggered by mistakes in code I wrote myself, I can be certain that none of those problems were introduced on purpose. I can’t completely rule out that someone else’s patch that modified curl along the way, and by extension made a vulnerability worse or easier to trigger, could have been done on purpose. But none of the security problems that were introduced by others have shown any sign of “deliberateness”. (Or they were written cleverly enough to not make me see that!)

Maybe backdoors have been planted that we just haven’t discovered yet?

Discussion

Follow-up discussion/comments on hacker news.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 199

Tue, 12/09/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is pikkr, a JSON parser that can extract values without tokenization and is blazingly fast using AVX2 instructions. Thank you, bstrie, for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

99 pull requests were merged in the last week

New Contributors
  • bgermann
  • Douglas Campos
  • Ethan Dagner
  • Jacob Kiesel
  • John Colanduoni
  • Lance Roy
  • Mark
  • MarkMcCaskey
  • Max Comstock
  • toidiu
  • Zaki Manian
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When programmers are saying that there are a lot of bicycles in code that means that it contains reimplementations of freely available libraries instead of using them

Presumably the metric for this would be bicyclomatic complexity?

/u/tomwhoiscontrary on reddit.

Thanks to Matt Ickstadt for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categories: Mozilla-nl planet

Niko Matsakis: Cyclic queries in chalk

Tue, 12/09/2017 - 06:00

In my last post about chalk queries, I discussed how the query model in chalk works. Since that writing, there have been some updates, and I thought it’d be nice to do a new post covering the current model. This post will also cover the tabling technique that scalexm implemented for handling cyclic relations and show how that enables us to implement implied bounds and other long-desired features in an elegant way. (Nice work, scalexm!)

What is a chalk query?

A query is simply a question that you can ask chalk. For example, we could ask whether Vec<u32> implements Clone like so (this is a transcript of a cargo run session in chalk):

?- load libstd.chalk
?- Vec<u32>: Clone
Unique; substitution [], lifetime constraints []

As we’ll see in a second, the answer “Unique” here is basically chalk’s way of saying “yes, it does”. Sometimes chalk queries can contain existential variables. For example, we might say exists<T> { Vec<T>: Clone } – in this case, chalk actually attempts to not only tell us if there exists a type T such that Vec<T>: Clone, it also wants to tell us what T must be:

?- exists<T> { Vec<T>: Clone }
Ambiguous; no inference guidance

The result “ambiguous” is chalk’s way of saying “probably it does, but I can’t say for sure until you tell me what T is”.

So you can think of a chalk query as a kind of subroutine like Prove(Goal) = R that evaluates some goal (the query) and returns a result R which has one of the following forms:

  • Unique: indicates that the query is provable and there is a unique value for all the existential variables.
    • In this case, we give back a substitution saying what each existential variable had to be.
    • Example: exists<T> { usize: PartialOrd<T> } would yield unique and return a substitution that T = usize, at least today (since there is only one impl that could apply, and we haven’t implemented the open world modality that aturon talked about yet).
  • Ambiguous: the query may hold but we could not be sure. Typically, this means that there are multiple possible values for the existential variables.
    • Example: exists<T> { Vec<T>: Clone } would yield ambiguous, since there are many T that could fit the bill.
    • In this case, we sometimes give back guidance, which are suggested values for the existential variables. This is not important to this blog post so I’ll not go into the details.
  • Error: the query is provably false.

(The form of these answers has changed somewhat since my previous blog post, because we incorporated some of aturon’s ideas around negative reasoning.)

So what is a cycle?

As I outlined long ago in my first post on lowering Rust traits to logic, the way that the Prove(Goal) subroutine works is basically just to iterate over all the possible ways to prove the given goal and try them one at a time. This often requires proving subgoals: for example, when we were evaluating ?- Vec<u32>: Clone, internally, this would also wind up evaluating u32: Clone, because the impl for Vec<T> has a where-clause that T must be clone:

impl<T> Clone for Vec<T>
    where T: Clone,
          T: Sized,
{ }

Sometimes, this exploration can wind up trying to solve the same goal that you started with! The result is a cyclic query and, naturally, it requires some special care to yield a valid answer. For example, consider this setup:

trait Foo { }

struct S<T> { }

impl<U> Foo for S<U> where U: Foo { }

Now imagine that we were evaluating exists<T> { T: Foo }:

  • Internally, we would process this by first instantiating the existential variable T with an inference variable, so we wind up with something like ?0: Foo, where ?0 is an as-yet-unknown inference variable.
  • Then we would consider each impl: in this case, there is only one.
    • For that impl to apply, ?0 = S<?1> must hold, where ?1 is a new variable. So we can perform that unification.
      • But next we must check that ?1: Foo holds (that is the where-clause on the impl). So we would convert this into “closed” form by replacing all the inference variables with exists binders, giving us something like exists<T> { T: Foo }. We can now perform this query.
        • Only wait: This is the same query we were already trying to solve! This is precisely what we mean by a cycle.

In this case, the right answer for chalk to give is actually Error. This is because there is no finite type that satisfies this query. The only type you could write would be something like

S<S<S<S<...ad infinitum...>>>>: Foo

where there are an infinite number of nesting levels. As Rust requires all of its types to have finite size, this is not a legal type. And indeed if we ask chalk this query, that is precisely what it answers:

?- exists<T> { S<T>: Foo }
No possible solution: no applicable candidates

But cycles aren’t always errors of this kind. Consider a variation on our previous example where we have a few more impls:

trait Foo { }

// chalk doesn't have built-in knowledge of any types,
// so we have to declare `u32` as well:
struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo { }

Now if we ask the same query, we get back an ambiguous result, meaning that there exists many solutions:

?- exists<T> { T: Foo }
Ambiguous; no inference guidance

What has changed here? Well, introducing the new impl means that there is now an infinite family of finite solutions:

  • T = u32 would work
  • T = S<u32> would work
  • T = S<S<u32>> would work
  • and so on.

Sometimes there can even be unique solutions. For example, consider this final twist on the example, where we add a second where-clause concerning Bar to the impl for S<T>:

trait Foo { }
trait Bar { }

struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo, U: Bar { }
//                                 ^^^^^^ this is new

Now if we ask the same query again, we get back yet a different response:

?- exists<T> { T: Foo }
Unique; substitution [?0 := u32], lifetime constraints []

Here, Chalk figured out that T must be u32. How can this be? Well, if you look, it’s the only impl that can apply – for T to equal S<U>, U must implement Bar, and there are no Bar impls at all.

So we see that when we encounter a cycle during query processing, it doesn’t necessarily mean the query needs to result in an error. Indeed, the overall query may result in zero, one, or many solutions. But how should we figure out what is right? And how do we avoid recursing infinitely while doing so? Glad you asked.

Tabling: how chalk is handling cycles right now

Naturally, traditional Prolog interpreters have similar problems. It is actually quite easy to make a Prolog program spiral off into an infinite loop by writing what seem to be quite reasonable clauses (quite like the ones we saw in the previous section). Over time, people have evolved various techniques for handling this. One that is relevant to us is called tabling or memoization – I found this paper to be a particularly readable introduction. As part of his work on implied bounds, scalexm implemented a variant of this idea in chalk.

The basic idea is as follows. When we encounter a cycle, we will actually wind up iterating to find the result. Initially, we assume that a cycle means an error (i.e., no solutions). This will cause us to go on looking for other impls that may apply without encountering a cycle. Let’s assume we find some solution S that way. Then we can start over, but this time, when we encounter the cyclic query, we can use S as the result of the cycle, and we would then check if that gives us a new solution S’.

If you were doing this in Prolog, where the interpreter attempts to provide all possible answers, then you would keep iterating, only this time, when you encountered the cycle, you would give back two answers: S and S’. In chalk, things are somewhat simpler: multiple answers simply means that we give back an ambiguous result.

So the pseudocode for solving then looks something like this:

  • Prove(Goal):
    • If goal is ON the stack already:
      • return stored answer from the stack
    • Else, when goal is not on the stack:
      • Push goal on to the stack with an initial answer of error
      • Loop
        • Try to solve goal yielding result R (which may generate recursive calls to Prove with the same goal)
        • Pop goal from the stack and return the result R if any of the following are true:
          • No cycle was encountered; or,
          • the result was the same as what we started with; or,
          • the result is ambiguous (multiple solutions).
        • Otherwise, set the answer for Goal to be R and repeat.
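As a toy illustration of this loop (propositional goals only, with no unification or substitutions; a hypothetical sketch, not chalk's actual implementation):

```python
ERROR, UNIQUE, AMBIGUOUS = "error", "unique", "ambiguous"

def prove(goal, rules, table=None):
    """Tabled proving of propositional goals.

    `rules[goal]` is a list of alternatives ("impls"); each alternative is
    a list of subgoals that must all be uniquely provable.
    Returns (result, set of goals we cycled on).
    """
    table = {} if table is None else table
    if goal in table:                    # goal is ON the stack already:
        return table[goal], {goal}       # return the stored provisional answer
    table[goal] = ERROR                  # initial answer for cycles is "error"
    while True:
        cycled = set()
        successes = 0
        for subgoals in rules.get(goal, []):
            results = []
            for sub in subgoals:
                r, c = prove(sub, rules, table)
                cycled |= c
                results.append(r)
            if all(r == UNIQUE for r in results):
                successes += 1
        result = (ERROR, UNIQUE, AMBIGUOUS)[min(successes, 2)]
        # Stop if no cycle was encountered, the answer is unchanged
        # (a fixed point), or the answer is ambiguous anyway.
        if goal not in cycled or result == table[goal] or result == AMBIGUOUS:
            del table[goal]
            cycled.discard(goal)
            return result, cycled
        table[goal] = result             # new provisional answer: iterate again
```

Encoding the three examples from the post as `{"foo": [["foo"]]}` (only the cyclic impl), `{"foo": [[], ["foo"]]}` (adding the u32 impl), and `{"foo": [[], ["foo", "bar"]]}` (adding the unprovable U: Bar where-clause) reproduces the error, ambiguous, and unique outcomes respectively.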

If you’re curious, the real chalk code is here. It is pretty similar to what I wrote above, except that it also handles “coinductive matching” for auto traits, which I won’t go into now. In any case, let’s apply this to our three examples of proving exists<T> { T: Foo }:

  • In the first example, where we only had impl<U> Foo for S<U> where U: Foo, the cyclic attempt to solve will yield an error (because the initial answer for cyclic calls is an error). There is no other way for a type to implement Foo, and hence the overall attempt to solve yields an error. This is the same as what we started with, so we just return and we don’t have to cycle again.
  • In the second example, where we added impl Foo for u32, we again encounter a cycle and return error at first, but then we see that T = u32 is a valid solution. So our initial result R is Unique[T = u32]. This is not what we started with, so we try again.
    • In the second iteration, when we encounter the cycle trying to process impl<U> Foo for S<U> where U: Foo, this time we will give back the answer U = u32. We will then process the where-clause and issue the query u32: Foo, which succeeds. Thus we wind up yielding a successful possibility, where T = S<u32>, in addition to the result that T = u32. This means that, overall, our second iteration winds up producing ambiguity.
  • In the final example, where we added a where clause U: Bar, the first iteration will again produce a result of Unique[T = u32]. As this is not what we started with, we again try a second iteration.
    • In the second iteration, we will again produce T = u32 as a result for the cycle. This time however we go on to evaluate u32: Bar, which fails, and hence overall we still only get one successful result (T = u32).
    • Since we have now reached a fixed point, we stop processing.

Why do we care about cycles anyway?

You may wonder why we’re so interested in handling cycles well. After all, how often do they arise in practice? Indeed, today’s rustc takes a rather more simplistic approach to cycles. However, this leads to a number of limitations where rustc fails to prove things that it ought to be able to do. As we were exploring ways to overcome these obstacles, as well as integrating ideas like implied bounds, we found that a proper handling of cycles was crucial.

As a simple example, consider how to handle “supertraits” in Rust. In Rust today, traits sometimes have supertraits, which are a subset of their ordinary where-clauses that apply to Self:

// PartialOrd is a "supertrait" of Ord. This means that
// I can only implement `Ord` for types that also implement
// `PartialOrd`.
trait Ord: PartialOrd { }

As a result, whenever I have a function that requires T: Ord, that implies that T: PartialOrd must also hold:

fn foo<T: Ord>(t: T) {
    bar(t); // OK: `T: Ord` implies `T: PartialOrd`
}

fn bar<T: PartialOrd>(t: T) { ... }

The way that we handle this in the Rust compiler is through a technique called elaboration. Basically, we start out with a base set of where-clauses (the ones you wrote explicitly), and then we grow that set, adding in whatever supertraits should be implied. This is an iterative process that repeats until a fixed-point is reached. So the internal set of where-clauses that we use when checking foo() is not {T: Ord} but {T: Ord, T: PartialOrd}.
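That iterative elaboration can be sketched as a tiny model. This is illustrative only (the map of supertraits and the function name are made up for the example, not compiler internals): we grow a set of where-clauses until adding supertraits produces nothing new.

```rust
use std::collections::{BTreeSet, HashMap};

// Grow a base set of where-clauses by adding declared supertraits
// until a fixed point is reached. Traits are modeled as plain strings.
fn elaborate(
    supertraits: &HashMap<&'static str, Vec<&'static str>>,
    base: &[&'static str],
) -> BTreeSet<&'static str> {
    let mut set: BTreeSet<_> = base.iter().copied().collect();
    loop {
        let before = set.len();
        // Collect every supertrait of every trait currently in the set.
        let new: Vec<_> = set
            .iter()
            .flat_map(|t| supertraits.get(t).into_iter().flatten().copied())
            .collect();
        set.extend(new);
        if set.len() == before {
            return set; // fixed point: nothing new was added
        }
    }
}

fn main() {
    let mut supertraits = HashMap::new();
    supertraits.insert("Ord", vec!["PartialOrd", "Eq"]);
    supertraits.insert("Eq", vec!["PartialEq"]);
    supertraits.insert("PartialOrd", vec!["PartialEq"]);
    // Checking foo<T: Ord> starts from {T: Ord} and elaborates outward.
    let env = elaborate(&supertraits, &["Ord"]);
    println!("{:?}", env); // {"Eq", "Ord", "PartialEq", "PartialOrd"}
}
```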

This is a simple technique, but it has some limitations. For example, RFC 1927 proposed that we should elaborate not only supertraits but arbitrary where-clauses declared on traits (in general, a common request). Going further, we have ideas like the implied bounds RFC. There are also just known limitations around associated types and elaboration.

The problem is that the elaboration technique doesn’t really scale gracefully to all of these proposals: often, the fully elaborated set of where-clauses is infinite in size. (We somewhat arbitrarily forbid cycles between supertraits to prevent this scenario in that special case.)

So we tried in chalk to take a different approach. Instead of doing this iterative elaboration step, we push that elaboration into the solver via special rules. The basic idea is that we have a special kind of predicate called a WF (well-formed) goal. The meaning of something like WF(T: Ord) is basically “T is capable of implementing Ord” – that is, T satisfies the conditions that would make it legal to implement Ord. (It doesn’t mean that T actually does implement Ord; that is the predicate T: Ord.) As we lower the Ord and PartialOrd traits to simpler logic rules, then, we can define the WF(T: Ord) predicate like so:

// T is capable of implementing Ord if...
WF(T: Ord) :-
    T: PartialOrd. // ...T implements PartialOrd.

Now, WF(T: Ord) is really an “if and only if” predicate. That is, there is only one way for WF(T: Ord) to be true, and that is by implementing PartialOrd. Therefore, we can also define the opposite direction:

// T must implement PartialOrd if...
T: PartialOrd :-
    WF(T: Ord). // ...T is capable of implementing Ord.

Now if you think this looks cyclic, you’re right! Under ordinary circumstances, this pair of rules doesn’t do you much good. That is, you can’t prove that (say) u32: PartialOrd by using these rules; you would have to use other rules for that (say, rules arising from an impl).

However, sometimes these rules are useful. In particular, if you have a generic function like the function foo we saw before:

fn foo<T: Ord>() { .. }

In this case, we would set up the environment of foo() to contain exactly two predicates {T: Ord, WF(T: Ord)}. This is a form of elaboration, but not the iterative elaboration we had before. We simply introduce WF-clauses. But this gives us enough to prove that T: PartialOrd (because we know, by assumption, that WF(T: Ord)). What’s more, this setup scales to arbitrary where-clauses and other kinds of implied bounds.
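To illustrate how the WF-clause in the environment makes the cyclic pair of rules useful, here is a toy backward-chaining prover. It is purely illustrative (goals are plain strings, each clause has a single-goal body, and a depth limit stands in for real cycle handling), not chalk's machinery.

```rust
// Try to prove `goal` from an environment of assumed facts plus
// (head :- body) clauses, cutting off at a fixed recursion depth
// so the cyclic clause pair cannot loop forever.
fn provable(goal: &str, env: &[&str], clauses: &[(&str, &str)], depth: u32) -> bool {
    if env.contains(&goal) {
        return true; // assumed in the environment
    }
    if depth == 0 {
        return false; // crude cycle cut-off
    }
    clauses
        .iter()
        .any(|&(head, body)| head == goal && provable(body, env, clauses, depth - 1))
}

fn main() {
    let clauses = [
        ("WF(T: Ord)", "T: PartialOrd"), // WF(T: Ord) :- T: PartialOrd.
        ("T: PartialOrd", "WF(T: Ord)"), // T: PartialOrd :- WF(T: Ord).
    ];
    // Inside foo::<T>, the environment is {T: Ord, WF(T: Ord)},
    // so T: PartialOrd follows in one step.
    let env = ["T: Ord", "WF(T: Ord)"];
    println!("{}", provable("T: PartialOrd", &env, &clauses, 8)); // true
    // With an empty environment, the cyclic pair alone proves nothing.
    println!("{}", provable("T: PartialOrd", &[], &clauses, 8)); // false
}
```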

Conclusion

This post covers the tabling technique that chalk currently uses to handle cycles, and also the key ideas of how Rust handles elaboration.

The current implementation in chalk is really quite naive. One interesting question is how to make it more efficient. There is a lot of existing work on this topic from the Prolog community, naturally, with the work on the well-founded semantics being among the most promising (see e.g. this paper). I started doing some prototyping in this direction, but I’ve recently become intrigued with a different approach, where we use the techniques from Adapton (or perhaps other incremental computation systems) to enable fine-grained caching and speed up the more naive implementation. Hopefully this will be the subject of the next blog post!

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality

di, 12/09/2017 - 05:33

This is an open letter to FCC Chairman Ajit Pai as he arrives in San Francisco for an event. He has said that Silicon Valley is a magically innovative place – and we agree. An open internet makes that possible, and enables other geographical areas to grow and innovate too.

Welcome to San Francisco, Chairman Pai! As you have noted in the past, the Bay Area has been a hub for many innovative companies. Our startups, technology companies, and service providers have added value for billions of users online.

The internet is a powerful tool for the economy and creators. No one owns the internet – we can all create, shape, and benefit from it. And for the future of our society and our economy, we need to keep it that way – open and distributed.

We are very concerned by your proposal to roll back net neutrality protections that the FCC enacted in 2015 and that are currently in place. That enforceable policy framework provides vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Abandoning these core protections will hurt consumers and small businesses alike.

As network engineers have noted, your proposal mischaracterizes many aspects of the internet, and does not show that repealing the 2015 open internet order would benefit anyone other than major broadband providers. Instead, this seems like a politically loaded decision made about rules that have not been tested, either in the courts or in the field. User rights, the American economy, and free speech should not be used as political footballs. We deserve more from you, an independent regulator.

Broadband providers are in a position to restrict internet access for their own business objectives: favoring their own products, blocking sites or brands, or charging different prices (either to users or to content providers) and offering different speeds depending on content type. Net neutrality prohibits network providers from discriminating based on content, so everyone has equal access to potential users – whether you are a powerful incumbent or an up-and-coming disruptive service. That’s key to a market that works.

The open internet aids free speech, competition, innovation and user choice. We need more than the hollow promises and wishful thinking of your proposal – we must have enforceable rules. And net neutrality enforcement under non-Title II theories has been roundly rejected by the courts.

Politics is a terrible way to decide the future of the internet, and this proceeding increasingly has the makings of a spectator sport, not a serious debate. Protecting the internet should not be a political, or partisan, issue. The internet has long served as a forum where all voices are free to be heard – which is critical to democratic and regulatory processes. These suffer when the internet is used to feed partisan politics. This partisanship also damages the Commission’s strong reputation as an independent agency. We don’t believe that net neutrality, internet access, or the open internet is – or ever should be – a partisan issue. It is a human issue.

Net neutrality is most essential in communities that don’t count giant global businesses as their neighbors, like your hometown in Kansas. Without it, consumers and businesses will not be able to compete by building and utilizing new, innovative tools. Proceed carefully – and protect the entire internet, not just giant ISPs.

The post Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Julien Vehent: Lessons learned from mentoring

di, 12/09/2017 - 03:49

Over the last few weeks, a number of enthusiastic students have asked me when the registration for the next edition of Mozilla Winter of Security would open. I've been saddened to inform them that there won't be an edition of MWoS this year. I understand this is disappointing to many who were looking forward to working on cool security projects alongside experienced engineers, but the truth is we simply don't have the time, resources, and energy to mentor students right now.


Firefox engineers are cramming through bugs for the Firefox 57 release, planned for November 14th. We could easily say "sorry, too busy making Firefox awesome, kthnksbye", but there is more to the story of not running MWoS this year than the release of 57. In this blog post, I'd like to explore some of these reasons, and maybe share tips with folks who would like to become mentors.


After running MWoS for 3 years, engaging with hundreds of students and personally mentoring about a dozen, I learned two fundamental lessons:

  1. The return on investment is extremely low, when it's not a direct loss to the mentor.
  2. Student engagement is very hard to maintain, and many are just in it for the glory.
Those are hard-learned lessons that somewhat shattered my belief in mentoring. Let's dive into each.

Return on investment

Many mentors will tell you that having an altruistic approach to mentoring is the best way to engage with students. That's true for short engagements, when you spare a few minutes to answer questions and give guidance, but it's utter bullshit for long engagements. It is simply not realistic to ask engineers to invest two hours a week over four months without getting something out of it. Your time is precious, have some respect for it. When we initially structured MWoS, we made sure that each party (mentors, students and professors) would get something out of it, specifically:
  • Mentors get help on a project they would not be able to complete alone.
  • Students get a great experience and a grade as part of their school curriculum.
  • Professors get interesting projects and offload the mentoring to Mozilla.
Making sure that students received a grade from their professors helped maintain their engagement (but only to some extent, more on that later), and ensured professors approved of the cost a side project would impose on their very busy students.

The part that mattered a lot for us mentors, besides helping train the next generation of engineers, was getting help on projects we couldn't complete ourselves. After running MWoS for three years and over a few dozen projects, the truth is we would be better off writing the code ourselves in the majority of cases. The time invested in teaching students would be better used implementing the features we're looking for, because even when students completed their projects, the code quality was often too low for the features to be merged without significant rewrites.
There have been exceptions, of course, and some teams have produced code of good quality. But those have been the exceptions, not the rule. The low return on investment (and often a negative return, when mentors invested time into projects that did not complete) meant that it became increasingly hard for busy engineers to convince their managers to dedicate 5 to 10% of their time to supporting teams that would likely produce low-quality code, if any code at all.

It could be said that we sized our projects improperly, and made them too complex for students to complete. It's a plausible explanation, but at the same time, we have not observed a correlation between project complexity and completion. This leads into the next point.

Student engagement is hard to maintain

You would imagine that a student who is given the opportunity to work with Mozilla engineers for several months would be incredibly engaged, and drop everything for the opportunity to work on interesting, highly visible, very challenging projects. We've certainly seen students like that, and they have been fantastic to work with. I remain friends with a number of them, and it's been rewarding to see them grow into accomplished professionals who know way more about the topics I mentored them on than I do today. Those are the good ones. The exceptions. The ones that keep on going when your other mentoring projects keep on failing.
And then, you have the long tail of students who have very mixed interest in their projects. Some are certainly overwhelmed by their coursework and have little time to dedicate to their projects. I have no issue with overwhelmed students, and have repeatedly told many of my mentee to prioritize their coursework and exams over MWoS projects.
The ones that rub me the wrong way are students who are more interested in getting into MWoS than in actually completing their projects. This category of resume-padding students cares more about the notoriety of the program than about the work they accomplish. They are very hard to notice at first, but after a couple of years of mentoring, you start to see the patterns: eagerness to name-drop, a GitHub account filled with forks of projects and no authored code, vague technical answers during interview questions, constant mention of their references and people they know, etc.

When you mentor students who are just in it for the glory, their interest in the project quickly drops. Here's how it usually goes:
  • By week 2, you'll notice students have no plan to implement the project, and you find yourself holding their hands through the roadmap, sometimes explaining concepts so basic you wonder how they could not be familiar with them yet.
  • By week 4, students are still "going through the codebase to understand how it is structured", and have no plans to implement the project yet. You spend the meetings explaining how things work, and grow frustrated by their lack of research. Did they even look at this since our last meeting?
  • By week 6, you're pretty much convinced they only work on the project for 30min chunks when you send them a reminder email. The meetings are becoming a drag, a waste of a good half hour in your already busy week. Your tone changes and you become more and more prescriptive, less and less enthusiastic. Students nod, but you have little hope they'll make progress.
  • By week 8, it's the mid-term, and no progress is made for another month.
You end up cancelling the weekly meeting around week 10, and ask students to contact you when they have made progress. You'll hear back from them 3 months later because their professor is about to grade them. You wonder how that's going to work, since the professor never showed up to the weekly meeting, and never contacted you directly for an assessment. Oh well, they'll probably get an A just because they have Mozilla written next to their project...
This is a somewhat overly dramatic account of a failed engagement, but it's not at all unrealistic. In fact, of the dozen projects I mentored, this probably happened on half of them.

The problem with lowly-engaged students is that they are going to drain your motivation away. There is a particular light in the eye of the true nerd-geek-hacker-engaged-student that makes you want to work with them and guide them through their mistakes. That's the reward of a mentor, and it is always missing from students who are not engaged. You learn to notice it after a while, but often long after the damage done by the opportunists has taken away your interest in mentoring.

Will MWoS rise from the ashes?

The combination of low return on investment and poorly engaged students, in addition to a significant increase in workload, made us cancel this year's round. Maybe next year, if we find the time and energy, we will run MWoS again. It's also possible that other folks at Mozilla, and in other organizations, will run similar programs in the future. Should we run it again, we would be a lot stricter in filtering students, and make sure they are ready to invest a lot of time and energy into their projects. This is fairly easy to do: throw them a challenge during the application period, and check the results. "Implement a crude Diffie-Hellman chat on UDP sockets, you've got 48 hours", or anything along those lines, along with a good one-hour conversation, ought to do it. We were shy to ask those questions at first, but it became obvious over the years that stronger filtering was desperately needed.
For folks looking to mentor, my recommendation is to open your organization to internships before you do anything else. There's a major difference in productivity between interns and students, mostly because you control 100% of an intern's daily schedule, and can make sure they are working on the tasks you assign them. Interns often complete their projects and provide direct value to the organization. The same cannot be said of most MWoS mentees.
Categorieën: Mozilla-nl planet

Firefox Nightly: Developer Tools visual refresh coming to Nightly

ma, 11/09/2017 - 22:54

Good evening, Nightly friends! As the UX designer for DevTools, I’ve been working on fresh new themes for Firefox 57. My colleague Gabriel Luong is handling the implementation and will be landing new syntax colors in Nightly soon. I want to give you a preview of the new changes and explain some of the reasoning behind them. I’ll also be inviting you to test the new design and give feedback.

New Nightly icon, some Photon colors, new tabs

57: New icon, new colors, new tabs

Firefox 57’s new design—codenamed Photon—features vibrant colors and bold, modern styling. Aligning with Photon was the main goal of this DevTools restyling, and my hope was to use this opportunity to improve the usability of the tools with cleaner interfaces and more readable text.

The new DevTools tab bar is a simpler version of the new Firefox tab bar. Compared to the old tabs, this means fewer lines, slightly more padding, and subtler use of color.

New DevTools tabs

New DevTools tabs

In dark mode, all the slate blues have been replaced with deep grays, and the sidebars are a darker shade to give more visual priority to the center column.

New dark debugger

New dark debugger

Syntax highlighting was the most challenging part of this project due to the abundance of opinions and the lack of solid research. To keep my decisions as data-informed as possible, I referenced the following resources:

  • Each color was checked for accessible contrast levels to keep the new themes AA-compliant.
  • This study on syntax highlighting showed that it’s beneficial to highlight a larger variety of keywords with different colors.
  • This study on computer readability concluded that, while light themes are generally better than dark themes for readability, many people have a good experience with chromatic dark themes that feature the universal favorite color: blue.
  • Using the Sim Daltonism app, the themes were informally checked for color blindness conditions.

In addition, I wanted to move away from the use of red for non-error text, and mostly use cool colors accented with warm colors. After some experimentation in the browser toolbox, a blue/magenta/navy theme emerged based on the Photon design system colors.

The old design used translucency to de-emphasize <head>, <script> tags, and hidden elements, which made them a bit difficult to read. For the new design, head and script tags will be treated normally, since they tend to be some of the most important elements in HTML. Hidden divs and other elements will be desaturated instead of translucent.

Firefox's current syntax highlighting

Old HTML/CSS

New syntax highlighting - HTML/CSS

New HTML/CSS

For the dark theme, I aimed for a slightly lower-contrast, calmer theme, intended for lengthy screen-staring sessions in dimmer rooms. (There’s a huge variety in the contrast levels of popular dark themes, but for this project, it felt important to balance the light theme’s high contrast with a lower-contrast theme.) The bold Photon colors looked too glaring against a dark background, so I created a more pastel version of each color for this theme.

Old HTML/CSS (dark)

Old HTML/CSS (dark)

New HTML/CSS (dark)

New HTML/CSS (dark)

For JavaScript in the Debugger, I added a few extra colors to allow for more variation than what the previous theme had—for example, keywords and properties will now be different colors. These mockups show the general color direction, but exact highlighting patterns are under discussion and will continue to be developed.

New dark and light JS highlighing schemes

New JS colors (tentative)

Feedback Wanted

These changes should be arriving in a few days. Much more polish is planned, so if you have any feedback, I’d love to hear it! I know dealing with UI changes can be jarring, but try it out for a couple days in your usual workflow and let me know what you think. I hope to hear from both developers and designers working in all different kinds of environments, and I’m especially interested in hearing from users with accessibility needs.

You can send me feedback either through this Discourse thread or by talking to me on Twitter. Thank you!

Categorieën: Mozilla-nl planet

Air Mozilla: Automating Web Accessibility Testing

ma, 11/09/2017 - 22:00

Automating Web Accessibility Testing A conclusion to my internship on automating web accessibility testing.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 11 Sep 2017

ma, 11/09/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

The Mozilla Blog: A Copyright Vote That Could Change the EU’s Internet

ma, 11/09/2017 - 09:01
On October 10, EU lawmakers will vote on a dangerous proposal to change copyright law. Mozilla is urging EU citizens to demand better reforms.

 

On October 10, the European Parliament Committee on Legal Affairs (JURI) will vote on a proposal to change EU copyright law.

The outcome could sabotage freedom and openness online. It could make filtering and blocking online content far more routine, affecting the hundreds of millions of EU citizens who use the internet every day.

Dysfunctional copyright reform is threatening Europe’s internet

Why Copyright Reform Matters

The EU’s current copyright legal framework is woefully outdated. It’s a framework created when the postcard, and not the iPhone, was a reigning communication method.

But the EU’s proposal to reform this framework is in many ways a step backward. Titled “Directive on Copyright in the Digital Single Market,” this backward proposal is up for an initial vote on October 10 and a final vote in December.

“Many aspects of the proposal and some amendments put forward in the Parliament are dysfunctional and borderline absurd,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “The proposal would make filtering and blocking of online content the norm, effectively undermining innovation, competition and freedom of expression.”

Under the proposal:

  • If the most dangerous amendments pass, everything you put on the internet will be filtered, and even blocked. It doesn’t even need to be commercial — some proposals are so broad that even photos you upload for friends and family would be included.

 

  • Linking to and accessing information online is also at stake: extending copyright to cover news snippets will restrict our ability to learn from a diverse selection of sources. Sharing and accessing news online would become more difficult through the so-called “neighbouring right” for press publishers.

 

  • The proposal would remove crucial protections for intermediaries, and would force most online platforms to monitor all content you post — like Wikipedia, eBay, software repositories on Github, or DeviantArt submissions.

 

  • Only scientific research institutions would be allowed to mine text and datasets. This means countless other beneficiaries — including librarians, journalists, advocacy groups, and independent scientists — would not be able to make use of mining software to understand large data sets, putting Europe at a competitive disadvantage in the world.
Mozilla’s Role

In the weeks before the vote, Mozilla is urging EU citizens to phone their lawmakers and demand better reform. Our website and call tool — changecopyright.org — makes it simple to contact Members of European Parliament (MEPs).

This isn’t the first time Mozilla has demanded common-sense copyright reform for the internet age. Earlier this year, Mozilla and more than 100,000 EU citizens dropped tens of millions of digital flyers on European landmarks in protest. And in 2016, we collected more than 100,000 signatures calling for reform.

Well-balanced, flexible, and creativity-friendly copyright reform is essential to a healthy internet. Agree? Visit changecopyright.org and take a stand.

Note: This blog has been updated to include a link to the reform proposal.

The post A Copyright Vote That Could Change the EU’s Internet appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Tantek Çelik: My First #Marathon @theSFMarathon

ma, 11/09/2017 - 07:30

I started writing this San Francisco Marathon race write-up the evening after the race, braindumping so many thoughts and feelings from various points of the race, the memories pouring forth not in any particular order. Over the next month I sat down for a half hour here, an hour there, and with the help of a race map and my Strava record, closed my eyes, recalled my memories one mile at a time, focusing on where I ran and what it felt like. I wrote paragraphs for each, integrating those visual memories with my post-race braindump.

Seven weeks after the race itself, here’s how it went.

Morning of the race

We woke up so early that morning of July 23rd — I cannot remember exactly when. So early that maybe I blocked it out. My friend Zoe had crashed on my couch the night before — she decided just a month before to race with me in solidarity.

Having laid out our race kits the night before, we quickly got ready. I messaged my friend & neighbor Michele who came over and called a Lyft for us. The driver took us within a couple of blocks of Harrison & Spear street, and we ran the rest of the way. After a quick pitstop we walked out to the Embarcadero to find our starting waves.

I only have a few timestamps from the photos I took before the race.

The Embarcadero and Bay Bridge still lit with lights at dawn

05:32. The Bay Bridge lights with a bit of dawn’s orange glow peeking above the East Bay mountains.

We sent Michele on her way to wave 3, and Zoe & I found our way around the chain link fences to join the crowd in wave 5.

Tantek, a police officer in full uniform & cap, and Zoe lined up in wave 5 of the San Francisco Marathon 2017

05:47. We found a police officer in the middle of the crowd, or rather he found us. He had seen our November Project tagged gear and shouted out “Hey November Project! I used to run that in Boston!” We shared our experiences running the steps at Harvard Stadium. Glancing down we noticed he had a proper race bib and everything. He was doing the whole race in his full uniform, including shoes.

Tantek and Zoe in wave 5 waiting for the start of the San Francisco Marathon

05:52. We took a dawn photo of us in the darkness with the Bay Bridge behind us. Zoe calls this the “When we were young and naïve” shot.

We did a little warm up run (#no_strava) back & forth on the Embarcadero until just minutes before our wave start time.

Sunrise behind the Bay Bridge

05:57. I noticed the clouds were now glowing orange, reflections glistening on the bay, and took another photo. Wave 5 got a proper sunrise send-off.

As we were getting ready to start, Zoe told me to feel free to run ahead, that she didn’t want to slow me down.

In the weeks leading up to the race, all my runner friends had told me: enjoy yourself, don’t focus on time. So that’s what I told Zoe: Don’t worry about time, we’re going to support each other and enjoy ourselves. She quietly nodded. We were ready.

The San Francisco Marathon 2017 Full Marathon Course Map

Map of the 2017 full marathon course from the official website.

Mile 1

06:02. We started with the crowd. That was the last timestamp I remember.

We ran at an easy sub-10minute/mile pace up The Embarcadero, you could see the colors of everything change by the minute as the sun rose.

It took deliberate concentration to keep a steady pace and not let the excitement get to me. I focused on keeping an even breath, an even pace.

That first mile went by in mere moments. I remember feeling ready, happy, and grateful to be running with a friend. With all that energy and enthusiasm from the crowd, it felt effortless. More like gliding than running. Then poof, there went the Mile 1 sign.

Mile 2

It was like gliding until the cobblestones in Fisherman’s Wharf. The street narrowed and they were hard to avoid. The cobblestones made running awkward and slowed us down. Noted for future races.

Mile 3

Running into Aquatic park was a relief.

Yes this is NPSF Mondays home territory, look out. We got excited and picked up the pace. Latent Monday morning competitive memories of so many burnout sprints on the sand. Turned the cove corner and ran up the ramp to the end of Van Ness.

Hey who just shouted my name from the sidelines? It was Lindsay Bolt at a med station. Stopped for a lightning hug and sprinted back to catch Zoe.

Back on asphalt, made it to the Fort Mason climb. Let’s do this.

Time to stretch those hill climbing legs. Strava segment PR NBD, even with a 1 min walk at the top at our 5/1 run/walk pace. Picked up the run just in time for...

Those smiles, that beard. Yes none other than perhaps two of the most positive people in NPSF, Tony & Holly! This race just keeps getting better and better.

These two are so good at sharing and beaming their joy out into the world, it lifts you off the ground. Seriously. I felt like I was running on air, flying.

Fort Mason downhill, more NPSF Mondays home territory. Glanced at my watch to see low-6min/mile pace (!!!). I know I’m supposed to be taking it easy, but it felt like less work to lean forward and flow with gravity’s downhill pull, rather than resist.

Mile 4

Slight veer to the right then left crossing the mile 3 marker to the Marina flats which brought me back to my sustainable ~10min/mile pace.

Somehow it got really crowded. We had caught up to a slower group and had to slalom back and forth to get through them. It was hard to keep a consistent pace. Slowed to about a 10-11min/mile.

Just as we emerged from the slow cluster, the path narrowed and sent us on a zig towards the beach away from Mason street. Then left, another left, and right back onto Mason street after the mile 4 marker.

What was the point of this momentum-killing jut out towards the bay? They couldn’t figure out some other place to put that distance? Really hope they fix it next year.

Mile 5

The long and fairly straight stretch of Mason street was a nice relief. Though it was at this point that I first felt like I had to pee. I figured I could probably ignore it for a bit, especially with the momentum we had picked up.

I should note that Zoe and I have been run/walking 5min/1min intervals so far this entire time, maybe fudging them a bit to overlap with the water stations so we could walk at each one. We grabbed a cup of water every time. One cup only.

So it was with the station before the mile 5 marker. That station was particularly well placed, right before one of the biggest hills in the course.

Mile 6

We flew by the mile 5 marker and started the uphill grind towards the bridge. I just ran this hill 3 weeks ago. Piece of cake I thought.

Practicing hills for a race course is a huge confidence booster, because nearly everyone else slows down, even slowing to a walk, because hills seem to intrinsically evoke fear in runners, likely mostly fear of the unknowns. How long is this hill? Am I going to run out of energy/steam/breath trying to run up it? Am I going to tire myself out? Practicing a hill removes such mysteries and you know just how long you’ll have to push how hard to summit it how fast. Then you can run uphill with confidence, knowing full well how much energy it will take to get to the top.

Despite all that, hills are still the hardest thing for me. Zoe quickly outpaced me and pulled ahead. I kept her in sight.

We kept a nice 5/1 run/walk pace. And while running up the hill, I glanced at my heart monitor to pace myself and keep it just under 150bpm.

Now for the bridge. Did I mention the view running up to the bridge? I did not, because there was almost no view of the bridge, just a blanket of fog in the Marina.

On the bridge we could see maybe a few hundred meters in front of us, and just the base of the towers. @karlthefog was out stronger than I’ve seen in any SF Marathon of the past four years. And I was quite grateful because I’d forgotten to put on sunscreen.

Mile 7

That blanket of fog also meant nearly no views, which meant nearly no one stopping to selfie in the middle of the bridge. This was the smoothest I have ever seen a race run over the Golden Gate Bridge.

The initial uphill on the bridge went by faster than I ever remember. As the road flattened approaching the halfway point, it started to feel like it was downhill. I couldn’t tell if that was an illusion from excitement or actually gravity.

Sometime after the midpoint, as the bridge cables started to rise once again, I finally saw my first NP racer wearing a November Project tagged shirt coming the other way. He was a tall guy that I did not recognize, likely visiting from another city. We shouted “NP” and high fived as we passed. Smack.

Mile 8

As we crossed the bridge into Marin, the fog thinned to reveal sunlit hills in front of us. Pretty easy loop around the North Vista Point parking lot, biggest challenge was dodging everyone stopping for gu. It was nice to get a bit of sunshine.

We looped back onto the bridge with just enough momentum to keep up a little speed, with the North tower in sight.

Mile 9

The Golden Gate Bridge felt even faster on the way back, and it actually felt good to run back into the fog. Sunglasses off.

We picked up even more speed as the grade flattened, eventually becoming a downhill as we approached the South Tower. That mile felt particularly fast.

Mile 10

Launching into the tenth mile with quite a bit of momentum, I kept us running a bit longer than the five minutes of our 5/1 run/walk, flying around the turns until the bottom of the downhill right turn onto Lincoln Boulevard.

I didn’t know it at the time, but I had just set PRs for the Strava segments across the bridge, having run it significantly faster than any practice runs.

Flying run turned into fast walk, we shuffled up the Lincoln climb at a good clip, which felt less steep than ever before.

Fast walked right up to the aid station; our run/walk timing had worked out well. After we downed a cup of water each and started running again, we both admitted that a quick bathroom stop would be a good idea, and agreed to take a pee break at the next set of porta-potties.

Mile 11

One more run/walk up to the top of the Lincoln hill. Been here many times, whether running the first half of the SF Marathon, coming the other direction in the Rock & Roll SF half, or racing friends up to the top. Again it felt less steep than before.

All those Friday NPSF hillsforbreakfast sessions followed by Saturday mornings with SFRC running trails in the Marin headlands had prepared me to keep pushing even after 10 miles. Zoe pulled ahead, stronger on the uphills.

We knew going in that we had different strengths, she was faster up the hills and I was faster down them, so we encouraged each other to go faster when we could, figuring we would sync-up on the flats.

Having reached the end of our 1 minute walk as we crested the hill, we picked up our run, I leaned forward and let gravity pull me through. Zooming down the hill faster than I’d expected, by the time I walked through the water stop at the bottom I had lost sight of Zoe. I kept walking and looking but couldn’t see her.

Apparently I had missed the porta-potties by the aid station; she had not, and had stopped as we had agreed.

Mile 12

Crossing mile marker 11, I turned around and started walking backwards, hoping to see Zoe. A few people looked at me like I was nuts but I didn’t care, I was walking uphill backwards nearly as fast as some were shuffling forwards. And I knew from experience that walking backwards works your muscles very differently, so I looked at it as a kind of super active-recovery.

After walking nearly a half mile backwards I finally spotted Zoe running / fast walking to catch-up; I think she spotted me first.

Just after we sync’d back up, and switched back to walking, a swing-dancing friend of mine who I had not seen in years spotted me and cheered us on at 27th & Clement!

We finally got to the top of the Richmond hill (at Anza street I think), and could see Golden Gate Park downhill in front of us.

Mile 12 was my slowest mile of the race, just after my fastest (mile 11). We picked up the pace once more.

Mile 13

We sped into the park, and slowed once we hit the uphill approaching the aid station there. I remember this point in the course very clearly from last year’s first half. At that point last year my knees were unhappy and I was struggling to finish. This year was a different story. Yes I felt the hill, however, my joints felt solid. Ankles, knees, hips all good. A little bit of soreness in my left hip flexor but nothing unmanageable.

However this hill did not feel easy like the others. Not sure if that was due to being tired or something else.

Making a note to practice this hill in particular if (when) I plan to next run the first half of the SF Marathon (maybe next year).

Speaking of which, just after the aid station is where they divide up the first half and full marathon runners. At the JFK intersection, the half runners turn left with a bit more uphill toward their last sprint to the finish, and the marathoners turn right, downhill towards the beach.

I have lost count of the number of times I have run down JFK to the beach, in races like Bay to Breakers, and Sunday training runs in Golden Gate Park. Zoe & I in particular have run this route more times than I can remember. This was super familiar territory and very easy for us to get into a comfortable groove and just go.

Mile 14

As we flew past the mile 13 marker, we high-fived (as we did at every mile marker we passed together), and I told Z hey we’re basically halfway done, we totally got this!

This part of JFK is always so enjoyable — a sweeping curving downhill with broad green meadows and a couple of lakes.

I saw the aid station at Spreckels Lake and gave Z a heads-up that I needed to take a quick pit stop.

Ran back into the fray and while I knew we were passing the bison on our right, I don’t actually remember looking over to see any. I think we were too focused on the road in front of us.

Mile 15

The mile 14 marker seemed to come up even quicker, maybe because we briefly stopped just a half mile or so before. Seeing that “14” had a huge impact on me, a number I had never before run up to in any race.

I remembered from the course map that we were approaching where the second half marathoners were going to start.

We turned left toward MLK drive, right by the second half start, and there was no sign of the second half marathoners.

My dad was running the second half, originally in wave 9, and we had thoughts of somehow trying to cross paths during our races. Not only was he long gone, but he had ended up starting in wave 5, and the second half overall started 15 minutes earlier than expected. Regardless I knew there was very little chance of catching him since all the second half runners were long gone.

MLK drive is a bit of a long uphill slog and we naturally slowed down a bit. It finally started to feel like “work” to get to the mile 15 marker.

Mile 16

Right after the mile 15 marker we zigged left then right onto the forgettably named Middle drive, which I had not run in quite a while, maybe ever. I vaguely remembered rollerblading on it many years ago.

The pavement was a bit rougher, and the slow uphill slog continued. I decided I would chew half of one of my caffeinated cherry Nuun energy tablets at the next aid station, swallowing it with water.

The half tablet started to fizz as I chewed it so I was happy to wash it down. The fizziness felt a bit odd in my stomach. So far in the race I had had zero stomach problems or weirdnesses, so this was maybe not the greatest idea. Yeah, that thing about don’t change your fuel on raceday, that. I was mostly ok, but I think the fizziness threw me off.

I wasn’t really enjoying this part of the race, despite it being in Golden Gate park. I wasn’t hating it either. It just felt kind of meh.

Mile 17

Crossing the mile 16 marker and high-fiving I remember thinking, only ten-ish miles left, that doesn’t seem so bad. Turning right back onto JFK felt good though, finally we were back in familiar territory.

Then I remembered we still had to run up and around Stow lake. When I saw the course map I remember looking forward to that, but at this point I felt done with hills and was no longer looking forward to it.

After we turned right and started running up towards Stow Lake, I decided to walk and wait to sync up with Z, which was good timing it turns out. My friend Michele (who started a couple of waves before us) was just finishing Stow Lake and on her way down that same street.

She expressed that she wasn’t feeling too good, I told her she looked great and she smiled. We hugged, she told me and Zoe that it was only about 15 minutes to go around the lake and come back down, which made it feel more doable.

Still, it continued to feel like “work”. As we ran past the back (South) side of the lake, it was nice to have a bit of downhill, especially down to the next mile marker.

Mile 18

Crossing the mile 17 marker I turned to Zoe and told her hey, less than ten miles left! Single digits! She managed a smile. We kept pushing up and around the lake.

The backside of the lake felt easier since I knew the downhill to JFK was coming up. Picked up speed again, and then walked once I reached JFK, waiting for Zoe to catch back up.

We could see the first half marathoners finishing to our left, and I had flashbacks to how I felt finishing the first half last year. I was feeling a lot better this year at mile 17+ than last year at mile 13+, and I actually felt pretty good last year. That was a huge confidence boost.

As they got their finishers medals, we had an uphill to climb toward the de Young museum tower. This was really the last major hill. Once we crested it and could see the mile 18 marker, knowing it was mostly downhill made it feel like we didn’t have that far to go.

Mile 19

More familiar territory on JFK. Another aid station as we passed the outdoor "roller rink" on the left. The sun finally started to break through the clouds & fog, and we could see blue skies ahead.

I chatted with Z a bit as we passed the Conservatory of Flowers, about how we have done this run so many times, and how it was mostly downhill from here.

Up ahead I heard a couple of people shouting my name and then saw the sign.

Tim & Amanda's “faster than fiber optics” sign cheering at mile 19 in the San Francisco Marathon

Photo by Amanda Blauvelt. Tim & Amanda surprised me with a sign at the edge of Golden Gate Park! (you can see me in the orange on the left).

I couldn’t help laughing. Ran up and hugged them both. Background: Last year Amanda ran the SF Marathon (her first full), and I conspired with her best friend from out of town to have her show up and surprise Amanda at around mile 10 by jumping in and running with her. The turnabout surprise was quite appreciated.

In my eager run up to Tim & Amanda, I somehow lost Zoe.

First I paused and looked around. Looked ahead to see if she had run past me and did not see her. Looked behind me to see if she was approaching and also did not see her.

I picked up the pace figuring she may have run past me when I saw Tim and Amanda, or I would figure it out later. (After the race Tim told me they saw Zoe moments after I had left).

The race looped back into Golden Gate park for a bit.

Mile 20

Passing the mile 19 marker, the course took us under a bridge, up to and across Stanyan street onto Haight street, the last noticeable uphill.

This was serious home territory for me, having run up Haight street to the market near Ashbury more times than I can remember.

Tantek running on Haight street just after crossing Ashbury.

Photo by mom. I saw my mom cheering at the intersection of Haight & Ashbury, and positioned myself clear of other runners because I knew she was taking photos. Then I went to her, hugged her, told her I love her, and asked where dad was. An hour ahead of me. No way I’m going to catch him before the finish.

I could see the mile 20 marker, but just as I was passing Buena Vista park on my right, I heard another familiar voice cheering me on. Turning to look I immediately recognized my friend Leah who helped get me into running in the first place, by encouraging me to start with very short distances.

She asked if I wanted company because she had to get in a 6-7 mile run herself and I said sure! Leah asked if I wanted to run quietly, or for her to talk or listen, and I said I was happy to listen to her talk about anything and appreciated the company.

I told her about how I’d lost Zoe earlier. Leah put Zoe’s info into the SF Marathon app on her phone to track Zoe’s progress to see if we could find her as we ran.

We were crushing it down the hill to Divisadero literally passing everyone else around us (downhills are my jam), and she was surprised at how well I looked and sounded so far into the race, at this point farther than I’d ever run before.

Mile 21

As we flew by the mile 20 marker, I remember thinking wow 20 miles and I feel great. I felt like I could just keep running on Clif blocks and Nuun electrolytes for hours. It was an amazing feeling of strength and confidence.

I realized I was doing something I thought I would never do, but more than that, it felt sustainable. I felt unstoppable.

My hip flexors were both a bit sore now, but at least they were evenly sore, which helped both balance things out, and then forget about them. My knees were just a tiny bit sore now, but again, about the same on both sides.

Just as we reached Scott street, they started redirecting racers up Scott to Waller. One more tiny uphill block, I remember complaining and then thinking just gotta push through. Up to Waller street then again a slight eastward downhill.

Once again picking up speed, I really started to enjoy all the cheering from folks who had come out of their houses to cheer us on. There was a family with kids offering small cups of water and snacks to the runners.

As we approached the last block before Buchanan street, I could hear a house on the North side blasting the Chariots of Fire theme song on huge speakers. Louder than the music I was listening to. Brilliant for that last Waller street block, which happened to be uphill. Of course it was a boost.

Making the right turn to run down Buchanan street, we only made it a block before they redirected us eastward down Hermann street to the Market street crossing and veering right onto Guerrero.

Running these familiar streets felt so easy and comfortable.

Once again we picked up speed running downhill, barely slowing down to pick up two cups of Nuun at the aid station before the mile 21 marker.

Mile 22

We kept running South on Guerrero until the course turned East again at 16th street.

16th street in the Mission is a bit of a mess. Lots of smells, from various things in the street to the questionable oily meats spewing clouds of smoke from questionable grills. I think this was my least favorite stretch of the race. Literally disgusting.

The smells didn’t clear until about Folsom street. Still relatively flat, I knew we had a climb coming up to Bryant street, so I was mentally ready for it.

Just before we reached Bryant street, they redirected us South one block onto 17th street.

Still no sign of Zoe. With all these race route switches I was worried that we had been switched different ways, and would have difficulty finding each other.

The racer tracking app was also fairly inaccurate. In several places it showed Zoe as being literally right by us, or just ahead or just behind when she was nowhere to be seen.

Mile 23

Slow climb up to Potrero. It’s not very enticing running there. Mostly industrial. Still felt familiar enough, we just pressed on, occasionally looking for Zoe.

Leah kept up a nice friendly distracting dialog that helped this fairly unremarkable part of the course go by quicker than it otherwise would have.

Another aid station, more Nuun. I started to feel I wasn’t absorbing fluids as fast as I had been earlier. Something also felt a bit off about my stomach. Not sure if it was the fizzing from the cherry Nuun tablet I had chewed on. Or the smells of 16th street.

I only sipped half a cup of Nuun and tossed the rest.

We were almost at 280, turned briefly down Mississippi street for a block, then over on Mariposa to cross underneath 280, and I could see the mile 23 marker just on the other side.

Mile 24

Downhill to Indiana street so we flew right by the marker.

Twenty-three miles done. Just a little over 5km left.

Made a hard right onto Indiana street where it flattened out once more. We had entered the industrial backwaters of the Dogpatch.

Still run/walking at about a 5 to 1 split, but I was starting to slowly feel more tired. No “wall” like I have often heard about. I wondered if the feeling was really physical, or just mental.

Maybe it was just the street and the few memories I had associated with it. Some just two years old, some older. Nothing remarkable. Maybe this was my chance to update my memories of Indiana street.

The sun was shining, and I was running. Over 23 miles into my first marathon and I still felt fine. There were scant few out cheering on this stretch. But I knew the @Nov_Project_SF cheerstation wasn’t far.

The sound of two people shouting my name brought my attention back to my surroundings. My friends @Nov_Project Ava and Tara had run backwards along the course from the cheerstation!

They checked in with me, asked how I was doing. I was able to actually answer while running which was a good sign. They ran with me a bit and then sprinted ahead a few blocks to just past the next corner.

Turning onto 22nd street, I grabbed another half cup of Nuun. At this point I did not feel like eating anything, my stomach had an odd half-full not-hungry feeling. I sipped the Nuun and tossed the cup.

There were Ava & Tara again, cheering me on, like a personal traveling cheersquad. So grateful. I’m convinced smiling helps you go faster, and especially when friends are cheering you on. They sprinted on ahead again and I lost sight of them.

Finally the turn onto 3rd street. There is something very powerful about feeling like you are finally heading directly towards the finish.

It was getting warmer, and the sweat was making it harder to see. This is the point where I was glad I had brought my sunglasses with me, despite the thick clouds this morning. No clouds remained. Just clear blue skies.

Kept going through Dog Patch and China Basin, really not the most attractive places to run. Except once again I saw Ava & Tara up ahead at 20th street, and they cheered us through the corner, and then disappeared again.

Just one block East on 20th and then North again onto Illinois street. I could see the next marker.

Mile 25

Just over a couple of miles left. Slight right swerve onto Terry A Francois Boulevard, and I could see and hear the very excited Lululemon Cheerstation waving their signs, shouting, and cheering on all of us runners.

Then perhaps the second best part of the race. Actually maybe tied for best with finishing.

I saw brightly colored neon shirts up ahead and heard a roar. (I’m having trouble even writing this four weeks later without tearing up.)

The November Project San Francisco cheerstation. What a sight even from a distance.

My friend Caity Rogo ran towards me & Leah, and I had this thought like I should be surprised to see her but I couldn’t remember why.

Leah and Tantek running with Caity beside them right before the NPSF cheergang at the San Francisco Marathon 2017

Photo by Kirstie Polentz. I do not remember what I said to Caity. Later I would remember that just the day before she was away running a Ragnar relay race! Somehow she had made it back in time to cheer.

At this point my cognitive bandwidth was starting to drop. I had just enough to focus on the race, and pay attention to the amazing friends cheering and accompanying me.

Tantek running through the November Project cheergang

Photo by Lillian Lauer. So many high fives. So many folks pacing me. I think there were hugs? It was kind of a blur. I asked and found out Zoe was about 2 min ahead of me, so I picked up the pace in an attempt to catch up to her.

Tantek with Nuun cup walking next to Henri asking him how he is doing during the San Francisco Marathon 2017

Photo by Kirstie Polentz. I remember Henry Romeo asking me what I wanted from the next water station, running ahead, bringing me a Nuun electrolyte cup, and keeping me company for a bit.

After snapping a few photos, my pal Krissi ran with me despite a recent calf injury, grinning with infectious joy and confidence. She ran me past the mile 25 marker, checking to make sure I was ok, how I was feeling, etc.

As good as I thought I was feeling before, the cheer station was a massive boost.

Mile 26

Found Zoe again! Or rather she saw me. She was walking slowly or had stopped and was looking for me.

Having reconnected I checked in with her, how was everything feeling. We kept up our run/walk, with still a bit over a mile left.

Apparently there was a ballgame on at AT&T park. I couldn’t help but feel a sharp contrast between the sports fans on one side of the race barrier and the runners on the other. Each of us was doing our own thing. A few sports fans cheered us on and reached across to give out high fives, which we gladly accepted.

Finally we made it around the ballpark and out to the Embarcadero, our home stretch. Half mile or so to go.

We were all tired, with various body parts aching, and yet did our best to keep up a decent pace.

Leah peeled off at mile 26, shouting encouragements for us to push hard to the finish.

Finish

Past the mile 26 marker we curved a little to the left and could see the finish just a few blocks in front of us.

I talked Zoe into keeping up a regular pace as we approached the finish line. Checking to make sure she was good and still smiling, I picked up the pace with whatever energy I had, just to see how many people I could pass in the last 400 meters.

I actually saw people slowing down, which felt like an enticement to go even faster. I sprinted the last 100m as fast as I could, passing someone with just feet to go to the finish. Maybe a silly bit of competitiveness, but it’s always felt right to push hard to a finish, using any motivation at hand.

5:35:59.

I kept walking and got my finishers medal.

Zoe and Tantek at the finish of the San Francisco Marathon wearing their medals

Turning around I found Zoe. We had someone take our photo. We had done it. Marathon finishers!

We kept walking and found my dad. We picked up waters & fruit cups and saw my mom & youngest sister on the other side of the barriers.

Tantek and Zoe stretching after finishing the San Francisco Marathon

Photo by Ayşan Çelik. We stopped to stretch our legs and take more photos.

We found more @Nov_Project friends. I stopped by the Nuun booth and kept refilling my cup and Steve gave me big hug too.

I was a little sore in parts, but nothing was actually hurting. No blood, no limping, no pain. Just a blister on one left toe, and one on my right heel that had already popped. Slight chafing on my right ankle where my shoe had rubbed against it.

I felt better than after most of my past half marathon races. Something was different.

Whether it was all the weekly hours of intense Vinyasa yoga practice (from the March through May yoga teacher training @YogaFlowSF and since), the months of double-gang workouts @Nov_Project_SF (5:30 & 6:30 every Wednesday morning), or doing nearly all my long runs on Marin trails on Saturday mornings hosted by @SFRunCo in Mill Valley, setting new monthly meters-climbed records leading up to the race, I was stronger than ever before, physically and mentally. Something had changed.

I had just finished my first marathon, and I felt fine.

Tantek wearing the San Francisco Marathon 52 Club hoodie, finisher medal, and 40 for 40 medal.

I waited til I got home to finally put on my San Francisco Marathon “52 Club” hoodie (for having run the first half last year, and the second half the year before that), with the medals of course.

As much as all the training prepared me as an individual, the experience would not have been the same without the incredible support from fellow @Nov_Project runners, from my family, even just knowing my dad was ahead of me running the second half, Leah and other friends that jumped in and ran alongside, and especially starting & finishing with my pal Zoe, encouraging each other along the way.

Grateful for having the potential, the opportunity to train, and all the community, friends, and family support. Yes it took a lot of personal determination and hard work, but it was all the support that made the difference. And yes, we enjoyed ourselves.

(Thanks to Michele, Zoe, Krissi, and Lillian for reviewing drafts of this post, proofreading, feedback, and corrections! Most photos above were posted previously and link to their permalinks. The few new to this post are also on Instagram.)

Categorieën: Mozilla-nl planet

Cameron Kaiser: Irma's silver lining: text is suddenly cool again

ma, 11/09/2017 - 01:24
In Gopherspace (proxy link if your browser doesn't support it), plain text with low bandwidth was always cool and that's how we underground denizens roll. But as our thoughts and prayers go to the residents of the Caribbean and Florida peninsula being lashed by Hurricane Irma, our newspapers and websites are suddenly realizing that when the crap hits the oscillating storm system, low-bandwidth text is still a pretty good idea.

Introducing text-only CNN. Yes, it's really from CNN. Yes, they really did it. It loads lickety-split in any browser, including TenFourFox and Classilla. And if you're hunkered down in the storm cellar and the radio's playing static and all you can get is an iffy 2G signal from some half-damaged cell tower miles away, this might be your best bet to stay current.

Not to be outdone, there's a Thin National Public Radio site too, though it only seems to have quick summaries instead of full articles.

I hope CNN keeps this running after Irma has passed because we really do need less crap overhead on the web, and in any disaster where communications are impacted, low-bandwidth solutions are the best way to communicate the most information to the most people. Meanwhile, please donate to the American Red Cross and the Salvation Army (or your relief charity of choice) to help the victims of Hurricanes Harvey and Irma today.


Andy McKay: My third Gran Fondo

zo, 10/09/2017 - 09:00

Yesterday was my third Gran Fondo; the last was in 2016.

Last year was a bit of an odd year, I knew what to face, yet I struggled. I was planning on correcting that this year.

The most important part of the Fondo is the months and months of training beforehand. This year that went well. Up to this point I've been on the bike for 243 hours and 5,050km over 198 bike rides. I only ended up doing Mt Seymour 3 times, but rides with Steed around North and West Vancouver gave me some extra hill practice.

I managed to lose 20lbs over the training, but have gained a lot of muscle mass especially in my legs. I also did the challenge route of the Ride to Conquer Cancer with some awesome Mozilla friends. The weekend before I did the same route 3 times, on the last day I hit a pile of personal records.

Two equipment changes also helped. I had a computer to tell me how fast I was going (yeah, should have had one earlier) and I moved from Mountain Bike pedals over to Shimano road pedals.

So, knowing what I was facing, I had a slightly different plan, targeting my nemesis: the last hour of the ride. To do that I focused on:

  • Drafting on the flats where I can
  • Taking energy gels every hour to replenish electrolytes
  • Not charging up every hill
  • Going for a faster cadence in a lower gear
  • Saving the energy for the last half (same as last year)

As the day arrived a new challenge appeared. It was raining. Pretty much the entire bloody way.

The first part felt good; I knew what time I would have to arrive at each rest stop to beat my previous time. I made it to the first stop 13 mins ahead of schedule, then to the next stop about 10 mins ahead of schedule. Then the sticky piece of plastic with the times on it flew off.

At this point I was getting anxious, I seemed to be slowing down. All I could remember was the time I needed to be at the last rest stop. Then came the hills.

The differences here were: the rain was keeping me cool so I wasn't dehydrating (the energy gels helped too), I knew my pace, and I had energy in my legs. Over the last 20 km I floored it (well, comparatively for me), whereas in previous years I just fell apart. The whole second half of the race was full of personal records.

The result? I ended up crossing at 4h 44m. That's 17 minutes faster than a younger version of me.

Today, my knees, wrists and other parts of my body all hurt and I skipped the Steed ride. But other than that I'm feeling not too bad.

Also, I signed up for the Fondo next year. I'm going to get below 4hr 30min next year.


QMO: Firefox Developer Edition 56 Beta 12, September 15th

vr, 08/09/2017 - 17:43

Hello Mozillians!

We are happy to let you know that on Friday, September 15th, we are organizing the Firefox Developer Edition 56 Beta 12 Testday. We’ll be focusing our testing on the following new features: Preferences Search, CSS Grid Inspector Layout View, and Form Autofill.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

