Mozilla Nederland
De Nederlandse Mozilla-gemeenschap (the Dutch Mozilla community)

Nikki Bee: RustConf 2016

Mozilla planet - mo, 26/09/2016 - 23:21

So, a couple weeks ago I got to go to RustConf 2016, the first instance of an annual event! What was it about? The relatively new programming language Rust, from Mozilla!

Attending

I was only able to attend thanks to three factors: a travel reimbursement allowance from Outreachy for my recent internship with them; a scholarship ticket from RustConf to cover attendance fees; and being able to stay at a friend’s place in Portland for a few days. Were it not for each of those, I would not have felt financially comfortable attending! I’m very grateful for all of the support.

The Morning

I went to RustConf with my now wife, Decky Coss (who was also able to attend under the same circumstances as me!). We decided that since the event is only a day long, and given our track record of having a hard time getting the “full” experience out of conventions, we would work extra hard on getting there on time. As such, we only missed 25 minutes of the first talk.

Of course, I’m not pleased that that happened, but we had the disadvantage of some rough flights the day before, as well as not staying on or near the location. Plus, we’re just not cut out for whatever it takes to attend conferences studiously. But we did our best that day and I think that was OK.

As I said, we missed the first half of the opening keynote, given by Aaron Turon and Niko Matsakis, but I really liked what I saw. It felt like a neat overview of Rust, especially concerning common issues newbies (like me!) run into with the language. It was gratifying hearing them point out the very same trip-ups I deal with, especially since they talked about how Rust developers are aware of these issues! They are looking into ways to help stop these trip-ups, such as better language syntax, and updating the Rust documentation.

Acting as a perfect follow-up to the keynote was “The Illustrated Adventure Survival Guide”, by Liz Baillie, which was easily the funnest talk. It was a drawn slideshow/comic making many metaphors and jokes representing how Rust works. I felt like it would be hard to enjoy if you didn’t have a decent understanding of Rust but then again, I suppose nobody would be there if they didn’t!

The last talk of the morning was “Integrating Rust and VLC”, by Geoffroy Couprie. It felt a bit hard to follow in that I didn’t know what it was leading to. All the same, I greatly enjoyed seeing both a practical implementation of Rust (in this case, handling video streaming codecs), and how Rust can act as a friendlier alternative to C-like languages (something that has been heavily on my mind).

The Afternoon

Lunchtime! Lunch was set to be an hour and a half long, which, when I first saw the schedule, I thought would be excessively long. Somehow it went by very fast! Plus, following such a long morning, I was very grateful to get to unwind for a bit from listening to so much talking.

However, the dessert I had gave me a bad sugar rush and crash and I felt very out of it for the next few hours, so I don’t remember much about the talks during this time. None of them really did much for me, not that they were bad, but I just wasn’t able to pay much attention.

The Evening

Snack time! Again, I was glad to have a respite where I could chill out for a bit. Especially after having a hard time keeping focus for a few hours. This break was not nearly as long as lunch, but if it was any longer, I think the conference would have been too long.

The first talk of the evening was “A Modern Editor in Rust”, presented by Raph Levien. Raph talked about the text editor Xi. I was excited by some of the things offered in it, since it’s made to tackle unique issues, such as quickly loading and scrolling through massive (like, hundreds of megabytes big) files. I didn’t see much in it that I thought I needed personally, but I appreciate seeing programs for solving unique challenges like that.

The next talk I ended up writing so much about, I put it into another section.

The Second-Last Talk

Next was “RFC: In Order to Form a More Perfect union”, by Josh Triplett, which was easily the most impactful talk to me. It was a chronicle of a Request For Comments idea created by the speaker, which was recently committed into Rust! Josh described the RFC process in the Rust community as “welcoming, open, and inclusive”, all of which echoes my experience as a Rust newbie.

Josh talked about each step of their submission, including how far their proposal went (which was a bit of a surprise to them!). The best part for me was that, despite it becoming one of the most discussed issues on Rust’s GitHub, Josh never found the proposal being taken from them by a more experienced contributor, no matter how high-level the discussion got. It’s not that I would expect such a rude thing to happen easily, but more that I greatly appreciate the speaker being given ample room to learn how to become a better contributor.

Previously, I contributed to Servo (the browser engine written in Rust) for my Outreachy internship, which had formal processes for my additions to be merged into the main code. That makes it obviously different from how it would be for any other contributor, since my internship gave me a unique position in relation to a mentor who knew the ins and outs of the Servo project. I never felt like I would be able to replicate this anywhere else.

All that said, if I ever contribute to Rust more informally, this talk has given me a great idea of what to expect, and how to prepare to enter into doing so. Whenever I have an idea I want to share with Rust, I won’t hesitate over doing so! Also, the next time I contribute to another FOSS project, I’ll be thinking of the process here, and try to map the open Rust process to that project!

The Closing Keynote

Finally, the closing keynote, presented by Julia Evans. She’s the only person I already recognized, as we’re both Recurse Center alums. During the snack break Decky and I decided to try to find her to chat for a couple of minutes. We managed to do so, as well as get copies of a delightful zine she wrote about Linux tools she’s found invaluable for programming! You can find it on her website.

By this time I felt pretty fatigued from the long day, but Julia’s talk was very energetic, which made me enthused to pay attention. Her keynote was focused on learning systems programming with Rust while new to the language: two things that felt challenging to her, but that she was able to get into surprisingly quickly. I really relate to that sort of experience, since many times a new project I wanted to do seemed insurmountable, but became much easier after I started!

I’ve never done any systems programming, but when I’ve told friends about Rust, a couple of them have asked whether they can do X or Y sort of thing in it, to which I don’t often have an answer. Based on Julia’s talk, I feel comfortable saying systems programming is a good bet!

Other Things

That’s it for the talks themselves, although I had a few more thoughts I wanted to share.

  • As part of being late (as I mentioned before), Decky and I missed the breakfast offered by RustConf, although we hadn’t realized there was any before we got there. Having lunch and snacks offered as part of the conference was fantastic. If Decky and I had to go out for lunch we probably would have missed a talk. Although, in the future, I’ll be skipping out on sweets, even free ones, because sugar is just really bad for me :(.

  • I’ve never done an event that’s so continuously long. The schedule for RustConf was about 9 hours straight, and I was quite exhausted by the end of it. In the future, I’d prefer an event with multiple, shorter days! Conventions in general are fatiguing for me, but having to sit all the time for a long day takes a lot out of me.

  • I know better than to assume on an individual level, but I felt dismayed by the low ratio of women to men present at the conference. As a result of feeling a bit stranded by that, I felt uncomfortable talking to strangers. The only people I talked to - for more than a few sentences - were people I knew, or recognized from online.

  • One of the talks ended in a very forced parody of Trump’s campaign slogan which got a lot of laughs from the crowd. When that happened, my only reaction was to turn to Decky and grimace to show her how unhappy I was. Comedy shouldn’t be used to trivialize fascists, it should be used to kill fascism.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 26 Sep 2016

Mozilla planet - mo, 26/09/2016 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Certificates: Mozilla wants to withdraw trust from StartCom and WoSign - Golem.de

Nieuws verzameld via Google - mo, 26/09/2016 - 18:34

Certificates: Mozilla wants to withdraw trust from StartCom and WoSign
Golem.de
Mozilla is apparently considering withdrawing trust from the certificate authorities (CAs) StartCom and WoSign. In a document published on Google Docs, which among others Gervase Markham of Mozilla and Ryan Sleevi from the ...

Categorieën: Mozilla-nl planet

Doug Belshaw: Curriculum as algorithm

Mozilla planet - mo, 26/09/2016 - 16:57

[Image: algorithm.jpg]

Way back in Episode 39 of Today In Digital Education, the podcast I record every week with Dai Barnes, we discussed the concept of ‘curriculum as algorithm’. If I remember correctly, it was Dai who introduced the idea.

The first couple of things that pop into my mind when considering curricula through an algorithmic lens are:

But let’s rewind and define our terms, including their etymology. First up, curriculum:

In education, a curriculum… is broadly defined as the totality of student experiences that occur in the educational process. The term often refers specifically to a planned sequence of instruction, or to a view of the student’s experiences in terms of the educator’s or school’s instructional goals.
[…]
The word “curriculum” began as a Latin word which means “a race” or “the course of a race” (which in turn derives from the verb currere meaning “to run/to proceed”). (Wikipedia)

…and algorithm:

In mathematics and computer science, an algorithm… is a self-contained step-by-step set of operations to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks.

The English word ‘algorithm’ comes from Medieval Latin word algorism and French-Greek word “arithmos”. The word ‘algorism’ (and therefore, the derived word ‘algorithm’) come from the name al-Khwārizmī. Al-Khwārizmī (Persian: خوارزمی‎‎, c. 780–850) was a Persian mathematician, astronomer, geographer, and scholar. English adopted the French term, but it wasn’t until the late 19th century that “algorithm” took on the meaning that it has in modern English. (Wikipedia)

So my gloss on the above would be that a curriculum is the container for student experiences, and an algorithm provides the pathways. In order to have an algorithmic curriculum, there need to be disaggregated learning content and activities that can serve as data points. Instead of a forced group march, the curriculum takes shape around the individual or small groups - much like what is evident in Khan Academy’s Knowledge Map.

The problem is, as the etymology of ‘algorithm’ suggests, it’s much easier to do this for subjects like maths, science, and languages than it is for humanities subjects. The former subjects are true/false, have concepts that obviously build on top of one another, and are based on logic. Not all of human knowledge works like that.

For this reason, then, we need to think of a way in which learning content and activities in the humanities can be represented in a way that builds around the learner. This, of course, is exactly what great teachers do in these fields: they personalise the quest for human knowledge based on student interests and experience.

One thing that I think is under-estimated in learning is the element of serendipity. To some extent, serendipity is the opposite of an algorithm. My Amazon recommendations mean I get more of the same. Stumbling across a secondhand bookshop, on the other hand, means I could head off in an entirely new direction due to a serendipitous find.

Using services such as StumbleUpon makes the web a giant serendipity engine. But it’s not a curriculum, as such. What I envisage by curriculum as algorithm is a fine balance between:

  • Continuously-curated learning content and activities
  • Formative feedback
  • Multiple pathways to diverse goals
  • Flexible accreditation

I’m hoping that as the Open Badges system evolves, we move beyond the metaphor of ‘playlists’ for learning, towards curriculum as algorithm. I’d love to get some funding to explore this further…

Comments? Questions? Get in touch: @dajbelshaw / mail@dougbelshaw.com

Image CC BY x6e38

Categorieën: Mozilla-nl planet

Christian Heilmann: Help making the fourth industrial revolution less scary

Mozilla planet - mo, 26/09/2016 - 13:16

I spent last week in Germany at an event sponsored by the government agency for unemployment, covering the issue of digitalisation of the job market and the subsequent loss of jobs.

me, giving a keynote on machine learning and work

When the agency approached me to give a keynote on the upcoming “fourth industrial revolution” and what machine learning and artificial intelligence means for the job market I was – to put it mildly – bricking it. All the other presenters at the event had several doctor titles and were professors of this and that. And here I was, being asked to deliver the “future” to an audience of company owners, university professors and influential people who decide the employment fate of thousands of people.

Expert Panel

I went into hermit mode and watched, read and translated dozens of videos and articles on AI and the work environment. In the end, I took a more detailed look at the conference schedule and realised that most of the subject matter data would be covered by the presenter before me.

Thus I delivered a talk covering the current situation of AI and what it means for us as job seekers and employers. The slides and screencast are in German, but I am looking forward to translating them and maybe deliver them in a European frame soon.

The slide deck is on Slideshare, and even without knowing German, you should get the gist:

Zwischen Terminator und Star Trek: Digitalisierung und Künstliche Intelligenz from Christian Heilmann

The screencast is on YouTube:

The feedback was overwhelming and humbling. I got interviewed by the local TV station, where I mostly deflected the negative and defeatist attitude towards artificial intelligence that the media loves to portray.

tv interview

I also got a half-page spread in the local newspaper where – to the amusement of my friends – I was touted as a “fascinating prophet”.

Newspaper article

During the expert panel on digital security I had a few interesting encounters. Whilst in general it felt tough to see how inflexible and outdated some of the attitudes of companies towards computers were, there is a lot of innovation happening even in rural areas. I was especially impressed with the state of robots in warehouses and the investment of the European Union in Blockchain solutions and security research.

One thing I am looking forward to is working with a cybersecurity centre in the area, giving workshops on social engineering and the security of IoT.

A few things I learned and I’d like you to also consider:

  • We are at the cusp of – if not in the middle of – a new digital revolution
  • Our job as people in the know is to reach out to those who are afraid of it and give out sensible information as a counter point to some of the fearmongering of the press
  • It is incredibly rewarding to go out of our comfort zone and echo chamber and talk to people with real business and social change issues. It humbles you and makes you wonder just how you ended up knowing all that we do.
  • The good social aspects of our jobs could be a blueprint for other companies to work and change to be resilient towards replacement by machines
  • German is hard :)

So, be brave, offer to present at places not talking about the latest flavour of JavaScript or CSS preprocessing. The world outside our echo chamber needs us.

Or as The Interrupters put it: What’s your plan for tomorrow?

Categorieën: Mozilla-nl planet

Gervase Markham: GPLv2 Combination Exception for the Apache 2 License

Mozilla planet - mo, 26/09/2016 - 12:51

CW: heavy open source license geekery ahead.

One unfortunate difficulty with open source licensing is that some lawyers, including the FSF, consider the Apache License 2.0 incompatible with the GPL 2.0, which is to say that you can’t combine Apache 2.0-licensed code with GPL 2.0-licensed code and distribute the result. This is annoying because when choosing a permissive licence, we want people to use the more modern Apache 2.0 over the older BSD or MIT licenses, because it provides some measure of patent protection. And this incompatibility discourages people from doing that.

This was a concern for Mozilla when determining the correct licensing for Rust, and this is why the standard Rust license is a dual license – the choice of Apache 2.0 or MIT. The idea was that Apache 2.0 would be the normal license, but people could choose MIT if they wanted to combine “Rust license” code with GPL 2.0 code.

However, the LLVM project has now had notable open source attorney Heather Meeker come up with an exception to be added to the Apache 2.0 license to enable GPL 2.0 compatibility. This exception meets a number of important criteria for a legal fix for this problem:

  • It’s an additional permission, so is unlikely to affect the open source-ness of the license;
  • It doesn’t require the organization using it to take a position on the question of whether the two licenses are actually compatible or not;
  • It’s specific to the GPL 2.0, thereby constraining its effects to solving the problem.

Here it is:

---- Exceptions to the Apache 2.0 License: ----

In addition, if you combine or link compiled forms of this Software with software that is licensed under the GPLv2 (“Combined Software”) and if a court of competent jurisdiction determines that the patent provision (Section 3), the indemnity provision (Section 9) or other Section of the License conflicts with the conditions of the GPLv2, you may retroactively and prospectively choose to deem waived or otherwise exclude such Section(s) of the License, but only in their entirety and only with respect to the Combined Software.

---- end ----

It seems very well written to me; I wish it had been around when we were licensing Rust.

Categorieën: Mozilla-nl planet

Gervase Markham: Introducing Deliberate Protocol Errors: Langley’s Law

Mozilla planet - mo, 26/09/2016 - 12:39

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within the nearest 10 seconds or so without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.
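As a rough illustration of the idea (a sketch only, not Google’s actual Roughtime implementation; the response fields, the one-in-a-thousand rate and the corruption strategies are all invented here, and the `rand` crate is assumed), a server could deliberately mangle a small fraction of otherwise valid responses:

use rand::Rng; // assumes the `rand` crate

// A deliberately simplified stand-in for a signed time response.
struct Response {
    midpoint_seconds: u64,
    signature: Vec<u8>,
    tags_in_order: bool,
}

// Corrupt roughly one response in a thousand, so that clients which skip
// validation get visible nonsense instead of silently "working".
fn maybe_corrupt(mut response: Response, rng: &mut impl Rng) -> Response {
    if rng.gen_ratio(1, 1000) {
        match rng.gen::<u8>() % 3 {
            0 => response.midpoint_seconds = rng.gen(), // wrong time
            1 => {
                // break the signature
                if let Some(byte) = response.signature.first_mut() {
                    *byte ^= 0xff;
                }
            }
            _ => response.tags_in_order = false, // misorder the tags
        }
    }
    response
}

A strict client that verifies signatures and tag ordering simply rejects such a response and asks other servers; a sloppy client gets garbage, which is exactly the pressure the ecosystem document is describing.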

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half to be conservative in what you accept doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

Categorieën: Mozilla-nl planet

QMO: Firefox 50 Beta 3 Testday, September 30th

Mozilla planet - mo, 26/09/2016 - 11:28

Hello Mozillians,

We are happy to announce that Friday, September 30th, we are organizing Firefox 50 Beta 3 Testday. We will be focusing our testing on Pointer Lock API and WebM EME support for Widevine features. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Categorieën: Mozilla-nl planet

Karl Dubost: [worklog] Edition 037. Autumn is here. The fall of -webkit-

Mozilla planet - mo, 26/09/2016 - 03:08

Two Japanese national holidays during the week. And there goes the week. Tune of the Week: Anderson .Paak - Silicon Valley

Webcompat Life

Progress this week:

Today: 2016-09-26T09:24:48.064519
336 open issues
----------------------
needsinfo        13
needsdiagnosis  110
needscontact      9
contactready     27
sitewait        161
----------------------

You are welcome to participate

  • Monday day off in Japan: Respect for the elders.
  • Thursday day off in Japan: Autumn Equinox.

Firefox 49 has been released, and an important piece of cake is now delivered to every user. You can get some context on why some (not all) -webkit- properties landed in Gecko and the impact on Web standards.

We have a team meeting soon in Taipei.

The W3C was meeting this week in Lisbon. Specifically about testing.

I did a bit of Prefetch links testing and how they appear in devtools.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • Little by little we are accumulating our issues list about CSS zoom. Firefox is the only browser not to support this non-standard property. It came from IE (Trident), was imported into WebKit (Safari), then kept alive in Blink (Chrome, Opera), and finally made its way into Edge. Sadness.
Webcompat.com development

Reading List

Follow Your Nose

TODO
  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Categorieën: Mozilla-nl planet

Mic Berman

Mozilla planet - mo, 26/09/2016 - 02:56

How will you Lead? From a talk at MarsDD in Toronto April 2016

Categorieën: Mozilla-nl planet

The Servo Blog: This Week In Servo 79

Mozilla planet - mo, 26/09/2016 - 02:30

In the last week, we landed 96 PRs in the Servo organization’s repositories.

Promise support has arrived in Servo, thanks to hard work by jdm, dati91, and mmatyas! This does not fully implement microtasks, but unblocks the uses of Promises in many places (e.g., the WebBluetooth test suite).

Emilio rewrote the bindings generation code for rust-bindgen, dramatically improving the flow of the code and output generated when producing Rust bindings for C and C++ code.

The TPAC WebBluetooth standards meeting talked a bit about the great progress by the team at the University of Szeged in the context of Servo.

Planning and Status

Our overall roadmap is available online and now includes the Q3 plans. The Q4 and 2017 planning will begin shortly!

This week’s status updates are here. We have been having a conversation on the mailing list about how to better involve all contributors to the Servo project and especially improve the visibility into upcoming work - please make your ideas and opinions known!

Notable Additions
  • bholley made it possible to manage the Gecko node data without using FFI calls
  • aneeshusa improved Homu so that it would ignore Work in Progress (WIP) pull requests
  • wdv4758h implemented iterators for FormData
  • nox updated our macOS builds to use libc++ instead of libstdc++
  • TheKK added support for noreferrer when determining referrer policies
  • manish made style unit tests run on all properties (including stylo-only ones)
  • gw added the OSMesa source, a preliminary step towards better headless testing on CI
  • emilio implemented improved support for function pointers, typedefs, and macOS’s stdlib in bindgen
  • schuster styled the input text element with user-agent CSS rather than hand-written Rust code
  • jeenalee added support for open-ended dictionaries in the Headers API
  • saneyuki fixed the build failures in SpiderMonkey on macOS Sierra
  • mrobinson added support for background-repeat properties space and round
  • pcwalton improved the layout of http://python.org
  • phrohdoh implemented the minlength attribute for text inputs
  • anholt improved WebGL support
  • mmatyas added ARM support to WebRender
  • ms2ger implemented safe, high-level APIs for manipulating JS typed arrays
  • manish added the ability to uncompute a style back to its specified value, in support of animations
  • cbrewster added an option to replace the current session entry when reloading a page
  • kichjang changed the loading of external scripts to use the Fetch network stack
  • splav implemented the HTMLOptionsCollection API
  • cynicaldevil fixed a panic involving <link> elements and the rel attribute
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Demo of the in-progress fetch() API:

Categorieën: Mozilla-nl planet

Mozilla releases Firefox 49.0.1 - soeren-hentzschel.at

Nieuws verzameld via Google - sn, 24/09/2016 - 20:17

Mozilla releases Firefox 49.0.1
soeren-hentzschel.at
Only a few days after the release of Firefox 49.0, Mozilla has published an update to Firefox 49.0.1. As with the two out-of-band updates for Firefox 48, this update is again owed to the third-party software WebSense.

Categorieën: Mozilla-nl planet

Niko Matsakis: Intersection Impls

Mozilla planet - sn, 24/09/2016 - 12:07

As some of you are probably aware, on the nightly Rust builds, we currently offer a feature called specialization, which was defined in RFC 1210. The idea of specialization is to improve Rust’s existing coherence rules to allow for overlap between impls, so long as one of the overlapping impls can be considered more specific. Specialization is hotly desired because it can enable powerful optimizations, but also because it is an important component for modeling object-oriented designs.

The current specialization design, while powerful, is also limited in a few ways. I am going to work on a series of articles that explore some of those limitations as well as possible solutions.

This particular post serves two purposes: it describes the running example I want to consider, and it describes one possible solution: intersection impls (more commonly called lattice impls). We’ll see that intersection impls are a powerful feature, but they don’t completely solve the problem I am aiming to solve, and they also introduce other complications. My conclusion is that they may be a part of the final solution, but are not sufficient on their own.

Running example: interconverting between Copy and Clone

I’m going to structure my posts around a detailed look at the Copy and Clone traits, and in particular about how we could use specialization to bridge between the two. These two traits are used in Rust to define how values can be duplicated. The idea is roughly like this:

  • A type is Copy if it can be copied from one place to another just by copying bytes (i.e., with memcpy). This is basically types that consist purely of scalar values (e.g., u32, [u32; 4], etc).
  • The Clone trait expands upon Copy to include all types that can be copied at all, even if it requires executing custom code or allocating memory (for example, a String or Vec<u32>).

These two traits are clearly related. In fact, Clone is a supertrait of Copy, which means that every type that is copyable must also be cloneable.

For better or worse, supertraits in Rust work a bit differently than superclasses from OO languages. In particular, the two traits are still independent from one another. This means that if you want to declare a type to be Copy, you must also supply a Clone impl. Most of the time, we do that with a #[derive] annotation, which auto-generates the impls for you:

#[derive(Copy, Clone, ...)]
struct Point {
    x: u32,
    y: u32,
}

That derive annotation will expand out to two impls looking roughly like this:

struct Point {
    x: u32,
    y: u32,
}

impl Copy for Point {
    // Copy has no methods; it can also be seen as a "marker"
    // that indicates that a cloneable type can also be
    // memcopy'd.
}

impl Clone for Point {
    fn clone(&self) -> Point {
        *self // this will just do a memcpy
    }
}

The second impl (the one implementing the Clone trait) seems a bit odd. After all, that impl is written for Point, but in principle it could be used for any Copy type. It would be nice if we could add a blanket impl that converts from Copy to Clone and applies to all Copy types:

// Hypothetical addition to the standard library:
impl<T:Copy> Clone for T {
    fn clone(&self) -> T {
        *self
    }
}

If we had such an impl, then there would be no need for Point above to implement Clone explicitly, since it implements Copy, and the blanket impl can be used to supply the Clone impl. (In other words, you could just write #[derive(Copy)].) As you have probably surmised, though, it’s not that simple. Adding a blanket impl like this has a few complications we’d have to overcome first. This is still true with the specialization system described in RFC 1210.
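Just to make that concrete, here is a minimal sketch of what the derive on Point could shrink to under the hypothetical blanket impl (again, this is not possible today):

#[derive(Copy)] // Clone would come for free from the hypothetical blanket impl
struct Point {
    x: u32,
    y: u32,
}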

There are a number of examples where these kinds of blanket impls might be useful. Some examples: implementing PartialOrd in terms of Ord, implementing PartialEq in terms of Eq, and implementing Debug in terms of Display.
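For instance, the Ord-to-PartialOrd bridge would be another hypothetical addition to the standard library, sketched below; it hits exactly the same coherence problem discussed in this post, since it overlaps with every existing derived or hand-written PartialOrd impl:

use std::cmp::Ordering;

// Hypothetical: give every Ord type its PartialOrd impl for free.
// (Rejected by coherence today, for the reasons this post describes.)
impl<T: Ord> PartialOrd for T {
    fn partial_cmp(&self, other: &T) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}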

Coherence and backwards compatibility

Hi! I’m the language feature coherence! You may remember me from previous essays like Little Orphan Impls or RFC 1023.

Let’s take a step back and just think about the language as it is now, without specialization. With today’s Rust, adding a blanket impl<T:Copy> Clone for T would be massively backwards incompatible. This is because of the coherence rules, which aim to prevent there from being more than one impl of a trait applicable to any type (or, for generic traits, set of types).

So, if we tried to add the blanket impl now, without specialization, it would mean that every type annotated with #[derive(Copy, Clone)] would stop compiling, because we would now have two Clone impls: one from the derive and the blanket impl we are adding. Obviously not feasible.

Why didn’t we add this blanket impl already then?

You might then wonder why we didn’t add this blanket impl converting from Copy to Clone in the wild west days, when we broke every existing Rust crate on a regular basis. We certainly considered it. The answer is that, if you have such an impl, the coherence rules mean that it would not work well with generic types.

To see what problems arise, consider the type Option:

#[derive(Copy, Clone)]
enum Option<T> {
    Some(T),
    None,
}

You can see that Option<T> derives Copy and Clone. But because Option is generic over T, those impls have a slightly different look to them once we expand them out:

impl<T:Copy> Copy for Option<T> { }

impl<T:Clone> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        match *self {
            Some(ref v) => Some(v.clone()),
            None => None,
        }
    }
}

Before, the Clone impl for Point was just *self. But for Option<T>, we have to do something more complicated, which actually calls clone on the contained value (in the case of a Some). To see why, imagine a type like Option<Rc<u32>> – this is clearly cloneable, but it is not Copy. So the impl is rewritten so that it only assumes that T: Clone, not T: Copy.

The problem is that types like Option<T> are sometimes Copy and sometimes not. So if we had the blanket impl that converts all Copy types to Clone, and we have the impl above that implements Clone for Option<T> if T: Clone, then we can easily wind up in a situation where there are two applicable impls. For example, consider Option<u32>: it is Copy, and hence we could use the blanket impl that just returns *self. But it also fits the Clone-based impl I showed above. This is a coherence violation, because now the compiler has to pick which impl to use. Obviously, in the case of the trait Clone, it shouldn’t matter too much which one it chooses, since they both have the same effect, but the compiler doesn’t know that.

Enter specialization

OK, all of that prior discussion was assuming the Rust of today. So what if we adopted the existing specialization RFC? After all, its whole purpose is to improve coherence so that it is possible to have multiple impls of a trait for the same type, so long as one of those implementations is more specific. Maybe that applies here?

In fact, the RFC as written today does not allow this. The reason is that the RFC defines rules that say an impl A is more specific than another impl B if impl A applies to a strict subset of the types which impl B applies to. Let’s consider some arbitrary trait Foo. Imagine that we have an impl of Foo that applies to any Option<T>:

impl<T> Foo for Option<T> { .. }

The more specific rule would then allow a second impl for Option<i32>; this impl would specialize the more generic one:

impl Foo for Option<i32> { .. }

Here, the second impl is more specific than the first, because while the first impl can be used for Option<i32>, it can also be used for lots of other types, like Option<u32>, Option<i64>, etc. So that means that these two impls would be accepted under RFC #1210. If the compiler ever had to choose between them, it would prefer the impl that is specific to Option<i32> over the generic one that works for all T.
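Spelled out under the nightly-only feature gate, that pair of impls looks roughly like the sketch below; the describe method is invented purely for illustration. The generic impl marks its item default, which is what allows the Option<i32> impl to override it:

#![feature(specialization)]

trait Foo {
    fn describe(&self) -> String;
}

impl<T> Foo for Option<T> {
    default fn describe(&self) -> String {
        "some Option<T>".to_string()
    }
}

impl Foo for Option<i32> {
    fn describe(&self) -> String {
        "specifically an Option<i32>".to_string()
    }
}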

But if we try to apply that rule to our two Clone impls, we run into a problem. First, we have the blanket impl:

impl<T:Copy> Clone for T { .. }

and then we have an impl tailored to Option<T> where T: Clone:

impl<T:Clone> Clone for Option<T> { .. }

Now, you might think that the second impl is more specific than the blanket impl. After all, the blanket impl can be used for any type, whereas the second impl can only be used for Option<T>. Unfortunately, this isn’t quite right. After all, the blanket impl cannot be used for any type T: it can only be used for Copy types. And we already saw that there are lots of types for which the second impl can be used where the first impl is inapplicable. In other words, neither impl is a subset of the other – rather, they cover two distinct, but overlapping, sets of types.

To see what I mean, let’s look at some examples:

| Type             | Blanket impl | `Option` impl |
| ---------------- | ------------ | ------------- |
| i32              | APPLIES      | inapplicable  |
| Box<i32>         | inapplicable | inapplicable  |
| Option<i32>      | APPLIES      | APPLIES       |
| Option<Box<i32>> | inapplicable | APPLIES       |

Note in particular the first and fourth rows. The first row shows that the blanket impl is not a subset of the Option impl. The last row shows that the Option impl is not a subset of the blanket impl either. That means that these two impls would be rejected by RFC #1210 and hence adding a blanket impl now would still be a breaking change. Boo!

To see the problem from another angle, consider this Venn diagram, which indicates, for every impl, the set of types that it matches. As you can see, there is overlap between our two impls, but neither is a strict subset of the other:

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +---------------------------------------+-----+
| |                                       |     |
| | Example: Option<i32>                  |     |
| |                                       |     |
+-+---------------------------------------+     |
  |                                             |
  | Example: Option<Box<i32>>                   |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Enter intersection impls

One of the first ideas proposed for solving this is the so-called lattice specialization rule, which I will call intersection impls, since I think that captures the spirit better. The intuition is pretty simple: if you have two impls that have a partial intersection, but which don’t strictly subset one another, then you can add a third impl that covers precisely that intersection, and hence which subsets both of them. So now, for any type, there is always a most specific impl to choose. To get the idea, it may help to consider this ASCII Art Venn diagram. Note the difference from above: there is now an impl (indicated with = lines and . shading) covering precisely the intersection of the other two.

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +=======================================+-----+
| |[impl<T:Copy> Clone for Option<T>].....|     |
| |.......................................|     |
| |.Example: Option<i32>..................|     |
| |.......................................|     |
+-+=======================================+     |
  |                                             |
  | Example: Option<Box<i32>>                   |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Intersection impls have some nice properties. For one thing, it’s a kind of minimal extension of the existing rule. In particular, if you are just looking at any two impls, the rules for deciding which is more specific are unchanged: the only difference when adding in intersection impls is that coherence permits overlap when it otherwise wouldn’t.

They also give us a good opportunity to recover some optimization. Consider the two impls in this case: the blanket impl that applies to any T: Copy simply copies some bytes around, which is very fast. The impl that is tailored to Option<T>, however, does more work: it matches on the value and then recursively calls clone. This work is necessary if T: Copy does not hold, but otherwise it’s wasted work. With an intersection impl, we can recover the full performance:

// intersection impl:
impl<T:Copy> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        *self // since T: Copy, we can do this here
    }
}

A note on compiler messages

I’m about to pivot and discuss the shortcomings of intersection impls. But before I do so, I want to talk a bit about the compiler messages here. I think that the core idea of specialization – that you want to pick the impl that applies to the most specific set of types – is fairly intuitive. But working it out in practice can be kind of confusing, especially at first. So whenever we propose any extension, we have to think carefully about the error messages that might result.

In this particular case, I think that we could give a rather nice error message. Imagine that the user had written these two impls:

impl<T: Copy> Clone for T { // impl A
    fn clone(&self) -> T { ... }
}

impl<T: Clone> Clone for Option<T> { // impl B
    fn clone(&self) -> Option<T> { ... }
}

As we’ve seen, these two impls overlap but neither specializes the other. One might imagine an error message that says as much, and which also suggests the intersection impl that must be added:

error: two impls overlap, but neither specializes the other
  |
2 | impl<T: Copy> Clone for T {...}
  | ----
4 | impl<T: Clone> Clone for Option<T> {...}
  |
  | note: both impls apply to a type like `Option<T>` where `T: Copy`;
  |       to specify the behavior in this case, add the following intersection impl:
  |       `impl<T: Copy> Clone for Option<T>`

Note the message at the end. The wording could no doubt be improved, but the key point is that we should be able to actually tell you exactly what impl is still needed.

Intersection impls do not solve the cross-crate problem

Unfortunately, intersection impls don’t give us the backwards compatibility that we want, at least not by themselves. The problem is, if we add the blanket impl, we also have to add the intersection impl. Within the same crate, this might be ok. But if this means that downstream crates have to add an intersection impl too, that’s a big problem.

Intersection impls may force you to predict the future

There is one other problem with intersection impls that arises in cross-crate situations, which nrc described on the tracking issue: sometimes there is a theoretical intersection between impls, but that intersection is empty in practice, and hence you may not be able to write the code you wanted to write. Let me give you an example. This problem doesn’t show up with the Copy/Clone trait, so we’ll switch briefly to another example.

Imagine that we are adding a RichDisplay trait to our project. This is much like the existing Display trait, except that it can support richer formatting like ANSI codes or a GUI. For convenience, we want any type that implements Display to also implement RichDisplay (but without any fancy formatting). So we add a trait and blanket impl like this one (let’s call it impl A):

trait RichDisplay { /* elided */ }
impl<D: Display> RichDisplay for D { /* elided */ } // impl A

Now, imagine that we are also using some other crate widget that contains various types, including Widget<T>. This Widget<T> type does not implement Display. But we would like to be able to render a widget, so we implement RichDisplay for this Widget<T> type. Even though we didn’t define Widget<T>, we can implement a trait for it because we defined the trait:

impl<T: RichDisplay> RichDisplay for Widget<T> { ... } // impl B

Well, now we have a problem! You see, according to the rules from RFC 1023, impls A and B are considered to potentially overlap, and hence we will get an error. This might surprise you: after all, impl A only applies to types that implement Display, and we said that Widget<T> does not. The problem has to do with semver: because Widget<T> was defined in another crate, it is outside of our control. In this case, the other crate is allowed to implement Display for Widget<T> at some later time, and that should not be a breaking change. But imagine that this other crate added an impl like this one (which we can call impl C):

impl<T: Display> Display for Widget<T> { ... } // impl C

Such an impl would cause impls A and B to overlap. Therefore, coherence considers these to be overlapping – however, specialization does not consider impl B to be a specialization of impl A, because, at the moment, there is no subset relationship between them. So there is a kind of catch-22 here: because the impl may exist in the future, we can’t consider the two impls disjoint, but because it doesn’t exist right now, we can’t consider them to be specializations.

Clearly, intersection impls don’t help to address this issue, as the set of intersecting types is empty. You might imagine having some alternative extension to coherence that permits impl B on the logic of if impl C were added in the future, that’d be fine, because impl B would be a specialization of impl A.

This logic is pretty dubious, though! For example, impl C might have been written another way (we’ll call this alternative version of impl C impl C2):

impl<T: WidgetDisplay> Display for Widget<T> { ... } // impl C2
//   ^^^^^^^^^^^^^^^^ changed this bound

Note that instead of working for any T: Display, there is now some other trait T: WidgetDisplay in use. Let’s say it’s only implemented for optional 32-bit integers right now (for some reason or another):

trait WidgetDisplay { ... }
impl WidgetDisplay for Option<i32> { ... }

So now if we had impls A, B, and C2, we would have a different problem. Now impls A and B would overlap for Widget<Option<i32>>, but they would not overlap for Widget<String>. The reason here is that Option<i32>: WidgetDisplay, and hence impl A applies. But String: RichDisplay (because String: Display) and hence impl B applies. Now we are back in the territory where intersection impls come into play. So, again, if we had impls A, B, and C2, one could imagine writing an intersection impl to cover this situation:

impl<T: RichDisplay + WidgetDisplay> RichDisplay for Widget<T> { ... } // impl D

But, of course, impl C2 has yet to be written, so we can’t really write this intersection impl now, in advance. We have to wait until the conflict arises before we can write it.

You may have noticed that I was careful to specify that both the Display trait and Widget type were defined outside of the current crate. This is because RFC 1023 permits the use of negative reasoning if either the trait or the type is under local control. That is, if the RichDisplay and the Widget type were defined in the same crate, then impls A and B could co-exist, because we are allowed to rely on the fact that Widget does not implement Display. The idea here is that the only way that Widget could implement Display is if I modify the crate where Widget is defined, and once I am modifying things, I can also make any other repairs (such as adding an intersection impl) that are necessary.

Conclusion

Today we looked at a particular potential use for specialization: adding a blanket impl that implements Clone for any Copy type. We saw that the current subset-only logic for specialization isn’t enough to permit adding such an impl. We then looked at one proposed fix for this, intersection impls (often called lattice impls).

Intersection impls are appealing because they increase expressiveness while keeping the general feel of the subset-only logic. They also have an explicit nature that appeals to me, at least in principle. That is, if you have two impls that partially overlap, the compiler doesn’t select which one should win: instead, you write an impl to cover precisely that intersection, and hence specify it yourself. Of course, that explicit nature can also be verbose and irritating sometimes, particularly since you will often want the intersection impl to behave the same as one of the other two (rather than doing some third, different thing).

Moreover, the explicit nature of intersection impls causes problems across crates:

  • they don’t allow you to add a blanket impl in a backwards compatible fashion;
  • they interact poorly with semver, and specifically the limitations on negative logic imposed by RFC 1023.

My conclusion then is that intersection impls may well be part of the solution we want, but we will need additional mechanisms. Stay tuned for additional posts.

A note on comments

As is my wont, I am going to close this post for comments. If you would like to leave a comment, please go to this thread on Rust’s internals forum instead.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

Mozilla planet - sn, 24/09/2016 - 08:24
The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0, I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes" since we have until November 8 for the next release and I haven't decided everything I'll put in it, so while I continue to do more work I figured I'd give you something to play with. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. Simply by doing that fixed a number of other bugs which were probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable) the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five or six second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than 38 due to a different browser feature profile, and if the track runs out somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine and one of the inverse Hadamard transforms vectorized, and I also wrote vector code for two of the convolutions but they malfunction on the iMac G4 and it seems faster without them because a lot of these routines work on unaligned data. Overall, our code really outshines the SSE2 versions I based them on if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this code occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now and I recommend turning annotations off as well. Pausing the video while the rest of the page loads is always a good idea as well as before changing your quality setting; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, because without further configuration Google will "auto-"control the stream bitrate and it makes that decision based on network speed rather than dropped frames, I'm leaving the "slower" appellation because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary, though the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, there will be more and more code our big-endian machines can't run properly being casually imported into sites. We saw this with images on Facebook, and later with WhatsApp Web, and also with MEGA.nz, and others, and so on, and so forth. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals and doubles are still tag followed by payload, and integers and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer as long as those accesses are integer.
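As a minimal sketch of the idea (in Rust-flavoured pseudocode rather than TenFourFox's actual C++ and assembly, with a toy Uint32ArrayLE type invented for illustration): integer elements are always stored in little-endian byte order and swapped back to the host's native order on read, so the rest of the engine never sees the difference:

// Toy model of a little-endian-presenting Uint32 typed array on a
// big-endian host. The real thing lives in the interpreter and JITs,
// not in a wrapper type like this.
struct Uint32ArrayLE {
    buf: Vec<u8>,
}

impl Uint32ArrayLE {
    fn store(&mut self, index: usize, value: u32) {
        // Always write the bytes little-endian, whatever the host order is.
        self.buf[index * 4..index * 4 + 4].copy_from_slice(&value.to_le_bytes());
    }

    fn load(&self, index: usize) -> u32 {
        // Swap back to the host's native representation on the way in.
        let mut bytes = [0u8; 4];
        bytes.copy_from_slice(&self.buf[index * 4..index * 4 + 4]);
        u32::from_le_bytes(bytes)
    }
}

In the generated code the swap costs next to nothing, because it maps directly onto the PowerPC byte-reversed load/store instructions (lwbrx, stwbrx and friends) described below.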

I mentioned that floats (neither single-precision nor doubles) are not byteswapped, and there's an important reason for that. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byte-swapped) go through the same code path. After I spent a couple days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark it if the operation actually accesses a typed array. I added that to the IonBuilder and now Ion compiled code works too.

All of these integer accesses have almost no penalty: there's a little bit of additional overhead on the interpreter, but Baseline and Ion simply substitute the already-built-in PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) that we already employ for irregexp for these accesses, and as a result we incur virtually no extra runtime overhead at all. Although the PowerPC specification warns that byte-swapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls in that category, and they aren't "cracked" on G5 either. The pseudo-little endian mode that exists on G3/G4 systems but not on G5 is separate from these assembly language instructions, which work on all PowerPCs including the G5 going all the way back to the original 601.

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single- or double-precision value in byteswapped fashion, and since there are also no direct moves between general purpose registers and floating point registers, the float has to be spilled to memory, picked up by a GPR (or two, if it's a double), and then swapped to complete the operation. Getting it back requires reversing the process, with the GPR (or two) spilled this time to repopulate the double or float after the swap is done. All that would have significantly penalized float arrays, and we have enough performance problems without that, so single- and double-precision floating point values remain big-endian.
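To make the cost concrete, here is a hedged sketch of what a byteswapped double store would have to do on 32-bit PowerPC. The helper is hypothetical and nothing like it exists in TenFourFox, which is exactly the point: the double has to leave the FPU through memory, get picked up as two words in GPRs, be swapped, and be written out word-reversed.

    #include <cstdint>
    #include <cstring>

    // Illustration of the cost being avoided: storing a double in little-endian
    // byte order on a big-endian 32-bit PowerPC. There are no byte-reversed
    // floating-point load/store instructions and no direct FPR-to-GPR move, so
    // the value must round-trip through memory.
    static inline void storeDoubleSwappedSketch(uint8_t* buffer, size_t offset, double d) {
        uint32_t words[2];
        std::memcpy(words, &d, sizeof(d));               // spill the FPR to memory
        uint32_t lowHalf  = __builtin_bswap32(words[1]); // pick the words up in GPRs
        uint32_t highHalf = __builtin_bswap32(words[0]); // and swap each one
        std::memcpy(buffer + offset,     &lowHalf,  4);  // least significant bytes first
        std::memcpy(buffer + offset + 4, &highHalf, 4);
    }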

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds of elapsed time to get to a root prompt. But it really works and it's usable.

Getting into ridiculous territory was running Linux on OpenRISC:

This is the jor1k emulator, and it's only for the highest-end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- it's not that the dog walks well, it's that it walks at all.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots):

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenoberated DOSBOX, notice that I only said some games run in TenFourFox, not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got partway into the boot sequence and then threw a JavaScript exception ("SimulateInfiniteLoop"); the Internet Archive's arcade games under MAME, which start up and then exhaust recursion and abort (this seems like it should be fixable or tunable, but I haven't explored it further yet); and of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. However, that's never going to change, because it would require making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else, such as data inadvertently ending up byteswapped an asymmetric number of times or some other violation of assumptions. Another is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid-endian engine like this one, so I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for running web applications the way an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough for most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40), the compilation step might bring lesser systems to their knees, and it would require significant additional engineering to get off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big, massive applications well even with execution time cut by half or even two-thirds (and some of them don't work correctly as it is), making that investment of effort might be a real case of diminishing returns. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them, and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of Mac OS (now macOS), where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late-generation G4 systems is true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late G4 laptops, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple claimed six hours for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance-per-watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself, April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e was over twice as fast at integer performance as a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors would admit it was at least competitive. That time clearly wasn't when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really, since he was addressing ARM rather than PowerPC, but it sort of is. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherry-pick benchmarks to favour what suits them, which is absolutely true, and I just did it myself; but Apple isn't any different from anyone else in that regard (put away the "tu quoque", please), and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record, I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with the period when the PowerPC roadmap was running out of gas in the 2004-5 timeframe, by which time even boosters like yours truly would have conceded the gap was widening while Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned, and in the meantime try it and see if you like it. Post your comments and, once you've played a few videos (or six), let me know what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Categorieën: Mozilla-nl planet

Mitchell Baker: Living with Diverse Perspectives

Mozilla planet - fr, 23/09/2016 - 23:19

Diversity and Inclusion is more than having people of different demographics in a group.  It is also about having the resulting diversity of perspectives included in the decision-making and action of the group in a fundamental way.

I’ve had this experience lately, and it demonstrated to me both why it can be hard and why it’s so important.  I’ve been working on a project where I’m the individual contributor doing the bulk of the work. This isn’t because there’s a big problem or conflict; instead it’s something I feel needs my personal touch. Once the project is complete, I’m happy to describe it with specifics. For now, I’ll describe it generally.

There’s a decision to be made.  I connected with the person I most wanted to be comfortable with the idea to make sure it sounded good.  I checked with our outside attorney just in case there was something I should know.  I checked with the group of people who are most closely affected and would lead the decision and implementation if we proceed. I received lots of positive response.

Then one last person checked in with me from my first level of vetting and spoke up.  He’s sorry for the delay, etc., but has concerns.  He wants us to explore a bunch of different options before deciding if we’ll go forward at all, and if so, how.

At first I had that sinking feeling of “Oh bother, look at this.  I am so sure we should do this and now there’s all this extra work and time and maybe change. Ugh!”  I got up and walked around a bit and did a few things that put me in a positive frame of mind.  Then I realized — we had added this person to the group for two reasons.  One, he’s awesome — both creative and effective. Second, he has a different perspective.  We say we value that different perspective. We often seek out his opinion precisely because of that perspective.

This is the first time his perspective has pushed me to do more, or to do something differently, or perhaps even prevent me from something that I think I want to do.  So this is the first time the different perspective is doing more than reinforcing what seemed right to me.

That led me to think “OK, got to love those different perspectives” a little ruefully.  But as I’ve been thinking about it I’ve come to internalize the value and to appreciate this perspective.  I expect the end result will be more deeply thought out than I had planned.  And it will take me longer to get there.  But the end result will have investigated some key assumptions I started with.  It will be better thought out, and better able to respond to challenges. It will be stronger.

I still can’t say I’m looking forward to the extra work.  But I am looking forward to a decision that has a much stronger foundation.  And I’m looking forward to the extra learning I’ll be doing, which I believe will bring ongoing value beyond this particular project.

I want to build Mozilla into an example of what a trustworthy organization looks like.  I also want to build Mozilla so that it reflects experience from our global community and isn’t living in a geographic or demographic bubble.  Having great people be part of a diverse Mozilla is part of that.  Creating a welcoming environment that promotes the expression and positive reaction to different perspectives is also key.  As we learn more and more about how to do this we will strengthen the ways we express our values in action and strengthen our overall effectiveness.

Categorieën: Mozilla-nl planet

Mozilla WebDev Community: Beer and Tell – September 2016

Mozilla planet - fr, 23/09/2016 - 19:56

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Gopher Tessel

First up was emceeaich, who shared Gopher Tessel, a project for running a Gopher server (an Internet protocol that was popular before the World Wide Web) on a Tessel. Tessel is a small circuit board that runs Node.js projects; Gopher Tessel reads sensors (such as the temperature sensor) connected to the board and exposes their values via Gopher. It can also control lights connected to the board.

groovecoder: Crypto: 500 BC – Present

Next was groovecoder, who shared a preview of a talk about cryptography throughout history. The talk is based on “The Code Book” by Simon Singh. Notable moments and techniques mentioned include:

  • 499 BCE: Histiaeus of Miletus shaves the heads of messengers, tattoos messages on their scalps, and sends them after their hair has grown back to hide the message.
  • ~100 AD: Milk of tithymalus plant is used as invisible ink, activated by heat.
  • ~700 BCE: Scytale
  • 49 BC: Caesar cipher
  • 1553 AD: Vigenère cipher

bensternthal: Home Monitoring & Weather Tracking

bensternthal was up next, and he shared his work building a dashboard with weather and temperature information from his house. Ben built several Node.js-based applications that collect data from his home weather station, from his Nest thermostat, and from Weather Underground and send all the data to an InfluxDB store. The dashboard itself uses Grafana to plot the data, and all of these servers are run using Docker.

The repositories for the Node.js applications and the Docker configuration are available on GitHub.

craigcook: ByeHolly

Next was craigcook, who shared a virtual yearbook page that he made as a farewell tribute to former teammate Holly Habstritt-Gaal, who recently took a job at another company. The page shows several photos that are clipped at the edges to look curved, like an old television screen. This is done in CSS using clip-path with an SVG-based path for clipping. The SVG path is defined using proportional units, which allows it to warp and distort correctly for different image sizes, as seen in the variety of images it is applied to on the page.

peterbe: react-buggy

peterbe told us about react-buggy, a client for viewing Github issues implemented in React. It is a rewrite of buggy, a similar client peterbe wrote for Bugzilla bugs. Issues are persisted in Lovefield (a wrapper for IndexedDB) so that the app can function offline. The client also uses elasticlunr.js to provide full-text search on issue titles and comments.

shobson: tic-tac-toe

Last up was shobson, who shared a small Tic-Tac-Toe game on the viewsourceconf.org offline page, which is shown when the site is in offline mode and you attempt to view a page that is not available offline.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Categorieën: Mozilla-nl planet

Air Mozilla: Participation Q3 Demos

Mozilla planet - fr, 23/09/2016 - 19:32

Watch the Participation Team share the work from the last quarter in the Demos.

Categorieën: Mozilla-nl planet

Emily Dunham: Setting a Freenode channel's taxonomy info

Mozilla planet - fr, 23/09/2016 - 09:00
Setting a Freenode channel’s taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

    /msg chanserv help

    ***** ChanServ Help *****
    ...
    Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
                    DROP, GETKEY, HELP, INFO, QUIET, STATUS,
                    SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
                    TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
                    WHY
    ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

    /msg chanserv help taxonomy

    ***** ChanServ Help *****
    Help for TAXONOMY:

    The taxonomy command lists metadata information associated
    with registered channels.

    Examples:
        /msg ChanServ TAXONOMY #atheme
    ***** End of Help *****

Follow its example:

    /msg chanserv taxonomy #atheme

    Taxonomy for #atheme:
    url          : http://atheme.github.io/
    ОХЯЕБУ       : лололол
    End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

    /msg chanserv help set

    ***** ChanServ Help *****
    Help for SET:

    SET allows you to set various control flags
    for channels that change the way certain
    operations are performed on them.

    The following subcommands are available:
    EMAIL           Sets the channel e-mail address.
    ...
    PROPERTY        Manipulates channel metadata.
    ...
    URL             Sets the channel URL.
    ...
    For more specific help use /msg ChanServ HELP SET command.
    ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value.

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place mIRC color codes (http://www.mirc.com/colors.html) in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

Categorieën: Mozilla-nl planet

Mozilla Firefox 49.0 and Thunderbird 45.3 Land in All Supported Ubuntu OSes - Softpedia News

Nieuws verzameld via Google - fr, 23/09/2016 - 01:28

Today, September 22, 2016, Chris Coulson from Canonical published two security advisories to inform the Ubuntu Linux community about the availability of the latest Mozilla products in all supported releases. Mozilla announced the other day that its ...

Categorieën: Mozilla-nl planet
