Planet Mozilla (Mozilla Nederland)

Subscribe to the Mozilla planet feed
Updated: 3 days 14 hours ago

Rabimba: HackRice 7.5: How "uFilter" was born

Thu, 08/03/2018 - 08:59

I have a thing for hackathons. I am a procrastinator. A lazy, procrastinating graduate student: not a nice combination to have. But still, when I see hundreds of sharp minds in a room scrambling over ideas, hungry to build and prototype them and bring them to life, it finally pushes me into activity and makes me productive. That is why I love hackathons, and that is why I love HackRice, the resident hackathon of Rice University.

TL;DR: if you just want to try the extension, the Chrome version is here and the Firefox version is here.
I have been participating in HackRice since 2014, which I think was the first year it was open to non-Rice students, and I have taken part ever since. What a roller coaster ride it has been, but that is a story for another day. HackRice 7.5 being the last one I will be able to attend at Rice, it was somewhat special and emotional for me.
HackRice 7.5 starts now!

HackRice 7.5 was a tad different from the other iterations. For starters, it was the first time the event was held in the spring semester, so it was on a smaller scale and open only to Rice students. Also, instead of the usual 26 hours, it ran exactly 24 hours. The venue was the Liu Idea Lab. I had never been to the lab before, and it seemed a nice place to sit and work. The event started on Friday evening and ended on Saturday evening. The event had two tracks: a beginner track and a Data Science track. The organizers had two in-depth workshops/tutorials set up for these tracks to help out starters, which I thought was really cool. Even though I was brainstorming and prototyping something different, I sat through them anyway and found them really thorough.
Being a one-person team and not really knowing anybody else, I decided to work on a relatively small project that I could actually finish, instead of attempting anything in the Data Science track. The idea I initially had was a privacy filter. After some more brainstorming, I realized that building one properly, taking into account all the anonymizing factors, would probably take me more than 24 hours. I decided to settle on more of a toxic/malicious/sanity/trigger-word filter.
The Idea: Create a browser-based extension that can filter out abusive posts, words, sentences, and paragraphs.

Inspiration: Lately a lot of us have started noticing the rise of cyberbullying and abusive behavior across the internet, be it on Reddit or in a Facebook group. Often something gets me riled up just before I go to sleep, and I wish I had not read it. The recent increase in cyberbullying is one of the primary reasons for the tool. Mental health and online harassment are major, relevant issues in our society today. Everyone should be able to access content on the internet without fearing trigger words or harassment, and that goes especially for people who have been victims of such incidents and really don't wish to see any such trigger words.

What is uFilter: uFilter is a smart web extension made to help people browse the web without seeing content they don't want to see, bringing the power to choose what to see back to users. The user has a list of buttons as filters to choose from, either individually or more than one at a go. The process is simple and subtle: check off the type of content you want to avoid and let us handle the rest! Questionable content is blurred out; if you wish to see it nonetheless, you can click to reveal the text.
You can see it in action here:

What it does in the background: The content is blocked at page load, so the user can still get the context of the site before deciding whether to stay or leave. The extension has a simple UI that lets them choose what to block and what not to.
They can also click on the covered sections to reveal them as they go. The script searches through the entire DOM, looking for elements wherever they may be on the page, and it observes the page so it can adaptively block content that is loaded dynamically, such as YouTube comments, the Facebook feed, and Twitter pages. uFilter is not just a dumb keyword filter. It first combs every web page you visit for questionable content based on your filter selection; once it identifies sentences containing questionable content, it uses the AFINN-165 wordlist and the Emoji Sentiment Ranking to perform sentiment analysis. Once it determines a section contains abusive content, it blurs out only that portion.
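uFilter's actual scoring code isn't shown here, but wordlist-based sentiment analysis of this kind can be sketched in a few lines. The scores below are a tiny hand-picked sample, not the real AFINN-165 list, and the function names and threshold are illustrative:

```javascript
// Minimal AFINN-style sentiment scoring sketch. The entries below are a
// tiny illustrative sample; the real AFINN-165 list has thousands.
const AFINN_SAMPLE = { awful: -3, hate: -3, stupid: -2, good: 3, love: 3 };

// Sum the scores of known words in a sentence.
function sentimentScore(sentence, wordlist = AFINN_SAMPLE) {
  const words = sentence.toLowerCase().match(/[a-z']+/g) || [];
  return words.reduce((sum, w) => sum + (wordlist[w] || 0), 0);
}

// A strongly negative total marks the sentence as a blur candidate.
function isQuestionable(sentence, threshold = -2) {
  return sentimentScore(sentence) <= threshold;
}
```

A content script would run something like isQuestionable() over each sentence it extracts from the DOM and blur only the sentences that match.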

The most useful part of uFilter is that it can observe dynamic webpages and works on text that is loaded into the page dynamically. Hence it works on Twitter or Facebook, with their rolling feeds and dynamic text, as well. Another distinguishing feature of uFilter is that it does not remove or replace any content. If users decide from the context of the page that they want to read the content, just clicking on the blurry portion will reveal the text.
All this is done in real time, so the user does not notice any difference in their normal browsing behavior. But of course, properly identifying abusive content purely programmatically is a hard problem. Recognizing that, uFilter gives the user an option to tag/mark/categorize text as offensive. Once a user does that, the filter will learn from it. This information is stored in a Firebase datastore without any identifying information and helps uFilter improve.
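The post doesn't describe the learning step in detail; one plausible sketch (entirely hypothetical, with illustrative names and penalty value) is to fold the words of user-tagged text back into a local copy of the scoring table, so they count as negative from then on:

```javascript
// Hypothetical "learn from user tags" step: every word in text the user
// marked offensive gets at least a negative score in a local copy of the
// wordlist. The function name and penalty value are illustrative only.
function learnFromTag(wordScores, taggedText, penalty = -3) {
  const updated = { ...wordScores };
  for (const w of taggedText.toLowerCase().match(/[a-z']+/g) || []) {
    // Keep an existing harsher score; otherwise apply the penalty.
    updated[w] = Math.min(updated[w] ?? 0, penalty);
  }
  return updated;
}
```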

The end result is uFilter, which can intelligently sanitize any website or webpage you visit of any abusive content you do not wish to see. You can see it in action above.

Got this beautiful earphone as a prize

Coming back to HackRice: I really did not expect much when I submitted uFilter for judging. I had just barely made a working prototype and published it in the Chrome Web Store. It was working in Firefox, but it still had some security problems, for which Firefox was not yet publishing the add-on. Surprisingly, the judges, including Dr. Wang, were really interested in the idea and especially the implementation. When the time came to decide the winners, it was announced that uFilter had won the first prize! Imagine my delight!

If you want to know more about the project, visit the submission page at devpost.
If you want to try the add-on, I will be delighted to hear your feedback!

Chrome version download link!
Firefox version download link!
Categories: Mozilla-nl planet

David Teller: Thinkerbell Postmortem/Brain dump

Thu, 08/03/2018 - 08:50

Two years ago, I was working on a research project called “Project Link” as part of the Connected Devices branch of Mozilla. While this branch has since been stopped, some part of Project Link lives on as Project Things.

One of the parts of Project Link that hasn't made it to Project Things (so far) is Thinkerbell: a Domain-Specific Language designed to let users program their SmartHome without coding. While only parts of Thinkerbell were ever implemented, they were sufficient to write programs such as:

Whenever I press any button labelled “light” in the living room, toggle all the lights in the living room.


If the entry door is locked and the motion detector notices motion, send an alarm to my SmartPhone.

Thinkerbell also had:

  • semantics that ensured that scripts could continue/resume running unmonitored even when hardware was replaced/upgraded/moved around the house, including both the server and the sensors;
  • a visual syntax, rather than a text syntax;
  • a novel type system designed to avoid physical accidents;
  • a semantics based on process algebras.

Ideally, I’d like to take the time to write a research paper on Thinkerbell, but realistically, there is very little chance that I’ll find that time. So, rather than letting these ideas die in some corner of my brain, here is a post-mortem for Thinkerbell, in the hope that someone, somewhere, will pick up some of this stuff and give it a second life.

Note that some of the ideas exposed here were never actually implemented. Project Link was cancelled while Thinkerbell was still in its infancy.


Mozilla Marketing Engineering & Ops Blog: MDN Changelog for February 2018

Thu, 08/03/2018 - 01:00

Here’s what happened in February to the code, data, and tools that support MDN Web Docs:

Here’s the plan for March:

Done in February

Migrated 14% of compatibility data

In February, we asked the MDN community to help convert compatibility data to the browser-compat-data repository. Florian Scholz led this effort, starting with a conference talk and blog post last month. He created GitHub issues to suggest migration tasks, and added a call to action on the old pages:


The response from the community has been overwhelming. There were 203 PRs merged in February, and 96 were from 23 first-time contributors. Existing contributors such as Mark Boas, Chris Mills, and wbamberg kept up their January pace. The PRs were reviewed for the correctness of the conversion as well as ensuring the data was up to date, and Florian, Jean-Yves Perrier, and Joe Medley have done the most reviews. In February, the project jumped from 43% to 57% of the data converted, and the data is better than ever.

There are two new tools using the data. SphinxKnight is working on compat-tester, which scans an HTML, CSS, or JavaScript file for compatibility issues with a user-defined set of browsers. K3N is working on mdncomp, which displays compatibility data on the command line:


If you have a project using the data, let us know about it!

Improved and Extended Interactive Examples

We continue to improve and expand the interactive examples, such as a clip-path demo from Rachel Andrew:


We’re expanding the framework to allow for HTML examples, which often need a mix of HTML and CSS to be interesting. Like previous efforts, we’re using user testing to develop this feature. We show the work-in-progress, like the <table> demo, to an individual developer, watch how the demo is used and ask for feedback, and then iterate on the design and implementation.


The demos have gone well, and the team will firm up the implementation and write more examples to prepare for production. The team will also work on expanding test coverage and formalizing the build tools in a new package.

Prepared for a CDN and Django 1.11

We made many changes last month to improve the performance and reliability of MDN. They worked, and we’ve entered a new period of calm. We’ve had a month without 3 AM downtime or performance alerts, for the first time since the move to AWS. The site is responding more smoothly, and easily handling MDN’s traffic.


This has freed us to focus on longer term fixes and on the goals for the quarter. One of those is to serve MDN from behind a CDN, which will further reduce server load and may have a huge impact on response time. Ryan Johnson is getting the code ready. He switched to Django’s middleware for handling ETag creation (PR 4647), which allowed him to remove some buggy caching code (PR 4648). Ryan is now working through the many endpoints, adding caching headers and cleaning up tests (PR 4676, PR 4677, and others). Once this work is done, we’ll add the CDN that will cache content based on the directives in the headers.

My focus has been on the Django 1.11 upgrade, since Django 1.8 is scheduled to lose support in April. This requires updating third-party libraries like django-tidings (PR 4660) and djangorestframework (PR 4664 from Safwan Rahman). We’re moving away from other requirements, such as dropping dbgettext (PR 4669). We’ve taken care of the most obvious upgrades, but there are 142,000 lines of Python in our libraries, so we expect more surprises as we get closer to the switch.

Once the libraries are compatible with Django 1.11, the remaining issues will be with the Kuma codebase. Some changes are small and easy, such as a one-liner in PR 4684. Some will be quite large. Our code that serves up locale-specific content, such as reverse and LocaleURLMiddleware, is incompatible, and we'll have to swap some of our oldest code for Django's version.

Shipped Tweaks and Fixes

There were 413 PRs merged in February:

147 of these were from first-time contributors:

Other significant PRs:

Planned for March

We’ll continue with the compatibility migration, interactive examples, the CDN, and the Django 1.11 migration in March.

Move developers to Emerging Technologies

Starting March 2, the MDN developers move from Marketing to Emerging Technologies. We'll be working on the details of this transition in March and the coming months. That will include planning an infrastructure transition and finding a new home for the MDN Changelog.

Stephanie Hobson and I joined Marketing Engineering and Operations in March 2016, back when it was still Engagement Engineering. EE was already responsible for 50% of Mozilla’s web traffic, and adding (34%) and (16%) put 99% of Mozilla’s web presence under one engineering group. MDN benefited from this amazing team in many ways:

  • Josh Mize led the effort to integrate MDN into the marketing technology and processes. He helped with our move to Docker-based development and deployment, implemented demo deploys, advocated for a read-only and statically-generated deployment, and worked out details of the go-to-AWS strategy, such as file serving and the master database transfer. Josh keeps up to date on the infrastructure community, and knows what tech is reliable, what the community is excited about, and what the next best practices will be.
  • Dave Parfitt did a lot of the heavy lifting on the AWS transition, from demo instances, through maintenance mode and staging deployments, and all the way to a smooth production deployment. He figured out database initialization, implemented the redirects, and tackled the dark corners of unicode filenames. He consistently does what needs to be done, then goes above and beyond by refining the process, writing excellent documentation, and automating whenever possible.
  • Jon Petto introduced and integrated Traffic Cop, allowing us to experiment with in-content changes in a lightweight, secure way.
  • Giorgos Logiotatidis’s Jenkins scripts and workflows are the foundation of MDN’s Jenkins integration, used to automate our tests and AWS deployments.
  • Paul McLanahan helped review PRs when we had a single backend developer. His experience migrating bedrock to AWS was invaluable, and his battle-tested django-redirect-urls made it possible to migrate away from Apache and get 10 years of redirects under control.
  • Schalk Neethling reviewed front-end code when we were down to one front-end developer. He implemented the interactive examples from prototype to production, and joined the MDN team when Stephanie Hobson transitioned to bedrock.
  • Ben Sternthal made the transition into Marketing possible. He made us feel welcome from day one, hired some amazing contractors to help with the dark days of the 2016 spam attack, hired Ryan Johnson, and worked for the resources and support to move to AWS. He created a space where developers could talk about what is important to us, where we spent time and effort on technical improvements and career advancement, and where technical excellence was balanced with features and experiments.

MDN is on a firmer foundation after the time spent in MozMEAO, and is ready for the next chapter in its 13-year history.

Ryan Johnson, Schalk Neethling, and I will join the Advanced Development team in Emerging Technologies, reporting to Faramarz Rashed. The Advanced Development team has been working on various ET projects, most recently Project Things, an Internet of Things (IoT) project that is focused on decentralization, security, privacy, and interoperability. It’s a team that is focused on getting fresh technology into users’ hands. This is a great environment for the next phase of MDN, as we build on the more stable foundation and expand our reach.

Meet in Paris for Hack on MDN

We’re traveling to the Mozilla Paris Office in March. We’ll have team meetings on Tuesday, March 13 through Thursday, March 15, to plan for the next three months and beyond.

From Friday, March 16 through Sunday, March 18, we’ll have the third Hack on MDN event. The last one was in 2015 in Berlin, and the team is excited to return to this format. The focus of the Paris event will be the Browser Compat Data project. We expect to build tools that use the data, create alternative displays of compat information, and improve the migration and review processes.

Evaluate Proposals for a Performance Audit

One of our goals for the year is to improve page load times on MDN. We’re building on a similar SEO project last year, and looking for an external expert to measure MDN’s performance and recommend next steps. Take a look at our Request for Proposal. We plan to select the top bidders by March 30, 2018.


K Lars Lohn: Things Gateway - Part 6

Wed, 07/03/2018 - 23:12
Today I'm going to play around with some switches from TP-Link with the experimental Things Gateway from Mozilla. Previously in this series I covered other home automation technologies (Zigbee, Z-Wave, Philips Hue) from the perspective of the Things Gateway.

While TP-Link makes a variety of devices (plugs, switches, and bulbs), I only had access to a pair of the Smart Plugs.

Because the TP-Link devices use the local WiFi for all their communication, they are among the scariest devices on my local area network.  They are black boxes and I must implicitly trust the manufacturer that they are secure: forever.  Yeah, I've expressed trepidation over smart hubs (Samsung Smart Things, Philips Hue Bridge, etc) for the same reason, but these are even scarier.  Why? Because there's potentially an army of these devices in a smart home.  They may last for years and years, but how long will they receive firmware security updates?  Updating the firmware requires action on the part of the owner. Electrical outlets should just work, I don't want to have to track them all.  Missing just one in a security update may be all it takes to compromise the local area network.

Goal: Pair the Things Gateway with a pair of TP-Link Smart Plugs
Item: The Raspberry Pi and associated hardware from Part 2 of this series, minus the DIGI X-stick
What's it for? This is the base platform that we'll be adding onto
Where I got it: From Part 2 of this series

Item: TP-Link Smart Plug HS-110
What's it for? A smart plug to pair with the Things Gateway
Where I got it: Amazon

Item: An iOS or Android phone or tablet
What's it for? The setup of the Smart Plugs requires a controller app on a mobile device
Where I got it: I used my Android phone
The first word that came to mind when I opened the package for a TP-Link HS-110 Smart Plug was "humongous".  In a standard wall outlet, they block both receptacles no matter which one is used.  On the power strip that I used for testing, two smart plugs used four receptacles and could not be placed next to each other.  This is a poor design that makes me think they were not tested in real-world applications.  While physical design problems do not automatically imply software design problems, I am wary.

Step 1: The devices need to access the local WiFi network for communication.  This involves downloading the TP-Link Kasa for Mobile app on a mobile device. I chose my Android Phone for this process.

On starting Kasa for Mobile, the first thing I encountered was a login page.  Because I don't want these devices communicating outside my local area network, I declined by pressing the tiny little "Skip" button in the lower right corner.

It then asked me to create an Amazon Alexa account. Again, I don't want to do this, so I skipped it by touching the tiny little "Not Now" button at the bottom of the screen.

Now I'm ready to add my devices to the app.  I pressed the "+" button in the upper right corner and then selected the matching Smart Plug on the next screen.  As specified, I plugged in the Smart Plug and the lighted indicator turned solid orange.  After pressing "Next" the light began alternating between orange and green.  I pressed "Next" again. 

This got me to a privacy buster.  I was asked to allow Kasa to access my phone's location.  I said no.  That left me at the same orange/green flashing with a "Next" button.   After pressing "Next" again, I got a more detailed dialog box begging for location access for my phone.  Again, I said no.  That left me at exactly the same place: a "Next" button that results in asking me for location access.  I must agree or the setup will not continue.  Perhaps poor physical design does imply poor software design.  I begrudgingly allowed location access to continue the setup.

The next screen asked me to name my device.  I chose "One" and continued.  I selected an icon and then selected "Save Device".

That got me to the meat of the setup: access to the local WiFi network.  I chose my network and entered the network password.  It took about two minutes to complete the setup.  Adding the second plug was a similar procedure, though without the request for location permission or the WiFi password.  I named the second Smart Plug "Two".

Step 2: I'm assuming that the Things Gateway has already been set up.  See Step 1 of Part 2 of this series and proceed through Step 6, if you've not done that yet.

The next step is to add the "TP-Link" add-on to the Things Gateway. I pressed the "Menu" button, selected "Setup" and then selected "Add-ons".

From there, I pressed the "+" button to enable a new add-on and then selected "TP-Link" from the scrollable list. From there, I used the left arrow button in the upper left to go all the way back to the "Things" page.  

Step 3: Now I've got to add the two smart plugs to the known devices:

I pressed the "+"; the Things Gateway immediately detected the two new devices. I pressed "Save" on each of them and then "Done".

Step 4: I now have control of the two TP-Link Smart Plugs from the Things Gateway.   I can see the additional information about power usage by pressing the "splat" on the icons:

What's next? In the next installment, I'm hoping to introduce the Ikea TRÅDFRI devices.  However, that means I must acquire the hardware, so there may be a little bit of a delay.


Air Mozilla: Bugzilla Project Meeting, 07 Mar 2018

Wed, 07/03/2018 - 22:00

The Bugzilla Project Developers meeting.


Firefox Test Pilot: Welcome Teon Brooks to the Test Pilot Team!

Wed, 07/03/2018 - 21:59

Late last year, the Test Pilot team welcomed a new data scientist, Teon Brooks. In this post, Teon talks about some of his recent work and his role with Test Pilot.

How would you describe your role on the Test Pilot team?

I work as the team’s data scientist. I am responsible for providing data insights from our participants’ engagement and interactions to help improve the user experience.

What does a typical day at Mozilla look like for you?

A typical day for me consists of me learning about the new features we are building in Test Pilot and conceptualizing what types of measurements we should have to evaluate the effectiveness of a given product. I also spend my time doing data exploration to better understand how users engage with our products.

Where were you before Mozilla?

I am a cognitive scientist by training; I spent the past decade as an experimental researcher looking at how the brain processes and understands language. Over the past five years, I have become an open-source developer working on the MNE project, a data analysis and visualization package for time-series brain recordings. I first came to the Mozilla Foundation as a “Mofo-er” through its Science Fellowship program, where I worked on developing data standards for time-series brain data.

On Test Pilot, what are you most looking forward to and why?

I am excited to see how Test Pilot grows as a platform for testing new ideas for Firefox with users in the loop. We want to empower our users to have control over their experience on the web and Test Pilot allows us to build the tools to help with that.

Tell me something most people at Mozilla don’t know about you.

I am a huge fan of the performing arts, especially dance. Misty Copeland is one of my heroes in the ballet world. I’m not only a fan of dance but I enjoy performing. I competed as an amateur Latin ballroom dancer for six years. In recent years, I have taken up ballet.

Welcome Teon Brooks to the Test Pilot Team! was originally published in Firefox Test Pilot on Medium.


David Humphrey: On standards work

Wed, 07/03/2018 - 21:18

This week I'm looking at standards with my open source class. I find that students often don't know about standards and specs, how to read them, how they get created, or how to give feedback and participate. The process is largely invisible. The timing of this topic corresponds to a visit from David Bruant, who is a guest presenter in the class this week. I wanted to discuss his background working "open" while he was here, and one of the areas he's focused on is open standards work for the web, in particular, for JavaScript.

All of the students are using JavaScript. Where did it come from? Who made it? Who maintains it? Who defines it? Who is in charge? When we talk about open source we think about code, tests, documentation, and how all of these evolve. But what about open standards? What does working on a standard look like?

There's a great example being discussed this week all over Twitter, GitHub, Bugzilla and elsewhere. It involves a proposal to add a new method flatten() to Arrays. There are some good docs for it on MDN as well.

The basic idea is to allow an Array containing other Arrays, or "holes" (i.e., empty elements), to be compressed into a new "flat" Array. For example, the "flattened" version of [1, 2, [3, 4]] would be [1, 2, 3, 4]. It's a great suggestion, and one of many innovative and useful things that have been added to Array in that last few years.
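The one-level behavior described above can be sketched in plain JavaScript (an illustration of the semantics only, not the actual spec text or polyfill; the proposal's method also takes an optional depth argument):

```javascript
// One-level flatten: concat splices array elements into the result,
// and reduce walks the outer array. reduce also skips holes in the
// outer array, so empty elements disappear, as described above.
const flattenOnce = (arr) => arr.reduce((acc, v) => acc.concat(v), []);
```

With this sketch, flattenOnce([1, 2, [3, 4]]) produces [1, 2, 3, 4].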

However, changing the web is hard. There's just so much of it being used (and abused) by people all over the world in unexpected ways. You might have a good idea for a new thing the web and JavaScript could do, but getting it added is not easy. You might say to yourself, "I can see how removing things would be hard, but why is adding something difficult?" It's difficult because one of the goals of the people who look after web standards is to avoid breaking the web unnecessarily. Where possible, something authored for the web of 1999 should still work in 2019.

So how does flatten() break the web? Our story starts 150 years ago, back in the mid 1990s. When it arrived on the scene, JavaScript was fairly small and limited. However, people used it, loved it, (and hated it), and their practical uses of it began to wear grooves: as people wrote more and more code, best practices emerged, and some of those calcified into utility functions, libraries, and frameworks.

One of those frameworks was MooTools. Among other conveniences, MooTools added a way to flatten() an Array. While JavaScript couldn't do this "natively," it was possible to "shim" or "polyfill" the built-in Array type to add new properties and methods. MooTools did this in a way that causes problems: we have all been told that it's a bad idea to modify, "step on," or otherwise alter the definitions of the language and runtime without first checking whether they are, in fact, already available. We wrote code in 2007 using assumptions that don't necessarily hold true in 2018: browsers change, versions change, the language changes.
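As a hypothetical sketch of the pattern at issue (illustrative, not MooTools' actual source), here is a recursive flatten() patched onto the built-in Array prototype, with an existence check. Note that even the check doesn't fully solve the problem: if a native method with the same name but different semantics ships later, code written against the old behavior breaks either way.

```javascript
// Monkey-patching Array with a recursive flatten(). The guard avoids
// clobbering an existing native method, but it flips the hazard around:
// once a native flatten() with a different signature ships, old code
// silently gets the native behavior instead of this one.
if (!Array.prototype.flatten) {
  Array.prototype.flatten = function () {
    return this.reduce(
      (acc, v) => acc.concat(Array.isArray(v) ? v.flatten() : v),
      []
    );
  };
}
```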

I remember when I started writing JavaScript and looked at the long list of reserved keywords I wasn't supposed to use, things like class that everyone knew you'd be safe to use anyway, since JS doesn't use classes! Well, much like hundred-year land leases, eventually things change, and what was true once upon a time doesn't necessarily hold today. It's easy to point a finger at MooTools (many people are), but honestly, none of us thinks with enough long-term vision to truly understand all the implications of our decisions now on the world 10, 20, or 50 years hence (code people are writing today will still be in use by then, I promise you--I was a developer during Y2K, so I know it's true!).

At any rate, MooTools' flatten() is going to collide with JavaScript's new flatten(), because they don't work exactly the same (i.e., different argument signatures), and any code that relies on MooTools' way of doing flatten() will get...flattened.

And so, someone files a bug on the proposal, suggesting flatten() get changed to smoosh(). Before this gets resolved, imagine you have to make the decision. What would you do? Is "smoosh" logical? Maybe you'd say "smoosh" is silly and instead suggest "press" or "mix". Are those safe choices? What if you used a made-up word and just documented it? What about "clarmp"? What about using a word from another language? They say that naming things is one of the great problems in computer science, and it really is a hard problem! On some level we really should have lexicographers sitting on these committees to help us sort things out.

I won't give you my opinion. I intentionally stay out of a lot of these debates because I don't feel qualified to make good decisions, nor do I feel like it matters what I think. I have ideas, possibly good ideas, but the scope and scale of the web is frightening. I've had the privilege to work with some amazing web standards people in the past, and the things they know blow my mind. Every choice is fraught, and every solution is a compromise. It's one of the reasons why I'm so patient with standards bodies and implementors, who try their best and yet still make mistakes.

One thing I do know for sure is that the alternative, where one person or company makes all the decisions, where old code gets trampled and forgotten by progress, where we only care about what's new--is a world that I don't want either. If I have to smoosh() in order to live on a web that's bigger than me and my preferences, I'm OK with that.

It's easy to laugh, but instead I think we should really be thanking the invisible, hard working, well intentioned open standards people who do amazing work to both advance the front and guard the flank.


Air Mozilla: The Joy of Coding - Episode 131

Wed, 07/03/2018 - 19:00

mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Computer Security In The Past, Present and Future, with Mikko Hypponen

Wed, 07/03/2018 - 19:00

Computer security researcher Mikko Hypponen has been hunting hackers since 1991. Join us to hear his insights and stories on computer security history. Mikko will...


Air Mozilla: Weekly SUMO Community Meeting, 07 Mar 2018

Wed, 07/03/2018 - 18:00

This is the SUMO weekly call.


Hacks.Mozilla.Org: Building an Immersive Game with A-Frame and Low Poly Models

wo, 07/03/2018 - 17:11

Note: This is Part 1 of a two-part tutorial.

There is a big difference between immersion and realism. A high-end computer game with detailed models and a powerful GPU can feel realistic, but still not feel immersive. There’s more to creating a feeling of being there than polygon count. A low poly experience can feel very immersive through careful set design and lighting choices, without being realistic at all.

Today I’m going to show you how to build a simple but immersive game with A-Frame and models from the previous Sketchfab design challenge. Unlike my previous tutorials, in this one we will walk through creating the entire application: not just the basic interaction, but also adding and positioning 3D models, programmatically building a landscape with rocks, adding sounds and lighting to make the player feel immersed in the environment, and finally interaction tweaks for different form factors.

I hope this blog will inspire you to submit to the current challenge we are running with SketchFab. There’s still time to enter before submissions close on April 2nd.


Our WebVR Whack-an-Imp game is a variation on Whack-A-Mole, except in our case it will be an imp flying out of a bubbling cauldron. Before we get to fancy 3D models, however, we must begin with an empty HTML file that includes the A-Frame library.

<html>
  <head>
    <!-- aframe itself -->
    <script src=""></script>
  </head>
  <body>
  </body>
</html>

At first we won’t make the scene pretty at all. We just want to prove that our concept will work, so we will keep it simple. That means no lighting, models, or sound effects. Once the underlying concept is proven we will make it pretty.

Let’s start off with a scene with stats turned on, then add a camera with look-controls at a height of 1.5 m, which is a good camera height for VR interaction (roughly corresponding to the average eye height of most adult humans).

<a-scene stats>
  <a-entity camera look-controls position="0 1.5 0">
    <a-cursor></a-cursor>
  </a-entity>
</a-scene>

Notice the a-cursor inside of the camera. This will draw a little circular cursor, which is important for displays that don’t have controllers, such as Cardboard.

Our game will have an object that pops up from a cauldron, then falls back down as gravity takes hold. The player will have a paddle or stick to hit the object. If the player misses, then the object should fall on the ground. For now let’s represent the object with a sphere and the ground with a simple plane. Put this code inside of the a-scene.

<a-entity id='ball' position="0 1 -4"
          material="color:green;"
          geometry="primitive:sphere; radius: 0.5;"
></a-entity>
<a-plane color='red' rotation="-90 0 0" width="100" height="100"></a-plane>

Note that I’m using the long syntax of a-entity for the ball rather than a-sphere. That’s because later we will switch the geometry to an externally loaded model. However, the plane will always be a plane, so I’ll use the shorter a-plane syntax for that one.

We have an object to hit but nothing to hit it with. Now add a box for the paddle. Instead of using a controller to swing the paddle, we will start with the simplest possible interaction: put the box inside of the camera. Then you can swing it by just turning your head (or dragging the scene camera w/ the mouse on desktop). A little awkward but it works well enough for now.

Also note that I placed the paddle box at z -3. If I’d left it at the default position it would seem to disappear, but it’s actually still there. The paddle is too close to the camera for us to see. If I look down at my feet I can see it though. Whenever you are working with VR and your object doesn’t show up, first check if it’s behind you or too close to the camera.

<a-entity position="0 0 -3" id="weapon">
  <a-box color='blue' width='0.25' height='0.5' depth='3'></a-box>
</a-entity>

Great. Now all of the elements of our scene are here. If you followed along you should have a scene on your desktop that looks like this.

Basic Geometry

If you play with this demo you’ll see that you can move your head and the paddle moves with it, but trying to hit the ball won’t do anything. That’s because we only have geometry. The computer knows how our objects look but nothing about how they behave. For that we need physics.


Physics engines can be complicated, but fortunately Don McCurdy has created A-Frame bindings for the excellent Cannon.js open source physics framework. We just need to include his aframe-extras library to start playing with physics.

Add this to the head of the html page:

<!-- physics and other extras -->
<script src="//"></script>

Now we can turn on physics by adding physics="debug:true;" to the a-scene.

Of course merely turning on the physics engine won’t do anything. We still have to tell it which objects in the scene should be affected by gravity and other forces. We do this with dynamic and static bodies. A dynamic body is an object with full physics. It can transmit force and be affected by other forces, including gravity. A static body can transmit force when something hits it, but is otherwise unaffected by forces. Generally you will use a static body for something that doesn’t move, like the ground or a wall, and a dynamic body for things which do move around the scene, such as our ball.

Let’s make the ground static and the ball dynamic by adding dynamic-body and static-body to their components:

<a-entity id='ball' position="0 1 -4"
          material="color:green;"
          geometry="primitive:sphere; radius: 0.5;"
          dynamic-body
></a-entity>
<a-plane color='red' static-body rotation="-90 0 0" width="100" height="100"></a-plane>

Great. Now when you reload the page the ball will fall to the ground. You may also see grid lines or dots on the ball or plane. These are bits of debugging information from the physics engine to let us see the edges of our objects from a physics perspective. It is possible to have the physics engine use a size or shape for our objects that’s different than the real drawn geometry. I know this sounds strange, but it’s actually quite useful, as we will see later.

Now we need to make the paddle able to hit the ball. Since the paddle moves, you might think we should use a dynamic-body, but really we want our code (and the camera) to control the position of the paddle, not the physics engine. We just want the paddle to be there for exerting forces on the ball, not the other way around, so we will use a static-body.

<a-entity camera look-controls position="0 1.5 0">
  <a-cursor></a-cursor>
  <a-entity position="0 0 -3" id='weapon'>
    <a-box color='blue' width='0.25' height='0.5' depth='3' static-body></a-box>
  </a-entity>
</a-entity>

Now we can move the camera to swing the paddle and hit the ball. If you hit it hard then it will fly off to the side instead of rolling, exactly what we want!

You might ask why not just turn on physics for everything. There are two reasons. First, physics requires CPU time: the more objects that have associated physics, the more CPU resources the simulation will consume.

Second reason: For many objects in the scene, we don’t actually want physics turned on. If I have a tree in my scene, I don’t want it to fall down just because it’s a millimeter above the ground. I don’t want the moon to be able to fall from the sky just because it’s above the ground. Only turn on physics for things that really need it for your application.


Moving the ball by hitting it is fun, but for a real game we need to track when the paddle hits the ball to increase the player’s score. We also need to reset the ball back to the middle for another shot. We use collisions to do this. The physics engine emits a collide event each time an object hits another object. By listening to this event we can find out when something has been hit, what it is, and we can manipulate it.

First, let’s make some utility functions for accessing DOM elements. I’ve put these at the top of the page so they will be available to code everywhere.

<script>
  $ = (sel) => document.querySelector(sel)
  $$ = (sel) => document.querySelectorAll(sel)
  on = (elem, type, hand) => elem.addEventListener(type, hand)
</script>

Let’s talk about the functions we need. First, we want to reset the ball after the player has hit it or if they’ve missed and a certain number of seconds have gone by. Resetting means moving the ball back to the center, setting the forces back to zero, and initializing a timeout. Let’s create the resetBall function to do this:

let hit = false
let resetId = 0
const resetBall = () => {
  clearTimeout(resetId)
  $("#ball").body.position.set(0, 0.6, -4)
  $("#ball").body.velocity.set(0, 5, 0)
  $("#ball").body.angularVelocity.set(0, 0, 0)
  hit = false
  resetId = setTimeout(resetBall, 6000)
}

In the above code I’m using the $ function with a selector to find the ball element in the page. The physics engine adds a body property to the element containing all of the physics attributes. We can reset the position, velocity, and angularVelocity from here. The code above also sets a timeout to call resetBall again after six seconds, if nothing else happens.

There are two things to note here. First, I’m setting body.position rather than the regular position component that all A-Frame entities have. That’s because the physics engine is in charge of this object, so we need to tell the physics engine about the changes, not A-Frame.

The second thing to note—the velocity is not reset to zero. Instead it’s set to the vector 0,5,0. This means zero velocity in the x and z directions, but 5 in the y direction. This gives the ball an initial vertical velocity, shooting it up. Of course gravity will start to affect it as soon as the ball jumps, so the velocity will quickly slow down. If I wanted to make the game harder I could increase the initial velocity here, or point the vector in a random direction. Lots of opportunities for improvements.
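As a sketch of one such improvement, here is a hypothetical helper (not part of the tutorial code) that keeps the vertical speed fixed but adds a random horizontal push:

```javascript
// Sketch: compute a launch velocity with a fixed vertical speed plus a small
// random horizontal push. `spread` is the maximum sideways speed in m/s.
// This is an illustrative helper, not part of the original tutorial code.
function randomLaunchVelocity(verticalSpeed, spread) {
  const angle = Math.random() * 2 * Math.PI       // random compass direction
  const magnitude = Math.random() * spread        // random sideways speed
  return {
    x: Math.cos(angle) * magnitude,
    y: verticalSpeed,                             // always shoot upward
    z: Math.sin(angle) * magnitude
  }
}

// Example: keep the original vertical speed of 5, allow up to 1.5 m/s sideways.
const v = randomLaunchVelocity(5, 1.5)
// Then: $("#ball").body.velocity.set(v.x, v.y, v.z)
```

A spread of 0 reproduces the original straight-up launch, so the difficulty can be tuned with a single number.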

Now we need to know when the collision actually happens so we can increment the score and trigger the reset. We’ll do this by handling the collide event on the #weapon entity.

on($("#weapon"), 'collide', (e) => {
  const ball = $("#ball")
  if (e.detail.body.id === ball.body.id && !hit) {
    hit = true
    score = score + 1
    clearTimeout(resetId)
    resetId = setTimeout(resetBall, 2000)
  }
})
setTimeout(resetBall, 3000)

The code above checks if the collision event is for the ball by comparing the body ids. It also makes sure the player didn’t already hit the ball, otherwise they could hit the ball over and over again before we reset it. If the ball was hit, then set hit to true, clear the reset timeout, and schedule a new one for two seconds in the future.

Great, now we can launch the ball over and over and keep track of score. Of course a score isn’t very useful if we can’t see it. Let’s add a text element inside of the camera, so it is always visible. This is called a Heads Up Display or HUD.

<a-entity camera ...>
  <a-text id="score" value="Score" position="-0.2 -0.5 -1"
          color="red" width="5" anchor="left"></a-text>
</a-entity>

We need to update the score text whenever the score changes. Let’s add this to the end of the collide event handler.

on($("#weapon"), 'collide', (e) => {
  const ball = $("#ball")
  if (e.detail.body.id === ball.body.id && !hit) {
    ...
    $("#score").setAttribute('text', 'value', 'Score ' + score)
  }
})

Now we can see the score on screen. It should look like this:

Score and Physics


We have a basic game running. The player can hit the ball with a paddle and get points. It’s time to make this look better with real 3D models. We need a cool-looking imp to whack with the stick.

The last challenge resulted in tons of great 3D scenes built around the theme of Low-Poly Medieval Fantasy. Many of these have already been split up into individual assets and tagged with medievalfantasyassets.

For this project I chose to use this imp model for the ball and this staff model for the paddle.

Since we are going to be loading lots of models we should load them as assets. Assets are large chunks of data (images, sounds, models) that are preloaded and cached automatically when the game starts. Put this at the top of the scene and adjust the src urls to point to wherever you downloaded the models.

<a-assets>
  <a-asset-item id="imp" src="models/imp/scene.gltf"></a-asset-item>
  <a-asset-item id="staff" src="models/staff/scene.gltf"></a-asset-item>
</a-assets>

Now we can swap the sphere with the imp and the paddle box for the staff. Update the weapon element like this:

<a-entity position="0 0 -3" id="weapon">
  <a-entity gltf-model="#staff"></a-entity>
</a-entity>

And the ball element like this:

<a-entity id='ball' position="0 1 -4" dynamic-body>
  <a-entity id='imp-model' gltf-model="#imp"></a-entity>
</a-entity>

Imp and missing staff

We can see the imp but the staff is missing. What happened?

The problem is the staff model itself. The imp model is (mostly) centered inside of its coordinate system, so it is visually positioned where we put it. However the staff model’s center is significantly off from the center of its coordinate system; roughly 15 to 20 meters away. This is a common issue with models you find online. To fix it we need to translate the model’s position to account for the offset. After playing around with the staff model I found that an offset of 2.3, -2.7, -16.3 did the trick. I also had to rotate it 90 degrees to make it horizontal and shift it forward by four meters to make it visible. Wrap the model with an additional entity to apply the translation and rotation.

<a-entity id="weapon" rotation="-90 0 0" position="0 0 -4">
  <a-entity position="2.3 -2.7 -16.3" gltf-model="#staff" static-body></a-entity>
</a-entity>
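Those hand-tuned numbers make more sense once you see what they are compensating for. Here is a rough plain-JS sketch (illustrative helper and values, not measured from the actual model):

```javascript
// Sketch of why the staff needs a corrective offset: if a model's visual
// center sits at `modelCenter` in its own coordinate system, translating a
// wrapper entity by the negated center puts that visual center back at the
// wrapper's origin. This is an illustrative helper, not tutorial code.
function centeringOffset(modelCenter) {
  return { x: -modelCenter.x, y: -modelCenter.y, z: -modelCenter.z }
}

// The hand-tuned offset of (2.3, -2.7, -16.3) suggests the staff's visual
// center lies roughly near (-2.3, 2.7, 16.3) in model space.
const offset = centeringOffset({ x: -2.3, y: 2.7, z: 16.3 })
```

In practice you can often read the model's bounding-box center from a model viewer instead of tuning by eye.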

Now we can see the staff, but we still have a problem. The staff is not a simple geometric shape, it’s a full 3d model. The physics engine can’t work directly with a full mesh. Instead it needs to know which primitive object to use. We could use a box like we did originally, but I chose to go with a sphere centered at the end of the staff. That’s the part that the player should actually use to hit the imp, and by making it larger than the staff’s diameter we can make the game easier than it would be in real life. We also need to move the static-body definition to the outer entity so that it isn’t affected by the model offset.

<a-entity rotation="-90 0 0" position="0 0 -4" id='weapon'
          static-body="shape:sphere; sphereRadius: 0.3;">
  <a-entity position="2.3 -2.7 -16.3" gltf-model="#staff"></a-entity>
</a-entity>

Imp and Staff


We have the core game mechanics working correctly with the new models, let’s add some decorations next. I grabbed more models from SketchFab for a moon, a cauldron, a rock, and two different trees. Place them in the scene at different positions.

<a-assets>
  <a-asset-item id="imp" src="models/imp/scene.gltf"></a-asset-item>
  <a-asset-item id="staff" src="models/staff/scene.gltf"></a-asset-item>
  <a-asset-item id="tree1" src="models/arbol1/scene.gltf"></a-asset-item>
  <a-asset-item id="tree2" src="models/arbol2/scene.gltf"></a-asset-item>
  <a-asset-item id="moon" src="models/moon/scene.gltf"></a-asset-item>
  <a-asset-item id="cauldron" src="models/cauldron/scene.gltf"></a-asset-item>
  <a-asset-item id="rock1" src="models/rock1/scene.gltf"></a-asset-item>
</a-assets>
...
<!-- cauldron -->
<a-entity position="1.5 0 -3.5" gltf-model="#cauldron"></a-entity>
<!-- the moon -->
<a-entity gltf-model="#moon"></a-entity>
<!-- trees -->
<a-entity gltf-model="#tree2" position="38 8.5 -10"></a-entity>
<a-entity gltf-model="#tree1" position="33 5.5 -10"></a-entity>
<a-entity gltf-model="#tree1" position="33 5.5 -30"></a-entity>

Our little game is starting to look like a real scene!

Trees and Moon

The cauldron has bubbles which appeared to animate on SketchFab but they aren’t animating here. The animation is stored inside the model but it isn’t automatically played without an additional component. Just add animation-mixer to the entity for the cauldron.

The final game has rocks scattered around the field. However, we really don’t want to manually position fifty different rocks. Instead we can write a component to randomly position them for us.

The A-Frame docs explain how to create a component so I won’t recount it all here. The gist of it is this: A component has some input properties and then executes code when init() is called (and a few other functions). In this case, we want to accept the source of a model, some variables controlling how to distribute the model around the scene, and then have a function which will create N copies of the model.

Below is the code. I know it looks intimidating but it’s actually pretty simple. We’ll go through it step by step.

<!-- alternate random number generator -->
<script src="js/random.js"></script>
<!-- our `distribute` component -->
<script>
AFRAME.registerComponent('distribute', {
  schema: {
    src: {type:'string'},
    jitter: {type:'vec3'},
    centerOffset: {type:'vec3'},
    radius: {type:'number'}
  },
  init: function() {
    const rg = new Random(Random.engines.mt19937().seed(10))
    const center = new THREE.Vector3(this.data.centerOffset.x,
                                     this.data.centerOffset.y,
                                     this.data.centerOffset.z)
    const jx = this.data.jitter.x
    const jy = this.data.jitter.y
    const jz = this.data.jitter.z
    if ($(this.data.src)) {
      const s = this.data.radius
      for (let i = -s; i < s; i++) {
        for (let j = -s; j < s; j++) {
          const el = document.createElement('a-entity')
          el.setAttribute('gltf-model', this.data.src)
          const offset = new THREE.Vector3(i*s + rg.real(-jx, jx),
                                           rg.real(-jy, jy),
                                           j*s - rg.real(-jz, jz))
          el.setAttribute('position', center.clone().add(offset))
          el.setAttribute('rotation', {x: 0, y: rg.real(-45, 45)*Math.PI/180, z: 0})
          const scale = rg.real(0.5, 1.5)
          el.setAttribute('scale', {x: scale, y: scale, z: scale})
          $('a-scene').appendChild(el)
        }
      }
    }
  }
})
</script>

First I import random.js. This is a random number generator from this random-js project by Cameron Knight. We could use the standard Math.random() function built into Javascript, but I want to ensure that the rocks are always positioned the same way every time the game is run. This other generator lets us provide a seed.

In the first line of the init() code you can see that I used the seed 10. I actually tried several different seeds until I found one that I liked the look of. If I did actually want each load to be different, say for different levels of the game, then I could provide a different seed for each level.
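If you'd rather not pull in a library, any small seeded generator will do. Here is a minimal sketch using the well-known mulberry32 algorithm, as a stand-in for random-js (this is not what the article uses, just an illustration of why seeding matters):

```javascript
// Minimal seeded PRNG (mulberry32). Given the same seed it always produces
// the same sequence of values in [0, 1), so the rock layout is identical on
// every page load.
function mulberry32(seed) {
  let a = seed >>> 0
  return function () {
    a = (a + 0x6D2B79F5) >>> 0
    let t = a
    t = Math.imul(t ^ (t >>> 15), t | 1)
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61)
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

const rngA = mulberry32(10)
const rngB = mulberry32(10)
// rngA() === rngB() for every call: same seed, same rock layout.
```

Swapping the seed per game level would give each level its own deterministic landscape.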

The core of the distribute component consists of the nested for loops. The code creates a grid of entities, each attached to the same model. For each copy, we translate it from the natural center point of the original model (the centerOffset parameter), adding a random offset using the jitter parameter. Jitter represents the maximum amount the rock should move from that grid point. Using 0 0 0 would be no jitter. Using 0 10 0 would make the rocks go vertically anywhere between -10 and 10, but not move at all in the horizontal plane. For this game I used 2 0.5 2 to move them around mostly horizontally but move up and down a tiny bit. The loop code also gives the rocks a random scale and rotation around the Y axis, just to make the scene look a bit more organic.
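The grid-plus-jitter math can be isolated into a tiny pure function (a hypothetical helper mirroring the loop body, not part of the component itself):

```javascript
// Sketch of the grid-plus-jitter placement used by the distribute component.
// `rand(min, max)` stands in for the seeded generator's real(min, max) calls.
// This helper is hypothetical; the real component inlines this math.
function rockPosition(i, j, s, jitter, rand) {
  return {
    x: i * s + rand(-jitter.x, jitter.x),  // grid column plus horizontal jitter
    y: rand(-jitter.y, jitter.y),          // small vertical wobble only
    z: j * s - rand(-jitter.z, jitter.z)   // grid row plus horizontal jitter
  }
}

// With jitter "2 0.5 2" and spacing 5, the rock at grid cell (1, 2) stays
// within 2 m horizontally and 0.5 m vertically of its grid point.
const rand = (min, max) => min + Math.random() * (max - min)
const p = rockPosition(1, 2, 5, { x: 2, y: 0.5, z: 2 }, rand)
```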

This is the final result.

Distributed Rocks

This blog has gotten pretty long and we still haven’t worked on lighting, sounds, or polish. Let’s continue the game right now in Part 2.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Building an Immersive Game with A-Frame and Low Poly Models (Part 2)

wo, 07/03/2018 - 17:10

In part one of this two-part tutorial, we created an A-Frame game using 3D models from Sketchfab and a physics engine. Whack-an-Imp works and it has nice landscaping but it still doesn’t feel very immersive.

The lighting is all wrong. The sky is pure white and the ground is pure red. The trees don’t have shadows and there is no firelight coming from the cauldron. The moon is out so it must be night time, but we don’t see reflections of moonlight anywhere. A-Frame has given us default lighting but it no longer meets our needs. Let’s add our own lighting.


Change the color of the ground to something more ground-like, a dark green.

<a-plane color="#52430e" ...

Add a dark twilight sky:

<!-- background sky -->
<a-sky color="#270d2c"></a-sky>

I did try adding fog for extra mood, but it simply blocked the sky, so I took it out.

For the moonlight we will use a directional light. This means the light comes from a particular direction but is positioned infinitely far away so that the light hits all surfaces equally. For something like the moon, this is what we want.

<a-entity light="type: directional; color: #ffffff; intensity: 0.5;" position="31 80 -50"></a-entity>

Here’s what it looks like now:

with lighting

Hmm… We are getting there but it’s still not quite right. The moonlight certainly reflects nicely off the tops of the rocks, but the bottoms of the rocks and trees are too dark to see. While this might be a realistic scene it doesn’t feel like a place that I would want to visit.

A common movie trick for shooting a night scene is to have a colored light shining up to illuminate the undersides of objects without making the scene so bright that the illusion of nighttime is ruined. We can do this with a hemisphere light.

A hemisphere light gives us one color above and one below. I used white for the upper and a sort of purplish dark blue for the lower, at an intensity of 0.4. Feel free to experiment with different settings.

<!-- hemisphere light going from white to dark blue -->
<a-entity light="type: hemisphere; color: white; groundColor: #5424ff; intensity: 0.4"></a-entity>

Now just one more thing. The fire under the cauldron should emit a warm red glow and the nearby rock should reflect this glow.

<a-entity light="type: point; intensity: 1.6; distance: 5; decay: 2; color: red" position="0.275 -0.32 -3.77"></a-entity>

This is a red-colored point light, meaning it has a specific position and will decay with distance. I set the decay to 2 and the intensity to 1.6. It is positioned just slightly offset from the bottom of the cauldron so that we get a nice red reflection. I also set the distance to 5 so that only the very closest rocks will get any of the red light.
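To get a feel for how intensity, decay, and distance interact, here is a rough plain-JS sketch of point-light falloff. It only approximates the three.js-style behavior described above (intensity divided by distance raised to the decay power, cut off beyond the distance setting) and is not the engine's exact formula:

```javascript
// Rough sketch of point-light falloff, approximating (not reproducing) the
// three.js behavior: intensity fades with distance raised to the decay power,
// and the light contributes nothing beyond its range.
function pointLightFactor(intensity, decay, distance, range) {
  if (distance >= range) return 0  // outside the light's range: no contribution
  // Clamp the distance to 1 so the light doesn't blow up right next to the source.
  return intensity / Math.pow(Math.max(distance, 1), decay)
}

// Cauldron light: intensity 1.6, decay 2, range 5.
// A rock 2 m away receives 1.6 / 2^2 = 0.4; a rock 6 m away receives nothing.
```

This is why only the closest rocks pick up the red glow: decay 2 dims the light quadratically, and the range of 5 zeroes it out entirely past that radius.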

Here’s what it looks like now. I think we finally have a cool-looking scene. It feels like a place where stuff is happening, with secrets to explore.

lighting with up-lights


There’s just one more piece of lighting work to do. We need some shadows. Shadows are expensive computationally speaking, so we only want to turn them on for objects whose shadows we actually care about.

First we must enable casting shadows from the light that will create them, the moonlight. Simply add castShadow:true to the light attribute.

<a-entity light="type: directional; color: #ffc18f; intensity: 0.5; castShadow: true;" position="31 80 -50"></a-entity>

Now add shadow="receive:true" to the ground. All of the objects now automatically cast shadows onto the ground.

<!-- the ground -->
<a-plane color="#52430e" static-body rotation="-90 0 0"
         width="100" height="100" shadow="receive:true"></a-plane>

It’s starting to feel like a real place.

Lighting with Shadows

In an effort to save cycles, the shadows will only be cast in an area called the shadow frustum. To see this area set shadowCameraVisible to true on the light.


Just a few more things to polish up our game. Some audio. Lived-in worlds aren’t silent. A summer night should have crickets or a breeze, the bubbling of the cauldron, and of course when we hit the imp it probably should complain, loudly. To liven things up I found a few useful sounds at freesound.org.

First up: the nighttime sounds of crickets and other creatures. I found a clip by freesound user sagetyrtle called October Night 2. Since this clip contains background sounds I don’t want them to be positional. The player should be able to hear them from anywhere, and it should loop over and over. To make this happen I put the sound on the scene itself using the sound attribute.

<a-scene ... sound="src: url(./audio/octobernight2.mp3); loop:true; autoplay:true; volume:0.5;" >

Notice that I set the volume to 50% so it won’t drown out the other sound effects.

Next we need a sound for the bubbles in the cauldron. I’m using this sound called SFX Boiling Water by Euphrosyyn.

<!-- cauldron -->
<a-entity gltf-model="#cauldron" ...
          sound="src: url(./audio/boilingwater-loop.mp3); autoplay: true; loop:true;"
></a-entity>

Again I have set the audio to loop, but because it’s attached to the cauldron’s entity instead of the scene, the sound will appear to come from the cauldron itself. Positional audio really enhances the immersiveness of virtual scenes. Granted, this boiling water effect is way overkill for the cauldron. In real life the cauldron wouldn’t bubble as quickly or loudly, but we want immersion, not realism.

Finally we need a sound on the imp. I chose this oops sound by metekavruk.

<a-entity id='imp-model' ... sound="src: url(./audio/gah.mp3); autoplay: false; loop: false;" ></a-entity>

Both autoplay and loop are set to false because we only want the sound to play when the imp is hit with the weapon. Go down to the collide event handler and add this line to play the sound on every collision.

$("#imp-model").components.sound.playSound();

The original files from freesound.org are in wav format, which is completely uncompressed and large. If you plan to edit the sound files then this is what you want, but for distribution on the web we want something far smaller. Be sure to convert them to MP3s first, which gives roughly a 90% file size savings. On Mac and Linux you can use the ffmpeg tool to convert them like this:

ffmpeg -i boilingwater-loop.wav boilingwater-loop.mp3

More Polish

Creating the basic code is the first 90% of building a game. Polishing the experience is the second 90%. After I first built Whack-an-Imp I realized it would get boring really fast. The only thing the player can do is wait until the imp jumps out and hit it. It would be more interesting if every now and then something popped out that the player shouldn’t hit. Let’s add a dragon’s egg.

Inside the ball entity we have a model for the imp. Next to it add another entity called egg-model, this time using a slightly distorted sphere.

<!-- the ball contains two models that we swap -->
<a-entity id='ball' position="0 0.1 -4" rotation="0 0 0"
          dynamic-body="shape:sphere; sphereRadius: 0.3; mass:4">
  <a-entity id='imp-model' gltf-model="#imp" position="0 -0.4 0"
            sound="src: url(./audio/gah.mp3); autoplay: false; loop: false;"
  ></a-entity>
  <a-sphere id='egg-model' radius="0.25" segments-height="8" segments-width="8"
            scale="1 0.6 0.8"
            material="color: purple; flatShading:true; emissive:red; emissiveIntensity:0.2"
            sound="src: url(./audio/cowbell.mp3); autoplay: false; loop: false;"
  ></a-sphere>
</a-entity>

To make the sphere look more magical I gave it a purple-colored material with flat shading, but also set the emissive color to red. Normally a material only reflects light that comes from a light source, but an emissive color lets the material produce its own light, even in the dark. In effect, it glows. I also added a cowbell sound from pj1s for when the player hits the egg.

Note in the code above that I moved the dynamic-body from the imp to the surrounding ball entity. This is because we want the same physics behavior regardless of which object is hit. However, the imp model is slightly offset and will stick outside of the sphere bounds, so I adjusted the position by -0.4 in the y direction.

Now we need to update the resetBall event handler with a boolean indicating if we should show the imp or the dragon’s egg.

let showImp = true
const resetBall = () => {
  clearTimeout(resetId)
  $("#ball").body.position.set(0, 0.6, -4)
  $("#ball").body.velocity.set(0, 5, 0)
  $("#ball").body.angularVelocity.set(0, 0, 0)
  showImp = (Math.floor(Math.random()*4) !== 0)
  $("#imp-model").setAttribute('visible', showImp)
  $("#egg-model").setAttribute('visible', !showImp)
  hit = false
  resetId = setTimeout(resetBall, 6000)
}
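The Math.floor(Math.random()*4) !== 0 expression deserves a closer look. Pulled out into a tiny helper (hypothetical, for illustration only), you can see it shows the imp three times out of four:

```javascript
// The reset logic picks which model to show with Math.floor(Math.random()*4) !== 0.
// A value r in [0, 1) maps to four equal buckets: bucket 0 shows the egg,
// buckets 1-3 show the imp, i.e. a 25% egg / 75% imp split.
function pickShowImp(r) {
  return Math.floor(r * 4) !== 0
}

// pickShowImp(0.1) -> false (egg); pickShowImp(0.3), (0.6), (0.9) -> true (imp)
```

Changing the 4 (or which bucket counts as the egg) is an easy difficulty knob: a 6 would make eggs appear only one time in six.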

We also need to make the collide-handler play the correct sound and decrement the score by 10 if you accidentally hit the egg.

on($("#weapon"), 'collide', (e) => {
  const ball = $("#ball")
  if (e.detail.body.id === ball.body.id && !hit) {
    hit = true
    if (showImp) {
      $("#imp-model").components.sound.playSound();
      score = score + 1
    } else {
      $("#egg-model").components.sound.playSound();
      score = score - 10
    }
    $("#score").setAttribute('text', 'value', 'Score ' + score)
    clearTimeout(resetId)
    resetId = setTimeout(resetBall, 2000)
  }
})

More Details

Let’s fix a few final details before we go: Display the score in white so we can see it in the dark, turn off physics debugging in the a-scene, and remove the cursor inside the camera. We don’t need the cursor anymore because we have the staff to indicate where the camera is pointed.

Final Game

Whack-an-Imp is complete! It’s time to test it out. We already know it works on the desktop. Here it is on my phone.

phone screenshot

Full VR Headset

The only way to test VR is to run it on real hardware. I ran it on my Windows Mixed Reality headset and it looks pretty good. The image and positional audio work quite well. I definitely have a feeling of being present. However, the interaction feels very awkward because the staff is attached to my head. Instead, I want to use the staff with my actual 6dof controller. We can do this by moving the staff inside of a new entity with laser-controls.

<a-entity id='laser' laser-controls="hand: left" raycaster="showLine:false;" line="opacity:0.0;">
  <a-entity rotation="-105 0 0" position="0 0 -3.5" id='weapon'
            static-body="shape:sphere; sphereRadius: 0.3;">
    <a-entity scale="1.8 1.8 1.8" position="0 1.5 0">
      <a-entity position="2.3 -2.7 -16.3" gltf-model="#staff"></a-entity>
    </a-entity>
  </a-entity>
</a-entity>

The laser-controls component will automatically attach its contents to the user’s six degrees of freedom (6DOF) controller. These are typically the large handheld controllers that come with PC headsets like the Vive, Rift, and MR headsets. The laser-controls component also works with three degrees of freedom (3DOF) controllers like the ones that come with Google Daydream and Gear VR.

This creates a new problem. The game now works with a controller in a traditional VR headset, but it won’t work with a phone anymore. There is no canonical solution to this problem, so I have chosen to enable both behaviors and simply enable and disable the correct one at runtime. To do this we’ll need to change our markup slightly.

Add the laser group to the scene, then change the id=weapon of both staffs to class='weapon', and add an extra class for gaze to the one inside of the camera and dof6 to the one inside of the laser.

Here is the final result.

<a-entity camera look-controls position="0 1.5 0">
  <a-text id="score" value="Score" position="-0.2 -0.5 -1"
          color="white" width="5" anchor="left"></a-text>
  <a-entity rotation="-90 0 0" position="0 0 -4" class='weapon gaze'
            static-body="shape:sphere; sphereRadius: 0.3;">
    <a-entity position="2.3 -2.7 -16.3" gltf-model="#staff"></a-entity>
  </a-entity>
</a-entity>
<a-entity id='laser' laser-controls="hand: left" raycaster="showLine:false;" line="opacity:0.0;">
  <a-entity rotation="-105 0 0" position="0 0 -3.5" class='weapon dof6'
            static-body="shape:sphere; sphereRadius: 0.3;">
    <a-entity scale="1.8 1.8 1.8" position="0 1.5 0">
      <a-entity position="2.3 -2.7 -16.3" gltf-model="#staff"></a-entity>
    </a-entity>
  </a-entity>
</a-entity>

Let’s add some functions to turn a set of controls on or off. The setEnabled function below sets the visible property of the desired element and also turns physics off of its static body. Then the switch6DOF and switchGaze functions call setEnabled with the right parameters.

function setEnabled(sel, vis) {
  const elem = $(sel)
  elem.setAttribute('visible', vis)
  if (elem.components['static-body']) {
    const sb = elem.components['static-body']
    if (vis) {
      sb.play()
    } else {
      sb.pause()
    }
  }
}
function switch6DOF() {
  $('#laser').setAttribute('visible', true)
  setEnabled('.weapon.gaze', false)
  setEnabled('.weapon.dof6', true)
}
function switchGaze() {
  $('#laser').setAttribute('visible', false)
  setEnabled('.weapon.gaze', true)
  setEnabled('.weapon.dof6', false)
}

Play Whack-An-Imp

Now we just need to decide when to use which set of components. We could check if the device is a mobile device but that won’t handle the desktop case when the headset isn’t connected. Instead we should look for the ‘enter-vr’ event and then check if a headset is connected. If it is, switch to 6DOF mode, otherwise use gaze mode.

on($('a-scene'), 'enter-vr', () => {
  if (AFRAME.utils.device.checkHeadsetConnected()) {
    switch6DOF()
  } else {
    switchGaze()
  }
})
on($('a-scene'), 'exit-vr', switchGaze)
// always start up in gaze mode
on($('a-scene'), 'loaded', switchGaze)

And with that, Whack-an-Imp will always adapt to the current situation. The player can switch in and out of VR mode seamlessly. This is the hallmark of web applications, applied to a mixed reality experience: responsive design.

And with that, we conclude this project. You can play with the live version on my website and check out the code on GitHub. And remember to submit your own entry to our current WebVR challenge. There are lots of great prizes to be had, and lots of learning as well. Let us know how it goes…

Categorieën: Mozilla-nl planet

Air Mozilla: NYU MSPN Webinar Series - Women in Tech

di, 06/03/2018 - 22:30

A panel discussion on being a woman in the field of technology.

Mozilla Reps Community: New Review Team Member 2018

di, 06/03/2018 - 14:31

Hi amazing Reps!

We’re very happy to announce the new Review Team members who have just been officially onboarded. Welcome Michael, Pushpita, Jason, and Arturo to the team!

Review Team 2018
The Review Team is a specialized group responsible for reviewing and approving or rejecting every budget request made by Mozilla Reps. The team works in close coordination with, and under the supervision of, the Reps Council. The new Review Team members will replace the old ones and team up with the 3 remaining members to continue the work for a year. You can find more information about the Review Team on this wiki page.

Last but not least, I would also like to thank the previous Review Team members Dian Ina, Priyanka, Faisal, and Flore for all their contributions over the past year. Your dedication has been a great help for the program, and we can’t thank you enough for that.

Please join me in congratulating all of them on the Discourse topic!

Air Mozilla: Mozilla Weekly Project Meeting, 05 Mar 2018

ma, 05/03/2018 - 20:00

The Monday project meeting.

Mozilla Addons Blog: Updates to Add-on Review Policies

ma, 05/03/2018 - 18:00

The Firefox add-ons platform provides developers with a great level of freedom to create amazing features that help make users’ lives easier. We’ve made some significant changes to add-ons over the past year, and would like to make developers aware of some updates to the policies that guide add-ons that are distributed publicly. We regularly review and update our policies in reaction to changes in the add-on ecosystem, and to ensure both developers and users have a safe and enjoyable experience.

With the transition to the WebExtensions API, we have updated our policies to better reflect the characteristics of the new technology, and to clarify the practices that have been established over the years.

As existing add-ons may require changes to comply with the new policies, we would like to encourage add-on developers to preview the policies, and make any necessary preparations to adjust their add-ons.

Some notable changes and clarifications include:

  • With some minor exceptions for add-ons listed on addons.mozilla.org (AMO), all policies apply to any add-ons that are distributed to consumers in any manner.
  • Add-on listings should have an easy-to-read description of everything the add-on does.
  • Add-ons that contain obfuscated, minified or otherwise machine-generated code must provide the original, non-generated source code to Mozilla during submission, as well as instructions on how to reproduce the build.
  • Add-ons that collect, store, use or share user data must clearly disclose the behavior in the privacy policy and summarize it in the description. Users must be provided with a way to control the data collection.
  • Collecting data not explicitly required for the add-on’s basic functionality is prohibited. Add-ons must only collect information about add-on performance and/or use.

If you have questions about the updated policies or would like to provide feedback, feel free to reply on the discourse thread.

The new policies will be effective April 1, 2018.

The post Updates to Add-on Review Policies appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.Org: How to Write CSS That Works in Every Browser, Even the Old Ones

ma, 05/03/2018 - 16:38

Let me walk you through how exactly to write CSS that works in every browser at the same time, even the old ones. By using these techniques, you can start using the latest and greatest CSS today — including CSS Grid — without leaving any of your users behind. Along the way, you’ll learn the advanced features of Can I Use, how to do vertical centering in two lines of code, the secrets to mastering Feature Queries, and much more.

For more videos on CSS Grid, other new CSS, and how to create great layouts on the web, subscribe to Layout Land on YouTube.

We’d love to hear what you think. Comment on YouTube.

Mozilla GFX: WebRender newsletter #15

ma, 05/03/2018 - 16:28

I was in Toronto (where a large part of the gfx team is based) last week and we used this time to make plans on various unresolved questions regarding WebRender in Gecko. One of them is how to integrate APZ with the asynchronous scene building infrastructure I have been working on for the past few weeks. Another is how to separate rendering of different parts of the browser window (for example the web content and the UI) and take advantage of APIs provided by some platforms (direct composition, core animation, etc.) to let the window manager help alleviate the cost of compositing some surfaces and improve power usage. We also talked about ways to improve pixel snapping. With these technical questions out of the way, the rest of the week, just like the weeks before it, revolved around the usual stabilization and bug fixing work.

Notable WebRender changes
  • Nical implemented the infrastructure for asynchronous scene building. This will allow us to move expensive operations out of the critical path and ensure that scrolling and animations are always smooth.
  • Kats fixed a render backend shutdown bug.
  • Kvark fixed the ordering of resource cache operations during frame capture.
  • Nical fixed some issues with the way pipeline epochs are tracked.
  • Glenn removed an optimization that had become obsolete.
  • Martin cleaned up the scene building code.
  • Glenn made drop-shadow and blur filters use the brush image shader.
  • Kvark fixed a hang with wrench on Windows.
  • Kvark properly cleaned up resources in wrench.
  • Martin fixed a clipping issue with fixed position children elements.
  • Martin relaxed the checks that detect 2D translations, to avoid a lot of expensive and unnecessary 3D transform inversions.
  • Martin optimized out more 3D transform inversions.
  • Glenn shared GPU cache entries for repeated gradient primitives.
  • Kvark fixed some issues with the YUV shader (2).
  • Martin removed some hash map lookups (2).
  • Glenn fixed inverted texture coordinates with the image brush shader.
  • Patrick improved anti-aliasing quality.
  • Glenn implemented clip masks for picture tasks.
  • Glenn ported the YUV shader, the radial gradient shader and the ps_composite shader to be brush primitives.
  • Glenn reduced the GPU cache size of pictures.
  • Glenn allowed selection of perspective interpolation per brush instance.
  • Glenn implemented decomposing repeated radial gradients on the CPU instead of the shader (yielding performance improvements).
  • Kvark implemented disabling perspective interpolation for blend brushes.
  • Kvark implemented default cases for shader switches. While not strictly necessary this works around driver bugs and avoid undefined behavior if an unexpected switch value is passed.
  • Kvark implemented more aggressively reusing intermediate render targets within frames.
Notable Gecko changes
  • Kats worked around a race condition preventing the window from rendering on Windows.
  • Andrew disabled D2D for content rendering when WebRender is enabled.
  • Kats enabled WebGL mochitests on Windows with WebRender.
  • Nical and Sotaro fixed a memory management issue with external images.
  • Botond fixed some APZ freezes related to auto-scrolling.
  • Kats fixed an assertion related to frame throttling.
  • Sotaro ensured webrender’s embedded profiler UI doesn’t break sanity tests.
  • Gankro implemented a fallback for non-trivial text-combine-upright cases.
  • Kats ensured renderer wakeup notifications allow processing frame capture messages.
  • Sotaro ensured the render thread is properly shut down.
  • Jeff disabled using D2D when painting masks on the content process.
  • Kats fixed a bug in the way we deal with shared memory when allocation fails.
  • Kats removed some unnecessary ipc traffic.
  • Kats enabled some talos tests on Windows.
  • Sotaro improved the way we initialize ANGLE’s GL context with WebRender.
  • Jeff did tons of blob-image related work.
Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser.

Note that WebRender can only be enabled in Firefox Nightly.

The Servo Blog: This Week In Servo 106

ma, 05/03/2018 - 01:30

Windows nightlies no longer crash on startup! Sorry about the long delay in reverting the change that originally triggered the crash.

In the last week, we merged 70 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions
  • nox removed more ToCss implementations by deriving them.
  • paul added a URL prompt to allow navigating pages in nightly builds.
  • manish fixed a panic that appeared on Wikipedia due to the use of rowspan and colspan.
  • ajeffrey avoided a deadlock caused by IPC channels on certain pages.
  • gw adjusted the behaviour of clipped blend operations.
  • emilio improved the behaviour of iterating over CSS longhand properties in the style system.
  • manish implemented rowspan support for tables.
  • alexfjw improved the performance of some operations that check computed display values.
  • emilio made the style system respect conditionally-enabled CSS properties better.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Cameron Kaiser: And now for something completely different: Make that Power Mac into a radio station (plus: the radioSHARK tank and AltiVec + LAME = awesome)

zo, 04/03/2018 - 03:17
As I watch Law and Order reruns on my business trip, first, a couple of followups. The big note is that it looks like Intel and some ARM cores aren't the only ones vulnerable to Meltdown; Raptor Computing Systems confirms that Meltdown affects at least POWER7 through POWER9 as well, and the Talos II has already been patched. It's not clear if this is true for POWER4 (which would include the G5) through POWER6, as these processor generations have substantial microarchitectural differences. However, it doesn't change anything for the G3 and 7400: since they appear to be immune to Spectre-type attacks, they must also be immune to Meltdown. As a practical matter, though, unless you're running an iffy program locally there is no known JavaScript vector that successfully works to exploit Spectre (let alone Meltdown) on Power Macs, even on the 7450 and G5 which are known to be vulnerable to Spectre.

Also, the TenFourFox Downloader is now live. After only a few days up with no other promotion, it's pulling down about 200 downloads a day. I note that a small number are current TenFourFox users, which isn't really what this is intended for: the Downloader is unavoidably -- and in this case, also unnecessarily -- less secure, and just consumes bandwidth on Floodgap downloading a tool to download something the browser can just download directly. If you're using TenFourFox already (at least 38 or later), please just download upgrades with the browser itself. In addition, some are Intel Mac users on 10.6 and earlier, for whom the Downloader intentionally won't fetch anything because we don't support them. Nevertheless, the Downloader is clearly accomplishing its goal, which is important given that many websites won't be accessible to Power Mac users anymore without it, so it will be a permanent addition to the site.

Anyway, let's talk about Power Macs and radios. I'm always fond of giving my beloved old Macs new things to do, so here's something you can think about for that little G4 Mac mini you tossed in the closet. Our 2,400 square foot house has a rather curious floor plan: it's a typical California single-floor ranch but configured as a highly elongated L-shape along the bottom and right legs of the property's quadrilateral. If I set something playing somewhere in the back of the house you probably won't hear it very well even just a couple rooms away. The usual solution is to buy something like a Sonos, which are convenient and easy to operate, but streaming devices like that can have synchronization issues and they are definitely not cheap.

But there's another solution: set up a house FM transmitter. With a little spare time and the cost of the transmitter (mine cost $125), you can devise a scheme that turns any FM radio inside your house into a remote speaker with decent audio quality. Larger and better engineered than those cheapo little FM transmitters you might use in a car, the additional power allows the signal to travel through walls and with careful calibration can cover even a relatively large property. Best of all, adding additional drops is just the cost of another radio (instead of an expensive dedicated receiver), and because it's broadcast everything is in perfect sync. If your phone has an FM radio you can even listen to your home transmitter on that!

There are some downsides to this approach, of course. One minor downside is because it's broadcast, your neighbours could tune in (don't play your potentially embarrassing, uh, "home movie" audio soundtracks this way). Another minor downside is that the audio quality is decent but not perfect. The transmitter is in your house, so interference is likely to be less, but things as simple as intermittently energized electrical circuits, bad antenna positioning, etc., can all make reception sometimes maddeningly unpredictable. If you're an uncompromising audiophile, or you need more than two-channel audio, you're just going to have to get a dedicated streaming system.

The big one, though, is that you are now transmitting on a legally regulated audio band without a license. The US Federal Communications Commission has provisions under Part 15 for unlicensed AM/FM transmission which limit your signal to an effective distance of just 200 feet. There are more specific regulations about radiated signal strength, but the rule of thumb I use is that if you can detect a usable signal at your property line you are probably already in violation (and you can bet I took a lot of samples when I was setting this up). The FCC doesn't generally drive around residential neighbourhoods with a radio detector van and no one's going to track down a signal no one but you can hear, but if your signal leaks off your property it only takes one neighbourhood busybody with a scanner and nothing better to do to complain and initiate an investigation. Worse, if you transmit on the same frequency as an actually licensed local station and meaningfully interfere with their signal, and they detect it (and if it's meaningful interference, I guarantee you they will sooner or later), you're in serious trouble. The higher the rated wattage for your transmitter, the greater the risk you run of getting busted, especially if you are in a densely populated area. If you ever get a notice of violation, take it seriously, take your transmitter completely offline immediately, and make sure you tell the FCC in writing you turned it off. Don't turn it back on again until you're sure you're in compliance or you may be looking at a fine of up to $75,000. If you're not in the United States, you'd better know what the law is there too.

So let's assume you're confident you're in (or can be in) compliance with your new transmitter, which you can be easily with some reasonable precautions I'll discuss in a moment. You could just plug the transmitter into a dedicated playback device, and some people do just that, but by connecting the transmitter to a handy computer you can do so many other useful things. So I plugged it into my Sawtooth G4 file server, which lives approximately in the middle of the house in the dedicated home server room:

There it is, the slim black box with the whip antenna coming off the top sandwiched between the FireWire hub (a very, very useful device and much more reliable than multiple FireWire controllers) and the plastic strut the power strip is mounted on. This is the Whole House FM Transmitter 3.0 "WHFT3" which can be powered off USB or batteries (portable!), has mic and line-level inputs (though in this application only line input is connected), includes both rubber duck and whip antennas (a note about this presently) and retails for about $125. Amazon carries it too (I don't get a piece of any sales, I'm just a satisfied customer). It can crank up to around 300 milliwatts, which may not seem like much to the uninitiated, but easily covers the 100 foot range of my house and is less likely to be picked up by nosy listeners than some of the multi-watt Chinese import RF blowtorches they sell on eBay (for a point of comparison, a typical ham mobile radio emits around 5 watts). It also has relatively little leakage, meaning it is unlikely to be a source of detectable RF interference when properly tuned.

By doing it this way, the G4, which is ordinarily just acting as an FTP and AFP server, now plays music from playlists and the audio is broadcast over the FM transmitter. How you decide to do this is where the little bit of work comes in, but I can well imagine just having MacAmp Lite X or Audion running on it and you can change what's playing over Screen Sharing or VNC. In my case, I wrote up a daemon to manage playlists and a command-line client to manipulate it. 10.5+ offers a built-in tool called afplay to play audio files from the command line, or you can use this command line playback tool for 10.2 through 10.4. The radio daemon uses this tool (the G4 server runs Tiger) to play each file in the selected folder in order. I'll leave writing such a thing to the reader since my radio daemon has some dependencies on the way my network is configured, but it's not very complex to devise in general.

Either way works fine, but you also need to make sure that the device has appropriate signal strength and input levels. The WHFT3 allows you to independently adjust how much strength it transmits with a simple control on the side; you can also adjust the relative levels for the mic and line input if you are using both. (There is a sorta secret high-level transmission mode you can enable which I strongly recommend you do not: you will almost certainly be out of FCC compliance if you do. Mine didn't need this.) You should set this only as high as necessary to get good reception where you need it, which brings us to making sure the input level is also correct, as the WHFT3 is somewhat more prone to a phenomenon called over-modulation than some other devices. This occurs when the input level is too high and manifests as distortion or clipping but only when audio is actually playing.

To calibrate my system, I first started with a silent signal. Since the frequency I chose had no receivable FM station in my region of greater Los Angeles (and believe me, finding a clear spot on the FM dial is tough in the Los Angeles area), I knew that I would only hear static on that frequency. I turned on the transmitter with no input using the "default" rubber duck antenna and went around the house with an FM radio with its antenna fully retracted. When I heard static instead of nothing, I knew I was exceeding the transmission range, which gave me an approximate "worst case" distance for inside the house. I then walked around the property line with the FM radio and its antenna fully extended this time for a "within compliance" test. I only picked up static outside the house, but inside I couldn't get enough range in the kitchen even with the transmitter cranked up all the way, so I ended up switching the rubber duck antenna for the included whip antenna. The whip is not the FCC-approved configuration (you are warned), but got me the additional extra range, and I was able to back down the transmitter strength and still be "neighbour proof" at the property line. This is also important for audio quality since if you have the transmitter power all the way up the WHFT3 tends to introduce additional distortion no matter what your input level is.

Next was to figure out the appropriate input level. I blasted Bucko and Champs Australian Christmas music and backed down the system volume on the G4 until there was no distortion for the entire album (insert your own choice of high volume audio here such as Spice Girls or Anthrax), and checked the new level a few times with a couple other albums until I was satisfied that distortion and overmodulation was at a minimum. Interestingly, while you can AppleScript setting the volume in future, what you get from osascript -e 'set ovol to output volume of (get volume settings)' is in different units than what you feed to osascript -e 'set volume X': the first returns a number from 0-100 with 14 unit steps, but the second expects a number from 1-10 in 0.1 unit steps. The volume on my G4 is reported by AppleScript as "56" but I set that on startup in a launchd startup item with a volume value of 4.0 (i.e., 4 times 14 equals 56). Don't ask me why Apple did it this way.
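To make that unit mismatch concrete, the conversion described above (14 reported units per 1.0 `set volume` unit) can be written as a tiny helper; this is just the arithmetic from the paragraph, not anything Apple documents:

```python
def volume_get_to_set(reported):
    """Convert the 0-100 value reported by `get volume settings`
    into the scale `set volume` expects (14 reported units per
    1.0 set-volume unit, per the observation above)."""
    return round(reported / 14.0, 1)

# The G4 reports 56, which maps back to `set volume 4.0`.
```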

There were two things left to do. First was to build up a sufficient library of music to play from the file server, which (you may find this hard to believe) really is just a file server and handles things like backups and staging folders, not a media server. There are many tools like the most excellent X Lossless Decoder utility -- still Tiger and PowerPC compatible! -- which will rip your CDs into any format you like. I decided on MP3 since the audio didn't need to be lossless and they were smaller, but most of the discs I cared about were already ripped in lossless format on the G5, so it was more a matter of transcoding them quickly. The author of XLD makes the AltiVec-accelerated LAME encoder he uses available separately, but this didn't work right on 10.4, so I took his patches against LAME 3.100, tweaked them further, restored G3 and 10.4 compatibility, and generated a three-headed binary that selects for G3, G4 and a special optimized version for G5. You can download LAMEVMX here, or get the source code from Github.
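As a sketch of the batch-transcoding step (the paths and the VBR preset here are assumptions, not what the author actually ran; on a Power Mac you would point `lame_bin` at the LAMEVMX binary):

```python
import subprocess
from pathlib import Path

def transcode_folder(src, dst, lame_bin="lame"):
    """Transcode every AIFF/WAV in `src` to MP3 in `dst` with the LAME CLI.

    `--preset standard` is just a reasonable VBR default; point
    `lame_bin` at the LAMEVMX binary for the AltiVec-accelerated build.
    """
    Path(dst).mkdir(parents=True, exist_ok=True)
    done = []
    for f in sorted(Path(src).glob("*.aiff")) + sorted(Path(src).glob("*.wav")):
        mp3 = Path(dst) / (f.stem + ".mp3")
        subprocess.run([lame_bin, "--preset", "standard", str(f), str(mp3)],
                       check=True)
        done.append(mp3.name)
    return done
```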

On the G5 LAMEVMX just tears through music at around 25x to as much as 30x playback speed, over three times as fast as the non-SIMD version. I stuck the MP3 files on a USB drive and plugged that in the Sawtooth so I didn't have to take up space on its main RAID, and the radio daemon iterates off that.

The second was figuring out some way to use my radios as, well, radios. Yes, you could just tune them to another station and then tune them back, but I was lazy, and when you get an analogue tuner set at that perfect point you really don't want to have to do it again over and over. Moreover, I usually listen to AM radio, not FM. One option is to see if they stream over the Internet, which may even be better quality, though receiving them over the radio eliminates having to have a compatible client and any irregularities with your network. With a little help from an unusual USB device, you can do that too:

This is the Griffin radioSHARK, which is nothing less than a terrestrial radio receiver bolted onto a USB HID. It receives AM and FM and transmits back to the Mac over USB audio or analogue line-level out. How do we hook this up to our Mac radio station? One option is to just connect its audio output directly, but you should have already guessed I'd rather use the digital output over USB. While you can use Griffin's software to tune the radio and play it through (which is even AppleScript-able, at least version 2), it's PowerPC-only and won't run on 10.7+ if you're using an old Intel Mac for this purpose, and I always prefer to do this kind of thing programmatically anyhow.

For the tuner side, enterprising people on the Linux side eventually figured out how to talk to the HID directly and thus tune the radio manually (there are two different protocols for the two versions of the radioSHARK; more on this in a moment). I combined both protocols together and merged it with an earlier but more limited OS X utility, and the result is radioSH, a commandline radio tuner. (You can also set the radioSHARK's fun blue and red LEDs with this tool and use it as a cheapo annunciator device. Read the radioSH page for more on that.) I compiled it for PowerPC and 32-bit Intel, and the binary runs on anything from 10.4 to 10.13 until Apple cuts off 32-bit binary compatibility. The source code is available too.

For USB audio playthru, any USB audio utility will suffice, such as LineIn (free, PowerPC compatible) or SoundSource (not free, not PowerPC compatible), or even QuickTime Player with a New Audio Recording and the radioSHARK's USB audio output as source. Again, I prefer to do this under automatic control, so I wrote a utility using the MTCoreAudio framework to do the playback in the background. (Use this source file and tweak appropriately for your radioSHARK's USB audio endpoint UID.) At this point, getting the G4 radio station to play the radio was as simple as adding code to the radio daemon to tune the radio with radioSH and play the USB audio stream through the main audio output using that background tool when a playlist wasn't active (and to turn off the background streamer when a playlist was running). Fortunately, USB playthru uses very little CPU even on this 450MHz machine.

I mentioned there are two versions of the radioSHARK, white (v1) and black (v2), which have nearly completely different hardware (betrayed by their completely different HID protocols). The black radioSHARK is very uncommon. I've seen some reports that there are v1 white units with v2 black internals, but of the three white radioSHARKs I own, all of them are detected as v1 devices. This makes a difference because while neither unit tunes AM stations particularly well, the v1 seems to have poorer AM reception and more distortion, and the v2 is less prone to carrier hum. To get the AM stations I listen to more reliably with better quality, I managed to track down a black radioSHARK and stuck it in the attic:

To improve AM reception really all you can do is rotate or reposition the receiver and the attic seemed to get these stations best. A 12-foot USB extension cable routes back to the G4 radio station. The radioSHARK is USB-powered, so that's the only connection I had to run.

To receive the radio on the Quad G5 while I'm working, I connected one of the white radioSHARKs (since it's receiving FM, there wasn't much advantage to trying to find another black unit). I tune it on startup with radioSH to the G4 and listen with LineIn. Note that because it's receiving the radio signal over USB there is a tiny delay and the audio is just a hair out of sync with the "live" analogue radios in the house. If you're mostly an Intel Mac house, you can of course do the same thing with the same device in the same way (on my MacBook Air, I use radioSH to tune and play the audio in QuickTime Player).

For a little silliness I added a "call sign" cron job that uses /usr/bin/say to speak a "station ID" every hour on the hour. The system just mixes it over the radio daemon's audio output, so no other code changes were necessary. There you go, your very own automatic G4 radio station in your very own house. Another great use for your trusty old Power Mac!

Oh, one more followup, this time on Because I Got High Sierra. My mother's Mac mini, originally running Mavericks, somehow got upgraded to High Sierra without her realizing it. The immediate effect was to make Microsoft Word 2011 crash on startup (I migrated her to LibreOffice), but the delayed effect was, on the next reboot (for the point update to 10.13.2), this alarming screen:

The system wouldn't boot! On every startup it would complain that "macOS could not be installed on your computer" and "The path /System/Installation/Packages/OSInstall.mpkg appears to be missing or damaged." Clicking Restart just caused the same message to appear.

After some cussing and checking that the drive was okay in the Recovery partition, the solution was to start in Safe Mode, go to the App Store and force another system update. After about 40 minutes of chugging away, the system grudgingly came up after everything was (apparently) refreshed. Although some people with this error message reported that they could copy the OSInstall.mpkg file from some other partition on their drive, I couldn't find such a file even in the Recovery partition or anywhere else. I suspect the difference is that these people encountered this error immediately after "upgrading" to Because I Got High Sierra, while my mother's computer encountered this after a subsequent update. This problem does not appear to be rare. It doesn't seem to have been due to insufficient disk space or a hardware failure and I can't find anything that she did wrong (other than allowing High Sierra to install in the first place). What would she have done if I hadn't been visiting that weekend, I wonder? On top of all the other stupid stuff in High Sierra, why do I continue to waste my time with this idiocy?

Does Apple even give a damn anymore?
