Planet Mozilla - http://planet.mozilla.org/

Eric Shepherd: The Sheppy Report: August 15, 2014

Sat, 16/08/2014 - 06:07

I’m quite satisfied with how well the past week has gone. It’s been incredibly productive despite a few distractions and a great many meetings. Here’s my report on what I’ve been doing, and what I will be doing in the near future.

What I’m up to

I’ve been busy optimizing my own work processes, as well as setting up information so others know what needs to be done as well. I’ve also done a lot of copy-editing and organizational work in content, and have been touching up stuff ranging from the MDN inbox to the Learning Area to doc plans. It’s been a wonderfully productive week, and it feels good to be getting back into the swing of things.

What’s up next

Next week, I intend to dive into WebRTC, and to start putting together sample code so I can begin work on writing guides to working with WebRTC. It’s going to be really exciting!

As usual, of course, I have a number of other, smaller, tasks I want or need to accomplish, too.

What I did this week
  • Moved the main page on DocShell from the top level of MDN to its proper home, and filed a bug on getting it fully documented.
  • Dealt with infrastructure failures at my office: the air conditioning completely failed (working in a swelteringly hot office is not fun), and I discovered standing water in the restroom. The A/C is now fixed; the water problem has not been figured out yet, although the water has evaporated for now.
  • Helped test the new GitHub login support on the MDN staging server, and filed a few bugs regarding quirks I noticed.
  • Reviewed and had nothing but nice things to say about the new welcome email sent out by MDN to new members.
  • Got involved in the discussion about disabling styled pasting in the MDN editor. I’m opposed to this; I would much rather we solve the problem from the user’s end — contributors should learn to be sure they don’t include crufty styles when they paste into MDN. But ideally we can come up with a solution that doesn’t break existing workflows or punish people who aren’t making this mistake.
  • Moved the page Write a new entry in the Glossary to the right place; it had accidentally been given an obsolete URL due to a couple of MDN bugs. Reviewed and copy-edited the content.
  • Filed a bug for a feature suggested by biraj: content from one page on MDN that’s presented inside another page should be reflected in the displayed contributor list. I don’t know how likely this is to be addressed (it certainly won’t happen soon). It’s a big project and there are many unanswered questions.
  • Copy-edited the new Glossary entry for the term “i18n”.
  • Added the word “Glossary” to the list of tags that MDN offers auto-completion for.
  • Followed up on a bug asking me to write some copy for the GitHub login experience.
  • Did some tidying up of the MDN style guide, including moving Chris Mills’ excellent new section on our policies on gender-neutral terminology to be among the language and grammar topics rather than in the markup and wiki usage topics area.
  • Minor changes to the Learning Area page on CSS. This page needs a lot of work still but I saw low-hanging fruit.
  • Converted the Learning Area into a zone. Its landing page needs finishing, but this is a nice step.
  • Finished an extensive review and copy-edit of the Learning Area page Write an article to help learn about the web.
  • Removed a page that was actually just a set of Firefox problem reports, and emailed the author information about how to properly report issues.
  • Found an MDN “Linking Guide” lurking in a dead part of the site, and moved it into the MDN user guide, with major updates and copy-edits.
  • Updated the MDN user guide’s landing page to use the LandingPageListSubpages macro, so it looks a little better.
  • Adapted Luke’s screenshot/diagram about how to enable a page subscription on MDN into a new page in the MDN how-to guide.
  • Tweaks to the Inbox page in preparation for expanding its visibility.
  • Integrated the first round of feedback into the WebGL documentation plan.
  • Updated my Geeksphone Peak to Firefox OS 2.0 nightly for use in upcoming WebRTC sample code tests.
  • Filed a bug about iCloud.com saying “Android not supported” on Firefox OS 2.0’s browser.
  • Pinged developers about reviewing the WebGL documentation plan.
  • Created several new basic (that is, mostly empty) MDN development project plan pages:
  • Copy-edited the Learning Area’s How to contribute to the Learning Area article.
  • Filed a documentation request bug for documenting the NavigatorFeatures (hasFeature/getFeature) API. This is a privileged API, so documentation-wise it’s low priority.
  • Added notes to a couple of pages in the MDN contributor guide about being careful when pasting, to avoid pasting unwanted styles and classes into MDN.
  • Created the DocPlanHelpUs macro, which inserts text inviting participation in a project and describing how to get started. Added it to the appropriate place in all extant doc plans.
  • Took some notes, sent some emails, and added links to the project planning page for the on-site messaging project.
Meetings attended this week

Monday
  • #mdndev bug triage
  • MDN development planning
Tuesday
  •  1:1 meeting with Jean-Yves
Wednesday
  • MDN Community meeting
  • 1:1 meeting with Ali
Friday

As you see, it was an intensely busy week! I’ve started moving to OmniFocus to track what needs to be done and by when, and I think it’s going to help, but we’ll see how it plays out over time. I have a history of not doing well at keeping up with my organizational systems, as you may have noticed if you’ve read my past posts about my various attempts to get organized.

At any rate, it’s been a good week, and I can’t wait to get more done!

 


Hannah Kane: 1/3 of a year

Sat, 16/08/2014 - 02:33

Four Months (just remembered this blog is for (H)an(n)a(h)grams, so: Fonts Humor)

I’ve been here for four months. I think the famous Mozilla firehose is finally starting to slow down. A main difference between me now and me three months ago is that now, on most days, I actually know how to do the things on my ToDo list. SuperBonus: I can usually follow what’s happening in meetings now!

Significantly, I’m starting to add things to my ToDo list that are more than just the bare minimum of program maintenance. I’m starting to understand where I might be able to innovate and add value.

About a month after I started, I inherited the job of maintaining @Mozilla social channels, and about a month after that, I inherited the job of managing the relationship with our Maker Party PR company. Together these things took up a good chunk of my time over the past two months, largely because they’re outside my area of expertise (I helped launch a social media program at my last job, but that was back when Twitter was brand spankin’ new, and things have changed tremendously since then).

While I think both of these tasks ended up providing me with a great platform for learning about the organization (I have to know what’s going on so I can tweet about it!), I am looking forward to focusing more intently on the aspects of the program manager job I feel I’ve been neglecting.

I Feel Good (I Do Elf Ego)

Some of the things I feel good about from the past few months:

  • I think the Maker Party engagement updates and analyses (some of which I’ve posted on this blog) have been helpful in sparking some good conversation at our daily “Peace Room” meetings. Also, charts make me seem smart.
  • Our Salesforce for Partners implementation is a tiny bit behind schedule, but I feel good about where we are in the process. I was glad to be able to take this partially off of others’ plates and share the burden, because no one should have to face Salesforce alone.
  • Working with Dave, Erika, Mavis, and Sabrina on the Advocacy site has been a pleasure, and I think the end product is going to be great.
  • Yesterday’s Tweetchat was pretty fun.

Can Do Better (rent taco bed)

Some things I want to work on in the months ahead:

  • I want to operationalize what it means to be a Clockmaster, and refine the suite of tools we use to manage our work. Now that we have Sprinter, I feel a lot better about Bugzilla (which, I admit, I headdesked about for the first couple months I was here). But I don’t think it fully meets our needs, so we’ll need to supplement with other tools and processes.
  • I want to help reduce the pain in our grant reporting process. Gettin’ paid shouldn’t hurt so bad.
  • I want to crack the nut of social media. I was inspired by a recent conversation with Michaela Smiley, and I believe we can do a much better job of engaging and serving our community, while also better expressing the Mozilla brand and growing the Webmaker community. Hashtag win.
  • I want to make sure Maker Party 2015 is even more full of awesome by capturing and acting upon our learnings from this year. In general, I’d like to help create a culture of reflection and continuous improvement. Not to get too existential, but isn’t this what life is about? </insight into hannah’s worldview>
  • I want to improve our systems for distributing knowledge across the organization. I’ve seen really good examples of this (Andrea’s email-fu workshop, the Fundraising workshops that happened a few months ago, Geoffrey’s trendlines workshop from this morning, and probably many more). I don’t think Encyclopedia BestPractica is working as a tool for knowledge sharing, so I’d like to come up with something that meets people where they are (rather than making them come find it).
  • I want to keep improving our cross-team collaboration. Even in my short time here, I’ve already seen great strides in this, but there’s more to do. This project brief template is one of my first direct efforts toward that, in addition to just building relationships with many of my super rad co-workers.

Finally, I just want to send a big ol’ shout out to said co-workers for making my first third of a year so enjoyable.

Onward!



Christian Heilmann: Creating a set of icons in various sizes in the browser

Fri, 15/08/2014 - 22:37

Hooray, I did do some coding again for a change! One of the issues I had with submitting apps to the Firefox Marketplace is that the manifest validator always complains about missing icon sizes. That’s why I thought it’d be sweet to have an in-browser tool to generate all of the icons one needs from an image. And here it is:

icon generator in action

You can see a demo of it working on YouTube:

That’s all there is to it – it uses Canvas and the FileReader API to convert the images and create the files. JSZip, a neato library for creating zips, was also used.

For now your original image needs to be square and 512×512 pixels or the generator will just paste the first 512×512 pixels in. Images are automatically resized to 512 pixels and centered on a transparent background. A later version might allow you to move the image around. Let’s see when I get the time.
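If you’re curious how such a tool might work, here’s a rough sketch of the approach (not the actual generator’s code; the size list and function names are my own assumptions): read the chosen file with FileReader, draw it onto a canvas once per target size, and pack the resulting PNGs into a zip with JSZip.

var sizes = [16, 32, 48, 60, 64, 128, 256, 512];

function generateIcons(file, callback) {
        var reader = new FileReader();
        reader.onload = function(e) {
                var img = new Image();
                img.onload = function() {
                        var zip = new JSZip();
                        sizes.forEach(function(size) {
                                var canvas = document.createElement('canvas');
                                canvas.width = canvas.height = size;
                                // scale the (assumed square) source image down to the target size
                                canvas.getContext('2d').drawImage(img, 0, 0, size, size);
                                // keep only the base64 payload of the data URL
                                var data = canvas.toDataURL('image/png').split(',')[1];
                                zip.file('icon-' + size + '.png', data, { base64: true });
                        });
                        callback(zip.generate({ type: 'blob' }));
                };
                img.src = e.target.result;
        };
        reader.readAsDataURL(file);
}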


Soledad Penades: Audio for the masses

Fri, 15/08/2014 - 19:51

The video above is from LXJS – the Lisbon JavaScript conference, which happened more than a month ago. I gave this talk again this past week at VanJS, so I decided it was time for that belated write-up.

If you want to follow along, or play with the examples, the slides are online and you can also check out the code for the slides.

As I’ve given this talk several times, I keep changing bits of the content depending on what the audience seems most interested in, and I also sometimes improvise things that I don’t remember when writing the final write-up, so if you were at any of the talks and see that something’s missing or different, now you know why! I’ve also added a section at the end with questions I get asked frequently; I hope that’s useful for you too.

I work at Mozilla

red panda

I work at Mozilla (the above is a red panda, which we love), but that’s not what I want to talk about today. I want to talk about music instead.

I ♥︎ music

ukulele and things

I’ve been interested in music since forever, but I have no formal training at all–it’s all self-taught. For example, last year I was walking in Greenwich Park (note for the Americans: Greenwich is where time begins) one nice summer afternoon, and I got this idea that I should totally learn to play the ukulele. As soon as I got home I went to buy one online, and the store had an offer where you’d get free shipping if you spent at least £25… so I added more instruments to the order: the tambourine! the shaker! the harmonica! And all that even though I didn’t know how to play any of them, because I thought: I can learn with videos or tutorials!

But it wasn’t always this way…

Learning from old books

old books

At the beginning, my only source of musical information was old books. I would go to my grandma’s and find books from my mum or my father, and I’d try to read them even if I didn’t fully understand it all. Or I would maybe go to the local library and look at books on music, but it was really hard to learn how to play musical instruments this way because those were mostly books on the history of music, and besides that, I didn’t have any musical instrument to play with. So it was all really frustrating.

Learning playground

casio pt-100

Things got a little bit better when I got a CASIO keyboard as a gift. I was finally able to play sounds! I really enjoyed playing with it and getting lost in the melodies and the chords and the different atmospheres I could create. And when I say I enjoyed it, I mean it literally. I don’t think anyone from my family was really enjoying it as I didn’t have any sense of rhythm whatsoever and I would just hang on notes or chords that I particularly liked for as long as I felt like hanging, which was probably driving them all nuts.

At some point I was given a booklet with the scores of popular songs, but even then it was hard to play anything that resembled the original songs, because I didn’t know how to interpret note lengths–again, I just stayed on interesting notes for as long as I wanted. If there had been Internet back then, I could have listened to what the song was supposed to sound like, but there wasn’t, so I didn’t have any reference that would let me understand where I was going wrong.

Computers + trackers

impulse tracker

Everything really started to accelerate when I got access to one of those “multimedia” computers and tracker software. For those who don’t know, trackers are a type of software that allows you to sequence music and store it together with the sampled audio data, so later on it is relatively easy to reproduce the song and make it sound like the way the author intended, unlike what happens with MIDI files, which mostly just contain the score of the song, not the actual sounds contained in it.

Despite there being no Internet (or it being accessible only to a few people in academia and big cities in Spain), there were lots of distribution networks that got these files copied between enthusiasts. There were people who loved to trade these songs by snail mail (sending actual floppy disks in the post), others used BBSs, and finally there was a monthly tracking contest in a computer magazine that I used to buy–they would put all the participating songs into a directory on their CD of goodies, and this is how I got into the whole tracker scene.

A great thing about trackers was that you could see all the notes and effects used and also edit them, so effectively they were open source music, way before that term even existed. We all could learn from the ways of the masters, and that’s how I got a lot better.

The most hilarious part was how people ‘hacked’ the names of the samples in the trackers so that together they could form a bigger message, and that way a strange communication channel with other people in the contest was created, and everyone started “sending messages” to each other using the sample names. Of course, as in any sufficiently popular channel, there were flamewars! People would fiercely fight over issues such as which style of music or which tracker software was better. Looking back, it’s both hilarious and amazing that all this happened inside a directory on a CD.

Music communities

traxinspace

A while after, when I finally got access to proper Internet, I learnt about online music communities like Traxinspace. Suddenly there was this new universe of music and trackers from all over the world, and we could interact with each other–it wasn’t just speaking to people from Spain! Traxinspace had this feature where people could be artist of the day, or of the month, in addition to other rankings. If you got to the top of these or got to be a featured artist, it was nothing short of winning a Grammy or going on Oprah: it was a huge thing in these circles! The competition to be the best was fierce.

Demoscene

js1k minecraft

More or less at the same time I got into the demoscene as well. For those who don’t know, the demoscene is mostly about making computers do impressive stuff. For example, size-limited coding: make a tiny bit of code do a lot of things. A modern example of this is the JS1K competition, where authors create something that runs in the browser using fewer than 1024 characters and does some impressive thing. The screenshot above is one of the entries from the latest edition, “here be dragons”, rendering a Minecraft-like world in the browser using WebGL.

Size limited sound generation

supersole @ breakpoint

One of the areas that piqued my coding curiosity in the demoscene was sound synthesis, where you could write software that generated lots of audio using a small amount of code. At some point I started coding “sorollet”, my own synthesiser, in C and C++. One of the first incarnations was 410 KB of C++ code that, when compiled, generated a 13 KB executable file, so that’s a good compression ratio already… but running it would generate 2:25 minutes of music (with only 13 KB of binaries!), which is the equivalent of 25.5 MB of stereo WAV data at 44.1 kHz, i.e. CD-quality audio. All that with only 13 KB of code!
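(For the curious, that figure checks out if we assume 16-bit samples: 2:25 is 145 seconds, and 145 s × 44,100 samples/s × 2 channels × 2 bytes per sample comes to roughly 25.6 MB of raw WAV data.)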

Web Audio

web audio modular routing

Even though I had great fun building my synthesiser in C++, that path wasn’t without issues. I was mostly a web developer, so coding in C meant lots of dealing with memory allocation and management instead of having fun with pure audio code. I jumped at Web Audio as soon as I could because I was quite experienced with JavaScript, and it seemed so easy in comparison with C!

Sorollet.js

sorollet

The first thing I did was just porting my C++ synthesiser to JavaScript, using one of the provided Web Audio nodes that allowed me to generate audio on the fly with JavaScript (the ScriptProcessorNode, formerly JavaScriptNode).

I was really happy to have my code running in the browser, in realtime! But… I quickly realised that was not what the web is about. The web is not about telling someone to visit a page, wait until some code loads, and then spend their next minutes listening to some music Sole composed. The web is about interacting and connecting APIs together, and I was failing quite miserably at that.

Sorollet.js UI

sorollet ui

I started building a UI for my synthesiser, so people could change the parameters and experiment with the different output they would get, and play some notes using their keyboard or clicking on the keys on the screen. They would also get a visual representation of what was being played, thanks to an oscilloscope of sorts that I would draw using the Canvas API.

But the best of all was that each time they changed any parameter for the synthesiser, the URL hash would update automatically, and if they then copied and pasted that URL and sent it to a friend, their friend could get a synthesiser “copy” with those settings applied. Likewise, that person could make more changes in the settings and send the new URL to another friend, which is way more “web-like” than what I had built initially.
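The mechanism behind that is simple; a minimal sketch of the idea (not Sorollet.js’s actual code, just the general pattern) could look like this:

function saveSettings(settings) {
        // serialise the synth parameters into the URL hash
        location.hash = encodeURIComponent(JSON.stringify(settings));
}

function loadSettings() {
        // restore parameters from a shared URL, if the hash is present
        if (!location.hash) {
                return null;
        }
        return JSON.parse(decodeURIComponent(location.hash.slice(1)));
}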

Web Audio === easy (?)

modular

Now I was really happy about the output and super excited because Web Audio was so easy to use! After all it’s all about connecting modules together!

OK… easy? I know not everyone finds it easy or has even a tiny bit of knowledge about it, so I built some examples that progressively introduce its features and explain how to combine them with other Web APIs.

I also built a web component to help me with these demonstrations–so I wouldn’t be purely livecoding, but I would still be able to run things step by step instead of running it all in just one go. If I were demonstrating this in front of you in a talk, I would know how to operate it and you wouldn’t need to do anything, but that’s not the case here, so these are the instructions for using the demos:

  • Command + E executes either the selected piece of code, or if nothing is selected, the entire code in the editor
  • You can toggle showing the code or not
  • You can also run the whole thing pressing the run button
  • Some examples have the autostart attribute so you don’t need to press anything in order to get things going

For more details have a look at the source code of the component. I’m working on making it an independent component, but I still haven’t quite figured out how to do it in a way that doesn’t involve using bower, so stay tuned for more news if you’re interested in bower-free web components.

That said, let’s move on to the examples!

Oscillator

oscillator

Oscillators are one of the basic units for generating sound in Web Audio. But before you can have an oscillator instance, you have to create an Audio Context. If you’re familiar with Canvas 2D or WebGL contexts, audio contexts are very similar: once you have one, it gives you access to methods and constants to generate stuff within that context. It’s where everything happens, but it is also akin to a painter’s toolbox, since it provides you with the tools you will need to deal with audio.

Here’s how you create the context:

var audioContext = new AudioContext();

and once you have it… well, there’s nothing happening yet! We create an oscillator with this:

var oscillator = audioContext.createOscillator();

nothing’s playing yet… and in fact the oscillator is not even connected anywhere, it’s just floating in the “web audio context nothingness”. Let’s connect it before we start using it:

oscillator.connect(audioContext.destination);

audioContext.destination represents the final output for the audio context, or in other words: your computer’s sound card, and ultimately, the speakers or headphones–whatever you use to listen to audio!

We are now ready to generate some sound using our newly created oscillator:

oscillator.start();

We can also change the frequency the oscillator is playing at. By default it starts at 440.0 Hz, which is the standard A-4 note. Let’s make it play A-3, i.e. 220 Hz:

oscillator.frequency.value = 220;

That change is immediate. But we could also schedule the change to happen in two seconds from now:

oscillator.frequency.setValueAtTime(440, audioContext.currentTime + 2);

or even smoothly ramp to that value for two seconds:

oscillator.frequency.linearRampToValueAtTime(220, audioContext.currentTime + 2);

And that’s how we can create basic sounds and manipulate them with quite accurate timing with relative ease.

Another great feature of Web Audio is, as mentioned, its modularity. You can connect the output of one oscillator to the parameters of another oscillator, so you make the value of that parameter oscillate, and build more complex sounds. But what is a parameter? It is any value you can change in nodes. For example, the frequency is a parameter in OscillatorNodes (they are technically known as AudioParams).

Suppose we create one oscillator which we’ll use to play sounds, as we did before. Now we create another oscillator but give it a very slow frequency value, 10 Hz. That’s why we call it an LFO: Low Frequency Oscillator.

var lfo = audioContext.createOscillator();
lfo.frequency.value = 10;

Now we create a Gain Node, which is another of the nodes that Web Audio provides to us. The purpose of these nodes is basically to multiply their input value by the value of their gain parameter, so you can use it, for example, to reduce loudness (with gain values less than 1.0) or to amplify very quiet sounds (with gain values higher than 1.0):

var lfoGain = audioContext.createGain();
lfoGain.gain.value = 100;

So if we connect the output of the LFO oscillator (which changes from -1 to 1) to the input of the gain node (which is set to multiply everything by 100), we’ll get values from -100 to 100:

lfo.connect(lfoGain);

If we connect this to the frequency parameter of the initial oscillator, the LFO’s output will be added to its frequency value–if the frequency is 220, it will start oscillating between 120 and 320 (220 – 100, 220 + 100), creating a funny, spooky kind of sound:

lfoGain.connect(oscillator.frequency);
lfo.start();

This is just a small sample of what the Web Audio API can do, but it’s still just Web Audio, and we agreed before that the greatness of the Web lies in connecting multiple APIs together. So let’s look at an example that does more things at the same time:

Drag and Play

drag and play

We want to load a sample to play in our example, and we want to be able to load it in two different ways:

  1. dragging and dropping it from our file explorer to the browser window–we’ll use the Drag And Drop API, or…
  2. selecting a file using a file input (which makes more sense on touch devices, where there is generally no way to drag items across currently running apps)–we’ll use the File API to read the contents of the file client side, instead of sending it to a server for further processing
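However we got hold of the file, we read it client side and hand it over to Web Audio for decoding; a rough sketch (the variable names are assumptions, not the demo’s exact code):

function loadSample(file) {
        var reader = new FileReader();
        reader.onload = function(e) {
                // e.target.result is an ArrayBuffer with the raw file contents
                audioContext.decodeAudioData(e.target.result, function(decoded) {
                        buffer = decoded; // used below when creating the BufferSource
                });
        };
        reader.readAsArrayBuffer(file);
}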

Once we’ve read the sample data and decoded it into a buffer, we’ll create a node of type BufferSource and set its buffer to the data we just decoded:

bufferSource = audioContext.createBufferSource();
bufferSource.connect(finalGain);
bufferSource.buffer = buffer;

We also want it to loop!

bufferSource.loop = true;

And then starting it is similar to the way we start oscillators:

bufferSource.start(0);

Another thing we want to do is to display a representation of the loaded wave. We have a canvas we’ll use for this, and the drawSample function that takes values from -1 to 1–exactly the same values we have in the buffer! So it’s just a matter of running the following:

drawSample(waveCanvas, buffer.getChannelData(0));

Note: getChannelData returns the first channel’s data. For monophonic sounds, the buffer will only have one channel, but stereo and 3D sounds will have more than one. I’m keeping it safe and using the first one, which for stereo corresponds to the left channel. It’s not totally accurate, as we might be discarding too much data (if the signal is very different between both channels), but for demonstration purposes it should be more than enough.
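If that ever mattered, one could mix both channels down to mono before drawing; a quick sketch of how that might look (not part of the original demo):

function mixDown(buffer) {
        var left = buffer.getChannelData(0);
        var right = buffer.numberOfChannels > 1 ? buffer.getChannelData(1) : left;
        var mono = new Float32Array(left.length);
        for (var i = 0; i < left.length; i++) {
                // average the two channels into one
                mono[i] = (left[i] + right[i]) / 2;
        }
        return mono;
}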

We also want to draw the wave that is being played on a canvas. To “hook” into the BufferSource output and get some already preprocessed data that we can then use on the canvas, we’ll use an instance of AnalyserNode:

var analyser = audioContext.createAnalyser();

This analyser is connected between the output of the bufferSource and the audio context’s destination, so that it can “inspect” what is going through:

bufferSource.connect(finalGain);
finalGain.connect(analyser);
analyser.connect(audioContext.destination);

Note: due to the way Web Audio is architected, bufferSources are meant to be disposed of when you’re done playing them–i.e., once you run their stop method, they’re over and calling start again has no effect; you have to create another BufferSource and assign it the buffer and all parameters and connections. And in this particular example, each time you load a sample you need to create a new BufferSource too.

But we do not want to be reconnecting the buffer source to the analyser every time, so we instead create a “finalGain” node that we permanently connect to the analyser, and we’ll connect the bufferSources to the finalGain node instead, and let Web Audio clean up the disposed nodes when it deems appropriate (via JavaScript’s Garbage Collector mechanism).
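Putting those two notes together, the play logic ends up looking roughly like this (a sketch with assumed names, following the pattern just described):

function play() {
        if (bufferSource) {
                bufferSource.stop(0); // disposed sources can't be restarted
        }
        // create a fresh BufferSource each time and hook it into the permanent graph
        bufferSource = audioContext.createBufferSource();
        bufferSource.buffer = buffer;
        bufferSource.loop = true;
        bufferSource.connect(finalGain); // finalGain stays connected to the analyser
        bufferSource.start(0);
}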

Back to the analyser node: we will create an array of unsigned integers to store the analysis data. We will also make sure it is big enough that it can hold all the values that the analyser will return:

analyser.fftSize = 2048;
analyserData = new Uint8Array(analyser.frequencyBinCount);

Each time we want to draw the wave, we’ll ask the analyser to have a look and return the results of its analysis into the analyserData array:

analyser.getByteTimeDomainData(analyserData);

These values are bytes–which means they go from 0 to 255. But, as we mentioned, our drawing function drawSample requires values from -1 to 1, so we just convert them and put them into a Float32 array we initialised earlier on:

for(var i = 0; i < analyserData.length; i++) {
        osciData[i] = analyserData[i] / 128 - 1;
}

And we’re finally ready to draw the wave:

drawSample(osciCanvas, osciData);

Just in case you wondered, we’re using requestAnimationFrame to drive the animation.
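Roughly, the drawing loop ties the previous snippets together like this (a sketch using the names from above):

function animate() {
        requestAnimationFrame(animate);
        analyser.getByteTimeDomainData(analyserData);
        for (var i = 0; i < analyserData.length; i++) {
                osciData[i] = analyserData[i] / 128 - 1;
        }
        drawSample(osciCanvas, osciData);
}

animate();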

So here’s a moderately complex example that does a bunch of things using different Web APIs… and it’s less than two hundred lines of code. This shows that the web platform is really powerful! Building the same thing using native code would involve lots more code, plus longer development time and lots of debugging. Plus it would work on only one platform, whereas this little but powerful example works on all of them, no porting required.

Can we go even wilder? Of course we can, let’s involve WebGL so we can smoothly draw lots of elements at the same time, and let’s also use live microphone input instead of a pre-recorded sample, and we’ll display a visualisation of what’s happening.

Realtime visualisations

realtime visualisations

We’ll use Three.js for dealing with the WebGL side of things–i.e. all the rendering with nice antialias, shadow, fog, etc.

We’ll use the getUserMedia part of WebRTC. This allows us to access both the webcam and microphone input, but for the purposes of this demo we just want to “hear” things, so we’ll request only audio.

We will be creating an analyser node again, but instead of connecting a bufferSource to it as we did in the previous example, we’ll connect the MediaStreamSource we just created using the stream we got from getUserMedia. MediaStreamSource nodes allow us to take a MediaStream (the type that getUserMedia returns) and send it to other nodes, so we can integrate external sources of sound into our web audio graph–even sound from another peer if we’re using WebRTC for a call!

navigator.getUserMedia(
        { audio: true },
        function yay(stream) {
                source = audioContext.createMediaStreamSource(stream);
                source.connect(analyser);
        },
        function nope(err) {
                console.error("oh noes", err);
        }
);

Once the microphone is allowed access, we’ll start getting interesting data out from the analyser, and the bars won’t be boringly static but move in response to the changes in the input levels. Try clapping!

So we have a really smooth example that draws a lot of detail on the screen in response to live microphone input, and not only is it multiplatform, but again it’s less than two hundred lines of code. Doing this for native platforms would be really, really long and tedious to build.

Browser makers have already put a lot of work into unifying these kinds of multimedia interfaces (sample decoding, live input streams, accelerated graphics) so you can take advantage of them and build awesome stuff, instead of fighting with compilers and platform-specific frameworks just to access these capabilities.

The web platform is really incredibly powerful nowadays, but…

We shouldn’t stop here

There are still over two billion people who don’t have access to the Internet.

That’s right. 2,000,000,000+ people. It’s about two and a half Europes or, in terms of Canada (where I gave this talk too), more than 50 times the population of Canada.

At Mozilla we believe the Internet must be open and accessible, so we are working on fixing this too. We partnered with some manufacturers to make a phone that would run Firefox OS and also be affordable: “the $25 phone”.

Tarako

This phone is in the same price range as feature phones, but it runs Firefox OS, which can be upgraded periodically and also has a much lower barrier to entry for businesses and creators than iOS or Android, since apps are written in JavaScript. They can also run on other devices and operating systems–not only Firefox OS.

We’re also working on new hardware APIs for accessing all these new sensors and features using pure JavaScript. The work that goes into this benefits all platforms, as these APIs are standardised and more vendors implement them in their browsers—so we get closer and closer to the Write Once, Run Anywhere “dream”.

We have a lot of incredible powers on the web, and as Uncle Ben would say, with great power comes great responsibility. It’s great that we can do so much, but we also should be thinking about doing good things–it’s our responsibility!

So I sat down and tried to think of ways in which I could use my new powers for good. What about…

Simulating instruments

Suppose you’re a kid just like I once was and want to learn music, but have no instrument. But you have a phone.

There is this huge list of instruments on Wikipedia. What if we built some sort of simulation software that could recreate different instruments using just a bunch of parameters and no sample data–just as I did with my sound synthesis experiments? Once you got the application code, getting “new instruments” would just be a matter of downloading parameter data, which would require very little bandwidth. That kid with no instruments but a phone could now have a bunch of different virtual instruments!

Also, since this would be running on phones with lots of sensors, we could make the most of them and, for example, use touch and pressure where available, so we could build an engaging, interactive simulation.

What if, instead of keeping our sets of parameters to ourselves, we shared them by uploading them to a patch database where other people could download patches too? We would be building an amazing resource–especially if we enable people to remix existing patches. And another great outcome would be that, by exposing your creation to people from a different background than yours, you’ll get unusual contributions, and that’s always great and enriching.

Unconventional composers

Once you’ve built an instrument simulator, what is stopping you from building some sort of composer so that you can write down your own songs? But we should be very careful and avoid building a conventional composer, in the manner of staff or drum machine based composers.

Why?

Because these composers are not suited to non-Western music. For example some music from Eastern Europe has lots of tempo and key changes, and all these things are lost when “translating” to a staff based music transcription.

Instead, I’d propose we start by recording everything that happens while playing a simulated instrument, and make the data available to “data scientists”–preferably local data scientists–so they can experiment with the recordings and devise some sort of visualisation/manipulation interface that works well for that kind of music. Maybe they will come up with local symbols that seem very strange to us but work really well in their settings.

And again, since we’re not storing the sample data but only the events, transmitting these songs would take way less bandwidth, time and money than sending an MP3 file with a 2.5G connection.

People might start composing their own songs using their own very local traditions and maybe share them afterwards, and what might happen is that we end up with a world wide library of local songs—a true treasure trove for mankind that anyone could access.

Too conventional

But even though they sound fun, these ideas are still quite conventional. I had to think of something that went further and was more original. What if I took the Web FM API (available in Firefox OS) and mixed it with the Web Audio API? What could happen?

Web FM API
+ Web Audio
---------------
  ???

I think we could maybe have “over the FM” data transmission. Granted, the bandwidth wouldn’t be especially amazing: only 0.65 Mb a day, but that is still something like 4000 SMS messages. And because it is broadcast, it won’t slow down if many users try to get the data at the same time.

There are some existing precedents, mostly focused on providing updates on things such as national news and weather or complementary information to the currently broadcasted programme, but what if communities used this to deliver local updates? For example, the status of water tanks, the area weather forecast—things that are really important to the people close to that FM station.

And although these ideas might sound clever and cool…

…these are just some examples that my ignorant Western mind came up with…

… but I’ve never set foot outside my bubble of privilege, and thus I can’t predict what is really required in a place where an EDGE connection is the best you can get (if you can get anything at all). And while I humbly recognise that I might be severely wrong about this, I also believe that 3D games, or the common reasons why “Web Audio is awesome” in the Western world, are not what solves issues for people in those places.

However, that doesn’t mean that we should just give up on Web Audio and feel sad and miserable that we can’t help people because we don’t even know about their issues. Remember that we have great powers… and a great responsibility—a responsibility to teach and make this all as accessible and easy to use as we can. And to keep experimenting, devising new ideas, and creating more code over which lots of Web Audio stuff can be built in the future.

Let’s build stuff and let’s share it. Let’s speak about this and make it all better, for everyone, so they can build their own solutions to their problems—which they understand better than we do!

And let’s do it together! :-)

mozfest

Frequently asked questions

Each time I’ve given this talk I’ve got lots of interesting questions, so I figured they should accompany this post too, because some of them come up really often! Here we go:

Where do I start learning about Web Audio? Which library should I use?

You could start by having a look at the Web Audio API book by Boris Smus–maybe even buy it if you find it useful!

Once you’re done with the book, the Web Audio API specification is also quite understandable, and it’s hosted on GitHub, so if you find that something is not obvious you should file a new issue to get it clarified.

The Web Audio API is simple enough that you don’t need any library to get started. THANKFULLY.

How would you go about writing those instrument simulators?

There are many ways to simulate instruments. You can write new simulators in JavaScript, or we could try to compile the core emulation code from existing C/C++ libraries to JS (asm.js) using tools such as Emscripten—we don’t need to spend our time rewriting things that already work well.

Of course you also have to take into account what can actually run on a phone. It’s not a full-blown computer, so you have to be mindful of restrictions and adjust your code so it degrades nicely on less powerful platforms.

Have you written any sort of online composer?

Yes, but not the sort that I am advocating for. I built a drum machine demo that is included in Sorollet.js–online here. It has many issues, especially timing issues! But it was an early attempt, so I won’t go too heavy on the self-torture here. Still, it has nice things, such as the ability to store the whole song in the URL so you can share it. Sadly the URL is a bit too long for some places, so you can’t actually share it everywhere, ha!

I started building something else later but it is not public (mostly because I broke something and it doesn’t work right now, but also because there’s nothing to see yet).

Can I actually connect Web FM with Web Audio today?

Turns out you can’t–so far the Web FM API speaks directly to the hardware and doesn’t go through JS, but there have been discussions hinting at being able to get a data URI for the media stream instead of just connecting to the speakers.

I’ve asked at the Web API list for clarifications. Let’s see what happens :-)

What about MIDI in the browser?

There is a Web MIDI API but it is not implemented in Firefox. I wrote about this a while ago, but in short, if you think you have what it takes, you’re more than welcome to help us implement it!

In the meantime you can “hack” temporary solutions such as running node in your computer to read the MIDI or OSC data and then forward it to the browser using something such as Socket.IO, which is what I did for my JSConf.EU 2013 project.

Can you do speech recognition with Firefox OS and Web audio?

Not yet… but here’s a page detailing a roadmap of sorts, and here’s a tracking bug with all the pending subtasks.

When are Firefox OS phones going to be sold in (put your country here)?

I can’t answer that with certainty because it depends on the operators in each country (for operator “branded” phones) and on selling restrictions (for developer phones sold directly–some stores won’t sell to some countries). Your best bet is either search with your favourite search engine or maybe contact your local mozilla community/reps to see if they are more aware of the current status than I can possibly be.

Otherwise I will refer you to the Flame developer phone page.

Also, no, I don’t have phones to give you away, sorry.

What sort of features will those phones have? Conductive vs capacitive screens?

Again, this depends on which sort of markets the operators are targeting and which sort of phones they’re willing to work with. I’m sorry I can’t answer :-)


