My last post was a delirious declaration that I had finally made something with Web Audio: a drum sequencer in the browser. Now I want to explain how I did it. First, what is Web Audio, and why do we need it?
The Web Audio API is a versatile system for controlling audio on the web. Everything happens inside an audio context, which you declare as follows:
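Something along these lines (a minimal sketch; the helper just wraps the webkit-prefixed constructor that older browsers expose):

```javascript
// Everything in Web Audio happens inside an AudioContext.
// Older browsers only expose the webkit-prefixed constructor.
function createAudioContext() {
  var Ctor = window.AudioContext || window.webkitAudioContext;
  return new Ctor();
}
```

Then `var context = createAudioContext();` gives you the context that everything else hangs off of.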
You then create sources, for example by loading a song or sample via an AJAX request (note that you can't use jQuery here, since it doesn't support responses of type 'arraybuffer'):
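A sketch of that loading code (`loadSample` is a hypothetical helper name; the `XMLHttpRequest` with `responseType = 'arraybuffer'` plus `decodeAudioData` is the standard pattern):

```javascript
// Fetch a sample as an ArrayBuffer and decode it into an AudioBuffer.
// This is the part jQuery can't do: it won't hand back an 'arraybuffer' response.
function loadSample(context, url, callback) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    context.decodeAudioData(
      request.response,
      function (buffer) {
        callback(buffer); // buffer is an AudioBuffer, ready to play
      },
      function (err) {
        console.error('decodeAudioData failed for ' + url, err);
      }
    );
  };
  request.send();
}
```

You'd call it like `loadSample(context, 'samples/kick.wav', function (buffer) { /* store it */ });`.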
Once you have sources, you route sounds through a series of audio nodes to add effects such as reverb, filtering, and compression. The system is modular, so it's simple to rewire the routes to add or remove effects, much like patching a physical modular synthesizer. The last node in the audio routing graph is the destination node, which is responsible for actually playing the sound. Here's a routing example:
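A sketch of one possible chain, routing a buffer source through a gain node (the gain node here is just an example effect) and on to the destination:

```javascript
// Route: source -> gain -> destination.
function playBuffer(context, buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  var gain = context.createGain();     // example effect node
  source.connect(gain);                // source feeds into the gain node
  gain.connect(context.destination);   // the destination actually plays the sound
  source.start(0);                     // play immediately
}
```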
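For contrast, imagine the naive approach: scheduling every note of a loop up front, something like this hypothetical sketch:

```javascript
// Naive approach: schedule an entire bar of 16th notes in one go.
function scheduleBar(context, buffer, startTime, tempo) {
  var secondsPerBeat = 60.0 / tempo;
  for (var i = 0; i < 16; i++) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(startTime + i * 0.25 * secondsPerBeat); // each step is a 16th note
  }
}
```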
Now suppose you've scheduled out a whole loop of notes ahead of time, and you want to change the tempo. What do you do? You can't unschedule the notes once you've told them to play. You would have to add a gain node to control the volume, mute the now off-tempo loop, and then create a new loop at the new, correct tempo. And so on, ad infinitum, every time you want to change the tempo again.
That's awful, and there is a better way. After reading A Tale of Two Clocks a few times, it finally made sense: the way to build a good scheduler is to schedule notes just ahead of time, synchronized to the Web Audio clock. Unfortunately, this is easier said than done. Your scheduler function ends up looking something like this:
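A minimal skeleton of that pattern (the names `scheduleNote`, `advanceNote`, `nextNoteTime`, and `scheduleAheadTime` follow A Tale of Two Clocks and would be defined elsewhere):

```javascript
// Schedule every note that falls inside the lookahead window,
// then bump nextNoteTime forward until the window is exhausted.
function scheduler() {
  while (nextNoteTime < context.currentTime + scheduleAheadTime) {
    scheduleNote(nextNoteTime);
    advanceNote();
  }
}
```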
But scheduleAheadTime and nextNoteTime have to be tweaked for your use case. In my case, I ended up using the same parameters as the Google Shiny Drum Machine, and that seems to work well. Let's look at my code for the core scheduling functionality:
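The scheduling code runs roughly along these lines (a sketch following the discussion below; `playNote` is a hypothetical stand-in for the code that actually triggers samples, and `context` would be the audio context created earlier):

```javascript
var tempo = 120;        // BPM; hypothetical default
var rhythmIndex = 0;    // which 16th-note step of the loop we're on
var noteTime = 0.0;     // when the next note is due, relative to startTime
var startTime = 0.0;    // context.currentTime when playback started
var timeoutId;

function schedule() {
  // How far into playback are we right now?
  var currentTime = context.currentTime - startTime;

  // Schedule every note that falls within the next 200 ms.
  while (noteTime < currentTime + 0.200) {
    var contextPlayTime = noteTime + startTime;
    playNote(rhythmIndex, contextPlayTime); // hypothetical: triggers the samples
    advanceNote();
  }

  // Last line: re-arm the scheduler so it runs again shortly.
  timeoutId = setTimeout(schedule, 0);
}

function advanceNote() {
  var secondsPerBeat = 60.0 / tempo;
  noteTime += 0.25 * secondsPerBeat; // 16th notes in 4/4

  rhythmIndex++;
  if (rhythmIndex === 16) {
    rhythmIndex = 0; // wrap around so the sequence loops
  }
}

function play() {
  noteTime = 0.0;
  rhythmIndex = 0;
  startTime = context.currentTime + 0.005;
  schedule();
}
```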
Okay, there's a lot going on here, so let's break this down. Note that I didn't include anything about playing notes or updating the visuals, as I just wanted to focus on the timing and scheduling.
In schedule(), we first check whether we need to sequence any audio events, based on noteTime and currentTime. Note that the 200 ms offset here plays the role of the scheduleAheadTime variable in the previous example. The while loop inside schedule() runs again and again until we've scheduled all the notes that fall within our lookahead interval. As we schedule each note, we increment noteTime based on the tempo of the song (in advanceNote()):
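The timing math is just this (shown here as a standalone snippet; tempo and noteTime live in the enclosing scope in the real code):

```javascript
var tempo = 120;    // BPM
var noteTime = 0;   // seconds into the loop

var secondsPerBeat = 60.0 / tempo;  // one beat at 120 BPM = 0.5 s
noteTime += 0.25 * secondsPerBeat;  // a 16th note is a quarter of a beat
```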
Since we're assuming a 4/4 time signature, we advance noteTime by multiplying secondsPerBeat by 0.25, since each note is a 16th note. Once we've scheduled all of the notes that fall within our lookahead interval, the condition of the while loop in schedule() fails and the loop exits. And what's with the last line of schedule()? It simply re-arms the scheduler, so that schedule() keeps getting called and picks up new notes as the lookahead window advances.
Lastly, there’s the rhythmIndex variable in advanceNote():
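Roughly (LOOP_LENGTH is a hypothetical name for the number of steps; 16 steps of 16th notes makes one bar in 4/4):

```javascript
var LOOP_LENGTH = 16; // 16 steps = one bar of 16th notes
var rhythmIndex = 0;

// Inside advanceNote(), after advancing noteTime:
rhythmIndex++;
if (rhythmIndex === LOOP_LENGTH) {
  rhythmIndex = 0; // reached the end; start the loop over
}
```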
This variable simply tracks where you are within the loop, and it resets to zero when the sequence reaches the end so that it loops.
Cool, so now we can sequence audio properly! Well, kind of. As is pointed out in this StackOverflow answer, synchronizing to the Web Audio clock does not mean that your audio is synchronized to the animation-frame refresh rate. However, it's simple to fix this: you can replace the setTimeout call that re-arms schedule() with a call to requestAnimationFrame.
requestAnimationFrame is awesome because the browser chooses the frame rate based on the other tasks it is handling, so the rate is more consistent, and if the current tab loses focus, requestAnimationFrame stops running.
If you've got this scheduling thing down and want to learn how to integrate it into your own application, I recommend checking out my code on GitHub to see how I did it.
Thanks for reading, and look out for part two of this blog post, which will go into more depth on audio routing graphs and the different types of nodes, i.e. how to add different kinds of effects.