Sounds and visual forms complement each other and enable us to create expressive and unique compositions. Pts simplifies a subset of the Web Audio API to assist you with common tasks like playback and visualization.
Before we start, let's try a silly and fun visualization using Pts's sound functions.
Let's get some sounds to start! Do you want to load from a sound file, receive microphone input, or generate audio dynamically? Pts offers three handy static functions for these.
Sound.load to load a sound file with a URL or a specific <audio> element. You can check if the audio file is ready to play by accessing the .playable property — see the sketch after this list.

let sound = Sound.load( "/path/to/hello.mp3" );
let sound2 = Sound.load( audioElement );
Sound.generate to create a sound from an oscillator type and frequency.

let sound = Sound.generate( "sine", 120 ); // sine oscillator at 120Hz
Sound.input to get audio from the default input device (usually the microphone). This returns a Promise that resolves when the input device is ready.

let sound;
Sound.input().then( s => sound = s ); // default input device
Sound.input( constraints ).then( s => sound = s ); // advanced use cases
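As a quick sketch of checking .playable (assuming a CanvasSpace named space has already been set up, and using a hypothetical file path), you might wait for the file to load before starting playback:

let sound = Sound.load( "/path/to/hello.mp3" ); // hypothetical path

space.add( () => {
  // start once the file is ready; .playing guards against repeated starts
  if (sound.playable && !sound.playing) sound.start();
});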
Here's a basic demo of getting audio from microphone:
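The interactive demo isn't reproduced here, but a minimal sketch of the idea might look like this (assuming a canvas element with id "demo"; the bin size and color are arbitrary choices):

let space = new CanvasSpace( "#demo" ).setup({ bgcolor: "#fff", resize: true });
let form = space.getForm();
let sound;

Sound.input().then( s => {
  sound = s;
  sound.analyze( 32 ); // 32 bins is an arbitrary choice
});

space.add( () => {
  // once the mic is ready, draw the current frequency bins across the canvas
  if (sound && sound.playable) {
    form.fillOnly( "#f06" ).points( sound.freqDomainTo( space.size ), 5, "circle" );
  }
});

space.play();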
You can then start and stop playing the sound like this:
sound.start();
sound.stop();
sound.toggle(); // toggle between start and stop
sound.playing; // boolean to indicate if sound is playing
It gets more interesting when we look into the sound data and analyze it. Let's hook up an analyzer to our Sound instance using the analyze function.
sound.analyze( 128 ); // Call once to initiate the analyzer
This will create an analyzer with 128 bins (more on that later) and default decibel range and smoothing values. See the analyze docs for a description of the advanced options.
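As a sketch of those advanced options (assuming, per the analyze docs, that the extra parameters are min decibels, max decibels, and smoothing, matching the underlying AnalyserNode settings; the values here are just the Web Audio defaults):

sound.analyze( 128, -100, -30, 0.8 ); // bins, minDb, maxDb, smoothing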
There are two common ways to analyze sounds. First, we can represent sounds as snapshots of sound waves, which correspond to variations in air pressure over time. This is called the time domain, as it measures amplitudes of the "waves" over time steps.
To get the time domain data at the current time step, call the timeDomain function.
// get an uint typed array of 128 values (corresponds to bin size above)
let td = sound.timeDomain();
Optionally, use the timeDomainTo function to map the data to another range, such as a rectangular area. You can then apply various Pts functions to transform and visualize waveforms in a few lines of code.
// fit data into a 200x100 area, starting from position (50, 50)
let td = sound.timeDomainTo( [200, 100], [50, 50] );
form.points( td ); // visualize as points
In the following example, we map the data to a normalized circle and then re-map it to draw colorful lines.
sound.timeDomainTo( [Const.two_pi, 1] ).map( t => ... );
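The "..." above is left open in the original; one hypothetical way to complete it is to treat each mapped value as an [angle, amplitude] pair and wrap it around a circle:

let center = space.center;
let radius = space.size.minValue().value * 0.3; // arbitrary radius

let pts = sound.timeDomainTo( [Const.two_pi, 1] ).map( t =>
  // t.x is an angle in [0, 2π], t.y a normalized amplitude in [0, 1]
  new Pt( Math.cos( t.x ), Math.sin( t.x ) ).multiply( radius * t.y ).add( center )
);

form.strokeOnly( "#0c9", 2 ).line( pts );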
In a similar way, we can access the frequency domain data via freqDomain and freqDomainTo. The frequency bins are calculated by an algorithm called the Fast Fourier Transform (FFT). The FFT size is twice the bin size and must be a power of 2. (Recall that we set the bin size to 128 earlier, so the FFT size is 256.) You can quickly test it with a single line of code:
form.points( sound.freqDomainTo( space.size ) );
Or make something fun, weird, beautiful through the interplay of sounds and shapes.
For advanced use cases, you can create an instance using the Sound.from static method. Here's an example using Tone.js:
let synth = new Tone.Synth();
let sound = Sound.from( synth, Tone.context ); // create Pts Sound instance
synth.toMaster(); // play using tone.js instead of Pts
The following demo generates audio using tone.js and then visualizes it with Pts:
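The demo itself isn't reproduced here, but a rough sketch of the combination (assuming the older Tone.js API used above, where toMaster and Tone.context are available) might be:

let synth = new Tone.Synth().toMaster();
let sound = Sound.from( synth, Tone.context ); // wrap the synth for analysis
sound.analyze( 128 );

space.add( () => {
  form.points( sound.freqDomainTo( space.size ) ); // visualize the synth's output
});

space.play();

// then trigger notes elsewhere, e.g., on user interaction:
// synth.triggerAttackRelease( "C4", "8n" );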
If needed, you can also directly access the following properties in a Sound instance to make full use of the Web Audio API:

.ctx to access the AudioContext instance
.node to access the AudioNode instance
.stream to access the MediaStream instance
.source to access the HTMLMediaElement if you're playing from a sound file

Also note that calling the start function will connect the AudioNode to the destination of the AudioContext, while stop will disconnect it.
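For example, a small sketch of reading these underlying objects directly (the file path is hypothetical):

let sound = Sound.load( "/path/to/hello.mp3" ); // hypothetical path

console.log( sound.ctx.sampleRate ); // AudioContext property, e.g., 44100
console.log( sound.node.numberOfOutputs ); // standard AudioNode property
console.log( sound.source.duration ); // the underlying HTMLMediaElement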
Web Audio covers a wide range of topics. Here are a few pointers for you to dive deeper:
Creating and playing a Sound instance
s = Sound.load( "path/to/file.mp3" ); // from file
s = Sound.generate( "sine", 120 ); // sine wave at 120Hz
s = Sound.from( node, context ); // advanced use case
Sound.input().then( _s => s = _s ); // get microphone input
s.start();
s.stop();
s.toggle();
Getting time domain and frequency domain data
s.analyze( 1024 ); // Create analyzer with 1024 bins. Call once only.
s.timeDomain();
s.timeDomainTo( area, position ); // map to an area [w, h] from position [x, y]
s.freqDomain();
s.freqDomainTo( [10, 5] ); // map to a 10x5 area