Web Audio API

The Web Audio API is used to manipulate sound, letting web pages generate, process, and play audio directly in the browser.

Basic usage

The browser natively provides the AudioContext object, which represents a sound context: an audio-processing graph whose destination is connected to the speakers.

const audioContext = new AudioContext();
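
Note that most browsers start the AudioContext in a suspended state until the user interacts with the page. A minimal sketch, assuming the page contains a button, is to resume the context in a click handler:

// Autoplay policies: audio can usually only start after a user gesture
// (the button selector is an assumption for illustration)
document.querySelector("button").addEventListener("click", () => {
  audioContext.resume();
});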

Then, fetch the audio file, decode it into memory, and you can play the sound.

fetch("sound.mp3")
  .then((response) => response.arrayBuffer())
  .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
  .then((audioBuffer) => {
    // Play the decoded sound
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    source.start();
  });
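
Since decodeAudioData() returns a Promise, the same flow can be written with async/await. A sketch (the playSound helper is a hypothetical name):

async function playSound(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioContext.destination);
  source.start();
}

playSound("sound.mp3");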

context.createBuffer()

The context.createBuffer() method creates an empty AudioBuffer, an in-memory object for storing raw sample data.

const buffer = audioContext.createBuffer(channels, signalLength, sampleRate);

The createBuffer method accepts three parameters.

  • channels: Integer, the number of channels. For a mono sound the value is 1; for stereo it is 2.
  • signalLength: Integer, the length of the sound array, i.e. the number of samples per channel.
  • sampleRate: Floating point number, the sample rate, i.e. how many samples are taken per second (in hertz).

The two parameters signalLength and sampleRate together determine the duration of the sound. For example, if the sample rate is 3000 Hz (3000 samples per second) and the length of the sound array is 6000, then the sound plays for 6000 / 3000 = 2 seconds.
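
A minimal sketch of this arithmetic (note that browsers typically reject sample rates below about 8000 Hz in createBuffer, so the sketch uses 8000 Hz):

// 16000 samples at 8000 samples per second = 2 seconds of audio
const demo = audioContext.createBuffer(1, 16000, 8000);
console.log(demo.duration); // 2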

Next, use the buffer.getChannelData method to get a channel's data (a Float32Array).

const data = buffer.getChannelData(0);

In the above code, the argument 0 of buffer.getChannelData means that the first channel is retrieved.

Next, put the sound array into this channel.

const data = buffer.getChannelData(0);

// signal is an array of sound samples
// signalLength is the length of the array
for (let i = 0; i < signalLength; i += 1) {
  data[i] = signal[i];
}
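
For a concrete signal, here is a minimal sketch that fills the channel with one second of a 440 Hz sine tone (the frequency and duration are assumptions for illustration):

const sampleRate = audioContext.sampleRate;
const signalLength = sampleRate; // one second of samples
const buffer = audioContext.createBuffer(1, signalLength, sampleRate);
const data = buffer.getChannelData(0);

// Each sample follows sin(2π · 440 · t), with t = i / sampleRate
for (let i = 0; i < signalLength; i += 1) {
  data[i] = Math.sin((2 * Math.PI * 440 * i) / sampleRate);
}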

Finally, use the context.createBufferSource method to generate a sound node (an AudioBufferSourceNode).

// Generate a sound node
const node = audioContext.createBufferSource();
// Put the memory object of the sound array into this node
node.buffer = buffer;
// Connect the node to the destination of the sound context (the speakers)
node.connect(audioContext.destination);
// start playing sound
node.start(audioContext.currentTime);

By default, playback stops after the buffer has played once. If you need to loop it, set the loop property of the node object to true.

node.loop = true;
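
The node also has loopStart and loopEnd properties (in seconds) that restrict the looped region, for example:

// Loop only the first half second of the buffer
node.loopStart = 0;
node.loopEnd = 0.5;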

Filters

The Web Audio API natively provides some filters to process sound.

First, use the context.createBiquadFilter method to create a filter instance.

const filter = audioContext.createBiquadFilter();

Then, specify the type of filter through the filter.type attribute.

filter.type = "lowpass";

Currently, there are the following types of filters.

  • lowpass
  • highpass
  • bandpass
  • lowshelf
  • highshelf
  • peaking
  • notch
  • allpass

Then set the filter's frequency attribute, whose value is in hertz.

filter.frequency.value = frequency;
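
Depending on the filter type, the Q and gain attributes are also relevant, for example:

// Q controls bandwidth/resonance; gain is a boost or cut in decibels
// (used by the shelf and peaking types). The values are illustrative.
filter.Q.value = 1;
filter.gain.value = 0;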

Finally, connect the source node to the filter, and the filter to the destination, so that the filter takes effect.

sourceNode.connect(filter);
filter.connect(audioContext.destination);
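
Putting it together, a minimal sketch that plays a decoded buffer through a lowpass filter (the 1000 Hz cutoff is an assumption for illustration):

const source = audioContext.createBufferSource();
source.buffer = buffer;

const lowpass = audioContext.createBiquadFilter();
lowpass.type = "lowpass";
lowpass.frequency.value = 1000; // cutoff frequency in Hz

// Route the signal: source -> filter -> speakers
source.connect(lowpass);
lowpass.connect(audioContext.destination);
source.start();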