Autotel

Worker + karplus synth example

Worker code

You can add all this code into one file, if you like.

Globals

I should fetch the samplingRate from the web audio context. But let's be lazy this time.

const samplingRate = 44100;
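A less lazy version costs one line: inside an AudioWorkletProcessor the real rate is already available as the global `sampleRate` (part of AudioWorkletGlobalScope). As a sketch, the constant above could instead read:

```javascript
// `sampleRate` is a global inside AudioWorkletGlobalScope; fall back
// to 44100 elsewhere (e.g. when running this file outside a worklet).
const samplingRate = typeof sampleRate !== "undefined" ? sampleRate : 44100;
```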

Main types

This way it's easier to extend the functionality. The use will become clear below.

class SampleBySampleOperator {
    operation = (inSample) => inSample;
}


class Voice {
    getBlock(size) { }
    trig({ freq, amp }) { }
    stealTrigger = (p) => this.trig(p);
    isBusy = false;
}
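To make the contract concrete, here is a minimal, hypothetical Gain operator built on SampleBySampleOperator (the base class is repeated so the snippet runs on its own):

```javascript
// Repeated from above so this snippet is self-contained.
class SampleBySampleOperator {
    operation = (inSample) => inSample;
}

// Hypothetical example operator: scales every incoming sample.
class Gain extends SampleBySampleOperator {
    constructor(factor) {
        super();
        this.operation = (inSample) => inSample * factor;
    }
}

const halve = new Gain(0.5);
const result = halve.operation(0.8); // → 0.4
```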

Faster random generator

We are not using this for a security application, so we can really use any random generator.

function mulberry32(a) {
    return function() {
      var t = a += 0x6D2B79F5;
      t = Math.imul(t ^ t >>> 15, t | 1);
      t ^= t + Math.imul(t ^ t >>> 7, t | 61);
      return ((t ^ t >>> 14) >>> 0) / 4294967296;
    }
}
const random = mulberry32(38913);
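Seeding matters musically: the same seed replays the exact same noise burst, so a plucked note sounds identical run to run. A standalone check (the generator is repeated here so the snippet runs on its own, with an arbitrary demo seed):

```javascript
function mulberry32(a) {
    return function () {
        var t = a += 0x6D2B79F5;
        t = Math.imul(t ^ t >>> 15, t | 1);
        t ^= t + Math.imul(t ^ t >>> 7, t | 61);
        return ((t ^ t >>> 14) >>> 0) / 4294967296;
    };
}

const rngA = mulberry32(123); // arbitrary seed for the demo
const rngB = mulberry32(123);
const deterministic = rngA() === rngB(); // same seed, same sequence
const sample = rngA();
const inRange = sample >= 0 && sample < 1; // outputs live in [0, 1)
```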

The actual synthesis

These are the operators that produce the sound. The modules, if you will. Note that one of these operates on each sample, while the other works on whole blocks; hence the need for two different classes: SampleBySampleOperator and Voice.

Delay line

It works like a guitar delay pedal. An effect can be sidechained into it, so that the sound is modified every time it loops back (tape delays are famous for this, for example).

class RingBuffer {
    constructor(length) {
        let pointer = 0;
        let buffer = [];
        this.length = length;

        this.get = (key) => {
            if (key < 0) {
                let t_pointer = pointer + key;
                while (t_pointer < 0) {
                    t_pointer += length;
                }
                return buffer[t_pointer];
            } else {
                return buffer[(pointer + key) % length];
            }
        }
        this.push = (item) => {
            buffer[pointer] = item;
            pointer = (pointer + 1) % length;
            return pointer;
        }
        this.next = () => {
            pointer = (pointer + 1) % length;
            return buffer[pointer];
        }
        this.getCurrent = () => {
            return buffer[pointer] || 0;
        }
        this.resize = (newLength) => {
            length = newLength;
            this.length = length;
        }
        this.getAll = () => buffer;
    }
};

class DelayLine extends SampleBySampleOperator {
    /** @type {RingBuffer}*/
    memory = new RingBuffer(512);
    delaySamples = 400;
    feedback = -0.99;

    /** @type {SampleBySampleOperator|null} */
    sidechainEffect = null;

    operation = (insample) => {
        if (this.memory.length != this.delaySamples) {
            this.memory.resize(this.delaySamples);
        }

        // get delayed sample
        let oldestSample = this.memory.getCurrent();
        let ret = oldestSample + insample;

        if (this.sidechainEffect) {
            ret = this.sidechainEffect.operation(ret);
        }

        //write output into delay line
        this.memory.push(ret * this.feedback);

        return ret;
    }
}
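The feedback loop can be seen in isolation. Here is a minimal standalone sketch of the same idea (plain array instead of RingBuffer; makeDelay is a hypothetical name): feed in a single impulse, and echoes come back every delaySamples samples, scaled by the feedback on each pass.

```javascript
// Minimal feedback delay: same structure as DelayLine above,
// using a plain Float32Array so this snippet runs on its own.
function makeDelay(delaySamples, feedback) {
    const buf = new Float32Array(delaySamples);
    let pos = 0;
    return (inSample) => {
        const out = inSample + buf[pos]; // mix input with the delayed signal
        buf[pos] = out * feedback;       // write the output back, attenuated
        pos = (pos + 1) % delaySamples;
        return out;
    };
}

// A single impulse produces echoes at samples 4, 8, ... scaled by 0.5 each time.
const delay = makeDelay(4, 0.5);
const out = [];
for (let i = 0; i < 9; i++) out.push(delay(i === 0 ? 1 : 0));
// out[0] === 1, out[4] === 0.5, out[8] === 0.25
```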


Infinite impulse response filter

These are not vital for this synth, but in my opinion, using filters on a karplus synth improves the sound and the possibilities a lot.

Filters that have a theoretically infinite state, because they pick up their own output as input. I like to think of filters in three ways:

  • A filter is a sample averager. By averaging the current and past samples, it muffles the output.
  • A filter is a very short delay line. By mixing its current output with some of its past outputs, the result is an average over the last few samples.
  • A filter is akin to image processing's kernel-based filters. Depending on the coefficients of each kernel (in this case, one-dimensional), one might get an "edge detector", which would be a high-pass filter, or a "blur", which would be a low-pass.

class IIRFilter extends SampleBySampleOperator {
    /** @type {Number}*/
    memory = 0;
    k = 0.01;
    amp = 0.99;
    operation = (insample) => {
        let ret = 0;
        let ik = 1 - this.k;
        ret = insample * ik;
        ret += this.memory * this.k;
        ret *= this.amp;

        this.memory = ret;
        return ret;
    }
    constructor(props = {}) {
        super();
        Object.assign(this, props);
    }
}

class IIRFilter1 extends SampleBySampleOperator {
    /** @type {Array<Number>}*/
    memory = [0, 0, 0];
    amp = 0.99;
    operation = (insample) => {
        let ret = 0;

        ret = insample * 0.01;
        ret += this.memory[0] * 0.2;
        ret += this.memory[1] * 0.3;
        ret += this.memory[2] * 0.49;
        ret *= this.amp;

        this.memory.pop();
        this.memory.unshift(ret);

        return ret;
    }
}
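The first filter's behaviour is easy to check in isolation. A standalone copy of the same one-pole formula (y = amp · ((1 − k) · x + k · y_prev)), fed a constant input, climbs gradually toward that input, which is the "averaging" view from the list above:

```javascript
// Same one-pole lowpass formula as IIRFilter above, standalone.
function makeOnePole(k, amp) {
    let memory = 0;
    return (x) => {
        const y = amp * ((1 - k) * x + k * memory);
        memory = y;
        return y;
    };
}

// A step input: with k = 0.9 the output rises toward 1 instead of
// jumping there; the filter "smears" sudden changes over time.
const lp = makeOnePole(0.9, 1.0);
let y = 0;
for (let i = 0; i < 50; i++) y = lp(1);
// y is now close to, but still below, 1
```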

Karplus voice

By making a "voice" abstraction, instead of implementing the synthesis directly in the worklet, making it polyphonic becomes trivial.

A karplus synth is not much more than a delay with a very short delay time and high feedback (near 1, where 1 is infinite sustain). The delay time is adjusted so that it corresponds to a frequency, which represents the note. This makes it, in theory, a model of a guitar string: when a guitar string receives an impulse, the impulse travels through the string very fast and bounces back at the edges. All the frequencies that are resonant to that string meet their own bounce from the opposite edge, which reinforces them, whereas the non-resonant frequencies disperse out of the string very fast. The same happens with the very short delay line: the resonant frequencies are reinforced as the delay repeats the last samples, whereas the other frequencies are not.

I made a simple version, which is the bare bones of a karplus string, and another, more complex version that filters the sound a bit so that it sounds warmer.
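The central tuning trick, before the code: the "string" resonates at samplingRate / delaySamples Hz, so picking a delay length for a note is a single division. A sketch (freqToDelaySamples is a hypothetical name; the constant matches the global defined at the top of the file):

```javascript
const samplingRateHz = 44100; // matches the samplingRate global above

// Karplus–Strong tuning: the delay line length sets the pitch.
function freqToDelaySamples(freq) {
    return Math.round(samplingRateHz / freq);
}

const a4 = freqToDelaySamples(440); // → 100 samples for A4
```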


class KarplusVoice extends Voice {
    envVal = 0;
    decayPerSample = 1 / (samplingRate * 0.01);

    /** @type {DelayLine}*/
    delayLine = new DelayLine();

    constructor(){
        super();
        this.delayLine.sidechainEffect = new IIRFilter({
            k:0.1,
        });
    }

    setDecay = (seconds) => this.decayPerSample = 1 / (samplingRate * seconds);

    trig({ freq, amp }) {
        this.delayLine.feedback = -1.001;
        this.envVal = amp;
        this.delayLine.delaySamples = Math.round(samplingRate / freq);
        this.isBusy = true;
    }
    /** @param {number} blockSize*/
    getBlock(blockSize) {
        const output = new Float32Array(blockSize);
        for (let splN = 0; splN < blockSize; splN++) {
            let sampleNow = (random() - 0.5) * this.envVal;
            sampleNow = this.delayLine.operation(sampleNow);
            output[splN] = sampleNow;
            if (this.envVal > 0) {
                this.envVal -= this.decayPerSample;
            } else {
                this.envVal = 0;
            }
        }
        return output;
    }
}

A polyphony manager that multiplies the karplus voice in order to produce polyphony. It could be used for anything that extends the Voice prototype.

class PolyManager {
    maxVoices = 32;
    /** @type {Array<Voice>} */
    list = [];
    lastStolenVoice = 0;
    /** @param {ObjectConstructor} VoiceConstructor */
    constructor(VoiceConstructor) {
        this.getVoice = () => {
            let found = null;
            this.list.forEach(voice => {
                if (!voice.isBusy) {
                    found = voice;
                }
            });
            if (!found) {
                if (this.list.length >= this.maxVoices) {
                    found = this.list[this.lastStolenVoice];
                    this.lastStolenVoice += 1;
                    this.lastStolenVoice %= this.maxVoices;
                } else {
                    found = new VoiceConstructor();
                    this.list.push(found);
                }
            }
            return found;
        }
    }
}
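The allocation policy can be sketched and checked on its own: reuse a free voice, otherwise grow the pool up to maxVoices, otherwise steal round-robin. MiniPoly and StubVoice below are hypothetical stand-ins for the classes above, trimmed to just that logic:

```javascript
// Hypothetical stand-in for Voice: only the busy flag matters here.
class StubVoice {
    isBusy = false;
}

// Trimmed copy of PolyManager's allocation logic, capped at 2 voices.
class MiniPoly {
    maxVoices = 2;
    list = [];
    lastStolenVoice = 0;
    getVoice() {
        let found = this.list.find((v) => !v.isBusy) || null;
        if (!found) {
            if (this.list.length >= this.maxVoices) {
                // pool is full: steal voices round-robin
                found = this.list[this.lastStolenVoice];
                this.lastStolenVoice = (this.lastStolenVoice + 1) % this.maxVoices;
            } else {
                found = new StubVoice();
                this.list.push(found);
            }
        }
        return found;
    }
}

const pool = new MiniPoly();
const a = pool.getVoice(); a.isBusy = true;
const b = pool.getVoice(); b.isBusy = true;
const c = pool.getVoice(); // pool is full, so the oldest voice is stolen
// c === a
```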

The worklet

This one does little more than present the synth to the worklet invoker. I implemented a very basic note-to-frequency converter. For me it's a placeholder, because I'd like to experiment with tuning systems on this synth.
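That converter is one line of 12-tone equal temperament with note 0 anchored at 55 Hz (an A). As a standalone sketch (noteToFreq is a hypothetical name; the worklet below inlines the same formula):

```javascript
// 12-TET: each semitone multiplies the frequency by 2^(1/12),
// with note 0 anchored at 55 Hz.
const noteToFreq = (note) => 55 * Math.pow(2, note / 12);

const base = noteToFreq(0);      // → 55
const octaveUp = noteToFreq(12); // → 110: one octave doubles the frequency
```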

class RustWorklet extends AudioWorkletProcessor {
    constructor() {
        super();

        this.samples = [];
        this.totalSamples = 0;

        this.port.onmessage = ({ data }) => {
            // console.log(data);

            const freq = data.frequency ? data.frequency : 55 * Math.pow(2, data.note / 12);
            const tVoice = this.policarpo.getVoice();
            tVoice.trig({ freq, amp: 1 });
        };
    }

    policarpo = new PolyManager(KarplusVoice);

    process(inputs, outputs, parameters) {
        const output = outputs[0];
        const blockSize = outputs[0][0].length;
        const mix = new Float32Array(blockSize);
        this.policarpo.list.forEach((voice) => {
            const voiceResults = voice.getBlock(blockSize);
            for (let sampleN = 0; sampleN < blockSize; sampleN++) {
                mix[sampleN] += voiceResults[sampleN] / 10;
            }
        });

        output.forEach(
            /**
             * @param {Float32Array} channel
             * @param {number} channelN
             */
            (channel, channelN) => {
                channel.set(mix)
            }
        )
        return true
    }
}

registerProcessor("magic-worklet-2", RustWorklet);

Code to use the worklet

Audio context getter helper (optional)

/**
 * Creates an AudioContext and deals with the no-autoplay policy nuisance by
 * waiting for a user click before starting the context.
 */
class AudioContextGetter {
    constructor(){
        /** @type {AudioContext|false} */
        let audioContext=false;
        /** @returns {Promise<AudioContext>} */
        this.get = () => {
            return new Promise((resolve)=>{
                if(audioContext){
                    resolve(audioContext);
                }else{
                    document.addEventListener("mousedown",()=>{
                        
                        if(!audioContext){
                            console.log("creating audio context (user gesture)");
                            audioContext = new(window.AudioContext || window.webkitAudioContext)();
                        }

                        if(audioContext) resolve(audioContext);
                    });
                }
            });
        }
    }
}

let audioContextGetter = new AudioContextGetter();

Worklet loader

class Synth {
    operation = (event) => {}
    
    constructor(){
        audioContextGetter.get().then((audioContext) => {
            console.log("start audio");


            console.log("loading worklet:");

            audioContext.audioWorklet.addModule('MagicEngineWorklet.js').then(() => {
                console.log("worklet loaded!");

                /** @type {AudioWorkletNode} */
                let rustWorklet = new AudioWorkletNode(audioContext, 'magic-worklet-2');

                // setInterval(() => rustWorklet.port.postMessage('yuiim'), 1000);
                rustWorklet.port.onmessage = (e) => console.log(e.data)
                rustWorklet.connect(audioContext.destination);

                /** @param {{note:number, val:number}} event */
                this.operation = ({ note, val }) => {
                    // forward only note-on events (assuming val > 0 means a trigger)
                    if (val > 0) {
                        rustWorklet.port.postMessage({
                            note,
                            val,
                        });
                    }
                }
            });
        });
    }
}