Autotel

Sympathetic resonance experimentation

Using a Bela board, I set out to create a synthesizer that applies the sympathetic resonance behaviour of physically modelled pianos to waves other than piano strings: sine waves, in this case.

If you are in front of a real piano, you can try this experiment: press one key slowly enough that the note does not get triggered. Without releasing that key, press the same note in a different octave as loudly as you can, and release it quickly. I will spoil the surprise for those who cannot try this: the key that was pressed slowly will have picked up some sound from the short keystroke in the other octave. This phenomenon is a manifestation of sympathetic resonance.

For each note on the piano, you can learn which other notes are the most and least harmonic with respect to it by trying the same exercise with different pairs of keys. Sympathetic resonance, hence, is not the same for every pair of frequencies.

For a long time I have been interested in modelling this in a program, in order to be able to change the parameters of the sound while keeping the resonance. Besides this, I have a strong attachment to the sound of real pianos, partly because of this feature. Note, however, that many software pianos already implement this.

Basic synth code

The first step was to create a simple sine-wave synth with an ADSR envelope, capable of polyphony. The design is simple, and largely based on Bela's example code:

namespace global{
    unsigned int sampleRate=0;
    unsigned int controlInterval=10;//how many samples it takes to update a control signal
    double secondsPerSample=0;
}
struct envelope{
    float attack=0;
    float decay=0;
    float sustain=0;
    float release=0;
    
    float attack_delta=0;
    float decay_delta=0;
    float sustain_delta=0;
    float release_delta=0;
    
    void set(float a, float d,float s, float r){
        float fps=(float)global::sampleRate/global::controlInterval;
        if(fps==0) throw "sample rate or control interval is zero";
        attack=a;
        decay=d;
        sustain=s;
        release=r;
        attack_delta=1/(attack*fps);
        decay_delta=1/(decay*fps);
        sustain_delta=1/(sustain*fps);//note: sustain is a level, not a time, so this delta goes unused
        release_delta=1/(release*fps);
    }
};

envelope synthEnvelope;

class Voice {
    public:
        static int count;
        unsigned int id;
        float currentPhase;
        float currentVelocity;
        float currentPhaseIncrement;
        int envState;
        float envLevel;
        int updateCount;
        bool isTriggered=false;
        bool restartPhase=true;
        float velocityToAttack=0.3;
        
        Voice() {
            id = count++;
            currentPhase = 0;
            currentVelocity = 0;
            currentPhaseIncrement = 0;
            envState = 0;
            envLevel = 0;
            updateCount = 0;
            
        }
        
        void triggerNoteOn(int note, int velocity){
            if(restartPhase)currentPhase=0;
            isTriggered=true;
            float gFreq = powf(2, note / 12.f) * 440.f;
            currentVelocity = velocity/127.f;
            currentPhaseIncrement = 2.f * (float) M_PI * gFreq / global::sampleRate;
            rt_printf("id: %d freq:%f, phaseInc:%6.5f, currentVelocity: %f\n", id, gFreq, currentPhaseIncrement, currentVelocity);
            envState=1;
        }
        void triggerNoteOff(){
            isTriggered=false;
        }
        float sample(){
            if(envState==0) return 0;
            
            currentPhase += currentPhaseIncrement;
            if(currentPhase > M_PI)
                currentPhase -= 2.f * (float)M_PI;
            if(updateCount>=global::controlInterval){
                envUpdate();
                updateCount=0;
            }
            
            float result=envLevel * sinf(currentPhase) * currentVelocity;
            /*
            float scope1=0;
            float scope2=0;
            float scope3=envState;
            
            scope.log(scope1, scope2, scope3);
            */
            updateCount++;
            return result;
        }
        
        void envUpdate(){
            switch (envState){
                case 0:
                    envLevel=0;
                    break;
                case 1://attack started
                    if(envLevel<1){
                        envLevel+=synthEnvelope.attack_delta+(currentVelocity*velocityToAttack);
                    }else{
                        envState++;
                    }
                    break;
                case 2://decay started
                    if(envLevel>synthEnvelope.sustain){
                        envLevel-=synthEnvelope.decay_delta;
                    }else{
                        envState++;
                    }
                    break;
                case 3://sustain started
                    if(isTriggered){
                        //sustainiiing
                    }else{
                        envState++;
                    }
                    break;
                case 4://release started
                    if(envLevel>0){
                        envLevel-=synthEnvelope.release_delta;
                    }else{
                        envState++;
                    }
                    break;
                default://release ended, or state is out of range.
                    envState=0;
            }
        }
};
int Voice::count=0;

Besides this code, the only things needed are triggering the voice attacks and requesting the samples; both are shown in the full code at the end. The Voice class manages a sine wave whose volume is controlled by an envelope and by the velocity. The envelope is a global, since the voices represent polyphony on one and the same synthesizer, and I was not interested in generating a different envelope for each voice.
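
As a minimal illustration of that interface, here is a sketch of how a host might drive these pieces (it assumes the Voice class, the global namespace and synthEnvelope from above; the real wiring, including the MIDI handling, is in the full listing):

//assumes the Voice class, the global namespace and synthEnvelope from above
Voice voices[4];

void demoSetup(){
    global::sampleRate = 44100;//normally read from the Bela context in setup()
    synthEnvelope.set(0.01f, 0.3f, 0.7f, 0.5f);//attack, decay, sustain, release
    voices[0].triggerNoteOn(0, 100);//note 0 is A4 (440 Hz), velocity 100 of 127
}

float demoSample(){//called once per audio frame
    float mix = 0;
    for(Voice &v : voices)
        mix += v.sample() / 4;//sum the polyphony, scaled down
    return mix;
}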

The wrong sympathetic resonance function

In order to generate the desired sympathetic resonance, I needed a graph that indicates how much to raise the volume of a given "triggered" note according to its frequency relation to some other note. I used a simple sine-based formula divided by the x value, in the spirit of the sinc function. The idea is to divide the two frequencies and use this ratio as the function parameter.

The chosen sinc-like function is not the correct formula to use: it considers a frequency ratio more "resonant" the closer it is to an integer. This provides resonance for the octave relation between notes, but fails for fifths, thirds, etc. Additionally, it considers neighbouring frequencies to be more resonant, which is not true in the real model. It was useful, however, as a way to test the resonance, keeping these limitations in mind.

The resulting curve was plotted with the Desmos graphing calculator.
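
To see these limitations concretely, here is a tiny standalone check of mine (not part of the synth) that evaluates the same formula the lookup table uses at a few frequency ratios:

#include <cstdio>
#include <cmath>

//same formula as fillSympathicResonanceTable below: (cos(pi*x)+1)/x
float resonance(float x){
    return (cosf(x*(float)M_PI)/x)+(1/x);
}

int main(){
    //ratios between a held note's frequency and the struck note's frequency
    float ratios[] = {0.1f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f};
    for(float x : ratios)
        printf("ratio %.1f -> %f\n", x, resonance(x));
    //small ratios blow up (the neighbouring-frequencies problem), and the
    //peaks do not land on all the musically consonant ratios.
    return 0;
}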

The correct resonance function to use, I suspect, would be the result of taking the FFT of a sine wave. This would take a bit more time to program. The interesting thing that derives from this idea, if true, is that other resonance models could be tested by taking the FFT of different samples. As a matter of fact, the FFT could be applied to the very timbre used as the voice of the synthesizer, making it theoretically possible to build resonance models for any natural sound, even one that is not resonant in nature.
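
A rough sketch of that idea, assuming a naive DFT (the function name is hypothetical, and a real implementation would use an FFT library instead of this O(n²) loop):

#include <cmath>
#include <vector>

//Builds a resonance table from the magnitude spectrum of an arbitrary sample.
//If `sample` holds exactly one period of a timbre, bin k of its DFT sits at
//k times the fundamental, so the magnitude at bin k says how strongly a
//frequency ratio of k should sympathize.
std::vector<float> resonanceTableFromSample(const std::vector<float> &sample,
                                            unsigned int bins){
    std::vector<float> table(bins, 0);
    const unsigned int n = sample.size();
    for(unsigned int k = 0; k < bins; k++){
        float re = 0, im = 0;
        for(unsigned int i = 0; i < n; i++){
            float angle = 2.f * (float)M_PI * k * i / n;
            re += sample[i] * cosf(angle);
            im -= sample[i] * sinf(angle);
        }
        table[k] = sqrtf(re*re + im*im) / n;//normalized magnitude
    }
    return table;
}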

Synthesizer code with resonance applied

With the wrong formula applied, the whole synthesizer code for a Bela board is the following:

#include <Bela.h>
#include <Midi.h>
#include <stdlib.h>
#include <Scope.h>
#include <cmath>


Scope scope;

namespace global{
    unsigned int sampleRate=0;
    unsigned int controlInterval=10;//how many samples it takes to update a control signal
    double secondsPerSample=0;
}

float sympathicResonanceTable[5000];//using step of 1/1000, hence getting to 5 times freq
//to make a lookup table of this function:https://www.desmos.com/calculator/cfu9rxzx1o
//which can be used to calculate how much a wave sympathizes with another.
//formula needs improvement. currently only octaves resonate
void fillSympathicResonanceTable(int positions, float hertzPerStep){
    sympathicResonanceTable[0]=0;//x=0 would divide by zero
    for(unsigned int step=1; step<(unsigned int)positions; step++){
        float x=step*hertzPerStep;
        sympathicResonanceTable[step]=(cosf(x*M_PI)/(x))+(1/x);
    }
}

struct envelope{
    float attack=0;
    float decay=0;
    float sustain=0;
    float release=0;
    
    float attack_delta=0;
    float decay_delta=0;
    float sustain_delta=0;
    float release_delta=0;
    
    void set(float a, float d,float s, float r){
        float fps=(float)global::sampleRate/global::controlInterval;
        if(fps==0) throw "sample rate or control interval is zero";
        attack=a;
        decay=d;
        sustain=s;
        release=r;
        attack_delta=1/(attack*fps);
        decay_delta=1/(decay*fps);
        sustain_delta=1/(sustain*fps);//note: sustain is a level, not a time, so this delta goes unused
        release_delta=1/(release*fps);
    }
};

envelope synthEnvelope;

//works weirdly. only some voices sound.
class Voice {
    public:
        static int count;
        unsigned int id;
        float currentPhase;
        float currentVelocity;
        float currentPhaseIncrement;
        int envState;
        float envLevel;
        int updateCount;
        bool isTriggered=false;
        bool restartPhase=true;
        float velocityToAttack=0.3;
        
        Voice() {
            id = count++;
            currentPhase = 0;
            currentVelocity = 0;
            currentPhaseIncrement = 0;
            envState = 0;
            envLevel = 0;
            updateCount = 0;
            
        }
        
        void triggerNoteOn(int note, int velocity){
            if(restartPhase)currentPhase=0;
            isTriggered=true;
            float gFreq = powf(2, note / 12.f) * 440.f;
            currentVelocity = velocity/127.f;
            currentPhaseIncrement = 2.f * (float) M_PI * gFreq / global::sampleRate;
            rt_printf("id: %d freq:%f, phaseInc:%6.5f, currentVelocity: %f\n", id, gFreq, currentPhaseIncrement, currentVelocity);
            envState=1;
        }
        
        void triggerNoteOff(){
            isTriggered=false;
        }
        
        float sample(){
            if(envState==0) return 0;
            currentPhase += currentPhaseIncrement;
            if(currentPhase > M_PI)
                currentPhase -= 2.f * (float)M_PI;
            if(updateCount>=global::controlInterval){
                envUpdate();
                updateCount=0;
            }
            
            float result=envLevel * sinf(currentPhase) ;
            /*
            float scope1=0;
            float scope2=0;
            float scope3=envState;
            
            scope.log(scope1, scope2, scope3);
            */
            updateCount++;
            return result;
        }
        
        void simpathize(Voice & otherVoice){
            float ratio=currentPhaseIncrement/otherVoice.currentPhaseIncrement;
            ratio*=1000;//mapping constant, has to be the inverse of hertzPerStep used when the table was built.
            int index=(int)floorf(ratio);
            if(index<0)index=0;
            if(index>4999)index=4999;//clamp: ratios beyond the table end would read out of bounds
            envLevel+=sympathicResonanceTable[index]/32;
            if(envLevel>1.2)envLevel=1.2;//prevent crazy resonance
        }
        
        void envUpdate(){
            switch (envState){
                case 0:
                    envLevel=0;
                    break;
                case 1://attack started
                    if(envLevel < 1 * currentVelocity){
                        //*currentVelocity compensates the fact that it'd take shorter to achieve 1*currentVelocity
                        //+(currentVelocity*velocityToAttack) actually makes attacks faster when more velocity
                        envLevel+=synthEnvelope.attack_delta*currentVelocity+(currentVelocity*velocityToAttack);
                    }else{
                        envState++;
                    }
                    break;
                case 2://decay started
                    if(!isTriggered){
                        envState++;
                    }else if(envLevel>synthEnvelope.sustain){
                        envLevel-=synthEnvelope.decay_delta;
                    }
                    break;
                case 3://sustain started
                    if(envLevel>synthEnvelope.sustain){
                        envLevel-=synthEnvelope.release_delta;
                    }else if(isTriggered){
                        //sustainiiing
                    }else{
                        envState++;
                    }
                    break;
                case 4://release started
                    if(envLevel>0){
                        envLevel-=synthEnvelope.release_delta;
                    }else{
                        envState++;
                    }
                    break;
                default://release ended, or state is out of range.
                    envState=0;
            }
        }
};
int Voice::count=0;

#define POLYPHONY 12
Voice testVoice [POLYPHONY];
Voice * noteOwners [128]; //contains which voice should be affected by a trigger off. there should be a better way.

void midiMessageCallback(MidiChannelMessage message, void* arg){
    // Note that this is called in a different thread than the audio processing one.
    static unsigned int useVoice=0;
    if(arg != NULL){
        rt_printf("Message from midi port %s ", (const char*) arg);
    }
    message.prettyPrint();
    if(message.getType() == kmmNoteOn){
        useVoice=0;
        unsigned int secondChoice=0;
        while(testVoice[useVoice].isTriggered || testVoice[useVoice].envState!=0){
            if(!testVoice[useVoice].isTriggered){
                //this voice is in release stage, could be used as a last resort
                secondChoice=useVoice;
            }
            useVoice++;
            if(useVoice>=POLYPHONY) {
                useVoice=secondChoice;
                break;
            }
        }
        
        int noteNumber=message.getDataByte(0);
        testVoice[useVoice].triggerNoteOn(noteNumber-69,message.getDataByte(1));
        noteOwners[noteNumber]=& testVoice[useVoice];
        
        
        //resonate!
        for(unsigned int vy=0; vy < POLYPHONY; vy++){
            if(testVoice[vy].isTriggered)
                testVoice[vy].simpathize(testVoice[useVoice]);
        }
        
    }
    if(message.getType() == kmmNoteOff){
        int noteNumber=message.getDataByte(0);
        if(noteOwners[noteNumber])//guard: a note-off for a note we never triggered would dereference null
            noteOwners[noteNumber]->triggerNoteOff();
    }
}


Midi midi;

const char* gMidiPort0 = "hw:1,0,0";

bool setup(BelaContext *context, void *userData){
    midi.readFrom(gMidiPort0);
    midi.writeTo(gMidiPort0);
    midi.enableParser(true);
    midi.setParserCallback(midiMessageCallback, (void*) gMidiPort0);
    global::sampleRate = context->audioSampleRate;
    global::secondsPerSample = 1.00/context->audioSampleRate;
    synthEnvelope.set(1,10,0,0.7);
    scope.setup(3, context->audioSampleRate);
    
    
    fillSympathicResonanceTable(5000, 1.f/1000);
    
    
    return true;
}

enum {kVelocity, kNoteOn, kNoteNumber};
void render(BelaContext *context, void *userData)
{
    for(unsigned int n = 0; n < context->audioFrames; n++){
        float value=0;
        for(unsigned int vn=0; vn < POLYPHONY; vn++){
            value+=testVoice[vn].sample()/(POLYPHONY/2);
            
        }
        
        
        for(unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
            audioWrite(context, n, ch, value);
    }
}

void cleanup(BelaContext *context, void *userData)
{

}

As a side note, the ADSR in this version is slightly different from the previous one, because the decay needed to be cancelled in case the note was released. This makes it possible to use zero sustain and a very long decay, the same way it happens on a piano; setup() calls synthEnvelope.set(1, 10, 0, 0.7) for exactly this reason. This helps the model work more accurately, applying the decay delta value to make the resonating waves fade out.

The code sometimes produces a segfault. I suspect this is because no thread-safety measures are taken, even though the MIDI event callbacks happen in a different thread from the audio processing, as noted in the code. Implementing this is also simple: it only takes isolating the event into a single thread-protected variable, and delegating the triggerNoteOn() call to the audio thread.
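
A minimal sketch of that delegation, assuming a single-producer, single-consumer handoff (the names are hypothetical, and a real version would probably queue several events instead of one):

#include <atomic>

struct NoteEvent {
    int note;
    int velocity;
    bool isNoteOn;
};

NoteEvent pendingEvent;//written by the MIDI thread only
std::atomic<bool> eventReady{false};//handoff flag between the two threads

//MIDI thread: store the event, then publish it.
void postEvent(int note, int velocity, bool isNoteOn){
    if(eventReady.load(std::memory_order_acquire)) return;//slot busy: drop the event
    pendingEvent = {note, velocity, isNoteOn};
    eventReady.store(true, std::memory_order_release);
}

//audio thread: consume at the start of each render() block.
void consumeEvent(){
    if(!eventReady.load(std::memory_order_acquire)) return;
    NoteEvent e = pendingEvent;//copy before clearing the flag
    eventReady.store(false, std::memory_order_release);
    //...dispatch e to triggerNoteOn()/triggerNoteOff() here, safely on the audio thread
}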

Also note, in the simpathize function, that I don't convert the phase deltas to actual frequencies, because we only need the relation between the two; I considered the delta interchangeable with frequency in this case. For the same reason, I did not multiply these deltas by the sampling rate (to convert sample time to real time) either: we need the relative relation between the two frequencies rather than absolute frequencies.
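
To make the cancellation explicit: each voice computes its increment as 2π·f/sampleRate, so dividing two increments gives

    (2π·f1/sampleRate) / (2π·f2/sampleRate) = f1/f2

which is exactly the frequency ratio the lookup table expects.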

The resulting synthesizer, although not physically accurate, is quite fun and inspiring to play with. In truth, this owes more to the sound of sine waves than to the sympathetic resonance; the latter feature adds value mostly in the longer run, by providing a richer spectrum of expressiveness.