Bela
Real-time, ultra-low-latency audio and sensor processing system for BeagleBone Black
This example demonstrates how to apply a feedback delay with an incorporated low-pass filter to an audio signal.
In order to create a delay effect we need to allocate a buffer (i.e. an array of samples) that holds previous samples. Every time we output a sample we need to go back in time and retrieve the sample that occurred n samples ago, where n is our delay in samples. The buffer allows us to do just this: for every incoming sample we write its value into the buffer. We use a so-called write pointer (gDelayBufWritePtr) to keep track of the index of the buffer we need to write into. This write pointer is incremented on every sample and wrapped back around to 0 when it reaches the last index of the buffer (a technique commonly referred to as a circular buffer or a ring buffer).
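As a rough sketch of this idea (not the exact code of this example), a mono circular buffer with a write pointer could look like the following. The buffer size, delay length and the helper function processDelaySample() are assumptions made for illustration; the variable names mirror the globals described here.

#include <vector>

// Illustrative globals; the sizes and values are assumptions, not the example's actual settings.
const int kDelayBufferSize = 44100;                 // enough for 1 second of delay at 44.1 kHz
std::vector<float> gDelayBuffer(kDelayBufferSize, 0.0f);
int gDelayBufWritePtr = 0;                          // index we write the next sample into
int gDelayInSamples = 22050;                        // delay length n, here half a second

float processDelaySample(float in)
{
    // Write the incoming sample at the write pointer.
    gDelayBuffer[gDelayBufWritePtr] = in;

    // Read the sample that was written gDelayInSamples ago, wrapping around the buffer.
    int readPtr = (gDelayBufWritePtr - gDelayInSamples + kDelayBufferSize) % kDelayBufferSize;
    float delayed = gDelayBuffer[readPtr];

    // Advance the write pointer and wrap it back around to 0 at the end of the buffer.
    if(++gDelayBufWritePtr >= kDelayBufferSize)
        gDelayBufWritePtr = 0;

    return delayed;
}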
We go a bit further by applying feedback and filtering to the delay in order to make the effect more interesting. To apply feedback to the delay, we take the sample that occurred gDelayInSamples samples ago, multiply it by our gDelayFeedbackAmount parameter and add it to the dry input signal that we will write into the buffer. This way, there will always be a trace of the previously delayed sample in the output that will slowly fade away over time.
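A minimal sketch of the feedback step, reusing the buffer, write pointer and delay length from the sketch above; the feedback value here is an arbitrary example.

float gDelayFeedbackAmount = 0.5f;  // example value; 0 means no feedback, values near 1 decay slowly

float processFeedbackDelaySample(float in)
{
    // Retrieve the sample that occurred gDelayInSamples ago.
    int readPtr = (gDelayBufWritePtr - gDelayInSamples + kDelayBufferSize) % kDelayBufferSize;
    float delayed = gDelayBuffer[readPtr];

    // Scale the delayed sample by the feedback amount and add it to the dry input before
    // writing into the buffer, so each echo leaves a fading trace of the previous ones.
    gDelayBuffer[gDelayBufWritePtr] = in + delayed * gDelayFeedbackAmount;

    if(++gDelayBufWritePtr >= kDelayBufferSize)
        gDelayBufWritePtr = 0;

    return delayed;
}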
Next, we apply a low-pass filter. We have pre-calculated the coefficients required for a Butterworth low-pass filter, implemented here as a biquad, which is expressed as follows: y = a0*x0 + a1*x1 + a2*x2 + a3*y1 + a4*y2, where x0 is the current input (i.e. unfiltered) sample, x1 and x2 are the two previous input samples, and y1 and y2 are the two previous output (i.e. filtered) samples. We keep track of these previous input and output samples for each channel using global variables in order to apply the filter.
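As a sketch, the difference equation above can be applied per channel like this; the coefficient values are placeholders (in the example they are pre-calculated for the Butterworth response), and the state variable names are made up for illustration.

// Filter coefficients: placeholders only, not real Butterworth values.
float gA0 = 1.0f, gA1 = 0.0f, gA2 = 0.0f, gA3 = 0.0f, gA4 = 0.0f;
// Filter state for one channel: previous two inputs and previous two outputs.
float gX1 = 0.0f, gX2 = 0.0f;
float gY1 = 0.0f, gY2 = 0.0f;

float lowPassSample(float x0)
{
    // y = a0*x0 + a1*x1 + a2*x2 + a3*y1 + a4*y2
    float y0 = gA0 * x0 + gA1 * gX1 + gA2 * gX2 + gA3 * gY1 + gA4 * gY2;

    // Shift the state: the current samples become the "previous" ones for the next call.
    gX2 = gX1;
    gX1 = x0;
    gY2 = gY1;
    gY1 = y0;

    return y0;
}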
Finally we take the processed sample for each channel and write it into the corresponding delay buffer (gDelayBuffer_l and gDelayBuffer_r), so that in the future (after gDelayInSamples samples) we can retrieve it again! Last but not least, we read the sample from the buffer that was written gDelayInSamples samples ago and add it to the output.
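Sketched for both channels, assuming two buffers that share the write pointer and delay length defined in the earlier sketches; the function and its signature are illustrative, not the example's actual code.

std::vector<float> gDelayBuffer_l(kDelayBufferSize, 0.0f);
std::vector<float> gDelayBuffer_r(kDelayBufferSize, 0.0f);

void writeAndReadStereoDelay(float processedL, float processedR,
                             float &delayedL, float &delayedR)
{
    // Store the processed samples so they can be retrieved gDelayInSamples samples from now.
    gDelayBuffer_l[gDelayBufWritePtr] = processedL;
    gDelayBuffer_r[gDelayBufWritePtr] = processedR;

    // Retrieve the samples that were written gDelayInSamples ago; the caller
    // adds these to the dry signal at the output.
    int readPtr = (gDelayBufWritePtr - gDelayInSamples + kDelayBufferSize) % kDelayBufferSize;
    delayedL = gDelayBuffer_l[readPtr];
    delayedR = gDelayBuffer_r[readPtr];

    // Advance and wrap the shared write pointer.
    if(++gDelayBufWritePtr >= kDelayBufferSize)
        gDelayBufWritePtr = 0;
}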
Note that we have two ways of changing the volume of the delay effect. One way is to change the overall gain using the gDelayAmount parameter: this will immediately raise or lower the volume of the delayed signal. The other option is to use the gDelayAmountPre parameter, which applies gain to the input of the delay line. The advantage of using this parameter is that when turning down the gain we can let the delay ring out while not letting any new input into the effect. Conversely, we can introduce the delay effect naturally without fading in previous output of the effect.
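To make the difference between the two gain stages concrete, here is a sketch of where each one sits in the signal flow (feedback and filtering omitted for brevity; the values are arbitrary examples).

float gDelayAmount = 1.0f;      // post gain: scales the delayed signal at the output
float gDelayAmountPre = 0.75f;  // pre gain: scales the input fed into the delay line

float processWithGains(float in)
{
    // Read the delayed sample as before.
    int readPtr = (gDelayBufWritePtr - gDelayInSamples + kDelayBufferSize) % kDelayBufferSize;
    float delayed = gDelayBuffer[readPtr];

    // Pre gain: with gDelayAmountPre turned down, no new input enters the delay line,
    // so the existing echoes ring out naturally.
    gDelayBuffer[gDelayBufWritePtr] = in * gDelayAmountPre;
    if(++gDelayBufWritePtr >= kDelayBufferSize)
        gDelayBufWritePtr = 0;

    // Post gain: immediately raises or lowers the volume of the delayed signal.
    return in + delayed * gDelayAmount;
}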