Warren L. G. Koontz
November 16, 2013
Abstract
Multiverb is an audio signal processing algorithm that provides a reverberation
effect to a digital audio signal. The algorithm is implemented as two C++
classes called Multiverb and DampedDelayLine. The current EffectReverb class
in Audacity is modified to use Multiverb objects to add reverberation to each
block of audio data (mono or stereo). Multiverb objects in turn use
DampedDelayLine objects.
1 Theory
We begin with an overview of the theory behind Multiverb. For more detail
please see [1].
The reverberator consists of two multi-port acoustic junctions connected by a
bank of parallel wave-guides. Four sets of waves are involved: waves entering
the left junction from the wave-guides (X_+), entering the wave-guides from the
left junction (X_-), entering the right junction from the wave-guides (Y_+) and
entering the wave-guides from the right junction (Y_-). The input x_+ enters
the system at the left junction. There is no incoming wave at the right
junction (i.e., y_+ = 0), but there is a secondary output x_- at the left
junction, which is ignored in the current version of the reverberator.
The waves are related by the junction scattering matrices as follows:
X_- = A X_+ + B x_+
x_- = C X_+
Y_- = A Y_+
y_- = C Y_+
where

A = (1/N) O_N - I_N
B = [1 1 ... 1]^T
C = [1 1 ... 1] / N

Here O_N is the N x N matrix of ones and I_N is the N x N identity matrix.
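Because A, B and C have this special structure, the matrix products in the
scattering equations reduce to simple vector operations: A v is mean(v) - v
elementwise, B x adds the scalar x to every component, and C v is just the mean
of v. A minimal sketch of this idea (the function names are illustrative, not
taken from the actual classes):

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// A = (1/N) O_N - I_N, so (A v)_i = mean(v) - v_i.
// No matrix is ever formed; the product costs O(N).
std::vector<float> applyA(const std::vector<float>& v) {
    float mean = std::accumulate(v.begin(), v.end(), 0.0f) / v.size();
    std::vector<float> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        out[i] = mean - v[i];
    return out;
}

// C = [1 1 ... 1]/N, so C v is simply the mean of v.
float applyC(const std::vector<float>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0f) / v.size();
}
```

This is why an order-N reverberator costs O(N) rather than O(N^2) arithmetic
per sample.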
Each direction of propagation in a wave-guide is modeled as a damped delay
line (see Section 2.1), where d_n, g_n and D_n are the damping, gain and sample
delay of the nth wave-guide. The sample delays are prime numbers distributed
over a range whose upper limit is a user parameter. Given the sample delay, the
damping and gain coefficients are determined from the low-frequency and
high-frequency reverberation times (also user-set parameters) by solving the
following equations:
g_n / (1 - d_n) = 10^(-3 D_n / (f_s T_60^LF))
g_n / (1 + d_n) = 10^(-3 D_n / (f_s T_60^HF))
where fs is the sampling rate.
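These two equations have a closed-form solution. Writing r_LF for the
right-hand side of the first equation and r_HF for the second, dividing one
equation by the other gives d_n = (r_LF - r_HF)/(r_LF + r_HF), and
substituting back gives g_n = 2 r_LF r_HF/(r_LF + r_HF). A sketch of this
computation (the function name is illustrative; in the actual code this work
belongs to DampedDelayLine::init):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Solve g/(1 - d) = rLF and g/(1 + d) = rHF for the damping
// coefficient d and the gain coefficient g of one wave-guide.
std::pair<float, float> dampingAndGain(unsigned D, float t60LF,
                                       float t60HF, float fs) {
    float rLF = std::pow(10.0f, -3.0f * D / (fs * t60LF));
    float rHF = std::pow(10.0f, -3.0f * D / (fs * t60HF));
    float d = (rLF - rHF) / (rLF + rHF);
    float g = 2.0f * rLF * rHF / (rLF + rHF);
    return {d, g};
}
```

Note that a shorter high-frequency reverberation time (t60HF < t60LF) gives
rHF < rLF and hence a positive damping coefficient, as expected.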
The following table summarizes the user-set parameters for this algorithm:

N          number of wave-guides (a.k.a. the order)
D_max      sample delay upper limit
T_60^LF    low-frequency reverberation time
T_60^HF    high-frequency reverberation time

Note that this algorithm produces a purely "wet" output. The intention is
to combine this with the original dry signal to produce the final result. This
can be controlled by the user via wet and dry gain controls.
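The wet/dry combination itself is a simple per-sample weighted sum. A minimal
sketch (the function name and gain handling are illustrative; in Audacity the
gains are user controls, not part of Multiverb):

```cpp
#include <cassert>
#include <cstddef>

// Combine the dry input and the wet reverberator output
// into the final result, sample by sample.
void mixWetDry(std::size_t L, const float* dry, const float* wet,
               float dryGain, float wetGain, float* out) {
    for (std::size_t n = 0; n < L; ++n)
        out[n] = dryGain * dry[n] + wetGain * wet[n];
}
```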
2 C++ Implementation
The algorithm is implemented using two C++ classes: DampedDelayLine and
Multiverb. DampedDelayLine implements the wave-guide difference equation, and
Multiverb uses DampedDelayLine objects to implement the main algorithm.
2.1 DampedDelayLine
This class models one direction of propagation in an acoustic waveguide. The
input and output are sequences of samples of an audio signal. The output y
and the input x are related by the difference equation
y_n = d y_{n-1} + g x_{n-D}
where d is the damping coefficient, g is the gain coefficient and D (an integer) is
the sample delay. The DampedDelayLine class includes the following methods:
void init(unsigned D, float t60LF, float t60HF, float fs)
Initialize DDL object. D = sample delay, t60LF = low frequency reverberation
time, t60HF = high frequency reverberation time, fs = sampling
rate. Computes damping and gain coefficients and allocates and initializes
buffer.
void put(float x)
Enter new input sample. x = sample value.
float get(void)
Get current output sample. Returns sample value.
void step(void)
Advance DDL object through one sample period. Updates output sample
using difference equation and updates buffer.
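A minimal sketch of how such a class might be implemented. The member names
and the put/step/get ordering are assumptions, not the actual Audacity code;
the ring buffer holds the last D input samples and y carries the recursive
damping state:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy damped delay line implementing y_n = d*y_{n-1} + g*x_{n-D}.
class ToyDDL {
public:
    void init(unsigned D, float t60LF, float t60HF, float fs) {
        float rLF = std::pow(10.0f, -3.0f * D / (fs * t60LF));
        float rHF = std::pow(10.0f, -3.0f * D / (fs * t60HF));
        d = (rLF - rHF) / (rLF + rHF);       // damping coefficient
        g = 2.0f * rLF * rHF / (rLF + rHF);  // gain coefficient
        buf.assign(D, 0.0f);                 // last D input samples
        idx = 0;
        xin = y = 0.0f;
    }
    void put(float x) { xin = x; }   // enter new input sample
    float get() const { return y; }  // current output sample
    void step() {                    // advance one sample period
        y = d * y + g * buf[idx];    // buf[idx] holds x_{n-D}
        buf[idx] = xin;              // store x_n; read back in D steps
        idx = (idx + 1) % buf.size();
    }
private:
    std::vector<float> buf;
    std::size_t idx = 0;
    float d = 0.0f, g = 0.0f, xin = 0.0f, y = 0.0f;
};
```

Driving the sketch with an impulse produces its first nonzero output D
steps later, scaled by g, with each subsequent sample damped by d.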
2.2 Multiverb
This class models a reverberation mechanism consisting of two multi-port
acoustic junctions connected by a bank of parallel wave-guides. The Multiverb
class includes the following methods:
void create(unsigned N, unsigned maxDelay, float t60LF, float t60HF, float fs)
Create Multiverb object. N = number of wave-guides, maxDelay = upper
limit on wave-guide sample delay. t60LF, t60HF and fs as above. Allocates
and initializes DDL objects (N eastbound and N westbound). Uses
simple algorithm to generate a roughly geometric sequence of N prime
sample delay values.
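One plausible version of such an algorithm (the lower limit minDelay and the
rounding strategy are assumptions; the actual implementation may differ):
space N targets geometrically between a minimum delay and maxDelay, then
advance each target to the next prime, keeping the values distinct.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

static bool isPrime(unsigned n) {
    if (n < 2) return false;
    for (unsigned k = 2; k * k <= n; ++k)
        if (n % k == 0) return false;
    return true;
}

// Generate N distinct primes roughly geometrically spaced
// between minDelay and maxDelay.
std::vector<unsigned> primeDelays(unsigned N, unsigned minDelay,
                                  unsigned maxDelay) {
    std::vector<unsigned> delays;
    double ratio = std::pow(double(maxDelay) / minDelay,
                            1.0 / (N > 1 ? N - 1 : 1));
    unsigned prev = 0;
    for (unsigned k = 0; k < N; ++k) {
        unsigned target = unsigned(minDelay * std::pow(ratio, k) + 0.5);
        unsigned p = std::max(target, prev + 1);
        while (!isPrime(p)) ++p;  // advance to the next prime
        delays.push_back(p);
        prev = p;
    }
    return delays;
}
```

Because each target is rounded up to a prime, the largest value in this
sketch may slightly exceed maxDelay; a production version would search
downward for the final entry.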
void processBlock(unsigned L, float *x, float *y)
Process a block of L samples. x = input (dry) samples, y = output (wet)
samples. Uses DDL methods and matrix multiplication to implement
algorithm described in Section 1.
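To illustrate the structure of the per-sample loop, here is a simplified,
self-contained sketch that replaces the damped delay lines with plain
(undamped, unit-gain) delays of length D_i, implemented as std::deque. Only
the scattering arithmetic from Section 1 is shown, with A v computed as
mean(v) - v, B x as adding x to every component, and C v as the mean:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Simplified scattering loop: two junctions joined by N wave-guide
// pairs (eastbound carries X- toward the right junction, where it
// arrives as Y+; westbound carries Y- back, arriving as X+).
void toyProcessBlock(unsigned L, const std::vector<unsigned>& delays,
                     const float* x, float* y) {
    std::size_t N = delays.size();
    std::vector<std::deque<float>> east(N), west(N);
    for (std::size_t i = 0; i < N; ++i) {
        east[i].assign(delays[i], 0.0f);  // undamped delay of D_i samples
        west[i].assign(delays[i], 0.0f);
    }
    for (unsigned n = 0; n < L; ++n) {
        float sumXp = 0.0f, sumYp = 0.0f;
        for (std::size_t i = 0; i < N; ++i) {
            sumXp += west[i].front();  // X+ arriving at left junction
            sumYp += east[i].front();  // Y+ arriving at right junction
        }
        float meanXp = sumXp / N, meanYp = sumYp / N;
        y[n] = meanYp;  // y- = C Y+
        for (std::size_t i = 0; i < N; ++i) {
            float Xm = meanXp - west[i].front() + x[n];  // A X+ + B x+
            float Ym = meanYp - east[i].front();         // A Y+
            west[i].pop_front();
            east[i].pop_front();
            east[i].push_back(Xm);  // send toward right junction
            west[i].push_back(Ym);  // send back toward left junction
        }
    }
}
```

The actual processBlock replaces the deques with DampedDelayLine objects, so
each traveling wave is also delayed, damped and scaled per the difference
equation of Section 2.1.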
References
[1] Warren Koontz. Multiport acoustic models with applications in audio signal
processing. J. Audio Eng. Soc, 61(10):727–736, 2013.