Sound localization with virtual channel mixing was just too poor, so while
per-source HRTF mixing is more costly, it's unavoidable if you want good
localization.
This is only partially reverted because having the virtual channels is still
beneficial, particularly with B-Format rendering and effect mixing, which
would otherwise skip HRTF processing. As before, the number of virtual
channels can potentially be customized, specifying more or fewer channels
depending on the system's needs.
This new method mixes sources normally into a 14-channel buffer, with the
channels placed all around the listener. HRTF is then applied to each channel
according to its position, and the result is written to a 2-channel buffer,
which gets written out to the device.
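As a rough sketch of that flow (names like NUM_VIRT_CHANNELS, Hrir, and
ApplyHrtfPostProcess are illustrative placeholders, not the mixer's actual
identifiers):

/* Sketch only: all names and sizes here are illustrative placeholders. */
#define NUM_VIRT_CHANNELS 14
#define HRIR_LENGTH       32
#define BUFFER_SAMPLES    1024

typedef struct {
    float left[HRIR_LENGTH];  /* left-ear impulse response */
    float right[HRIR_LENGTH]; /* right-ear impulse response */
} Hrir;

/* Filters each virtual channel through the HRIR pair matching its fixed
 * position, accumulating into the stereo output buffer (assumed to be
 * zero-initialized by the caller). */
static void ApplyHrtfPostProcess(const float virt[NUM_VIRT_CHANNELS][BUFFER_SAMPLES],
                                 const Hrir hrirs[NUM_VIRT_CHANNELS],
                                 float out[BUFFER_SAMPLES][2], int num_samples)
{
    for(int c = 0;c < NUM_VIRT_CHANNELS;c++)
    {
        for(int s = 0;s < num_samples;s++)
        {
            /* Zero history is assumed at the buffer start; a real mixer
             * would carry samples over from the previous buffer. */
            for(int t = 0;t <= s && t < HRIR_LENGTH;t++)
            {
                out[s][0] += virt[c][s-t] * hrirs[c].left[t];
                out[s][1] += virt[c][s-t] * hrirs[c].right[t];
            }
        }
    }
}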
This method has the benefit that HRTF processing becomes more scalable. The
costly HRTF filters are applied to the 14-channel buffer after the mix is done,
turning it into a post-process with a fixed overhead. Mixing sources is done
with normal non-HRTF methods, so increasing the number of playing sources only
incurs normal mixing costs.
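As an illustrative back-of-the-envelope comparison (the numbers are made up):
with 32 playing sources and 32-tap HRIRs, per-source HRTF costs roughly
32 sources x 32 taps x 2 ears = 2048 filter multiply-adds per output sample,
growing with every added source. With the virtual-channel approach the filter
cost is fixed at 14 channels x 32 taps x 2 ears = 896 multiply-adds per
sample regardless of source count, plus only the ordinary panned-mix cost per
source.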
Another benefit is that it improves B-Format playback, since the soundfield
gets mixed into virtual speakers covering all three dimensions, which then
get filtered based on their locations.
The main downside to this is that the spatial resolution of the HRTF dataset
does not play a big role anymore. However, the hope is that with ambisonics-
based panning, the perceptual position of panned sounds will still be good. It
is also an option to increase the number of virtual channels for systems that
can handle it, or maybe even decrease it for weaker systems.
The current gain gets explicitly set to the target when the stepping is
finished, to ensure the target value is actually used. Doing it this way also
allows a fade to be asynchronously 'canceled' by setting the counter to 0.
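A minimal sketch of that stepping logic, assuming placeholder names
(FadingGain, NextGain) for whatever the mixer actually uses:

/* Sketch only; the struct and function names are placeholders. */
typedef struct {
    float current; /* gain applied to the next sample */
    float target;  /* gain to land on when the fade completes */
    float step;    /* per-sample increment */
    int counter;   /* samples left in the fade; 0 means no fade active */
} FadingGain;

static float NextGain(FadingGain *g)
{
    if(g->counter > 0)
    {
        g->current += g->step;
        if(--g->counter == 0)
        {
            /* Explicitly land on the target so rounding error from the
             * stepping doesn't leave the gain slightly off. */
            g->current = g->target;
        }
    }
    return g->current;
}

Since the stepping only runs while the counter is non-zero, writing 0 to the
counter from another thread freezes the gain at its current value, which is
the asynchronous 'cancel' described above.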
The click removal and pending click buffers are still used for auxiliary
sends. However, they should go away soon enough too, and then we won't have
to mess around with calculating extra "predictive" samples in the mixer.
This fades the dry mixing gains using a logarithmic curve, which should produce
a smoother transition than a linear one. It functions similarly to a linear
fade except that
step = (target - current) / numsteps;
...
gain += step;
becomes
step = powf(target / current, 1.0f / numsteps);
...
gain *= step;
where 'target' and 'current' are clamped to a lower bound greater than 0,
since 0 has no representation on a logarithmic scale.
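Put together, a minimal sketch of the logarithmic fade (where
GAIN_SILENCE_THRESHOLD is a placeholder name for the lower clamping bound):

#include <math.h>

#define GAIN_SILENCE_THRESHOLD 0.00001f /* placeholder lower bound */

/* Computes 'numsteps' per-sample gains fading 'current' toward 'target'
 * along a logarithmic curve. */
static void LogFade(float *gains, float current, float target, int numsteps)
{
    /* Clamp both ends away from 0, which has no logarithmic representation. */
    if(current < GAIN_SILENCE_THRESHOLD) current = GAIN_SILENCE_THRESHOLD;
    if(target < GAIN_SILENCE_THRESHOLD) target = GAIN_SILENCE_THRESHOLD;

    float gain = current;
    float step = powf(target/current, 1.0f/(float)numsteps);
    for(int i = 0;i < numsteps;i++)
    {
        gains[i] = gain;
        gain *= step;
    }
    /* Explicitly end on the target to absorb any rounding error. */
    if(numsteps > 0)
        gains[numsteps-1] = target;
}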
Consequently, the non-HRTF direct mixers no longer feed into the click
removal and pending click buffers, as the per-sample fading does an adequate
job of stopping the clicks and pops caused by extreme gain changes. These
buffers should be removed shortly.
This update allows for much more flexibility in the HRTF data. It also allows
for HRTF table file names to include "%r" to represent the device's playback
rate (e.g. if you set hrtf-%r.mhr, then it will try to use hrtf-44100.mhr or
hrtf-48000.mhr depending on whether the device's output rate is 44100 or 48000,
respectively).
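A sketch of how that substitution might look (illustrative code, not the
library's actual implementation):

#include <stdio.h>
#include <stddef.h>

/* Expands "%r" in an HRTF file name template to the device's playback rate,
 * e.g. "hrtf-%r.mhr" with rate 48000 gives "hrtf-48000.mhr". */
static void ExpandHrtfName(char *out, size_t outlen, const char *fmt, unsigned rate)
{
    size_t o = 0;
    for(const char *p = fmt;*p && o+1 < outlen;p++)
    {
        if(p[0] == '%' && p[1] == 'r')
        {
            int n = snprintf(out+o, outlen-o, "%u", rate);
            if(n < 0 || (size_t)n >= outlen-o) break; /* truncated; bail out */
            o += (size_t)n;
            p++; /* skip the 'r' */
        }
        else
            out[o++] = *p;
    }
    out[o] = '\0';
}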
The makehrtf utility has also been updated to support more options and input
file formats, as well as the new mhr format.