
Why does the DynamicRangeController introduce a 140ms delay?

0 votes
501 views

I was using a compressor (DynamicRangeController) to condition the input to a distortion (InputOutputCharacteristic), with a wet/dry mix using a crossfader. But when crossfading between the wet and dry signals I could hear an appreciable delay in the wet signal of about 100-200 ms. It's the compressor that is causing the delay, and it's there irrespective of the AttackTime and ReleaseTime settings.

Here's a recording demonstrating the delay: http://kyma.symbolicsound.com/qa/?qa=blob&qa_blobid=5713368294979155934

This is the Sound diagram of my simplified Sound just going through the compressor:

And my parameter settings:

Is there any way to avoid this delay?

Maybe I should ask this question differently. What I'm trying to do is implement an automatic gain control in front of a distortion effect, so quiet signals get boosted and then clipped just like loud signals do. I thought the compressor would be a way of doing that, but if it has an inherent delay of 140 ms, perhaps there's a better approach?

asked Sep 26, 2017 in Using Kyma by alan-jackson (Virtuoso) (15,840 points)
edited Sep 27, 2017 by alan-jackson

2 Answers

0 votes

Hi Alan,

Did you see the Delay parameter in the DynamicRangeController? By default it is set to 0.001 s; maybe that's what you are hearing?

The reason for that parameter is to give the DynamicRangeController some "look-ahead", so when it is used as a limiter (your settings are nearly brick-wall at -40 dB) it won't miss any samples. The envelope follower on the sidechain input takes AttackTime seconds to reach the peak, so usually you would delay the input by the same amount.

Long story short: either set the delay to 0 or delay the dry signal by the same amount.
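
In case it helps, here's a rough sketch of the second option (plain Python/NumPy, not Kyma; the function name, the 44.1 kHz rate and the 0.001 s figure are just for illustration):

```python
import numpy as np

def align_dry_with_wet(dry, lookahead_s, sample_rate=44100.0):
    """Delay the dry path by the compressor's look-ahead time so that a
    wet/dry crossfade stays time-aligned (no audible offset)."""
    delay_samples = int(round(lookahead_s * sample_rate))
    # Prepend silence and trim back to the original length.
    return np.concatenate([np.zeros(delay_samples), dry])[:len(dry)]

# Example: the default 0.001 s look-ahead at 44.1 kHz is a 44-sample delay.
# dry_aligned = align_dry_with_wet(dry, 0.001)
```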

Cheers,

Gustav

answered Sep 27, 2017 by kymaguy (Virtuoso) (10,580 points)
Hi Gustav,

I should have said in my original question that I had set the Delay parameter to 0. I'm still hearing what looks like a 140 ms delay (measuring the distance between the peaks of a recording in Audacity).

I've just edited my original question. I think what I'm really after is a kind of automatic gain control. Perhaps there's a better way of doing that? How do you make an (extreme) Automatic Gain Control?
0 votes
Hi Alan,

In answer to your question about automatic gain control: if you search in the Sound Browser for the Sound name "automatic gain", this one shows up:

"Automatic Gain Control on input"

found in Kyma 7 Folder/Kyma Sound Library/Feature extraction/Amplitude tracking*.kym

The basic idea is to use an AmplitudeFollower or PeakDetector on the live input, invert the result, and then multiply the input by that inverse envelope.
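
Sketched outside of Kyma (plain Python, just to illustrate the structure; the parameter names and values are arbitrary), the idea is roughly:

```python
import numpy as np

def automatic_gain_control(x, sample_rate=44100.0,
                           attack_s=0.01, release_s=0.2,
                           target=0.5, floor=1e-3):
    """Track the input's amplitude with a simple envelope follower and
    multiply by its inverse so the output hovers around `target`."""
    attack = np.exp(-1.0 / (attack_s * sample_rate))
    release = np.exp(-1.0 / (release_s * sample_rate))
    env = 0.0
    y = np.empty_like(x, dtype=float)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: fast coefficient when rising, slow when falling.
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        # Inverse envelope as gain; `floor` keeps silence from exploding.
        y[i] = s * (target / max(env, floor))
    return y
```

The `floor` plays a role similar to the NoiseGate mentioned below: it stops the gain from shooting up when the input is effectively silent.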

You might also take a look at the NoiseGate prototype in case you want to ignore the open mic when there's no live input being generated.
answered Sep 28, 2017 by ssc (Savant) (127,060 points)
I had a look at "Automatic Gain Control on input".

There are two things about it I don't understand.

First, the output of the PeakDetector goes through a Level Sound with the Left and Right levels set to -1. That would just negate the peak signal, not subtract it from 1. And sending that to the VCA, wouldn't that just invert the original signal rather than change its amplitude?

I tried replacing the Level with a OneMinusInput Sound, but that didn't really work either. Instead of subtracting the Peak from 1, don't we need to find 1 / Peak?

Is there a Sound that will give 1 / Input?
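
Here's a quick numeric check (plain Python, outside Kyma) of what I mean, comparing the three candidate gains for a quiet input peaking at 0.1 and a loud one peaking at 0.8:

```python
for peak in (0.1, 0.8):
    gains = {"-1 * peak": -peak,      # Level of -1: negated envelope
             "1 - peak":  1 - peak,   # OneMinusInput
             "1 / peak":  1 / peak}   # reciprocal
    for name, gain in gains.items():
        print(f"peak={peak}, gain={name}: output peak ~ {abs(peak * gain):.2f}")
# Only 1/peak levels both inputs to ~1.0; 1 - peak leaves them uneven
# (0.09 vs 0.16), and -1 * peak just scales each input by its own level.
```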
...