
Understanding latency

+1 vote
332 views
Hi there

I’m slowly getting my head around Kyma and all the very wonderful things it can do.  It truly is spectacular.  One area I’m struggling a little to wrap my head around is latency, and when ‘realtime’ really is realtime.  First, please forgive any abuse of terminology or misunderstandings below.

My audio setup: Mac>Pacarana (via FW800)

MotuTrack16>Pacarana (via FW800)

UAD Apollo>Mac (via TB)

MotuTrack16>UAD Apollo (via ADAT)

UAD Apollo>MotuTrack16 (via ADAT)

UAD Apollo slaved to Track16 clock (48k)

DAW: Ableton.  I’m using a send to a bus that has an ‘External Audio Effect’ (sending to/receiving from ADAT), with Kyma as a processing box.  Ableton’s buffer delay with the UAD is 10 ms.

 

So, I understand that whilst Kyma computes on a sample-by-sample basis rather than on a vector of samples, there will still be latency introduced by the audio interface(s).  I believe if I use the Apollo’s cue-mix software instead of Ableton I can at least cut some delay on the return track.  I have the input/output buffer in the Kyma Preferences at 10 ms.  Is that fixed at 10 ms, or is it up to a maximum of 10 ms? Am I right in thinking that if a sound can’t compute within that 10 ms it will run ‘out of realtime’ and not play? I believe some prototypes/sounds impart a delay and so are not realtime… is there a way to determine the delay of a sound? At present I find I keep having to change the latency on my ‘External Audio Effect’ for each different Kyma sound…

Any help appreciated.  I have found some similar posts, but none that quite answer my question.  Forum posts regularly talk of Kyma’s realtime FFT ability… the FFT imparts a delay though, right (the window size)? So how is this different to, say, Max? Or is it because, after the delay is taken into account, the processing is sample by sample?

Thanks (from a Kyma noob).
asked Dec 7, 2017 in Using Kyma by ghood (Master) (3,020 points)

1 Answer

+3 votes

I have the input/output buffer in the Kyma Preferences at 10 ms.  Is that fixed at 10 ms, or is it up to a maximum of 10 ms?

The input-to-output time that you set in Kyma Preferences is constant (fixed). For example, having a fixed input-to-output time allows you to quickly learn and adapt to that fixed delay while performing live with Kyma (just as you can adapt to the propagation delay of sound through the air when you are performing with other musicians on stage).
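
To make that concrete, here is a quick back-of-the-envelope sketch (plain Python, not Kyma code; the 10 ms setting and the 48 kHz clock are taken from the question above) of what a fixed input-to-output time means in samples:

```python
# Convert a fixed input-to-output time to a sample count.
# Assumes the 48 kHz sample rate and 10 ms setting from the question.
SAMPLE_RATE_HZ = 48_000
IO_TIME_MS = 10  # value set in Kyma Preferences

delay_samples = int(SAMPLE_RATE_HZ * IO_TIME_MS / 1000)
print(f"{IO_TIME_MS} ms at {SAMPLE_RATE_HZ} Hz = {delay_samples} samples")
# -> 10 ms at 48000 Hz = 480 samples, whatever Sound happens to be running
```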

Am I right in thinking that if a sound can’t compute within that 10 ms it will run ‘out of realtime’ and not play?

If the entire signal flow graph cannot be computed in a single cycle (1 / SampleRate), that is considered 'out of realtime'. Some algorithms have 'bursts' of heavy processing followed by intervals of less intensive processing; these algorithms can benefit from the output buffer, because they can start filling it during the less intensive intervals. When you then hit a burst of intense computation, you can fall a little bit behind realtime while still outputting samples from the buffer at a regular, periodic sample rate.
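
As an illustration of that buffering idea, here is a toy model (plain Python with made-up numbers; not Kyma's actual scheduler) showing how a pre-filled output buffer can absorb a burst of heavy computation without starving the output:

```python
# Toy model: per-sample compute costs, in units of one sample period.
# A cost > 1.0 means that sample took longer than one cycle to compute.
costs = [0.5] * 400 + [3.0] * 40 + [0.5] * 400  # a burst in the middle

buffer_fill = 480   # output buffer pre-filled to 10 ms at 48 kHz
underrun = False
for cost in costs:
    # While one sample is being computed (taking `cost` cycles),
    # the output consumes `cost` samples and the producer adds one:
    buffer_fill += 1.0 - cost
    buffer_fill = min(buffer_fill, 480)  # buffer cannot overfill its size
    if buffer_fill < 0:
        underrun = True  # fell out of realtime: nothing left to play
        break

print("underrun" if underrun else "burst absorbed by the buffer")
```

With these numbers the burst drains 80 samples of the 480-sample cushion and the buffer refills afterwards, so the output never stalls; make the burst long enough and the loop reports an underrun instead.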

I understand that whilst Kyma computes on a sample-by-sample basis rather than on a vector of samples

One of the benefits of this is in the construction of long chains of processing in the signal flow editor: adding a module to the chain does not increase the delay time through that chain. You can add as many modules as you like; the entire signal flow graph is always computed on each cycle (the duration of a cycle is the inverse of the sample rate, so, for example, if your sample rate is 48 kHz, a single cycle lasts about 20.8 microseconds).
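
A minimal sketch of the per-sample model (plain Python; the three 'modules' are invented for illustration): the whole chain is evaluated once per cycle, so the output for sample n is ready in the same cycle that sample n arrives, no matter how many stages you add:

```python
# Each "module" maps one input sample to one output sample.
modules = [
    lambda x: x * 0.5,                  # attenuator
    lambda x: max(-0.8, min(0.8, x)),   # clipper
    lambda x: x ** 3,                   # waveshaper
]

def tick(sample):
    # The entire signal flow graph runs within a single 1/48000 s cycle:
    for module in modules:
        sample = module(sample)
    return sample  # produced in the same cycle it was read in, so the
                   # chain adds zero samples of delay, however long it is

print(tick(1.0))  # -> 0.125
```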

the FFT imparts a delay though right (the window size)

You're right, the concept of window length is built into the very definition of spectral analysis. In order to know the frequency of a signal, you have to wait until you've seen at least one full cycle of the lowest frequency of the time-domain waveform. For spectral analysis, the length of the window is inversely related to the frequency resolution of the analysis you produce. In other words, the longer the window (sometimes called a spectral frame), the lower the fundamental you can detect, the more frequency bands you can analyze, and the closer together those frequency bands can be. This principle is fundamental (no pun intended) to spectral analysis, no matter what algorithm or which software implementation you are using to perform the analysis.

Or is it because after the delay is taken into account the processing is sample by sample?

Exactly. Before you can use spectral features in other Sounds, you have to wait long enough for the time signal to be "observed" and for those features to be extracted from the observed signal. Once the content is known, it can be utilized by other Sounds on a sample-by-sample basis.
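
Here is a hypothetical sketch of that two-stage pattern (plain Python with NumPy; the gain computation is invented for illustration and is not Kyma's implementation): a spectral feature is extracted once per analysis window, then applied to every individual sample that follows:

```python
import numpy as np

SAMPLE_RATE_HZ = 48_000
WINDOW = 1024                      # analysis window: ~21 ms of latency

signal = np.random.randn(WINDOW * 4)

out = np.zeros_like(signal)
gain = 1.0                         # no spectral info until one window has passed
for n, x in enumerate(signal):
    if n % WINDOW == 0 and n > 0:
        # Feature extraction happens once per window (the "observation" delay)...
        spectrum = np.fft.rfft(signal[n - WINDOW:n])
        gain = 1.0 / (1.0 + np.abs(spectrum).mean())
    # ...but once the feature is known, it is applied sample by sample.
    out[n] = gain * x
```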

 

answered Dec 7, 2017 by ssc (Savant) (126,620 points)
Thanks very much for the detailed answers. Very helpful.
...