
Density of Stochastic Triggers or: Where do the magic numbers in the CloudBank density calculation come from?

0 votes

In the CloudBank-Resynthesis Prototype the density of the triggers is provided by a calculation in the CentreValue of a Noise Sound. 



I understand what this expression says and I understand that by moving the CentreValue of the Noise we're changing the density of values that will be greater than 0 and therefore trigger the CloudBank. But I don't understand why this expression is the way it is and where the two magic numbers 1.5015014821509d and -6.66 come from. 

Do those numbers have a special meaning and how does this work?

If you want to know why I'm asking, I'm making a Sound with stochastic triggering and I want to create a density of triggers that roughly equates to a frequency. 

I've made a little Sound for comparing the density of a regular pulse train with a random trigger source.


asked Jan 10 in Using Kyma by alan-jackson (Virtuoso) (14,360 points)
edited Jan 10 by alan-jackson

2 Answers

+1 vote
Best answer

To summarize the discussion in the comments, in your test example, the GateToPulse Sound ignores consecutive triggers from the Threshold, thus causing the difference between the pulse train and the noise density measurements (because truly random triggers can occur on consecutive samples).

If you reduce the Noise trigger probability by the likelihood of two consecutive samples in a row being above the threshold, then the two branches (noise and pulse train densities) track each other a lot more closely. The compensated CenterValue for the Noise would be:

(1 - (2 * !Frequency hzToSignal)) sqrt negated

answered 6 days ago by ssc (Savant) (112,450 points)
selected 6 days ago by alan-jackson
+1 vote

The overall effect of this expression is to "warp" the !Density fader so that it spends more of its throw at smaller numbers, with only a small proportion of its distance devoted to larger numbers. 

The expression is actually (!Density * (0.666 inverse - !Density * -6.66) twoExp * 2 - 1), which would be equivalent to (6.66 * !Density) twoExp / 1024 * !Density (which is then scaled and offset to be in the range of [-1,1] so it can serve as the center value of the Noise).

For Density = 0, this would be (6.66 * 0) twoExp / 1024 * 0 = 1 / 1024 * 0 = 0

For Density = 1, this would be (6.66 * 1) twoExp / 1024 * 1 = 0.098755153537997d

so the range is approximately [0, 0.1]

In other words, it compresses the output range of the !Density fader to roughly [0, 0.1] and warps it so that more of its throw is spent at the smaller values.
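To see the warp numerically, here is a Python sketch of the algebra above (in Capytalk, unary messages like twoExp and inverse bind before binary ones, which evaluate left to right):

```python
def warped_center(d):
    # Capytalk: (!Density * (0.666 inverse - !Density * -6.66) twoExp * 2 - 1)
    # reads as ((d * 2 ** ((1 / 0.666 - d) * -6.66)) * 2) - 1.
    return d * 2 ** ((1 / 0.666 - d) * -6.66) * 2 - 1

def warp(d):
    # The equivalent form: (6.66 * !Density) twoExp / 1024 * !Density.
    # Note 1 / 0.666 = 1.5015015... (the "magic" 1.5015014821509d) and
    # 1.5015015 * 6.66 = 10, so the 1 / 1024 factor is just 2 ** -10.
    return 2 ** (6.66 * d) / 1024 * d

for d in (0, 0.25, 0.5, 0.75, 1):
    # The two forms agree up to the final "* 2 - 1" scale and offset.
    assert abs(warped_center(d) - (2 * warp(d) - 1)) < 1e-12
    print(d, round(warp(d), 6))
```

Sweeping d from 0 to 1 shows most of the fader's travel producing small outputs, with the curve only climbing steeply toward 0.0988 near the top.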

I know you're already familiar with this, but for anyone else reading this who hasn't seen it yet, a good way to get a feel for what a Capytalk expression does is to:

  • Select the Capytalk expression
  • Use Ctrl+Y to evaluate it
  • Move the faders so you can see the relationship between input and output values while watching an oscilloscope trace of the results.
answered Jan 10 by ssc (Savant) (112,450 points)
Thinking more about it, any density over 1/4 of the sample rate becomes less random, since a density of 1/2 the sample rate is only achievable with no randomness at all. One way I've tried to fade from 1/4-sample-rate density up to 1/2-sample-rate density is to XOR the random triggers with a PulseTrain of regular triggers at that frequency for anything above 1/4 SR.

I don't know if that's mathematically justifiable... but it sounds good.

Anyway, the density based on CenterValue only needs to track up to 1/4 of the sample rate.
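The 1/4-sample-rate ceiling follows directly from the leading-edge probability: if p is the per-sample chance of being above the threshold, a non-consecutive trigger occurs with probability p * (1 - p), which peaks at 1/4 when p = 1/2. A one-liner to confirm (plain Python, nothing Kyma-specific):

```python
# p * (1 - p) over a sweep of probabilities: the maximum expected
# density of non-consecutive triggers is SR / 4, reached at p = 0.5.
rates = [(p / 1000) * (1 - p / 1000) for p in range(1001)]
print(max(rates))  # peaks at 0.25, i.e. one trigger per 4 samples
```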
The GateToPulse Sound ignores consecutive triggers from the Threshold, causing the difference between the pulse train and the noise measurements. Truly random triggers can occur on consecutive samples.

You get the same measurement from the noise and the pulse if you remove the GateToPulse.

If you want to include the GateToPulse then the center value of the noise should be:

(1 - (2 * !Frequency hzToSignal)) sqrt negated

and the range of !Frequency should be 0 to the quarter sample rate.
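A small simulation makes the same point empirically. This is a sketch under stated assumptions: the Noise is modeled as CenterValue plus a uniform value in [-1, 1], the Threshold sits at 0, the GateToPulse passes only leading edges, and the 48 kHz sample rate is illustrative:

```python
import math
import random

SR = 48_000   # illustrative sample rate
FREQ = 6_000  # target trigger density in Hz (must be <= SR / 4)

# Compensated CenterValue from the Capytalk expression, assuming
# hzToSignal maps Hz to a fraction of the Nyquist frequency:
center = -math.sqrt(1 - 2 * (FREQ / (SR / 2)))

def measured_density(c, n=500_000, seed=1, sr=SR):
    # Model the Noise as c + uniform(-1, 1); count leading edges only,
    # the way a GateToPulse after a Threshold at 0 would.
    rng = random.Random(seed)
    triggers = 0
    prev_above = False
    for _ in range(n):
        above = c + rng.uniform(-1.0, 1.0) > 0.0
        if above and not prev_above:
            triggers += 1
        prev_above = above
    return triggers / n * sr  # triggers per second

print(round(measured_density(center)))  # should land close to FREQ
```

The measured density converges on the target frequency as the sample count grows, which is the "tracks great" behavior reported below.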
Ah yes, it's true that the GateToPulse prevents triggers on consecutive samples, but that's because any consumer of a trigger would do the same. I'm using these random triggers to trigger a TriggeredSampleAndHold. The TSAH triggers on the leading edge, so it won't resample if the triggers are on consecutive samples...

...if there was a TrackAndHold that would probably work, wouldn't it? Coincidentally I've just made one. Have I just answered my own question?

Thank you for the counselling session, much appreciated!
We modified our response above to adjust the probability to account for triggers on two consecutive samples. Center value should be

(1 - (2 * !Frequency hzToSignal)) sqrt negated
Ah wicked! Yes that's the one. That tracks great, thank you.