
Humanoid Voice

0 votes
437 views

Hi everyone,

I have some questions about this Sound (a Spectra file is used), which works in a simple way.

By pressing a key (or keys) on the Kyma Control keyboard, the Generic Source Live prototype is triggered; by talking through a microphone, the amplitude/frequency information is extracted and then applied to a spectrum file. The idea came from this question


My questions would be:


1.- Do you know any way to make the sound "crunchier" or maybe more staccato?  

2.- In the linear time prototype: The OnDuration parameter has a 12.7139 s length, which is related to the length of the sample (Metal s256 betterFreq mono.spc) used in the SpectrumInRAM. I wonder whether that value should instead be 0.368707 s, which is the length of the multiwave sample used in the Wavetable?

3.- If I want to use a recorded dialogue line instead of a microphone, what is the most suitable sample prototype to replace the Generic Source Live prototype with, while still being able to trigger it through the Kyma Control keyboard?

Thank you very much for your time and help!

Best,

Marco


asked Feb 15, 2020 in Sound Design by marco-lopez (Practitioner) (800 points)

4 Answers

+1 vote

Hi Marco,

Starting with your last question first,

3.- If I want to use a recorded dialogue line instead of a microphone, what is the most suitable sample prototype to replace the Generic Source Live prototype with, while still being able to trigger it through the Kyma Control keyboard?

In GenericSource, if you change Source from Live to RAM, it reads from the specified recording instead of from the live mic input (and continues to be triggered from the keyboard).

If you have a folder filled with recorded dialog snippets, then you could replace the GenericSource with the prototype called Multisample KBD (cmd+B to search for it). Set the Samples field to something like:

{'Recorded dialog folder:line1.aif' sampleFileNamesInSameFolder}

If you want the line to finish after you trigger it from the keyboard, leave the ReleaseTime at its default (long) value. Otherwise, if you would like it to release when you release the key, change it to something shorter like 1 s.

If you leave AutoLabel checked, then, in the VCS, the Index fader is labeled with the names of the files in the folder. You can right-click (shift-click) on the Index in the VCS to change that to a Selection from list or Radio buttons if you prefer.
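Conceptually, the AutoLabel behavior is a mapping from a sorted folder listing to fader positions. Here is a Python sketch of that idea; this is only an illustration of the concept (the function name and details are mine, not Kyma API):

```python
import os

def dialog_index_labels(folder):
    """Collect the .aif files in a folder, sorted by name, so that an
    integer index (like the VCS Index fader) maps to one dialog snippet.
    This mimics the idea behind Multisample's AutoLabel, not its actual code."""
    names = sorted(f for f in os.listdir(folder) if f.lower().endswith(".aif"))
    return {i: name for i, name in enumerate(names)}
```

So with files line1.aif and line2.aif in the folder, index 0 selects line1.aif and index 1 selects line2.aif.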

answered Feb 15, 2020 by ssc (Savant) (128,120 points)
+1 vote

Working backward to question 2...

2.- In the linear time prototype: The OnDuration parameter has a 12.7139 s length, which is related to the length of the sample (Metal s256 betterFreq mono.spc) used in the SpectrumInRAM. I wonder whether that value should instead be 0.368707 s, which is the length of the multiwave sample used in the Wavetable?

Since you're using this as the timeIndex into the spectrum file, my sense is that you probably want this to be close to the duration of the original speech sample (so it has the pacing of the human speech). The OnDuration is currently set to:

12.7139 s + (!KeyDown  * 100 s)

so it slows way down while a key is held down and then reverts to normal speed when the key goes up.
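To make the arithmetic of that expression concrete, here is a small Python sketch of how the hot value changes the duration (the function name is mine; in Capytalk, !KeyDown is 1 while a key is held and 0 otherwise):

```python
def on_duration(key_down, base=12.7139, hold_stretch=100.0):
    """Mirror the Capytalk expression: 12.7139 s + (!KeyDown * 100 s).
    key_down is 1 while a key is held, 0 when it is released."""
    return base + key_down * hold_stretch

# Key up: OnDuration is the base 12.7139 s (normal pacing).
# Key held: OnDuration grows to 112.7139 s, so the timeIndex crawls.
```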

answered Feb 15, 2020 by ssc (Savant) (128,120 points)
+1 vote

Hi Marco, 

1.- Do you know any way to make the sound "crunchier" or maybe more staccato?  
 

You could try inserting the AR Drag&Drop between the MIDIVoice for KBD Polyphony and the Gain. This would give you control over the Attack and Release times and Legato. If you set small attack and release times, you could make the keyboard more staccato. (Is that what you had in mind?)
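For intuition about why short attack and release times sound staccato, here is a Python sketch of a simple linear attack-release envelope; this is only an illustration of the general idea, not how Kyma's AR is implemented:

```python
def ar_envelope(n_samples, sr, attack_s, release_s):
    """Linear attack-release amplitude envelope: ramp up over attack_s,
    hold at 1.0, then ramp down over the final release_s of the note.
    Short attack/release values produce an abrupt, staccato shape."""
    a = max(1, int(attack_s * sr))   # attack length in samples
    r = max(1, int(release_s * sr))  # release length in samples
    env = []
    for i in range(n_samples):
        up = min(1.0, i / a)                      # rising ramp
        down = min(1.0, (n_samples - 1 - i) / r)  # falling ramp at the end
        env.append(min(up, down))
    return env
```

Multiplying each key-triggered note by such an envelope with, say, 10 ms attack and release clips the note's edges, which is what "more staccato" amounts to.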

answered Feb 15, 2020 by ssc (Savant) (128,120 points)
0 votes
Thank you very much for your suggestions, I just tested them and that's exactly what I was looking for!
answered Feb 16, 2020 by marco-lopez (Practitioner) (800 points)