The general strategy is to first find modules that extract feature envelopes from your live input, and then apply those envelopes to the parameters of a different sound source.
To extract feature envelopes from a live input, there are several options. For example, in the Prototypes:
- Amp Follower to Global Controller
- BrightnessTracking to Global Controller
- FrequencyTracker (2c to 4c) to Global Controller
extract an amplitude, a brightness, or a frequency envelope from the input and create EventValues that follow those envelopes.
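(These Prototypes are Kyma modules, not code, but if it helps to see the underlying idea: an amplitude follower essentially rectifies the signal and smooths it with separate attack and release time constants. Here is a rough conceptual sketch in Python/NumPy — the parameter names and defaults are illustrative, not Kyma's.)

```python
import numpy as np

def amp_follower(signal, sample_rate, attack_ms=10.0, release_ms=100.0):
    """Conceptual amplitude-envelope follower: rectify the input,
    then smooth it with a one-pole filter whose coefficient depends
    on whether the level is rising (attack) or falling (release)."""
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env
```

The resulting envelope is the kind of slowly varying control signal that the Prototypes above publish as EventValues.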
To apply those envelopes to a different sound source, you could use the generated EventValues in a parameter field of that other sound source. For example, search in the Prototypes (Ctrl+B) for
- Oscillator w/FrequencyTracker
To find more examples, try searching in the Sound Library for Sound class name FrequencyTrack.
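(Again purely as an illustration of what that prototype is doing internally — not Kyma code, and the helper names here are made up: a frequency tracker estimates the input's pitch, and that estimate drives an oscillator's frequency, exactly the way an EventValue would drive a Frequency field.)

```python
import numpy as np

def estimate_frequency(frame, sample_rate):
    """Crude pitch estimate: count rising zero crossings in one frame.
    Real trackers (autocorrelation, etc.) are far more robust."""
    negative = np.signbit(frame)
    rising = np.count_nonzero(negative[:-1] & ~negative[1:])
    return rising * sample_rate / len(frame)

def retuned_sine(frame, sample_rate):
    """Drive a sine oscillator with the tracked frequency --
    the same idea as an Oscillator whose Frequency field reads
    a FrequencyTracker's output."""
    freq = estimate_frequency(frame, sample_rate)
    t = np.arange(len(frame)) / sample_rate
    return np.sin(2 * np.pi * freq * t), freq
```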
Another approach is to analyze the spectrum or spectral envelope of the live input and apply only the amplitudes or only the frequencies of that source to another sound. For example, you could use LiveSpectralAnalysis to analyze the frequency and amplitude envelopes of the live input. Then use those amplitude envelopes (but not the frequencies) in the additive resynthesis of a different sound source. Some examples from the Sound Library to look at might be:
- Freq of input, spect of vowels
- Joan of Arc Talking Bells (Spectral Cross synthesis)
- Rap crossed w/ Non-sine resynth of Drum
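(The idea behind spectral cross synthesis, sketched outside Kyma: take the spectral amplitudes of one source and the frequencies/phases of another. A single-FFT-frame toy version in NumPy — a real implementation, like LiveSpectralAnalysis feeding a resynthesis, works frame by frame over an STFT with overlap-add.)

```python
import numpy as np

def cross_synthesize(amp_source, freq_source):
    """Combine the spectral magnitudes of one signal with the
    phases of another, then resynthesize. Both inputs must be
    the same length."""
    spec_a = np.fft.rfft(amp_source)
    spec_b = np.fft.rfft(freq_source)
    hybrid = np.abs(spec_a) * np.exp(1j * np.angle(spec_b))
    return np.fft.irfft(hybrid, n=len(amp_source))
```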
The Vocoder offers another way: it applies the amplitude envelopes from multiple frequency bands of the live input to those same bands in a different sound source. In the Kyma Sound Library, search for Sound class name: Voco. See, for example:
- Talking dog growl
- Talking dolphin Vocoding Water sample
- Talking wolves Vocoder
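(For intuition, here is the channel-vocoder idea reduced to a single FFT frame in NumPy: measure the modulator's energy in each band and scale the carrier's corresponding band to match, so the carrier keeps its own fine structure but takes on the modulator's spectral shape. The band count and the frame-level simplification are mine — a real vocoder, Kyma's included, runs this continuously over time.)

```python
import numpy as np

def vocode(modulator, carrier, num_bands=16):
    """One-frame channel-vocoder sketch: impose the modulator's
    per-band energy profile onto the carrier's spectrum."""
    mod_spec = np.fft.rfft(modulator)
    car_spec = np.fft.rfft(carrier)
    out_spec = np.zeros_like(car_spec)
    edges = np.linspace(0, len(car_spec), num_bands + 1).astype(int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mod_energy = np.sqrt(np.mean(np.abs(mod_spec[lo:hi]) ** 2))
        car_energy = np.sqrt(np.mean(np.abs(car_spec[lo:hi]) ** 2)) + 1e-12
        out_spec[lo:hi] = car_spec[lo:hi] * (mod_energy / car_energy)
    return np.fft.irfft(out_spec, n=len(carrier))
```

Vocoding a voice (modulator) against a growl, water, or wolf sample (carrier) is exactly what the Sound Library examples above demonstrate.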
Hope these examples give you some good starting points. Please keep us updated on your project and any follow-up questions. Looking forward to hearing your results!