
Applying analyzed amplitude and pitch from source to target file?

0 votes
Hi all,

I'm not sure if I know how to ask this, but here goes.

I'd like to build a patch that would analyze the amplitude, pitch or frequency and possibly formant from a source file (wav or spc) or live input, then use the analysis output to modify another signal or file. So if the source file was a person talking, the target file would get the volume envelope and dominant pitch and formant shape of the source speech.

The goal would be to hear none of the source file, only the target file and its sound being modified by the source.

It would also be good to be able to tweak the influence of the source analysis on the target.

Does this exist already? I know the Tau editor will allow me to do a lot of this manually, but if there is some way to automate the processing, that would be great.

I know I could do the amplitude with a Bidi Follower, Gustav helped me with that. But the rest I am not so sure about.

References to existing patches, manual page numbers, or previous posts that cover this would be greatly appreciated.

Thanks in advance,

asked Jan 19 in Sound Design by jason-wolford (220 points)

1 Answer

0 votes

The general strategy would be to first find some modules for extracting feature envelopes from your live input and then to apply those envelopes to the parameters of a different sound source.

To extract feature envelopes from a live input, there are several options. For example, in the Prototypes:

  • Amp Follower to Global Controller
  • BrightnessTracking to Global Controller
  • FrequencyTracker (2c to 4c) to Global Controller

extract an amplitude, a brightness, or a frequency envelope from the input and create EventValues that follow those envelopes. 
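If it helps to see the idea outside Kyma: the amplitude and frequency extractors above boil down to something like the following plain NumPy sketch. This is an illustration only, not Kyma code; the frame and hop sizes are arbitrary choices of mine, and the autocorrelation pitch estimate is a deliberately simple stand-in for a real frequency tracker.

```python
import numpy as np

def amplitude_envelope(x, frame=512, hop=256):
    """Per-frame RMS amplitude: the role an amplitude follower plays."""
    n = 1 + max(0, len(x) - frame) // hop
    return np.array([np.sqrt(np.mean(x[i*hop:i*hop+frame]**2))
                     for i in range(n)])

def frequency_envelope(x, sr, frame=1024, hop=256):
    """Rough per-frame pitch estimate via autocorrelation:
    the role a frequency tracker plays."""
    n = 1 + max(0, len(x) - frame) // hop
    freqs = []
    for i in range(n):
        w = x[i*hop:i*hop+frame]
        ac = np.correlate(w, w, mode='full')[frame-1:]
        d = np.diff(ac)
        # skip past the initial decline, then take the next peak as the period
        start = int(np.argmax(d > 0)) if np.any(d > 0) else 1
        lag = start + int(np.argmax(ac[start:]))
        freqs.append(sr / lag if lag > 0 else 0.0)
    return np.array(freqs)

# Example: a 220 Hz sine with a linear fade-in
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t) * np.linspace(0, 1, sr)
amp = amplitude_envelope(x)   # rises with the fade-in
frq = frequency_envelope(x, sr)  # hovers near 220 Hz
```

The two returned arrays play the role of the generated EventValues: frame-rate amplitude and pitch contours you can route to any parameter.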

To apply those envelopes to a different sound source, you could use the generated EventValues in a parameter field of that other sound source. For example, search in the Prototypes (Ctrl+B) for

  • Oscillator w/FrequencyTracker

To find more examples, try searching in the Sound Library for Sound class name FrequencyTrack.
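The "apply" step (feeding those envelopes into a parameter field of another source, as Oscillator w/FrequencyTracker does with an oscillator's frequency) can be sketched the same way. Again, this is NumPy for illustration, not Kyma; interpolating the frame-rate envelopes back up to audio rate is my own simplification.

```python
import numpy as np

def drive_oscillator(freq_env, amp_env, sr, hop=256):
    """Resynthesize a sine whose frequency and amplitude follow
    frame-rate envelopes extracted from another source."""
    m = min(len(freq_env), len(amp_env))
    t = np.arange(m * hop)
    frames = np.arange(m) * hop
    f = np.interp(t, frames, freq_env[:m])  # envelopes up-sampled to audio rate
    a = np.interp(t, frames, amp_env[:m])
    phase = 2 * np.pi * np.cumsum(f) / sr   # phase accumulator
    return a * np.sin(phase)

# Example: a steady 220 Hz pitch envelope with a fade-in amplitude envelope
sr = 44100
freq_env = np.full(100, 220.0)
amp_env = np.linspace(0.0, 1.0, 100)
y = drive_oscillator(freq_env, amp_env, sr)
```

The oscillator here could be any synthesis parameter; the point is that the envelopes are just control signals once extracted.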

Another approach is to analyze the spectrum or spectral envelope of the live input and apply only the amplitudes or only the frequencies of that source to another sound. For example, you could use LiveSpectralAnalysis to analyze the frequency and amplitude envelopes of the live input. Then use those amplitude envelopes (but not the frequencies) in the additive resynthesis of a different sound source. Some examples from the Sound Library to look at might be:

  • Freq of input, spect of vowels
  • Joan of Arc Talking Bells (Spectral Cross synthesis)
  • Rap crossed w/ Non-sine resynth of Drum
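For a rough picture of what cross-synthesis does, here is a minimal STFT sketch. It is my own simplification, not how the Kyma Sounds above are implemented: frame by frame, it keeps the amplitude spectrum of one source and the phase (the fine frequency detail) of the other.

```python
import numpy as np

def cross_synthesize(source, target, frame=1024, hop=256):
    """STFT cross-synthesis: amplitude spectrum taken from `source`,
    phase (fine frequency detail) taken from `target`."""
    win = np.hanning(frame)
    n = (min(len(source), len(target)) - frame) // hop
    out = np.zeros(n * hop + frame)
    norm = np.zeros_like(out)
    for i in range(n):
        s = np.fft.rfft(source[i*hop:i*hop+frame] * win)
        g = np.fft.rfft(target[i*hop:i*hop+frame] * win)
        hybrid = np.abs(s) * np.exp(1j * np.angle(g))
        out[i*hop:i*hop+frame] += np.fft.irfft(hybrid) * win
        norm[i*hop:i*hop+frame] += win ** 2
    return out / np.maximum(norm, 1e-3)  # overlap-add normalization

# Example: the on/off gating of a burst-y "talker" imposed on a steady tone
sr = 44100
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
talker = rng.standard_normal(sr) * (np.sin(2 * np.pi * 4 * t) > 0)
tone = np.sin(2 * np.pi * 440 * t)
y = cross_synthesize(talker, tone)  # silent where the talker is silent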

The Vocoder is another way you could apply the amplitude envelopes from multiple frequency bands to those same bands in a different sound source. In the Kyma Sound Library, search for Sound class name: Voco. See for example:

  • Talking dog growl
  • Talking dolphin Vocoding Water sample
  • Talking wolves Vocoder
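In the same illustrative spirit, a channel vocoder reduces to: split both signals into frequency bands, measure the modulator's amplitude in each band, and rescale the carrier's matching band to that amplitude. A minimal NumPy sketch follows; the band count and the FFT-based band splitting are my own choices, not the Kyma Vocoder's design.

```python
import numpy as np

def vocode(modulator, carrier, n_bands=16, frame=1024, hop=256):
    """Channel vocoder sketch: impose the modulator's per-band
    amplitude on the same bands of the carrier."""
    win = np.hanning(frame)
    bins = frame // 2 + 1
    edges = np.linspace(0, bins, n_bands + 1).astype(int)
    n = (min(len(modulator), len(carrier)) - frame) // hop
    out = np.zeros(n * hop + frame)
    for i in range(n):
        m = np.fft.rfft(modulator[i*hop:i*hop+frame] * win)
        c = np.fft.rfft(carrier[i*hop:i*hop+frame] * win)
        shaped = np.zeros_like(c)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            c_amp = np.sqrt(np.sum(np.abs(c[lo:hi]) ** 2)) + 1e-12
            m_amp = np.sqrt(np.sum(np.abs(m[lo:hi]) ** 2))
            # carrier band rescaled to the modulator band's amplitude
            shaped[lo:hi] = c[lo:hi] * (m_amp / c_amp)
        out[i*hop:i*hop+frame] += np.fft.irfft(shaped) * win
    return out  # output carries the window-overlap gain (roughly 1.5x)

# Example: gated noise as "speech" vocoding a sawtooth "growl"
sr = 44100
t = np.arange(sr) / sr
rng = np.random.default_rng(1)
speech = rng.standard_normal(sr) * (np.sin(2 * np.pi * 4 * t) > 0)
growl = 2.0 * ((110.0 * t) % 1.0) - 1.0
y = vocode(speech, growl)  # the growl only sounds where the "speech" does
```

A real vocoder uses filter banks and smoothed band detectors rather than raw STFT frames, but the band-by-band amplitude transfer is the same idea.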

Hope these examples give you some good starting points. Please keep us updated on your project and any follow-up questions. Looking forward to hearing your results!

answered Jan 19 by ssc (Savant) (64,970 points)
Thanks so much for the detailed reply! I'll do my best to work through all of this and will report back, most likely with a few more questions and hopefully some cool results to share :-)

Thanks again,