
Using spectral data to control FM synth parameters

+1 vote
422 views
Hello everybody,

I was wondering whether anybody has already experimented with using spectral analysis data/descriptors to control the parameters of an FM sound?

My idea is not to recreate a specific sound via FM, but to use its properties to control an FM synth, for dynamic parameter modulation, in addition to the standard envelopes and LFOs.

I guess the main task is the mapping and scaling of the spectral data/spectral descriptors to the FM parameters.

Any ideas or feedback on this topic are very much appreciated.

Thanks, Johannes
asked Jan 4, 2017 in Using Kyma by x (Practitioner) (590 points)

2 Answers

+1 vote

It's an interesting idea. There are strong mathematical underpinnings to FM synthesis (see Bessel functions), so it would seem possible to transform the results of traditional Fourier-style spectral analysis into the parameters that drive FM synthesis. However, it is a complex problem. I suggest some basic Internet searching on the terms “FM” and “Bessel”. Here are two somewhat random example links:

http://www.johndcook.com/blog/2016/02/17/analyzing-an-fm-signal/

https://ccrma.stanford.edu/software/snd/snd/fm.html

One of the challenges is that FM synthesis by its nature is an example of a “parameterized” synthesis technique. What I mean by “parameterized” is that a small number of parameters has great influence over the detail of the resulting sound. With the simplest two-sinusoidal-oscillator FM configuration, two parameters, the carrier-to-modulator frequency ratio and the amplitude of the modulating signal (which determines the “modulation index”), define the resulting spectrum. So two parameters, and how they evolve over time, define a very rich space of harmonic possibilities. This is actually at the heart of why FM synthesis is so powerful and compelling (and part of John Chowning's original motivation to explore it).
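To make the two-parameter point concrete, here is a minimal plain-Python sketch (not Kyma code; the function names `fm_pair` and `magnitude_at` are my own). It synthesizes the simplest sine-carrier/sine-modulator FM pair and measures the sideband amplitudes, which land at the carrier frequency plus multiples of the modulator frequency, with amplitudes given by the Bessel functions J_k(index):

```python
import math

def fm_pair(t, fc, ratio, index):
    """Simplest FM configuration: one sine carrier at fc, one sine
    modulator at fc*ratio, with the given modulation index."""
    return math.sin(2*math.pi*fc*t + index*math.sin(2*math.pi*fc*ratio*t))

def magnitude_at(signal, freq, sr):
    """Amplitude of one frequency component (single-bin DFT projection)."""
    n = len(signal)
    re = sum(s*math.cos(2*math.pi*freq*i/sr) for i, s in enumerate(signal))
    im = sum(s*math.sin(2*math.pi*freq*i/sr) for i, s in enumerate(signal))
    return 2*math.hypot(re, im)/n

sr = 8000
fc, ratio, index = 2000.0, 0.2, 2.0            # modulator at 400 Hz, index 2
sig = [fm_pair(i/sr, fc, ratio, index) for i in range(sr)]   # one second

# sidebands appear at fc + k*400 Hz with Bessel-function amplitudes J_k(2)
for k in range(3):
    print(k, round(magnitude_at(sig, fc + k*400.0, sr), 3))
```

Changing just `ratio` and `index` and rerunning shows how drastically those two numbers reshape the whole spectrum, which is the "parameterized" point above.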

Spectral analysis, on the other hand, deals with the details of a sound's spectral evolution. So it is finer grained, providing control of the spectral details down to individual frequency components.

So the mapping challenge is how to transform the numerous spectral components into the minimal FM parameters.

Another challenge is that there is no single configuration of FM synthesis. Carriers and modulators can be connected in a wide range of configurations, each of which is capable of producing very different spectra. It's kind of like regional dialects of a language that each have their own common vocabularies. The original DX7 offered 32 possible configurations of six “operators” (each basically a sinusoidal oscillator with multi-segment pitch and amplitude envelopes). So one recommendation is to select one FM “patch” and work within its potential spectral space.

This is a problem that has fascinated me for over three decades. My belief is that the most general solution is best pursued with some type of machine-learning approach: “train” a neural network (or equivalent) to perform the mapping of time-varying spectral components to FM synthesis parameters. I think this is the most promising avenue if the goal is to take the recording of a sound and from that re-synthesize it using FM synthesis.

But it's unclear that that approach provides a path to what I expect is the true goal: being able to analyze a sound and then create variations, transformations, and other new sounds from that analysis using FM synthesis. The resulting mappings produced by the machine learning would be somewhat opaque and may well defy easy additional direct manipulation.

So perhaps a “just try it and see what happens” experimental approach is more fruitful if the goal is creating new, interesting sounds from existing source materials. Something like combining the time-varying spectral amplitude information and using it to control a simple FM pair's time-varying modulation index. Or do the same thing but use multiple FM pairs, each of which is controlled by a specific subset of the original spectrum; for example, three FM pairs controlled by the low-frequency, mid-frequency, and high-frequency spectral components respectively. None of this is likely to give you a true re-synthesis of the original sound, but that is OK. It should produce some new sounds, and likely sounds you would not have encountered otherwise. I think Kyma would be a great playground for such exploration!
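The band-split idea above can be sketched in a few lines of plain Python (again, not Kyma; `band_energy`, the band edges, carrier frequencies, and the scaling factor are all arbitrary choices of mine). One analysis frame of a source sound is split into low/mid/high band energies, and each band's energy becomes the modulation index of one FM pair:

```python
import math

def band_energy(frame, sr, lo, hi):
    """Summed amplitude of the DFT bins between lo and hi Hz
    (naive O(n^2) DFT; fine for a short analysis frame)."""
    n = len(frame)
    total = 0.0
    for b in range(int(lo*n/sr), int(hi*n/sr)):
        re = sum(s*math.cos(2*math.pi*b*i/n) for i, s in enumerate(frame))
        im = sum(s*math.sin(2*math.pi*b*i/n) for i, s in enumerate(frame))
        total += re*re + im*im
    return math.sqrt(total)*2/n

def fm_pair(t, fc, ratio, index):
    return math.sin(2*math.pi*fc*t + index*math.sin(2*math.pi*fc*ratio*t))

# one analysis frame of a source sound drives three FM pairs' modulation indices
sr, n = 8000, 256
frame = [math.sin(2*math.pi*312.5*i/sr) + 0.5*math.sin(2*math.pi*2500*i/sr)
         for i in range(n)]                        # stand-in "source" frame
bands = [(0, 800), (800, 2000), (2000, 4000)]      # low / mid / high, in Hz
carriers = [200.0, 800.0, 2400.0]                  # one FM pair per band (arbitrary)
indices = [4.0*band_energy(frame, sr, lo, hi) for lo, hi in bands]  # scale to taste
out = [sum(fm_pair(i/sr, fc, 1.0, mi) for fc, mi in zip(carriers, indices))
       for i in range(n)]
```

In a real patch the indices would be recomputed per analysis frame so the FM pairs' brightness follows the source's band-by-band evolution over time.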

answered Jan 4, 2017 by delora-software (Master) (3,800 points)
Hey, thanks for your detailed answer:

It's a lot of useful information and great ideas. Thanks for taking the time. (I guess Doug?)

I will definitely have to try some of your ideas/recommendations, especially the attempt of splitting bands across multiple FM pairs.

In the meanwhile I used AmpFollower and FreqTracker to modulate the LFO frequency and modulation index. This definitely goes in the right direction; it sounds very lively. I will have to try some averaging on the AmpFollower output too, to get a smoother control signal.

Anyway, one aspect where I think that parameter control via spectral data could help the FM computation process is the envelope domain.
For me FM really gets interesting as soon as envelopes on the output of the individual operators are used. But as soon as I want to control all the params of these envelopes it gets pretty complex; imagine a 6-op FM: with an ADSR it would mean 24 params to handle.
So I would be interested in an approach where the envelopes or “envelope-like signals” are controlled/created by analysis data.
For now I will start with smoothed AmpFollower output and see where I get.
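The kind of averaging described here is often done with a one-pole lowpass on the follower output. A minimal sketch, assuming nothing Kyma-specific (the function name and the 50 ms time constant are just illustrative):

```python
import math

def one_pole_smoother(samples, sr, time_constant=0.05):
    """Smooth a steppy envelope-follower output with a one-pole lowpass;
    time_constant (seconds) sets how fast the output catches up to the input."""
    a = math.exp(-1.0/(sr*time_constant))   # feedback coefficient
    out, y = [], 0.0
    for x in samples:
        y = a*y + (1.0 - a)*x               # standard one-pole recurrence
        out.append(y)
    return out

# a sudden jump in the follower output becomes a smooth exponential rise
sr = 1000
env = [0.0]*100 + [1.0]*400
smooth = one_pole_smoother(env, sr, time_constant=0.05)
```

A longer time constant gives a smoother but laggier control signal; separate attack and release constants are a common refinement.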
if you have any other ideas …


Thanks a lot, Johannes
Machine learning to set the parameters - that sounds excellent. Is that kind of what the Hartmann Neuron did?

There are exciting possibilities for morphing. If you use machine learning to "sample" two different sounds into compatible FM (or any other) architectures, then morphing between the sounds, by morphing the parameters of the FM architectures, could sound very different from Fourier resynthesis... perhaps.
@johannes: That is a good observation about the challenge of controlling so many envelope parameters, and applying some other type of data to that may produce interesting results. This "explosion of parameters" problem was always the challenge for additive synthesis, so perhaps there are some techniques (other than analysis-resynthesis) from there that could be applied to your FM concept.
@alan-jackson: The Neuron's approach was shrouded in mystery, so we can only make guesses based on observations. There was some type of analysis phase to the synthesis approach, because samples had to go through a preprocessing step before being used in the Neuron. There were certainly claims that it was somehow based on neural networks, but I never saw any discussion of what that really meant. Parameter mapping, though, is kind of a classic use of neural networks. The more interesting question to me about the Neuron was what the underlying synthesis model was.

And yes, the behavior under morphing between two "learned" sounds, if you could create such a system of analysis-resynthesis for FM, would be really different, and to me that is pretty exciting. It's a different map to explore timbre space!
+1 vote

I would be interested in an approach where the envelopes or “envelope-like signals” are controlled/created by analysis data.

Hi Johannes,

One approach would be to use the pattern: Spectrum --> SpectrumTrackSelector --> FMOsc --> Replicator

where the SpectrumTrackSelector has ?VoiceNumber as its track number, and the Replicator makes multiple FM oscillators, each one controlled by the amplitude and frequency envelope of one partial from the analysis.

Here's an example of controlling FM oscillator parameters from a spectrum, where the amplitude and frequency of the carrier are controlled by the amplitude and frequency envelopes of one partial from the spectrum; the frequency of the modulator is some ratio of the carrier's frequency, and the amplitude of the modulator is controlled by the overall brightness envelope of the spectrum multiplied by an EventValue called !MI for modulation index. The spectrum and the brightness extractor are in the SharedSounds field of the Replicator since they are shared by all the partials.
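For readers without Kyma in front of them, the signal flow just described can be paraphrased as a plain-Python sketch (this is my own illustration, not the actual Kyma Sound; `fm_partial_bank` and its arguments are hypothetical names):

```python
import math

def fm_partial_bank(partials, brightness, mi, ratio, sr):
    """Per-partial FM sketch: each analysis partial drives one FM pair.
    `partials` is a list of (amp_env, freq_env) pairs, one value per output
    sample; `brightness` is the overall brightness envelope, scaled by the
    modulation-index control `mi`."""
    n = len(brightness)
    out = [0.0]*n
    for amps, freqs in partials:
        phase_c = phase_m = 0.0
        for i in range(n):
            phase_c += 2*math.pi*freqs[i]/sr        # carrier follows the partial's freq env
            phase_m += 2*math.pi*freqs[i]*ratio/sr  # modulator at a ratio of the carrier
            index = mi*brightness[i]                # brightness drives the mod index
            out[i] += amps[i]*math.sin(phase_c + index*math.sin(phase_m))
    return out

# two fake partials with static envelopes, just to exercise the routine
sr, n = 8000, 800
partials = [([0.8]*n, [220.0]*n), ([0.4]*n, [440.0]*n)]
brightness = [0.5]*n
sig = fm_partial_bank(partials, brightness, mi=2.0, ratio=1.0, sr=sr)
```

In the Kyma version the Replicator plays the role of the outer loop, with ?VoiceNumber selecting which partial's envelopes feed each replicated FM oscillator.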

Perhaps you could use this as a starting point for further explorations, using the amplitude and frequency envelopes to control different FM parameters or extending it to do live analysis resynthesis with FM.

Looking forward to hearing what you come up with!

answered Jan 8, 2017 by ssc (Savant) (108,940 points)
Thanks for the quick answer, SSC.

These are two great example sounds. I don't understand everything yet… but I really like the results. I will have to sit back and learn…
Very interesting results while experimenting with just a few replications and a high !MI value. Very, very low processor overhead with so few reps, too.
...