How does the FrameSynchronizer work?

0 votes

Can someone tell me how to use the FrameSynchronizer to record and playback spectral data in sync?

Here's a little test environment I've created: FrameSyncTest.kym



asked Jan 2, 2015 in Using Kyma by kymaguy (Virtuoso) (10,580 points)
Nobody knows?

3 Answers

+2 votes
Best answer

You can use a FrameSynchronizer with a SyncOutput trigger to align a spectrum with an asynchronous event (like an external trigger or a Capytalk expression), and you would use another FrameSynchronizer with a SyncInput trigger to align an asynchronous spectrum with Kyma's spectral frame boundaries.

For recording, you would place a FrameSynchronizer in the signal flow between the spectral source and the MemoryWriters that capture the spectrum, using the same trigger for SyncOutput and for triggering the recording. SyncOutput causes the output of the FrameSynchronizer to start at the very first spectral component, guaranteeing that your recording begins on a spectral frame boundary.

For playback, you would use another FrameSynchronizer immediately after the Sample (or other memory-playback Sound), using the same trigger for SyncInput and for triggering the playback. SyncInput tells the FrameSynchronizer that the very first spectral component is at its input; the FrameSynchronizer then delays the spectral data so that it stays in phase with the spectral frame period of all the other Kyma modules.
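As a rough mental model (this is an illustrative Python sketch, not Kyma code, and the function name and frame length are hypothetical): a frame synchronizer holds data back just long enough that it resumes on the next spectral frame boundary.

```python
def samples_until_next_frame(position: int, frame_length: int) -> int:
    """How many samples to delay so output resumes on a frame boundary.

    position: current sample offset of the incoming data
    frame_length: spectral frame length in samples (e.g. 256)
    """
    remainder = position % frame_length
    return 0 if remainder == 0 else frame_length - remainder

# A trigger arriving mid-frame (offset 100 with 256-sample frames)
# would be delayed 156 samples to land on the next boundary.
```

This is why using the same trigger for SyncOutput and for the recording keeps the captured file frame-aligned: both start from the same boundary.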

Here is an example.

answered Jan 13, 2015 by ssc (Savant) (120,590 points)
selected Jan 15, 2015 by kymaguy
We edited the answer to include an example for clarification.
Great! Thanks!
0 votes

I asked Pete Johnston this question not so long ago (he originally wrote a FrameSync delay for the Capy320 in his custom DSP prototypes)... I'll save him some time by sharing his response to my question here.

Start with a live spectral analysis of live or recorded input, to suit your taste. Feed this into an OscillatorBank so that you can resynthesize and hear the results, but also feed the same signal (from the live analysis output, not the OscillatorBank) into the FrameSynchronizer, which in turn feeds a DiskRecorder set to stereo, 24-bit, AIFF. Leave the fields as they are, except put !StartRecord in the SyncInput field. Also put !StartRecord in the Trigger field of the DiskRecorder. Make sure the DiskRecorder goes through a Mixer with the level set to zero, as you don't want to hear this signal in the speakers. Now you can play the Sound and make recordings of the spectral control information. Change the file name in the DiskRecorder and restart the Sound again and again to make a bunch of different files (changing the file name after each recording so the previous one doesn't get overwritten).

Now make a new Sound with a Sample player, playing one of the files you made before, feeding another FrameSynchronizer, which then feeds an OscillatorBank. This time put !Play in the Gate field of the Sample player and put !Play in the SyncOutput field of the FrameSynchronizer (leaving the other fields set to 0). You will need to enter the FrameLength in samples to match the recording. Now you can play the sound you had before. But you could also make a few Sample players, each with its own file and FrameSynchronizer, and use interpolation modules to merge between/morph them into one OscillatorBank. Make sure the loop is turned off in the Sample player, as it will not be in sync (unless you made files whose lengths are an exact multiple of the frame length in samples).
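Pete's caveat about looping can be checked with a little arithmetic. A hypothetical Python sketch (not Kyma code; function names are made up for illustration): a file only loops in sync if its length is an exact multiple of the frame length, so you would trim it accordingly.

```python
def frame_aligned_length(num_samples: int, frame_length: int) -> int:
    """Largest length <= num_samples that is an exact multiple of
    frame_length, i.e. the longest loopable portion of a recording."""
    return (num_samples // frame_length) * frame_length

def loops_in_sync(num_samples: int, frame_length: int) -> bool:
    """True if a file of this length loops without drifting off the frame grid."""
    return num_samples % frame_length == 0

# A 10000-sample file with 256-sample frames would need trimming
# to 9984 samples before its loop stays on the frame grid.
```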

So what is the whole point of this, when you could have achieved the same thing by making harmonic spectrum files with the analysis tool and using Spectrum-in-RAM modules to play them back? The advantage here is that you can do funky processing, like spectral smoothing, freezing, emphasis, etc., before you record the results to a file. You cannot do this with the analysis tool. You can then do more processing and merging of a number of pre-processed files while playing them back, and only then put the result through a single OscillatorBank to get the final sound.

answered Jan 13, 2015 by cristian-vogel (Master) (8,370 points)
Note that when Pete says input he means output and vice versa ;) His Capybara-320 microsound had the parameter names swapped with respect to the Kyma 7 built-in Sound.
0 votes

Seems like SSC and Pete don't agree here...

  • SSC way:
    • Recording using same trigger for Record and SyncOutput
    • Playback using same trigger for Play and SyncInput
  • Pete way:
    • Recording using same trigger for Record and SyncInput
    • Playback using same trigger for Play and SyncOutput

I tried both ways and both of them occasionally work...

So either my test setup is wrong or we are looking at a bug here.


answered Jan 14, 2015 by kymaguy (Virtuoso) (10,580 points)
We are both correct. Pete was referring to his Capybara-320 microsound, which had the parameter names swapped with respect to the Kyma 7 built-in Sound.