
How to use a standalone mediating module to transform note-rate parameters at performance-time.

How might one utilize a standalone mediating module to transform note-rate parameters (pitch, amp/vel, duration, extras, etc.) at runtime?

I recognize there are more straightforward (and possibly more efficient) ways of approaching this, where these parameter values are transformed at their source (StepSequencer, etc.) or destination (Oscillator, MIDIOutputEvent, etc.). For now, though, as an alternate approach, I'm interested in exploring the possibilities of using a multi-module graph in a mediating position to transform note parameters as they pass through in real time, so that I can then experiment with sending control signals across the graph orthogonally. For example, a symbolic expression interpreted in binary could supply a series of Boolean values that toggle individual members of the graph on and off. This might be done literally within Kyma, or as the result of an external process such as an L-system or cellular-automata algorithm (via OSC or a Kyma tool), to explore possibilities of emergent behavior from this graph of modules, where each module performs a simple transformation and any complexity is the result of activation patterns sent across the graph.
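To make the toggling idea concrete, here is a minimal sketch in plain Python (outside Kyma, e.g. in an external process feeding the graph via OSC). It only illustrates reading an integer's binary digits as per-module on/off flags; the function name and module count are hypothetical, not anything from the Kyma environment.

```python
# Hedged sketch: interpret the low bits of an integer as Boolean
# toggles, one per module in a hypothetical graph of n_modules.

def toggles(pattern, n_modules):
    """Return a list of booleans taken from the low bits of pattern,
    bit 0 first (module 0), bit 1 next (module 1), and so on."""
    return [bool((pattern >> i) & 1) for i in range(n_modules)]

# e.g. a cellular-automaton step emitting the pattern 0b1011 for a
# four-module graph:
# toggles(0b1011, 4) -> [True, True, False, True]
```

Each step of an L-system or cellular automaton could emit a new pattern, and the resulting Boolean list could be sent as individual OSC messages to switch graph members on or off.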

Most of the above is context and rationale for wanting to use intermediary modules for parameter transformation. To start building a first example according to that description, I'm simply looking for information that would help me answer the subject-line question itself. Once I understand a correct way to approach that, I'll be able to start choosing from a large space of ideas and learn more of Kyma through practice in realizing some of them.

I've been trying to figure this out for a while now, which has been great for learning more of the Kyma environment. I haven't yet produced a complete working solution, though, so I'm asking for advice at this point. Thanks!
asked Oct 31, 2016 in Using Kyma by thom-jordan (Practitioner) (660 points)
edited Oct 31, 2016 by thom-jordan

1 Answer


It sounds like you would like to treat EventValues as signals — modifying them in real time using various (and switchable) combinations of modules in a signal flow diagram.  The key to moving from Capytalk to the domain of signals is the CapytalkToSound module (alternatively, you can paste your EventValue into a Constant in the Value field or use a SoundToGlobalController with Silent switched OFF).  One more detail: the range of signals is (-1,1) so for EventValues with wider ranges, such as !KeyNumber, you should divide by 127 in the CapytalkToSound and, wherever you actually use the result in a Frequency field, remember to multiply by 127 and use units of nn.
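The divide-by-127 convention described above can be sketched in plain Python (not Capytalk) just to make the arithmetic explicit; the function names here are illustrative, not part of any API:

```python
# Hedged sketch: a MIDI-style key number (0..127) is divided by 127
# so it fits the (-1, 1) signal range, then multiplied by 127 again
# at the point of use (e.g. a Frequency field, with units of nn).

def to_signal(key_number):
    """Normalize a 0..127 key number into the (-1, 1) signal range."""
    return key_number / 127.0

def from_signal(signal_value):
    """Recover the note number (in nn) where the signal is consumed."""
    return signal_value * 127.0
```

The round trip is lossless apart from floating-point noise, which is why the scale-down/scale-up pair works as a transport convention.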

There's an example in the Kyma Sound Library that might serve as a starting point:

Kyma:Kyma Sound Library:Scripts, constructors, sequencers & composition:

The Sound file called: Compositions without notes.kym

And the Sound is called Improviser KBD-1.

To find it quickly, search the Sound library by Sound name and look for 'Improviser'

answered Oct 31, 2016 by ssc (Savant) (113,530 points)
Thanks for the explanation. I finally got the real-time constrain-to-scale working, using a SoundToGlobalController (STGC) to the left of a StepSequencer, with this Value:

((((!KeyPitch + !Transpose) removeUnits mod: 6) of: #(0 1 3 7 8 10)) + (12 * (((!KeyPitch + !Transpose) / 12 ) rounded))) / 127.0
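For readers less familiar with Capytalk, here is a hedged Python sketch of the same arithmetic: take the transposed pitch mod 6 as an index into the six-note scale array, add back the nearest-octave offset, and divide by 127 to fit the signal range. (One caveat: Python's `round` uses round-half-to-even, which may differ from Capytalk's `rounded` exactly at half-octave boundaries.)

```python
# Hedged sketch (plain Python, not Kyma code) mirroring the Capytalk
# constrain-to-scale expression above.

SCALE = [0, 1, 3, 7, 8, 10]  # the six pitch classes from #(0 1 3 7 8 10)

def constrain_to_scale(key_pitch, transpose=0, scale=SCALE):
    p = key_pitch + transpose
    degree = scale[p % len(scale)]   # '(p removeUnits mod: 6) of: #(...)'
    octave = 12 * round(p / 12)      # '12 * ((p / 12) rounded)'
    return (degree + octave) / 127.0 # normalize for the (-1, 1) signal range
```

For example, `constrain_to_scale(60)` picks `SCALE[60 % 6]`, i.e. `SCALE[0] = 0`, adds the octave offset `12 * round(60 / 12) = 60`, and returns `60 / 127`.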

Now I need to see about providing the scale array in real time, or setting the interval values in the VCS.

Everything sounds amazing, so much tighter than the jittery DAWs I've been working with for years.