Python is running on the robot, listening to the live sound, running a hive of neural networks, and co-creating in realtime. Once it's made a decision to move, it then makes a sound. So instead of simply playing samples (as it does in the video), I'm looking at sending an array of data from the hive of nets to Kyma, and having it sculpt a sound from the whole array.
So far, I have python-osc running in the robot script (replacing the sample-triggering func) and communicating with my Kyma machine (validated using OSCulator). I just need to get it into Kyma, and then I'll be able to share it with y'all.
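For anyone curious about the wire format involved: python-osc does all of this for you (e.g. `SimpleUDPClient.send_message("/hive", values)`), but here's a minimal stdlib-only sketch of how an array of floats gets packed into a single OSC message on its way to Kyma. The `/hive` address and the port are hypothetical placeholders, not what my script actually uses.

```python
import socket
import struct


def encode_osc(address: str, floats: list[float]) -> bytes:
    """Pack an OSC message: padded address, padded type tags, big-endian float32 args."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated, then zero-padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # e.g. ",fff" for 3 floats
    for v in floats:
        msg += struct.pack(">f", v)  # 32-bit big-endian float per the OSC 1.0 spec
    return msg


def send_hive_state(values: list[float], host: str = "127.0.0.1", port: int = 8000) -> None:
    """Fire the hive's output array at the Kyma machine over UDP (address/port are placeholders)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encode_osc("/hive", values), (host, port))
    sock.close()
```

With python-osc itself, the equivalent is just `SimpleUDPClient(host, port).send_message("/hive", values)`, which is what the robot script does.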
Fingers crossed we have a KISS soon, so I can perform these as an autonomous orchestra.