It would probably take a lot of work, but I wonder if the NSynth (https://www.theverge.com/circuitbreaker/2018/3/13/17114760/google-nsynth-super-ai-touchscreen-synth) code could be brought into the ER-301? Training the models might have to be done on a laptop, and then brought in via the SD card. It’s all open source as far as I can tell.
(On that note, I really need to dig into how to make units on here with the SDK. Wondering similarly if various Mutable Instruments units can be ported over)
Saw the NSynth articles pop up earlier this morning - looks interesting!
NSynth looks unreal. Would be crazy cool in the 301, but I doubt it would work. Maybe a new unit that could take two sounds and blend them at the binary/sample level to create new sounds, not just mix the two sounds together?
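For what it's worth, here's a minimal sketch of what plain per-sample blending looks like (just linear interpolation between two waveforms). This is the "just mixing" baseline the idea above is trying to go beyond; NSynth's actual morphing interpolates in a learned latent space, not over raw samples. The `blend` function and its behavior are my own illustration, not anything from the NSynth or ER-301 code:

```python
def blend(a, b, t):
    """Linearly interpolate two equal-length sample lists; t in [0, 1].

    t = 0.0 returns pure a, t = 1.0 returns pure b, t = 0.5 an equal mix.
    Hypothetical example only -- NSynth morphs in latent space instead.
    """
    if len(a) != len(b):
        raise ValueError("inputs must be the same length")
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

# Equal mix of two tiny example waveforms:
print(blend([1.0, 0.0, -1.0], [0.0, 1.0, 0.0], 0.5))  # [0.5, 0.5, -0.5]
```

Per-sample blending like this just sums scaled waveforms, so it sounds like a crossfade; the interesting part of NSynth is that interpolating in the autoencoder's latent space produces timbres neither source contains.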
The SDK is not really available yet, just the middle layer, which allows for ‘patching’ existing functionality via code. My understanding is that once the full SDK is released, somebody might be able to port some Mutable code, but I would think the Google synth is more involved.
Keep in mind that the hardware in the video is (quite disingenuously) just a rompler playing back samples generated with extremely computationally intensive algorithms on another computer.