Wow, coming from you, that makes me so happy! In all seriousness, I think the patching is a bit clumsy, but as long as it works… It's not like it was a performance instrument.
I still wonder about one important design choice: currently, regardless of the sequence length, the generated file is 1024 samples long and the data is "stretched" into it. I chose the smallest value that can be displayed on the er-301, and it also keeps files very small. But I am thinking of having a fixed number of samples per step instead. That would make the patching cleaner and would allow creating very long CV sequences (songs?) without losing timing precision.
Pros of a fixed total amount of samples:
- very small files
- same length for every file, so they could easily be synced when read with sample players.
Cons of a fixed total amount of samples:
- patching is a bit messy (when 1024 is not a multiple of the number of steps, the timing of the events has to be adjusted by + or - 1 sample)
- resolution would not be enough for long CV sequences.
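To make that rounding con concrete, here is a little sketch (in Python rather than Max, and with hypothetical helper names) of what happens when 1024 samples are distributed across a step count that doesn't divide evenly:

```python
TOTAL_SAMPLES = 1024  # fixed file length, as in the current design

def step_lengths(num_steps, total=TOTAL_SAMPLES):
    """Distribute `total` samples across `num_steps` by rounding boundaries."""
    boundaries = [round(i * total / num_steps) for i in range(num_steps + 1)]
    return [boundaries[i + 1] - boundaries[i] for i in range(num_steps)]

# 16 divides 1024 evenly: every step is exactly 64 samples long.
print(step_lengths(16))       # [64, 64, 64, ...]

# 12 does not: step lengths alternate between 85 and 86 samples,
# i.e. event timing jitters by +/- 1 sample across the sequence.
print(set(step_lengths(12)))  # {85, 86}
```

The total always stays at exactly 1024, but any step count that isn't a power of two ends up with unevenly sized steps.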
Pros of a fixed sample amount per step:
- simpler and cleaner MaxMSP patching
- resolution would not be a problem for long sequences
Cons of a fixed sample amount per step:
- longer files (but still quite small; we're talking about roughly a 1-second file at 44100 Hz for a 64-step sequence)
- files would no longer all be the same length, so the best sync between them, in my opinion, would be done by global saw oscillators driving "sample scanner" units to read the "cv" files.
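For comparison, the fixed-per-step alternative is trivial to render: every step just holds its value for a constant number of samples. A rough sketch (the per-step length of 689 is my own assumption, picked so that 64 steps come out near 1 second at 44.1 kHz; the real patch might choose a rounder number):

```python
SAMPLES_PER_STEP = 689  # assumed: 64 steps * 689 samples ~ 44096 ~ 1 s at 44.1 kHz

def render_cv(step_values, samples_per_step=SAMPLES_PER_STEP):
    """Hold each step's CV value for a fixed number of samples."""
    buf = []
    for v in step_values:
        buf.extend([v] * samples_per_step)
    return buf

seq = [i / 63 for i in range(64)]   # a 64-step ramp from 0 to 1
buf = render_cv(seq)
print(len(buf), len(buf) / 44100)   # ~44096 samples, just under 1 second
```

Every step is exactly the same length regardless of step count, which is where the timing-precision win comes from.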
If you guys have opinions about these concerns, let me know!
I might do this as well, good idea.