Techniques for Real-time Data-driven Synthesis

I’m starting this thread now because

  1. @Joe was posting very interesting results from his experiments using loopers to polyphonically stack an external oscillator in real-time, but in the bug thread.
  2. There are so many ways to do this and I have some of my own ideas :smiling_imp: that I want to try.
  3. It’s just plain interesting.

Moving relevant posts over to this thread now…

Here’s a video of what I’ve been up to. Trying to use the ER-301 to clone an external OSC so that I can play it polyphically, even while making live adjustments to the sound source. Ideally, I’d like to be able to punch out of recording so that the sound can be saved and free up the external OSC.

I am probably not the best video maker, but I thought it might be easier to give an overview explanation this way. At least you can see the 301, so that should help. I will try to clarify if not clear. :slight_smile:

Essentially this is very close to working, but I am getting some unwanted snap, crackle, pop. I think where I’m having trouble is perhaps that the buffer length does not match the waveform length? The crossfade on the sample player reset does not seem to be quite smoothing things out enough.
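Here’s a rough numpy sketch of what I suspect is going on (the sine frequency and loop length are made-up numbers, not what’s actually in my chain): if the loop isn’t a whole number of waveform periods long, the signal jumps at the wrap point, and that jump is the pop.

```
import numpy as np

sr = 48000          # sample rate
freq = 440.0        # stand-in for the external OSC's frequency
loop_len = 1000     # loop length in samples -- NOT a whole number of periods

t = np.arange(loop_len) / sr
loop = np.sin(2 * np.pi * freq * t)

# The pop is the gap between the sample the waveform *would* have continued
# to and the first sample of the loop it actually jumps back to.
expected_next = np.sin(2 * np.pi * freq * loop_len / sr)
print("samples per period :", sr / freq)
print("jump at loop point :", abs(loop[0] - expected_next))

# The same test with the loop trimmed to (nearly) a whole number of periods:
clean_len = int(round(10 * sr / freq))
expected_next = np.sin(2 * np.pi * freq * clean_len / sr)
print("jump, trimmed loop :", abs(loop[0] - expected_next))
```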

Any thoughts appreciated!

I thought we agreed on polyphonically? :joy_cat:

Oops typo. :slight_smile: I think we said we were trying to “polyphonify” an external oscillator.

Polyphonification. Sounds kind of dirty…

Oh yeah - I thought when I typed it, it didn’t seem quite right :smiley:

Wait… was it not polyphonify?

It sounds like a bad crossfade and a jump at the zero crossing. But I think that’s the nature of looping and recording a source that pure and short in real-time, vs. using a DAW to edit it perfectly in non-real-time…

You can hide it a little more easily if you record a long sample, e.g. 10 seconds, so you won’t hear that crossfade as often. And because the sample is longer, you can use a long fade time parameter to hopefully mask the loop.
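Something like this is what the fade is doing at the loop point - just a rough sketch of the idea, not the ER-301’s actual internals, and the lengths here are placeholders:

```
import numpy as np

def crossfade_loop(x: np.ndarray, fade: int) -> np.ndarray:
    """Blend the tail of a loop into its head with an equal-power crossfade,
    so the seam is smeared over `fade` samples instead of being a hard jump."""
    head, tail = x[:fade], x[-fade:]
    ramp = np.linspace(0.0, 1.0, fade)
    # Equal-power (sin/cos) curves keep the perceived level roughly constant.
    blended = head * np.sin(ramp * np.pi / 2) + tail * np.cos(ramp * np.pi / 2)
    return np.concatenate([blended, x[fade:-fade]])

sr = 48000
ten_seconds = np.random.randn(10 * sr) * 0.1           # stand-in for the recording
smoothed = crossfade_loop(ten_seconds, fade=sr // 2)   # 0.5 s fade
# With a 10 s loop, the (now blended) seam only comes around every ~9.5 s.
print(len(ten_seconds), "->", len(smoothed))
```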

Thanks for the thoughts, @NeilParfitt. I thought that might be what I was running into. I guess there may be no way around it.

I have actually tried some longer buffer sizes, and you’re exactly right about that. It’s a bit of a balancing act though, as longer buffers lead to more lag in adjustments of the external OSC. So, for example, a 10s buffer leads to (up to) 10s of delay between wiggling a knob on the external OSC and hearing the result.

One of the things I was hoping for was real(ish) time adjustment - vs. having to choose a specific sound, then record and slice a waveform for perfect loop points…

I thought it was polyphonize :wink:

Love the video. That’s a really amazing use-case you have put together there… taking any mono source and making it poly! Brilliant!

yeah - for it to be perfect, the punch-in would have to be at the start of a cycle of the waveform, and the punch-out would have to be exact as well, right where a cycle finishes.

This is right - I wouldn’t be too surprised if you could sync the two and actually achieve this!!

It is a super cool idea!!!

I have been trying this actually - using a tap tempo unit with its tap modulated by the source waveform, and then using the tap tempo to reset the sample loopers and players. No love though.

I have also tried the pitch shifting delay (instead of sample players) at 100% wet, using V/oct to control the speed, though it does not seem to be set up for 1V/oct (and I’m not sure it would produce a good result if it were).

Nothing’s really working 100% seamlessly. Probably the best compromise for now, if you wanted to do this, is to do what I did in the video: set a moderate buffer length (maybe 0.5-1s) to balance masking the artifacts against minimizing the lag when adjusting the external OSC, and when you get something you like, record the source wave and use a DAW to trim it to an exact cycle length.
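For the trimming step, what you’re hunting for in the DAW is a slice between two matching rising zero crossings, i.e. a whole number of cycles. A rough sketch of the same thing in numpy (the 220 Hz test tone is just an example):

```
import numpy as np

def trim_to_whole_cycles(x: np.ndarray) -> np.ndarray:
    """Cut a recording down to a whole number of cycles by slicing between
    the first and last rising zero crossings, so the loop wraps smoothly."""
    rising = np.flatnonzero((x[:-1] <= 0) & (x[1:] > 0)) + 1
    if len(rising) < 2:
        return x                      # not enough crossings to trim
    return x[rising[0]:rising[-1]]

sr = 48000
t = np.arange(int(0.1 * sr)) / sr
raw = np.sin(2 * np.pi * 220.0 * t + 0.7)   # recorded at some arbitrary phase
loop = trim_to_whole_cycles(raw)
# The trimmed loop now starts just above zero and ends just below it, both on
# the rising part of the cycle, so the wrap is nearly seamless.
print("first/last samples:", loop[0], loop[-1])
```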

I’ve done a few experiments here, and like Neil mentioned, the purer the waveform, the more noticeable the reset event is. Using it with some complex phase-modulated signals (which really was the goal - no need to clone a sine wave here), it’s less pronounced. I did actually get some results that I was having fun playing on the keyboard, though if I were to use them in a recording I’d definitely trim them to a cycle length and use them directly in a sample player.

I will probably move on for a while, and maybe come back later. Lots more fun things to explore :slight_smile:

I guess this is what I would have suggested to try first. It’s a workaround but for now 1V/oct control is possible via

1V/oct --> Sine Osc(1V/oct) --> Period-o-meter --> Pitch-shifting Delay(delay)

EDIT: Actually scratch this. I had Karplus on my mind when I wrote it. The Pitch Shifting Delay has a speed input (linear FM), and at the moment there is no way to convert 1V/oct to a speed in the ER-301. So I should really provide a 1V/oct input on the Pitch Shifting Delay.
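For reference, these are the two mappings involved (just a sketch of the math; the 0V reference frequency is an arbitrary assumption): the chain above produces a period, whereas the Pitch Shifting Delay’s speed input wants a plain exponential of the voltage.

```
def volts_to_speed(volts: float) -> float:
    """1V/oct to playback-speed ratio: each volt doubles the speed,
    relative to whatever speed corresponds to 0V."""
    return 2.0 ** volts

def volts_to_period(volts: float, f0: float = 27.5) -> float:
    """What the Sine Osc -> Period-o-meter chain measures: the period (in
    seconds) of a 1V/oct-tracked oscillator, with f0 as the 0V frequency."""
    return 1.0 / (f0 * 2.0 ** volts)

for v in (0.0, 1.0, 2.0, 2.5):
    print(f"{v:+.1f}V -> speed x{volts_to_speed(v):.3f}, "
          f"period {volts_to_period(v) * 1000:.2f} ms")
```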

Another alternative is the Manual Grains unit sharing buffers with a Sample Recorder. You might need some creative control of the grain position to avoid the discontinuity created by the record head in the looper.

So one of my personal goals with the ER-301 is to support the process of composing with (long) field recordings. I am especially interested in removing the need to pre-process or label or edit long field recordings before they can be used effectively. This encompasses a whole bag of techniques but one relevant to this discussion is the ability to move a pointer around in a long field recording and use nearby(*) audio to generate tuned amplitude-normalized oscillators. I guess you could think of it as real-time wavetable generation from an audio buffer whose contents are constantly changing. Whether it actually uses wavetable techniques under the hood or something closer to granularization is something that I am still in the process of evaluating.
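To make the wavetable flavor of this a little more concrete, here is a very rough offline sketch (emphatically not the eventual implementation, just the shape of the idea): grab some audio around the pointer, estimate a local period by autocorrelation, cut one cycle, normalize it, and resample it to a fixed table length.

```
import numpy as np

def extract_wavetable(buffer: np.ndarray, pointer: int, sr: int,
                      table_len: int = 2048, window: int = 4096) -> np.ndarray:
    """Pull a single amplitude-normalized cycle from the audio near `pointer`
    and resample it to `table_len` samples, ready to scan as an oscillator."""
    chunk = buffer[pointer:pointer + window]

    # Crude period estimate: peak of the autocorrelation between ~20 Hz
    # and ~2 kHz (lags of sr/2000 .. sr/20 samples).
    ac = np.correlate(chunk, chunk, mode="full")[len(chunk) - 1:]
    min_lag = sr // 2000
    period = min_lag + int(np.argmax(ac[min_lag:sr // 20]))

    cycle = chunk[:period]
    cycle = cycle / (np.max(np.abs(cycle)) + 1e-12)     # amplitude-normalize

    # Resample the single cycle to the fixed wavetable length.
    positions = np.linspace(0.0, period - 1, table_len)
    return np.interp(positions, np.arange(period), cycle)

# Toy usage: a "field recording" that happens to be a 150 Hz-ish tone.
sr = 48000
t = np.arange(5 * sr) / sr
field = 0.3 * np.sin(2 * np.pi * 150 * t) + 0.1 * np.sin(2 * np.pi * 450 * t)
table = extract_wavetable(field, pointer=sr, sr=sr)
print(len(table), float(table.min()), float(table.max()))
```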

(*) I use the term nearby in the generic sense. One naturally thinks that nearby means a local neighborhood in the sample buffer. However, you can also group the audio so that parts of the sample buffer that “sound similar” are also considered nearby. This way you have additional degrees of freedom in which to move the pointer: not just later or earlier in the buffer but also to another temporally unrelated but similar-sounding section of the buffer. Of course, the real challenge will be creating a total ordering on top of this audio similarity metric so that browsing it makes as much sense to the user as scrubbing a play head over a timeline. This is the part I’m not sure I will succeed at.
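And as a toy illustration of what a total ordering could even mean here (nothing like a real solution, just the simplest possible stand-in): chop the buffer into chunks, reduce each chunk to one number, and sort by it, so that moving the pointer one step lands on the most similar-sounding chunk rather than the next moment in time.

```
import numpy as np

def similarity_order(buffer: np.ndarray, sr: int, chunk_s: float = 0.25):
    """Order buffer chunks by a crude similarity feature (spectral centroid)
    instead of by time. Returns chunk start indices, most-similar-adjacent."""
    n = int(chunk_s * sr)
    starts = np.arange(0, len(buffer) - n, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)

    centroids = []
    for s in starts:
        mag = np.abs(np.fft.rfft(buffer[s:s + n]))
        centroids.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

    return starts[np.argsort(centroids)]

# Toy usage: 0.25 s tones at shuffled pitches. Sorting by the feature groups
# the chunks by pitch rather than by when they occur in the "recording".
sr = 48000
rng = np.random.default_rng(0)
tones = rng.permutation([110, 220, 440, 880, 1760] * 4)
t = np.arange(int(0.25 * sr)) / sr
field = np.concatenate([np.sin(2 * np.pi * f * t) for f in tones])
order = similarity_order(field, sr)
print(tones[order // len(t)][:10])
```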

:open_mouth: I’m stunned…this sounds amazing…definitely pursue it!

this is awesome and i’ve been inching towards it mentally so it’s good to see. :slight_smile:

@Joe I would be more inclined to use an external source for sync - it could just be the trigger for each note sent to both the sync input on your oscillator and the ER-301 - that way the oscillator and the recorder may just work?

It would definitely be something I would try anyway :slight_smile:

@odevices Joe is a very clever chap and has endless ideas that are both inspirational and challenging - we have worked together on a variety of projects - I really must get my act together and finish off the big one we did earlier this year and release it into the wild :wink:

Thanks for splitting this topic - it was starting to feel very misplaced to me too. :slight_smile:

That’s a very nice compliment. Thank you, @anon83620728! I tend to think of myself more as “even a blind squirrel gets a nut once in a while” and I am a lucky squirrel. :stuck_out_tongue:

This whole idea sounds amazing. I think I sort of loosely get what you’re talking about with total ordering applied to a field sample. It sounds very mathy. But I believe in you, Brian-san! Will be interested to watch this unfold and hopefully be part of it.

Do you mean the gate signal from, in this case, my keyboard?

When I tried using the pitch shifting delay before, I was definitely hearing something strange, so I thought I’d explore it a little further and shot another video. It seems to be affecting the amplitude of the input. This may be user error, but if not, then the pitch shifting delay might need a few more tweaks before it could be used for this purpose. As I think about it though, if it had a V/oct control and didn’t have the behavior shown below, it’s actually probably better suited to the live cloning idea.

EDIT: Geesh, wherever I say “variable delay” in the video, just mentally replace that with “pitch shifting delay”. I’m off to the coffee pot now.

Sure, why not?

I suppose it will depend on how your oscillator implements sync, but a gate should work; if not, a gate-to-trigger conversion before the sync input?

That would be phase cancellation where two grains are overlapping but with different phases. It’s especially obvious with simple waveforms. One possibility is to phase lock the grain onsets but then you will probably hear some amplitude modulation from the grain envelopes that are no longer exactly lined up. Actually, I’m not aware of a granular method that has no artifacts.
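A minimal numerical sketch of the trade-off, with made-up grain sizes: two identical Hann-windowed sine grains offset by half a waveform period largely cancel, while phase-locking the onsets to a whole period sums cleanly but leaves the envelopes slightly misaligned.

```
import numpy as np

sr = 48000
freq = 220.0
n = 2048                                  # grain length in samples
t = np.arange(n) / sr
grain = np.hanning(n) * np.sin(2 * np.pi * freq * t)

# Second grain starts half a waveform period later: where the two grains
# overlap their phases are opposite, so the sum largely cancels.
offset = int(sr / freq / 2)
mix = np.zeros(n + offset)
mix[:n] += grain
mix[offset:] += grain
print("single grain peak       :", np.abs(grain).max())
print("half-period offset peak :", np.abs(mix).max())

# Phase-locking the onsets (offset = one whole period) avoids the
# cancellation; in a stream of such grains, the envelopes that no longer
# line up exactly are what you hear as amplitude modulation.
offset = int(sr / freq)
mix = np.zeros(n + offset)
mix[:n] += grain
mix[offset:] += grain
print("whole-period offset peak:", np.abs(mix).max())
```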
