Generative music approaches

Continuing the discussion from Some new Units to share:

I’m starting a new thread for this topic as I don’t want to hijack Joe’s “Some new Units to share” thread.
Thanks for the quicksave, iPassenger. I will dive into that one later. In the meantime I figured the basic concept out by myself. Here is a short video of a quick setup I did, with two independent generative lines just sharing the same external clock pulse. The notes are generated with a pingable scaled random followed by a scale quantizer. The pings for each channel are made from a tap tempo used as a multiplier, with a weighted coin toss for the rests.
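
If it helps to see the logic outside the rack, here is a rough Python simulation of the two lines. The scale, rest probabilities, and pitch range are placeholders of mine, not the actual patch settings:

```python
import random

# Minor pentatonic degrees in semitones -- an assumed scale,
# not necessarily what the video used.
SCALE = [0, 3, 5, 7, 10]

def quantize(semitones):
    """Snap a pitch CV (here in semitones) to the nearest scale degree."""
    octave, s = divmod(semitones, 12)
    return int(octave) * 12 + min(SCALE, key=lambda d: abs(d - s))

def generative_line(clock_ticks, rest_probability, pitch_range=24, seed=None):
    """One voice per external clock tick: a weighted coin toss inserts
    rests, otherwise a scaled random value goes through the quantizer."""
    rng = random.Random(seed)
    notes = []
    for _ in range(clock_ticks):
        if rng.random() < rest_probability:
            notes.append(None)                        # rest
        else:
            notes.append(quantize(rng.uniform(0, pitch_range)))
    return notes

# Two independent lines sharing the same clock, as in the video.
voice_a = generative_line(16, rest_probability=0.25, seed=1)
voice_b = generative_line(16, rest_probability=0.5, seed=2)   # sparser voice
```

Each `None` is a rest from the coin toss; every other step lands on the scale.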

Thanks to all in this great forum for sharing the wealth of knowledge!


And here is a quicksave of the patch.
You need Joe’s bespoke units to get this running. I built it running firmware 4.03.
Just add an external clock on C3 and it will start doing its magic on OUT 1&2.


Very odd, like computer jazz. I wonder if putting some clocked loopers in there, either after the audio or the melody generation could lead to more patterny stuff.

Been thinking a lot about generative patching lately. I’ve also done stuff like this, a few voices all being played by random sources into a quantizer with a clocked random gate driving the envelopes. Really hope we get some turing machine units soon! Would definitely add some musicality to all this randomness. With pattern length, randomization, etc. Maybe in the next bespoke pack, Joe? :wink:
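
For reference, the heart of a Turing Machine is tiny: a looping shift register with a knob-controlled chance of flipping the bit that recirculates. A toy Python model (the register length and the binary-weighted output are my own simplifications):

```python
import random

class TuringMachine:
    """Toy model of a Turing Machine-style sequencer: a looping shift
    register; flip_prob sets the chance each recirculated bit mutates."""
    def __init__(self, length=8, flip_prob=0.0, seed=None):
        self.rng = random.Random(seed)
        self.bits = [self.rng.randrange(2) for _ in range(length)]
        self.flip_prob = flip_prob

    def step(self):
        bit = self.bits.pop(0)
        if self.rng.random() < self.flip_prob:
            bit ^= 1                          # occasionally mutate the loop
        self.bits.append(bit)
        # Read the register as a binary-weighted CV value.
        return sum(b << i for i, b in enumerate(self.bits))

tm = TuringMachine(length=8, flip_prob=0.0, seed=5)   # fully locked pattern
locked = [tm.step() for _ in range(16)]

tm_loose = TuringMachine(length=8, flip_prob=0.5, seed=9)  # evolving pattern
evolving = [tm_loose.step() for _ in range(16)]
```

With `flip_prob` at zero the pattern loops exactly; turn it up and the loop slowly mutates, which is exactly the musicality-from-randomness trade-off you describe.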

I’d love to get some ideas for a patch I want to prepare for a gallery show I have coming up… The basic idea of the patch is to make a nature soundscape from a bunch of different samples thrown together, e.g. birds, streams, trees swaying in the wind, crickets, etc., and through a series of envelope followers and motion sensors, have various elements in the patch react in real time to the nature sounds.

My only problem is I’m having a hell of a time dialing in envelope responses, slew limiters, limiters, or grid quantizers, to hit the scale quantizer just right. It all just gets very jumbled and sometimes it just gets stuck on one or two notes of the scale. Anyone have any advice for using envelope followers with field recordings to get nice musical results? I was thinking as a last resort I would just use random sequences from my keystep to control the musicality of the voices, and the field recordings would just affect the volume of the elements. But wouldn’t it be cool to get ascending and descending scales from the dynamics of the wind blowing through trees?

Also hope one day the quantizer gets some more attention. It’s cool already but would be nice to have more complex scales and the ability to change the inversion and root of the scale. One day…

Sounds like a really neat patch you’re making. From what experience I have with this (not a ton), getting it dialed in right can be kind of fiddly. Be interested to hear if others have good tips too!

Filtering the audio going into the EF might help to isolate the most important part of the signal to track? Perhaps try a couple of EFs in series?

You are trying to generate pitches that follow the volume of the field recording right? Not trying to actually track the recording’s pitch?
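
If it’s useful, the basic EF is just rectify-then-smooth; here’s a minimal Python sketch (the smoothing coefficient is an arbitrary stand-in for the EF’s attack/release setting):

```python
import math

def one_pole_lowpass(signal, coeff):
    """Simple one-pole lowpass; coeff near 1.0 = heavy smoothing."""
    out, y = [], 0.0
    for x in signal:
        y = coeff * y + (1.0 - coeff) * x
        out.append(y)
    return out

def envelope_follower(signal, smooth=0.99):
    """Full-wave rectify, then lowpass: a basic envelope follower."""
    return one_pole_lowpass([abs(x) for x in signal], smooth)

# A bursty test signal: silence, then a loud tone burst, then silence.
sig = [0.0] * 200 + [math.sin(0.3 * n) for n in range(200)] + [0.0] * 200
env = envelope_follower(sig)
```

Pre-filtering the input (as suggested above) just means running `sig` through a bandpass before this, so the envelope only tracks the element you care about.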


Yeah, I tried using the EQ to carve out problem frequencies, but didn’t do anything too extreme, to keep it sounding somewhat natural. A better way to go about it, I’m realizing now, would be to do some extreme EQ-ing to notch out the exact signal I want as control, and then go into an envelope follower or two. What would be the advantage of using a second envelope follower vs. a slew limiter? Also, now I’m thinking using an EF on the volume of the actual sample I want as control would further isolate it… All these ideas are flooding in and I’m away from my modular for 3 days!

But preferably yeah the pitch would follow the volume of the recording. There isn’t really any harmonic information in most of the samples I’m using besides the bird calls. Although that would be really cool to derive pitch from the bird calls… Hmmm… :thinking: So many possibilities, so little time! :grimacing:

Search in the forum for my pll patch. Still very raw but i had limited success in pitch tracking with it!
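
not a pll, but for clean isolated tones even a naive zero-crossing counter gets in the neighborhood. a python sketch just to show the principle (not a substitute for a real tracker, and it falls apart on noisy field recordings):

```python
import math

def zero_crossing_pitch(samples, sample_rate):
    """count positive-going zero crossings per second -- a toy stand-in
    for real pitch tracking; only works on clean, simple tones."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) / sample_rate)

sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]  # 1 s of A440
estimate = zero_crossing_pitch(tone, sr)
```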


TBD. :slight_smile: I don’t know if there is any, or if more than one slew limiting device makes any sense. Just something I’ve played with, with mixed results. Kind of like gain staging - staging multiples to smooth out a derived control signal more than a single one can.
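
To illustrate why staging might help: if the slew is the exponential “lag” type, two in series give a smoother, two-pole response than one. A quick Python sketch with made-up coefficients:

```python
def lag(signal, coeff):
    """One-pole lag processor (exponential slew); higher coeff = slower."""
    out, y = [], signal[0]
    for x in signal:
        y += (1.0 - coeff) * (x - y)
        out.append(y)
    return out

# A jumpy derived control signal: alternating extremes.
jumpy = ([0.0] * 20 + [1.0] * 20) * 5
one_stage = lag(jumpy, 0.9)
two_stage = lag(one_stage, 0.9)      # same lag, staged in series

def roughness(sig):
    """Largest sample-to-sample jump: a crude smoothness metric."""
    return max(abs(b - a) for a, b in zip(sig, sig[1:]))
```

Note this only compounds for lag-style slew; a hard rate-limiting slew limiter fed its own output at the same rate is a pass-through, which may be why results are mixed.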

Great topic, @TheMM, and nice work on the patch. Quite a few of the bespoke units I’ve built recently are really intended to be generative enablers. Definitely a topic of interest for me. Forgive me for what will probably be a lengthy post.

So here’s a generic recipe for a generative patch that I’ve been playing around with since I assembled my Eurorack system. I don’t think it’s just me - you can find countless examples of some flavor of this recipe on YouTube and modular forums.

  1. Start with 1 or more sources of random
  2. Assign them to CV destinations like pitch, filter cutoffs, other modulators, to modulate your voices
  3. Possibly have additional randoms modulate the original randoms
  4. Dedicate the rest of your modules and tweaking to trying to tame the random and make it something musical.
  5. Wait, tweak, and hope something nice pops out
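
Steps 1-3 condensed into a Python sketch, with a slow random modulating the range of a fast one before the quantizer does step 4’s taming (scale, rates, and ranges are arbitrary stand-ins):

```python
import random

rng = random.Random(42)
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # an assumed scale

def quantize(semitones):
    """Snap a random pitch value to the nearest scale degree."""
    octave, s = divmod(semitones, 12)
    return int(octave) * 12 + min(C_MAJOR, key=lambda d: abs(d - s))

notes = []
spread = 12.0
for step in range(32):
    if step % 8 == 0:
        # Step 3: a slower random modulates the faster random's range.
        spread = rng.uniform(5.0, 24.0)
    # Steps 1-2 and 4: random source -> pitch CV -> quantizer.
    notes.append(quantize(rng.uniform(0.0, spread)))
```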

It’s good fun. You can get some happy accidents. You can record it and call it a song if you want to.

But on the whole (outside of having some fun), this recipe is not really working for me. It never really results in what I would call a “composition” that has repeat listening value over a long period. More often than not for me it results in what I call “blip music.”

So my hope is that modules from Orthogonal Devices, monome, and the like will help me to break out of this recipe and get into some next-gen generative types of patching. A recipe that looks more like this:

  1. Start with 1 or more deliberate ideas
  2. Assign one or more voices to execute these ideas
  3. Have “the system” generate ideas based on the original idea - follow it somehow
  4. Start to introduce changes to the idea - possibly from a source of random, or possibly a deliberate, discrete, human-injected change
  5. Have the system as a whole react to the change in a musical manner and generate something new
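
I don’t pretend to know how to patch this second recipe yet, but the listen-and-react loop can at least be caricatured in code: a seed motif, a derived accompaniment, and small mutations injected over time. Every “musical rule” below is a placeholder choice of mine, not a real method:

```python
import random

rng = random.Random(7)
SCALE_SIZE = 7                      # seven scale degrees, an assumed choice

motif = [0, 2, 4, 2]                # step 1: a deliberate idea (scale degrees)

def follow(idea, offset=2):
    """Step 3: derive a part from the idea -- here a deliberately simple
    'listener' that transposes within the scale."""
    return [(d + offset) % SCALE_SIZE for d in idea]

history = [list(motif)]
for generation in range(8):
    # Step 4: occasionally introduce one small, manageable change.
    if rng.random() < 0.5:
        i = rng.randrange(len(motif))
        motif[i] = (motif[i] + rng.choice([-1, 1])) % SCALE_SIZE
    history.append(list(motif))

# Step 5: the accompanying voice re-derives its part from the changed idea.
accompaniment = follow(motif)
```

The interesting (and hard) part is replacing `follow` with something that actually listens, which is exactly what the jam-session analogy below is about.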

To further clarify, let’s set the modular aside for a sec and imagine that I play guitar (I do - we could debate how well). Let’s say I’ve worked out an idea. You could call it a pattern. Outside of the electronic music context you’d probably call it a riff, or a hook, maybe.

I invite some other musicians over to play bass, drums, and keys. I start playing my riff and in doing so I’ve defined some things - key, tempo, time sig, and some other things that define a “feel”. From that the other players are able to jump in and construct some additional parts to accompany it. Everyone is listening to each other so what gets generated is cohesive.

Before you know it, we have a really nice thing going, and we’re playing it in a loop. It’s cool but it will get boring after a few minutes if it just keeps repeating.

At that point in this jam, anyone could introduce a change. It can’t be a drastic change, or the whole thing will fall apart. But if a change is introduced a little at a time by one player at a time, the others can listen and react to that and modify what they’re playing. Over time, the piece may change significantly from the original idea due to repeated introductions of small, manageable changes.

So, I don’t know if this is a pipe dream or not - to start to model something like this in a modular synth. Modules like the ER-301 and Teletype spark ideas that make it seem like this next-gen generative patching might be feasible.

OK, I’ve rambled quite a bit. Any thoughts?


Loopers can be an interesting part of this. Put a random source in a mixer channel. Put an arbitrary offset generator in another mixer channel. Be sure you are comfortable controlling the relative levels of the mixer channels, for example with an offset generator (i use a nanokontrol thru the fh-1). then put a looper after them. control the record/overdub with a random gate (clocked or not: clocked with joe’s bespoke units, unclocked with velvet noise, for example). keep the feedback non-zero but below unity.
now you have a looper that randomly picks up voltages. the good part of it is that if you set your random mixer channel to -inf and don’t input anything on the arbitrary cv source, the looper’s buffer will gradually empty out, fading back to zero in an organic manner. this is especially interesting on gates.
so you can tame random and mix it with arbitrary gestures, and also have a very organic random buffer.
send it to wherever in your patch.
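
in code the same looper idea looks like this (loop length, feedback amount, and gate probability are arbitrary) - note how the buffer fades by exactly the feedback factor per pass once the inputs are pulled:

```python
import random

rng = random.Random(3)
LOOP_LEN = 16
FEEDBACK = 0.7                  # non-zero but below unity: old takes decay
buf = [0.0] * LOOP_LEN

def pass_through(buf, source, record_gate):
    """one full loop pass: at each step the (randomly gated) record head
    mixes new cv into the decayed old buffer contents."""
    for i in range(len(buf)):
        buf[i] = buf[i] * FEEDBACK + (source() if record_gate() else 0.0)

# phase 1: random cv, randomly gated record -- the looper picks up voltages.
for _ in range(4):
    pass_through(buf, lambda: rng.uniform(-1, 1), lambda: rng.random() < 0.5)
captured = max(abs(v) for v in buf)

# phase 2: random channel at -inf, nothing on the arbitrary source:
# the buffer gradually empties out, fading organically back to zero.
for _ in range(10):
    pass_through(buf, lambda: 0.0, lambda: True)
faded = max(abs(v) for v in buf)
```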

tip: try to sometimes substitute random with chaos. a good source of chaos is feedback. send the audio out of a VCO thru a filter or a non-linear processor, then use the output to control the same VCO frequency, or PW or whatever. find the sweet spots where it trembles between all out batshit crazy and patterny.
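
a digital toy version of the same trick - a sine “vco” with a one-pole filter in its own feedback path, all constants made up; sweep fb_amount to hunt for the spots between tame and batshit:

```python
import math

def feedback_osc(n_samples, base_freq=0.05, fb_amount=0.9, smooth=0.95):
    """a sine 'vco' whose frequency is pushed around by its own
    lowpassed output -- a toy stand-in for the analog feedback patch."""
    phase, lp, out = 0.0, 0.0, []
    for _ in range(n_samples):
        y = math.sin(2 * math.pi * phase)
        lp += (1.0 - smooth) * (y - lp)          # filter in the feedback path
        freq = base_freq * (1.0 + fb_amount * lp)
        phase = (phase + freq) % 1.0
        out.append(y)
    return out

tame = feedback_osc(2000, fb_amount=0.1)   # near-periodic behaviour
wild = feedback_osc(2000, fb_amount=2.5)   # strong feedback, wilder motion
```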


this is a private share, i ask you not to share it with others because i’d like to properly release this.
i think i did a pretty good job, a mixture of generative and arbitrary. started with a really weird loop i got out of the doepfer a-189-1 bit modifier and processed it with granular and non-granular players, many many weird feedbacky routings and modulations… all performed with mixer gestures on my desk (dynamic soundstaging, messing with pan, levels, reverb/delay sends, additional granulation thru clouds etc…). i think it sounds really good spectrum- and dynamic-wise. of course it is totally abstract and might or might not appeal.


thanks for the lengthy post, much appreciated. I am quite aware of both approaches to “generating music”. I am a trained piano player and do a lot of free jazz, so spontaneously composing “improvised” music with other musicians comes naturally to me.

Looking for something to inspire me a few years ago, I had the idea of building an “inspirational feedback loop” for my grand piano using a modular synth. I wanted to do a solo piano concert with this. I play something on the piano, the modular picks it up, modulates it, and spits out something based on what I played before; then I take this as inspiration and feed the system with new music based on the machine-generated music. So much for the theory…

I am far from a working system. A system based on envelope followers, following the dynamics of my audio and transferring them to CV to do something with, is not enough. Pure random is just too random and generates only the “blip music” you mentioned. Building something that sounds like music with a system like this is not a trivial task, because you have to define for yourself “what is music” and translate that as well as possible into algorithms.

So I am still in the phase of experimenting with different approaches. The most promising, but fairly untested, is to sample the audio, fragment it based on tempo and waveform analysis, and then put it together using pitch shifting, delays, loops, whatever. I think the ER-301 is the best tool to build a system like this, but my knowledge is too limited and - boy, do I have a lot to learn.
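
To make the fragment-and-reassemble idea concrete, here is a deliberately crude Python sketch: silence-based slicing stands in for real tempo/waveform analysis, and decimation stands in for real pitch shifting. All thresholds are assumptions:

```python
import math
import random

def slice_on_silence(samples, threshold=0.05, min_len=8):
    """crude waveform analysis: cut wherever the signal falls below a
    silence threshold, keeping fragments of at least min_len samples."""
    fragments, current = [], []
    for s in samples:
        if abs(s) >= threshold:
            current.append(s)
        else:
            if len(current) >= min_len:
                fragments.append(current)
            current = []
    if len(current) >= min_len:
        fragments.append(current)
    return fragments

def reassemble(fragments, seed=0):
    """shuffle the fragments and crudely 'pitch shift' some of them by
    decimation (every other sample = roughly an octave up)."""
    rng = random.Random(seed)
    order = fragments[:]
    rng.shuffle(order)
    out = []
    for frag in order:
        out.extend(frag[::2] if rng.random() < 0.5 else frag)
    return out

# two sine bursts separated by silence, as a stand-in for piano phrases
burst = [math.sin(0.2 * n) for n in range(50)]
audio = burst + [0.0] * 30 + burst
frags = slice_on_silence(audio)
remix = reassemble(frags)
```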

So I am sucking up every piece of info in this forum, hoping it would lead me to the perfect inspirational feedback system one day…


my personal rambling should begin with a disclaimer:

  • i am a sucker for random sources, period
  • though i am not trying to achieve completely generative pieces in any tradition,
    i am also a sucker for ‘generative elements’ in my own music.
  • thus i am very pleased to see more and more er301 solutions emerging
    from all these threads that help us all embrace the generative possibilities
    of our beloved beast, i.e. the er-301
  • i only intend to enrich those conversations here

i find the comparison to “jam sessions” very fruitful. and i’d like to add another observation.

i once had an interesting conversation on the difference between ‘composition’ and ‘improvisation’.
Our findings were somewhat shocking. if you consider the actual processes of production, the main difference
seems to be the time between having a musical ‘idea’ and that idea “actually coming to existence”.
but this (rather common) idea of composition vs. improvisation leads to a problem that resembles very much
the heap (or sorites) paradox (which i believe is not really a paradox, at all).
for when is the time between idea and actual sound short enough to call the
process and the result an improvisation… don’t want to dive deeper here.

but to me, there were consequential lessons i could draw from both observations:
a) from Joe’s you can learn that you absolutely have to start with SOMETHING. and also that the
elements (modules) of a given ensemble (system) have to listen to each other in particular ways
in order to achieve changes that result in a piece that Joe called ‘cohesive’.
b) from my conversation about heaps of time i learned that i will have to learn and practice making
musical changes regardless of the time i have.

Though my personal recipe does imply both of the recipes Joe has suggested, it tries to leave
even more space for whatever…

  1. ALWAYS press the record button first or go onto a stage of your choice (or both)!
  2. start with at least one idea and apply any number of
    ‘compositional’ rules you can think of or you feel like.

obviously, these two steps apply to every effort to produce canned or live music.
now, whatever you think ‘generative music’ could or should be pretty much will
decide which compositional rules you will choose!
random sources will naturally flourish in this context. interactive uis are great, too.
and please, also consider periodic cv sources in your generative patch ideas!
it might not be obvious that ‘always-repeating & never changing’ sources such as sines, saws, triangles
and squares might help you. but they absolutely can! (conversations on generative arts sometimes
tend a bit to underrepresent the chaotic possibilities of periodic occurrences :slight_smile: )
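
to illustrate: two plain sine lfos with coprime periods, summed into a quantizer, give a fully deterministic line that only repeats every 7 × 11 = 77 steps (the scale and periods here are arbitrary choices):

```python
import math

SCALE = [0, 2, 4, 7, 9]   # a pentatonic scale, an arbitrary choice

def quantize(semitones):
    """naive quantizer: snap to the nearest degree (no octave wrapping)."""
    octave, s = divmod(semitones, 12)
    return int(octave) * 12 + min(SCALE, key=lambda d: abs(d - s))

def periodic_melody(steps, period_a=7, period_b=11):
    """two always-repeating sine lfos, summed and quantized. with coprime
    periods the combined line repeats only every period_a * period_b steps."""
    notes = []
    for n in range(steps):
        cv = (6 * math.sin(2 * math.pi * (n % period_a) / period_a)
              + 6 * math.sin(2 * math.pi * (n % period_b) / period_b) + 12)
        notes.append(quantize(cv))
    return notes

pattern = periodic_melody(154)   # two full 77-step cycles
```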

and imho, it doesn’t really matter whether you start with a premeditated idea
(which is an absolutely good thing to do) or not. it might also turn out to be fantastic if you just try to develop
an idea during a recording or a live situation. i call that the tabula rasa situation, which is a legitimate
idea in itself. and you would then obviously practice developing ideas from scratch,
which doesn’t necessarily have to be or incorporate musical changes (though my personal taste
absolutely resembles @Joe’s. in my personal efforts i absolutely do aim for culturally cemented
anchors, while leaving space for anything else…) before you go onto a stage,
though NOT before you press the record button in your studio…

with a friend i once made an experimental session that might illustrate to you
what i was rambling on the whole time:

let the only premeditated idea be that you would radically change the subject
every 10 minutes.

  1. we prepared a (long-play) DAT-tape with only a bell sound every 10 minutes.
    on the display of the recorder we could then estimate when the change will/should occur.
  2. though we did not even prepare any connection to mixers, mics or electrified instruments
    we pressed the record button on a second recording device.
  • since we still had to connect some hardware to get any sound onto the recording,
    at the beginning of that ‘piece’ you would obviously hear those popping and cracking
    sounds that most of us would not consider ‘musical’. (don’t do this at home :wink: ) and i believe
    the first thing we came up with was some more or less ‘musical’ game with ping pong balls
    and rackets
  • some of the 10-minute-compositions were mere soundscapes using older field recordings
    and freshly squeezed noises. some of them turned out to be pretty hard sounding techno tracks.
    and if i recall it correctly, we even had some loud jazz-rock session in there.
  • due to the minimalistic approach we were absolutely aware that the result would only turn out
    to be ‘good’ if and only if we would listen to each other considerately.

just saw that @TheMM posted more of his approach.
i don’t think that i’d have to change a bit and i hope he and others
find something for their own stuff in here…

“ramble on”!


The nice thing about music is that there are no rules. The whole universe is made of vibration. Music is made of vibration. Life is vibration. Everything is allowed, but as humans we tend to limit ourselves with rules. Based on our limited ears, based on our cultural background, based on our taste, based on the taste of our friends, you name it.

I did a band project called NOTOPIA, which was basically five players in a studio starting to play without any premade rules. We had no root note, no chord sheet, no melody; we just started playing and everyone tried to get into the flow. After this day-long session we selected the best pieces and made a record from it called “Celebrating Life”.

Here is one of the tracks. We filmed the whole process in the studio:

If you want to dive deeper into the music, you can find the whole record here:

The music we got from these sessions showed that unspoken rules were all around. We got some very experimental stuff whenever we just played and forgot to think in songs, and we got some song-like things that almost sound “composed” whenever all the players switched their internal rules back on. I hope you will enjoy the stuff above and find some inspiration for your own “generative music”!


Very impressive video! Always a treat to watch a room of very skilled musicians do their thing…

I’ve been working away at the patch I described in my earlier comment and I’m very pleased with the results so far! I spent a few hours last night dialing things in for the main element of the “forest scene.” Added some more parts this morning but I’ll wait to post those until I do the actual show. Want to keep some of it a surprise! :wink:

The main things I did to help with the envelope following are: choosing samples with more isolated audio (duh), and some more drastic pre-EF EQ-ing. Those things plus using an external pitch follower and quantizer. I don’t know what it is, but I cannot get the onboard quantizer to work with any kind of taste (with complex sources like this, at least). Will probably just take some getting used to, but for now my disting is doing just fine!


Reminds me of Visible Cloaks - Valve.
Beautiful stuff.


really beautiful

YES!! I love Visible Cloaks so much.

@hyena Thank you!!!

