I just had a play with global chains, and would like to get some input on whether I set this up correctly:
Pre global chain:
Channel 1: Two mixer units, one doing some sample playback, one a synth.
Channel 3+4: Clocked delay, input from channel 1, wet/dry adjusted to taste; channel 3+4 was my stereo output.
Using global chains:
I moved the two mixer units from channel 1 to a mono global chain each. The Clocked delay was moved to a stereo global chain. At this point all the non-global channels were empty, so to get output I inserted three mixer units on channel 3+4 (still linked for stereo output), one grabbing the output of the strings, one the synth, and one the delay. The idea is that this is where I mix how much of the strings, synth and delay I want to hear.
Then in the global chain for the delay I placed two mixer units before the delay, one with the strings global chain as input, one with the synth global chain as input, acting as individual sends to the delay.
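The routing above can be sketched in code. This is just an illustrative model (NumPy, with made-up source signals and a toy feedback delay standing in for the Clocked delay), not anything the device actually runs; the point is the topology: each mono source feeds the final mix directly, and also feeds the delay via its own send level.

```python
import numpy as np

sr = 48000
n = sr // 10  # short buffer for illustration

# Stand-ins for the two mono global chains (strings and synth).
t = np.arange(n) / sr
strings = np.sin(2 * np.pi * 220 * t)
synth = np.sin(2 * np.pi * 330 * t)

def simple_delay(x, delay_samples, feedback=0.4):
    """Toy mono feedback delay, standing in for the Clocked delay."""
    y = np.zeros_like(x)
    for i in range(len(x)):
        delayed = y[i - delay_samples] if i >= delay_samples else 0.0
        y[i] = x[i] + feedback * delayed
    return y

# The two mixer units before the delay: per-source send levels.
send_strings, send_synth = 0.6, 0.3
delay_in = send_strings * strings + send_synth * synth
delay_out = simple_delay(delay_in, delay_samples=sr // 40)

# The three mixer units on channel 3+4: final level for each chain.
lvl_strings, lvl_synth, lvl_delay = 0.8, 0.8, 0.5
mix = lvl_strings * strings + lvl_synth * synth + lvl_delay * delay_out
```

Changing `send_strings` vs `lvl_strings` is what gives the extra flexibility: the strings can be sent hard into the delay while staying quiet in the dry mix, or vice versa.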
Obviously this gives much more flexibility (actually I was doing all this to be able to record things individually with the 6 channel recorder), but I was surprised to see the CPU usage go up from 42% to 49%. The only point where I felt “now I’m doing something more expensive” was when adding inputs to the mixer units acting as effect sends (on the stereo global chain): I had to assign both left and right to come from the appropriate mono global chain.
So did I miss something here? Any way to lower the CPU penalty? Any thoughts or advice?
BTW: I hope the above is clear. It’s a bit hard to explain in words, but I hope it comes across.