Latency questions

I was looking at this very interesting snippet of code from Programming in Lua and thinking of ideas for new units, but I have a few questions that I couldn’t find answers to in the forum :wink:

  • What is the overall latency of the system? (e.g. if I were to create an empty chain from IN1 to OUT1 and send a pulse to the input, how many milliseconds of delay would there be until the corresponding pulse is output?)
  • What is the expected data chunk size (i.e. how many output samples do we generate each call)?
  • How many simultaneous output chunks are buffered before output, and input chunks buffered before processing?
  • Do any factors affect latency? Is it constant during operation?
  • Can we expect input signals to be synchronised (e.g. if the same pulse is input to IN1, 2, 3 & 4, would the pulse occur at the same corresponding sample in all 4 input data streams?), or is there some difference? And the outputs (e.g. if the same signal is routed to multiple outputs, will it be in phase after reaching the analogue realm?)

Sorry for so many questions!


I used a logic analyzer to measure the time between an external trigger arriving at G1 and a signal (produced from the sample player) appearing at OUT1, and found it to be roughly 9ms (with the 48kHz firmware). This suggests a buffer size of about 384 samples (8ms at 48kHz). As a point of reference, my Akai S5000 has a trigger-to-signal delay of about 3ms. I didn’t measure the input-to-output delay for audio signals, but I would suspect that it’s also in the neighborhood of 8ms if my buffer size estimate is correct. Eventually it would be nice to be able to set the buffer size as a means of controlling latency.
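For anyone who wants to sanity-check the arithmetic, the buffer-size estimate follows directly from the sample rate. A quick sketch (the function name is mine, not anything from the ER-301 firmware):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by buffering N samples at a given sample rate."""
    return buffer_samples / sample_rate_hz * 1000.0

# 384 samples at 48 kHz is the 8 ms estimated above:
print(buffer_latency_ms(384, 48_000))  # -> 8.0
```

The remaining ~1ms of the measured 9ms would then be attributable to trigger detection, ADC/DAC conversion, and measurement overhead.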

Thanks for investigating, very interesting!

Can’t contribute much… but I did notice the latency when combining internal and external modulation on a snappy FM patch. Guess that’s part of the game…

Strange. I’m confused how the latency could be 9ms (I think that’s about what the Radio Music module has, using a Teensy).

Also, from this thread (linked below) where Brian stated the specs of the ER-301 to someone, it is quoted as:

"Bumply wrote:
I contacted Brian and asked him about it. He shared the following about the hardware, but kept the software end under wraps. Here it is:

1x ARM Cortex-A8 1GHz w/ 512MBytes of RAM (bare-metal and designed for low-latency audio, <2ms latency)

4x outputs AC-coupled up to 96kHz 24-bit high-end audio DAC (PCM4104)
16x inputs DC-coupled 60kHz 16-bit high-end bipolar precision ADC (ADS8688)
4x inputs DC-coupled 60kHz 12-bit unipolar ADC
2x uSD card (1 rear + 1 front)
1x rotary encoder
2x OLED monochrome displays
12x dual color LEDs controlled by 24 channels of 12-bit PWM
width: 30hp
depth: 2.5cm
current consumption: 300mA (peak) 200mA (avg)"

Am I possibly misunderstanding the type of latency you are referring to?

Also, if it is indeed 9ms, is that a hardware limitation, or can it be lowered through firmware?



I recorded a source going directly into my interface while also being routed in parallel through the ER-301, and the actual input-to-output latency does indeed appear to be ~8ms:

I’m also rather surprised by this finding. Hopefully @odevices is cooking up ways to bring the latency down.

Not particularly at the moment. There are other higher priority items I want to get out of the way. I suspect the bulk of the latency is due to the resampling pipeline on the inputs (60kHz downsampled to 48kHz). The latency is not particularly optimized there. FYI, after the downsampling stage, processing buffer size is currently configured to 128 samples.
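Taking the stated 128-sample processing buffer at face value, we can roughly apportion the measured ~8ms (a back-of-the-envelope sketch; my assumption is that only the processing buffer and the resampling pipeline contribute, which the real signal path may not match):

```python
# Stated in the post above: 128-sample processing buffer after downsampling.
PROCESSING_BUFFER = 128   # samples
SAMPLE_RATE = 48_000      # Hz
MEASURED_TOTAL_MS = 8.0   # approximate measured input-to-output latency

buffer_ms = PROCESSING_BUFFER / SAMPLE_RATE * 1000   # ~2.67 ms
other_ms = MEASURED_TOTAL_MS - buffer_ms             # ~5.33 ms

print(f"processing buffer:        {buffer_ms:.2f} ms")
print(f"implied resampling/other: {other_ms:.2f} ms")
```

If that split is anywhere near right, it supports the point that most of the latency lives in the 60kHz-to-48kHz resampling stage rather than in the 128-sample buffer itself.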

Interesting. Is that 128 samples total for input and output buffering, or is there 128 samples each for input and output? I’m trying to get a sense of how much ground there is to gain.

Brian, FWIW, this is an important issue for me and probably others who are using the ER-301 as a playback sampler. Especially for percussion samples, 8ms is definitely perceptible when triggering alongside other drum modules. I know beggars can’t be choosers, but anything you can do to bring down the latency would be greatly appreciated! I would be fine with an option to reduce sample buffer size in exchange for increased processing overhead.

@resynthesize, running at 96kHz does lower the input-to-output delay to about 5ms. It’s not a final solution, but it could help in the meantime.


Ah great, I will try that! Thanks.

Agreed, I gave this a whirl the other day and it almost halved the latency. Was a bit worried it might mess up my quicksaves, so I backed everything up first.


I tried the 96kHz firmware today, and there is definitely a noticeable improvement for drum sample triggering. Thanks for the suggestion @miminashi.

@resynthesize Did you measure the latency at 96kHz?

I wanted to use one output of the ER-301 as a simple drum machine, but that latency is too much at 48kHz.

It’s about half that of 48kHz.

I’m a bit ashamed to ask because, @odevices, you said in this thread that reducing latency is not a priority. Which I understand. It must be a lot of work.
But, can you think about it? :eyes:
It’s hard to use the ER-301 as a drum machine without reducing it. (Yes, even 4ms is too much.)

Really? I never found 4ms too much for drums. How is it a problem for you? Synchronizing with other elements, or because you want to go finger drumming, or what? Just curious!

I am still curious too…

I presume the problem arises when using drum sounds in the ER-301 alongside external modules, for example the same trigger routed to different sound sources with very short, clicky sounds. Which kinda makes sense, but I’ve asked for audible demonstrations of this and have yet to hear any.

I’ve seen demonstrations of this (see the post above in this thread) by way of recordings in a DAW where the timeline is zoomed right in to the point where the latency is clearly visible, but that’s not the same thing as being able to hear it.

It would be really good to have a proper, well-presented argument for this, complete with source files, the external modules used, the samples used in the ER-301, and a DAW recording showing the difference. Most of all, it should include an example where the problem is clearly audible, alongside the same example treated so that it’s not audible, so the community at large has a reference to present whenever this kind of thing is questioned. It would make a fantastic blog post, for example. Maybe this already exists out there?

Shrug; either way there are plenty of ways to handle this if it ever becomes a problem, but I find it rarely does, because more often than not I want some sounds off-grid anyway for feel and swing. Also, any self-respecting sequencer should be able to delay triggers so that these kinds of things can be compensated for, i.e. it should be possible to dial in a 4ms delay on everything being triggered outside the ER-301. I consider this just part of working with audio and not necessarily a problem per se.
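The compensation idea boils down to a unit conversion: express the ER-301’s latency as a delay to apply to every trigger headed for the external modules. A minimal sketch (the function name and numbers are illustrative, not from any real sequencer API):

```python
def delay_in_samples(delay_ms: float, sample_rate_hz: int) -> int:
    """How many samples of delay correspond to a given compensation time."""
    return round(delay_ms / 1000.0 * sample_rate_hz)

# Compensating for ~4 ms of ER-301 latency at 48 kHz:
print(delay_in_samples(4.0, 48_000))  # -> 192
```

Any sequencer (or DAW track delay) that can shift triggers by that amount will line the external voices up with the ER-301’s output.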

Having said all that I don’t want to be discouraging and I am all for these improvements being made - every little helps!


When I performed with the 301 as a beats machine in 48k mode, the rhythm sounds sequenced from the 101 were definitely out of whack with bass sounds (not from the 301) also sequenced from the 101. It’s the transients in the sounds that make them feel odd. For example, if the kick drum is a dominant sound and it is on the same first beat as a sharp-transient bass sound, then the bass triggering first sounds odd, and not in a good way.

Switching to 96k sounded much better, but it still wasn’t perfect. I think 4ms is OK, but the lower it goes, the tighter it sounds.

Easy enough to try at home in, say, Ableton: write a pattern with two dominant, transient-sharp sounds on separate tracks, then start adding delay in milliseconds to one track and see how far you can go before it subtly changes the feel or vibe. It’s probably at its most uncomfortable when the most dominant sound doesn’t arrive first, e.g. a late bass sound will feel happier than a late kick, genre permitting. On rare occasions I have done this on purpose to tighten up rhythm tracks involving numerous machines with different internal behaviours in response to clocks. In my work with various Elektrons synced to each other and to Ableton via Expert Sleepers, I use different clock offsets of under 4ms to get them as close as possible; the Machinedrum needed the most negative clock offset, for example, allowing it time to spool up.
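For anyone without a DAW handy, the same listening test can be rendered as a WAV file with a short script: two clicks, the second nudged off the grid by a few milliseconds. A rough sketch using only the Python standard library (the filename, click length, and offset are illustrative):

```python
import struct
import wave

RATE = 48_000
OFFSET_MS = 4.0
offset = round(OFFSET_MS / 1000 * RATE)    # 192 samples at 48 kHz

samples = [0] * RATE                       # one second of silence
for start in (0, RATE // 2 + offset):      # on-grid click, then a late click
    for i in range(start, start + 48):     # each click is 1 ms long
        samples[i] = 20_000

with wave.open("offset_test.wav", "wb") as w:
    w.setnchannels(1)                      # mono
    w.setsampwidth(2)                      # 16-bit
    w.setframerate(RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Rendering variants with different OFFSET_MS values and comparing them back to back makes it easy to find your own threshold for where the offset becomes audible.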

I don’t think I will use my 301 for mega-critical sounds, e.g. the main kick drum, until the latency is shorter than 4ms, at least when there are tighter machines/sounds playing in sync. Workarounds are possible, but millisecond delays are not available on many modular sequencers; my Trigger Riot can do it, but that is a trigger-only sequencer. The 101 can do it, but only if you set the clock speed/multiplier very high.

What bothers me may not bother you and vice versa; I know we have debated tunings and the like in a similar vein before. I think a lot of this stuff can be less or more of an issue depending on the sound you’re looking for, etc.