
Transit - MIDI to ER-301 via I2C [Development on Hold]


Heh, this sounds great! I only listened on laptop speakers, so I didn’t hear the clicks/pops. I remember spending quite a bit of time getting the slice points right, but it was done on quite an early version of the firmware, possibly before some of the slicing functionality improvements.

If I had to do this again, I think individual samples for each note, loaded up using the concatenation function, would be the way to go. There is a memory limit, and I seem to remember that about 3 seconds per sample was a reasonable compromise for the full 88 notes.
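For a rough sense of why 3 seconds per note is the compromise point, here is a back-of-the-envelope memory estimate. The sample rate and bit depth here are assumptions (48 kHz, 16-bit mono), not the ER-301's actual internal format or limit:

```cpp
#include <cstdint>

// Rough sample-memory estimate for one-sample-per-note setups.
// 48 kHz / 16-bit mono are assumed numbers; the ER-301's actual
// sample format and memory limit may differ.
constexpr uint64_t kSampleRate    = 48000;  // frames per second
constexpr uint64_t kBytesPerFrame = 2;      // 16-bit mono

uint64_t sampleSetBytes(uint64_t notes, uint64_t secondsPerNote) {
    return notes * secondsPerNote * kSampleRate * kBytesPerFrame;
}

// 88 notes at 3 s each comes to 25,344,000 bytes, i.e. roughly 25 MB,
// which is the scale at which a fixed sample-memory budget starts to bite.
```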

p.s. it still amazes me that modular can sound like this :wink:

edit: just listened again and I can hear some clicks in the very fast note sweep. Hard to say where the bottleneck would be without looking at it, but it may be that it’s just asking too much generally.



Just posting here to say how eager I am to see this project going forward! As soon as I knew that the ER-301 had i2c, I began to dream about some way to use MIDI interfaces with it, without the hassle of cables and without using the jack inputs.
Actually, I’ve been thinking about an OSC to i2c converter, based on a Raspberry Pi Zero…



If you can do this, I’ll be the first one to buy your module. The 32HP Pod would be perfect for this, as it would be able to house both modules with no HP left. Very handy for live use, or just jamming with, say, a Digitakt.



More progress! A run of 3 prototype boards has been ordered from OSH Park. All have been spoken for.

I’m going to make the schematic public now, and once I’ve verified the board to be working, I’ll make that public as well. Crossing my fingers that all the time I spent going over every detail in the schematic and PCB means less time chasing issues in the hardware prototype.

Also did a little, impatient, proof of concept to make sure that the code will work with this first hardware prototype without too much rewriting.

MIDI 2 ER301.pdf (90.3 KB)



Any compelling reason to use the Teensy 3.6 instead of the 3.2? Also, I’d have included a reset/panic button. I would have mentioned it sooner but I mistook the LEDs for buttons… Looks pretty slick, in any case!



Good call on the reset button. I’ll have to bodge one on for this prototype. I think subconsciously my brain kept seeing the program button in all the Teensy diagrams and thinking it was a reset switch.
The 3.6 is the only Teensy model with USB host capabilities, as far as I’ve read.



I think you can use the 3.2 in host mode, but it might involve additional circuitry. The SD card slot on the 3.6 should come in handy, too. Seems like a winner.



That does look to be the case. With board space at a premium (I’m hoping to keep it 2HP and have it fit in the 4ms pods, though I’m not sure how doable that second part is), I’d rather take the proven design with a known footprint than try to combine a smaller Teensy with the USB host shield circuitry and deal with fitting that onto the layout. That seems like more potential points of failure in the hardware development process. As this is something that will have most of its functionality realized through software, I feel making the hardware as simple as possible is worth it. Adding $10 to the cost of the micro seems like more than a fair trade for not having to incorporate the USB host circuitry myself.

I do appreciate the feedback!



Looking great! It is awesome of you not only to take the initiative to do this, but to also publish everything as you go so that others can provide feedback and/or learn from it!!

So, what is the current thinking on the software front? Are you going to take @odevices up on the offer to create built-in units to handle the MIDI-over-i2c messages? That seems ideal to me. E.g. you’d drop in a MIDI.CC unit, set it to CC#9, and possibly set the MIDI channel, rather than dropping in a Teletype.CV unit and having to figure out that it needs to be set to port 34 in order to listen for CC#9.

I’m not sure what exactly would need to happen to create that spec. I couldn’t find any discussion on lines about it. If you’re thinking of going that direction, I wonder if there’s anything I or others can do to help.



Thank you!

I would love to see MIDI specific units in the ER-301. I think that would be the best possible use case for this module I’m designing.

The units I can see being useful/necessary from the beginning are:

  • Voice gate (channel assignable, voice # assignable)
  • Voice pitch (")
  • Voice velocity (")
  • MIDI CC (channel assignable)

The biggest question that needs to be answered, in my opinion, is what is going to be responsible for polyphonic voice allocation? This will determine the implementation of the “Voice” units.
1.) If the voice allocation is done on the 301, the Transit (my module) doesn’t need to have a setting for a voice limit, it simply has to pass through note on/off messages. Then the ER-301 can keep track of how many voices have been assigned to a MIDI channel, and allocate/steal voices accordingly.
2.) If the voice allocation is done on the Transit, then the user will have to rely on the implementation of one of the methods we’ve discussed previously in this thread, i.e., CC, program change, or SysEx web interface control over max voices per channel, which would have to correspond to whatever is currently loaded in the ER-301.

I think that the first implementation is preferable (easy for me to say, I know :sweat_smile:), as it would allow custom units and quicksaves to be loaded, with polyphonic voice allocation happening dynamically based on what is loaded in the ER-301, rather than having to keep track of how many voices exist per channel in each custom unit/chain/quicksave, etc.
Though, if the voice allocation responsibilities were to fall upon the Transit, I think a MIDI CC (or possibly a Program Change message) reserved per channel for the voice limit would make more sense than a SysEx web interface. That would allow people to set the voice limit for a channel within a hardware sequencer project or a DAW project, and have it change upon beginning playback.
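To make option 2 concrete, here is a minimal sketch of what a per-channel voice allocator on the Transit side could look like. The structure, names, and the steal-the-oldest-voice policy are all illustrative choices, not anything from the actual firmware:

```cpp
#include <array>
#include <cstdint>

// Minimal per-channel voice allocator sketch for the "allocation on the
// Transit" option. Everything here (names, sizes, stealing policy) is a
// placeholder to illustrate the idea, not a spec.
struct VoiceAllocator {
    static constexpr int kMaxVoices = 8;
    std::array<int16_t, kMaxVoices>  note{};  // held note per voice, -1 = free
    std::array<uint32_t, kMaxVoices> age{};   // allocation order, for stealing
    uint32_t counter = 0;
    int limit = kMaxVoices;  // would be set via CC / Program Change

    VoiceAllocator() { note.fill(-1); }

    // Returns the voice index assigned to this note-on; steals the
    // oldest-sounding voice when all voices up to `limit` are busy.
    int noteOn(uint8_t n) {
        int victim = 0;
        for (int v = 0; v < limit; ++v) {
            if (note[v] < 0) { victim = v; break; }   // free voice found
            if (age[v] < age[victim]) victim = v;      // otherwise track oldest
        }
        note[victim] = n;
        age[victim] = ++counter;
        return victim;
    }

    // Returns the freed voice index, or -1 if the note wasn't sounding.
    int noteOff(uint8_t n) {
        for (int v = 0; v < limit; ++v)
            if (note[v] == n) { note[v] = -1; return v; }
        return -1;
    }
};
```

The point of the sketch is mostly to show what the Transit would have to carry around in option 2: per-channel state plus a user-set `limit` that has to match whatever is loaded in the ER-301, which is exactly the coupling that option 1 avoids.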

Of course I could be missing something obvious, but that’s my 2¢.



I was thinking about that earlier. Traditionally, this is done by the synthesizer, I suppose. I can connect my MIDI controller up to a VST, or a hardware synth, and the synth determines whether what comes out is mono or poly, and handles voice allocation in the case of poly. Of course, VSTs and synths with a MIDI input don’t have a piece of hardware like the Transit in between - they are designed to handle MIDI messages directly.

No idea how big of an ask this might be for Brian - I don’t think the ER-301 Units can talk to each other outside of patching them together via subchains, or that there’s any kind of OS code to coordinate something like a busy/free state amongst Units at this point, though I could be wrong.

It sounded like Brian would want some kind of spec for the i2c-based MIDI Unit implementation. I’m not sure if there’s any documentation on monome-flavored i2c at this point, or if we’d just have to infer it by looking at open source code. I’ve not really looked at any of the monome source code before.

Not sure if there are expectations for message length vs. any-length messages with some kind of terminator, etc. I guess those are some things that might have to be thought through.

In my ideal world, the Transit could work on the same i2c bus as Teletype or another i2c leader, so long as they didn’t send conflicting messages to the same devices/ports. Maybe @scanner_darkly would know if there’s any i2c documentation to start looking at in order to start trying to put together a straw man spec?



Me neither. I completely defer to him in regards to this.

Maybe I’m not understanding the request completely, but my thinking is something simple along the lines of the raw MIDI message bytes (status byte plus one or two data bytes) preceded by the address. This is actually less information than is sent by the Teletype (command byte, port byte, 2x value bytes [if the command has a value associated with it]). If Brian is asking for something that is an interpretation of the MIDI commands on the part of the intermediary, I’d be curious what that might look like and why that would be preferable. Pardon my ignorance in this regard.
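A sketch of that "raw bytes preceded by the address" framing, to make the comparison with the Teletype format concrete. The struct and function names are hypothetical; the only MIDI fact used is that channel messages carry one or two data bytes depending on the status nibble:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of the raw-MIDI-over-i2c framing described above: the MIDI
// status and data bytes become the i2c payload, with the 7-bit device
// address handled by the bus layer. Names are placeholders.
struct I2cFrame {
    uint8_t addr;        // 7-bit i2c address of the target (e.g. the ER-301)
    uint8_t payload[3];  // status byte + up to 2 data bytes
    size_t  len;         // bytes actually used in payload
};

// MIDI channel messages carry 1 or 2 data bytes depending on the
// high nibble of the status byte.
size_t midiDataBytes(uint8_t status) {
    switch (status & 0xF0) {
        case 0xC0:            // program change
        case 0xD0: return 1;  // channel pressure
        default:   return 2;  // note on/off, CC, pitch bend, poly pressure
    }
}

I2cFrame packRawMidi(uint8_t addr, uint8_t status, uint8_t d1, uint8_t d2) {
    return I2cFrame{addr, {status, d1, d2}, 1 + midiDataBytes(status)};
}
```

So a note-on would go out as a 3-byte payload and a program change as 2 bytes, versus the Teletype's fixed command/port/value layout.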
I suppose that if voice allocation falls to the Transit, then the spec would look something like what’s going on currently, though instead of the i2c being sent as “SC.xx” commands and values, it would be sent as MIDI unit identifiers and values. Though at that point, is it even MIDI, or is it something entirely different from both MIDI and the Teletype i2c?

This is doable. It’s a matter of implementing code in the Teletype, Transit, and all other devices that would connect to the bus concurrently. I don’t know specifically what that implementation would look like in software, but the theory makes sense.



It’s entirely possible I’m not either. Just trying to keep the dream and discussion alive. :slight_smile:

I guess my assumption was that Brian was looking for a spec that would easily plug into the current i2c stack, without a lot of new dev work (due to his dis-love of MIDI). So it’s currently expecting 4 bytes after the address, MIDI would normally be 2-3, and the meanings would be different. So it might mean splitting out and re-packaging the MIDI commands to fit the existing i2c schema. For example:

  • command - select unit type (MIDI.Gate, MIDI.Pitch, MIDI.Velocity, MIDI.CC, etc.)
  • port - MIDI channel
  • 2x value bytes, one of which might contain a voice # for poly

The MIDI spec packages a note-on event with 2 data bytes: one for note, one for velocity. I guess in the current schema on the 301, a MIDI note-on event would need to be split into 3 separate messages sent to 3 separate units, right? One with an SC.MIDI.Pitch command, channel, and value; another with an SC.MIDI.Velocity command, channel, and value; and a third with SC.MIDI.Gate, channel, and value (on/off). Each possibly with a voice #.
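That three-way split could be sketched like this. The command IDs, the channel/voice-to-port mapping, and the value scaling are all placeholders invented for illustration, since no actual spec exists yet:

```cpp
#include <cstdint>
#include <vector>

// Sketch of splitting one MIDI note-on into three Teletype-style
// messages (command byte, port byte, 16-bit value), as described above.
// Command IDs, port mapping, and value scaling are placeholders.
enum : uint8_t {
    CMD_MIDI_PITCH    = 0x40,
    CMD_MIDI_VELOCITY = 0x41,
    CMD_MIDI_GATE     = 0x42,
};

struct TtMsg { uint8_t cmd; uint8_t port; uint8_t valHi; uint8_t valLo; };

std::vector<TtMsg> noteOnToMessages(uint8_t channel, uint8_t voice,
                                    uint8_t note, uint8_t velocity) {
    // Fold channel and voice # into the single port byte (placeholder
    // mapping: 8 voices per channel).
    uint8_t port = static_cast<uint8_t>(channel * 8 + voice);
    uint16_t pitch = static_cast<uint16_t>(note << 7);      // placeholder scaling
    uint16_t vel   = static_cast<uint16_t>(velocity << 7);
    uint16_t gate  = (velocity > 0) ? 1 : 0;                // vel 0 == note off
    return {
        { CMD_MIDI_PITCH,    port, uint8_t(pitch >> 8), uint8_t(pitch & 0xFF) },
        { CMD_MIDI_VELOCITY, port, uint8_t(vel   >> 8), uint8_t(vel   & 0xFF) },
        { CMD_MIDI_GATE,     port, uint8_t(gate  >> 8), uint8_t(gate  & 0xFF) },
    };
}
```

The cost of this schema is visible right in the sketch: one MIDI event becomes three bus transactions, which is worth keeping in mind for fast passages.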

That was my read on what the spec might contain. I hope I’m not muddying the waters. Maybe he’ll clarify.



That’s not too different from what’s going on currently, which is fine with me. Replace “MIDI.Gate” with SC.TR and “MIDI.Pitch/Velocity/CC” with SC.CV, and change channel/voice numbering to the SC.CV/TR port mapping, and that’s what the code currently does.

Any input from Brian (and @scanner_darkly) would be much appreciated, as I’d love for this to be as robust as possible, without putting undue development tasks on anyone who would rather be focusing on things that they will use personally.

Boy am I grateful that everything that’s currently being figured out is all software based, and (knock on wood) entirely independent of the hardware implementation!



I couldn’t resist…



Yep, and I think it could improve the fluidity of working with it quite a bit by eliminating the need to ever really look at a mapping document, as well as reducing the frequency with which you might need to hook it up to a WebMIDI config utility. It might actually streamline the code a bit by reducing the conditionals?

Continuing to brainstorm potential commands/units:

  • SC.MIDI.NoteToTrigger - to cover what you currently have set up on SC.TR ports 33-64
  • SC.MIDI.Clock
  • SC.MIDI.Pitchbend
  • SC.MIDI.Aftertouch

One thing I think could be a useful feature - maybe this could be done via the config utility - would be an option to reconfigure the MIDI out as a simple clock out, possibly with a divisor, rather than echoing MIDI in messages. That would allow syncing another Eurorack sequencer or other clocked module. Not sure if that might be possible? I admit I’m a little fuzzy on MIDI In vs. USB Host - will it listen to either/both for incoming MIDI?
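The divisor idea is easy to picture in code. MIDI clock runs at 24 pulses per quarter note, so a divisor of 6 yields 16th-note pulses. This is a hypothetical helper, not part of the Transit firmware:

```cpp
#include <cstdint>

// Sketch of a "clock out with a divisor" feature. MIDI clock is
// 24 pulses per quarter note, so divisor = 6 gives 16th-note pulses,
// divisor = 24 gives quarter notes. Hypothetical helper, not actual
// Transit code.
struct ClockDivider {
    uint8_t divisor = 6;  // 24 ppqn / 6 -> 16th-note output pulses
    uint8_t count   = 0;

    // Call on every MIDI Clock byte (0xF8); returns true when the
    // hardware clock output should fire a pulse.
    bool onTick() {
        bool fire = (count == 0);
        count = static_cast<uint8_t>((count + 1) % divisor);
        return fire;
    }

    // MIDI Start (0xFA) would reset the phase so downstream gear
    // lands on the downbeat.
    void onStart() { count = 0; }
};
```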



These all sound great. Wonder if a SC.MIDI.ClkStartStop would be useful as well.

It’s possible, but would require a hardware change. Currently the MIDI out is driven straight from the 3.3V pin of the Teensy, so it would need a level shifter up to 5V. Also, the Ring connection on the MIDI out TRS jack would end up being shunted to ground, so the pull-up resistor would have to be rated for enough power to continuously dissipate the resulting current. Not the end of the world - a 1/3W resistor would do, and those exist in an 0603 package.
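For anyone following along, the worked numbers behind that 1/3W figure, assuming a 5V supply and the 220 ohm source resistor from the MIDI electrical spec (the actual values on this board may differ):

```cpp
// Dissipation in a pull-up shunted straight to ground: P = V^2 / R.
// 5 V supply and 220 ohm (the MIDI spec's source resistor value) are
// assumptions; the board's actual values may differ.
constexpr double kSupplyVolts = 5.0;
constexpr double kPullupOhms  = 220.0;

constexpr double pullupWatts(double volts, double ohms) {
    return (volts * volts) / ohms;
}

// 5 V across 220 ohm dissipates about 0.114 W continuously,
// which sits comfortably under a 1/3 W (0.33 W) rating.
```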

The plan was for it to only echo USB Host MIDI to the TRS MIDI Out, that way the TRS MIDI in could be used as a MIDI IN for the 301, while the USB Host to TRS/DIN MIDI could function independently for other uses.



This + everything mentioned before, wow :pray:t2: When the Command Bus is implemented, that’s gonna be crazy :slight_smile: I’m working on sequencers for Norns at the moment, I can already imagine what I’m gonna build for this 301 + Transit + Norns (+ TT ?) environment. I hope I can still use the faderbank with Transit, these faders are essential for the 301.




I wondered about that. I wasn’t sure it would really require the full 5V for a clock-in signal. I guess it depends on the target module?

Ah, ok, so this would just be kind of a completely independent bonus function - MIDI USB to TRS - as well as possibly used for configuration?



And (sorry to interrupt again) I hope a 1U version of the module will also happen once the circuit has been tested. I don’t have much space available in the 3U rows of my Intellijel case, but a lot of space in the 1U row. If the module is open source I can try to do that, idk. Anyway, thanks for making this :pray:t2:

NB: when the Command Bus is implemented, we can start building things like MLR for the 301 - that’s really gonna be great, especially with a module like Transit.
