
> There are a lot of reasons why this problem hasn't been solved, some technical and others artistic. But imho, a certain grade of equipment (namely, recording/reproduction, of which the MIDI protocol is a key component) should behave identically under the same conditions and be able to reproduce a performance exactly. It's a goal that borders on absurd, but I think we could do it!

why should this always be the case? or even most of the time? quite a lot of digital effects attempt to behave like their analog counterparts, not like other digital gear - so we should expect a non-deterministic result.

in digital:

I send 1, then I send 1, that makes 2.

in analog:

I send 1, then I send 1, that makes 1 + some other stuff + 1.
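
a toy sketch of that difference - the drift/noise model here is made up for illustration, not measured from any real circuit:

    #include <cstdio>
    #include <random>

    // digital: summing is exact and repeatable
    float digitalSum(float a, float b) { return a + b; }

    // "analog": the same sum picks up drift and noise on every pass
    float analogSum(float a, float b, std::mt19937& rng) {
        std::normal_distribution<float> noise(0.0f, 0.01f);
        float drift = noise(rng);           // component tolerance, temperature, ...
        return a + drift + b + noise(rng);  // 1 + some other stuff + 1
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        printf("digital: %f\n", digitalSum(1.0f, 1.0f));     // always 2.000000
        printf("analog:  %f\n", analogSum(1.0f, 1.0f, rng)); // ~2, never twice the same
    }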

it shouldn't necessarily be a goal, "absurd" or not - it's just a different expression in a different medium.

having more bits (25 more of them, going from 7-bit to 32-bit values) shouldn't change the sound profoundly. when MIDI was introduced, analog was king, and a note pressed was typically different from the same note pressed a second later. this is the same environment that has been pushed forward in hardware (again) via eurorack, and emulated quite effectively in software.
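
for a sense of scale: one common way to widen a 7-bit value to 32 bits is bit replication - the actual MIDI 2.0 spec defines its own scaling rules, so treat this as an approximation:

    #include <cstdint>
    #include <cstdio>

    // widen a 7-bit MIDI 1.0 value (0..127) to 32 bits by repeating its
    // bit pattern; 0 maps to 0 and 127 maps to 0xFFFFFFFF
    uint32_t widen7to32(uint8_t v7) {
        uint32_t v = v7 & 0x7F;
        return (v << 25) | (v << 18) | (v << 11) | (v << 4) | (v >> 3);
    }

    int main() {
        printf("  0 -> 0x%08X\n", widen7to32(0));   // 0x00000000
        printf(" 64 -> 0x%08X\n", widen7to32(64));  // ~half scale
        printf("127 -> 0x%08X\n", widen7to32(127)); // 0xFFFFFFFF
    }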

all that we've really done is smooth the digital by adding more steps, which is fantastic, but to try to "solve" a "problem" with this, other than some smoothness, is just silly.

this said as someone on their nth career writing audio software (https://svmodular.com if you're interested).



It's not about quantization error (which is quantifiable as noise) or the kind of nonlinearities you're talking about, but about timing concerns.

It's basically the difference between naive automation in a DAW and sample-accurate automation: it's not about the granularity of your changes but the fact that sample accuracy allows your system to reproduce the same thing every time. Not so many years ago, online renders in certain DAWs were perceptually and quantifiably different than offline renders because of things like this - you want to be able to tell a user that what they hear while they work is the same as what they'll get when they go back and render.
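
A sketch of that difference - names and the single-breakpoint model are illustrative, not any particular DAW's API:

    #include <cstddef>

    // block-rate ("naive") automation: one value per buffer. where the
    // change lands depends on buffer size, so online and offline renders
    // can disagree.
    void processBlockRate(float* buf, size_t n, float gain) {
        for (size_t i = 0; i < n; ++i) buf[i] *= gain;
    }

    // sample-accurate automation: the change is pinned to an exact sample
    // offset, so any render at any buffer size produces the same output.
    void processSampleAccurate(float* buf, size_t n, float gainBefore,
                               float gainAfter, size_t changeAtSample) {
        for (size_t i = 0; i < n; ++i)
            buf[i] *= (i < changeAtSample) ? gainBefore : gainAfter;
    }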

With MIDI 1 and 2.0 that's rather difficult when factoring in live input, because your production system has wack drivers on top of a non-realtime OS and can't provide guarantees. MIDI 2.0 takes a good step in that direction with synchronization, but I have doubts it will be utilized to the point where we can guarantee received events are replicated exactly, given the accuracy limits of reception and clock synchronization. Maybe we'll get it, idk.
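
The mechanical half is easy once you trust the stamp - a sketch, with the clock rate left as a parameter since it depends on the transport:

    #include <cstdint>
    #include <cmath>

    // map an event timestamp to a sample offset inside the current audio
    // buffer so the synth can render it at the exact sample it was played.
    // assumes the event and the buffer share a clock - which is the hard part.
    int64_t ticksToSampleOffset(uint64_t eventTicks, uint64_t bufferStartTicks,
                                double ticksPerSecond, double sampleRate) {
        double seconds =
            static_cast<double>(eventTicks - bufferStartTicks) / ticksPerSecond;
        return static_cast<int64_t>(std::llround(seconds * sampleRate));
    }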


> Not so many years ago, online renders in certain DAWs were perceptually and quantifiably different than offline renders because of things like this

and how does moving from 7 bits to 32 bits help with this? rendering value changes across 32 bits takes a bit more cpu power than doing it across 7 bits - the extra resolution isn't really relevant here.

moving from 7 bits to 32 bits allows for smoother transitions - which is fantastic, but remember that the sound coming out is the culmination of a lot of different factors: having more bits doesn't change the 1+1 behavior.

> With MIDI 1 and 2.0 that's rather difficult when factoring in live input due to the fact that your production system has wack drivers on top of a non-realtime OS and can't provide guarantees.

great, so possibly adding more instability. I guess that's a "change", but unless the underlying protocol changes there are no real changes: nondeterministic results. and I think I'm ok with that.

midi 2.0: great, but don't expect much to be different - the music world has moved beyond midi (again). it should be interesting to see how midi adapts past 2.0.


I'm not arguing with you, just agreeing in a different way :D

Nonlinearity is fun. I'm a big fan of it, and have spent a lot of time on the DSP side developing nonlinear processing (NLP) that can be predictable and repeatable, plus all the garbage associated with making it sound good.
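
A trivial example of the "predictable and repeatable" kind - a normalized tanh shaper, standing in for whatever curve a real module uses:

    #include <cmath>

    // deterministic nonlinearity: same input, same drive -> same output,
    // every time. assumes drive > 0.
    float saturate(float x, float drive) {
        return std::tanh(drive * x) / std::tanh(drive);
    }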

My issue is more that MIDI 2.0 addresses part of the issue - e.g. if I press N keys at the same time, N messages should be stamped with the same time and the synth should be able to render them at the same time - but I'm doubtful that systems will handle this deterministically, both in recording the incoming messages and in replicating them the way the performer intended while playing.
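
A sketch of the receiving half, assuming the stamps actually arrive identical - which is exactly the part I doubt:

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct NoteOn { uint8_t note; uint16_t velocity; };

    // group incoming note-ons by timestamp so a chord pressed as one
    // gesture can be rendered as one gesture
    std::map<uint64_t, std::vector<NoteOn>>
    groupByTimestamp(const std::vector<std::pair<uint64_t, NoteOn>>& events) {
        std::map<uint64_t, std::vector<NoteOn>> chords;
        for (const auto& [ticks, note] : events)
            chords[ticks].push_back(note);
        return chords;
    }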



