> (In audio, there are other benefits to higher sample rates, like latency; a 256-sample buffer adds about 5.8ms at 44.1kHz, but only about 1.3ms at 192kHz.)
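The arithmetic behind that quote is just buffer size divided by sample rate. A minimal sketch (the function name `buffer_latency_ms` is my own, not from any audio API):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time in ms to fill a buffer of `buffer_samples` at `sample_rate_hz`."""
    return buffer_samples / sample_rate_hz * 1000.0

# Same 256-sample buffer, different sample rates:
for rate in (44_100, 96_000, 192_000):
    print(f"256 samples @ {rate} Hz -> {buffer_latency_ms(256, rate):.2f} ms")
# 256 samples @ 44100 Hz -> 5.80 ms
# 256 samples @ 96000 Hz -> 2.67 ms
# 256 samples @ 192000 Hz -> 1.33 ms
```

Note this only covers the time to fill one buffer; real round-trip latency also includes converter and driver overhead on both sides.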
But the buffer is there to let you gather up data before sending it across the pipe (physical or virtual), so you're not incurring latency you don't need. Increasing the resolution of your data doesn't decrease latency; you're just incurring that hit more frequently. If that worked, you could do away with the buffer entirely and send a continuous stream of data at whatever sample rate you wanted.
I'm thinking of setups that often have some absolute minimum buffer size, maybe due more to the protocol (e.g. ASIO) than to the converter design; my RME Digiface setup can't go below 56 samples, no matter what the sample rate.
Yep, which is why it's so amazing that bands play to a beat that doesn't really exist - it's an agreed-upon moment in time that we pretend is the same for all of us, but when you're standing 20 or 30 feet apart, it isn't.
But latency is critical for tracking through headphones, especially singers; a little bit of latency can screw up your pitch perception as well as your timing.
Well, that's why we have hardware monitoring. You only really need the low-latency stuff for recording software synthesizers/samplers, or if you're monitoring through a virtual amp, etc.