
    The core of it is that speakers are bad at polyphony
You can look at intermodulation measurements for some popular and affordable home speakers here. The TL;DR is yeah, you're going to go from something like -60dB distortion to -40dB in synthetic multitone tests, but I would not remotely characterize this as being "bad" at polyphony, nor as the primary reason for the Wall of Sound's primordial "one vertical speaker array per instrument" design.

https://www.erinsaudiocorner.com/loudspeakers/kef_r5_meta/

https://www.erinsaudiocorner.com/loudspeakers/sony_sscs3_tow...

more: https://www.erinsaudiocorner.com/loudspeakers/

    The way to avoid it is more speakers, more stacks/arrays/etc.
If we're talking about his remarks here I'd characterize his take as "more vertical arrays" and not just "more speakers." I realize it's a bit pedantic of me, but some people think that simply adding more speakers equals more betterer sound and it's quite far from the case. In general, multiple loudspeakers arrayed horizontally (this includes MTM center channels in home theater setups) lead to comb filtering.

I'm sure you know that, just clarifying for others.

If he has written about this elsewhere I'd love to read more!



I'm simplifying things significantly because I don't expect most HN'ers to know the ins and outs of pro sound.

Dave Rat has a good YouTube channel; he also has some sort of insider subscription thing that I've never bothered with. I wouldn't say his ideas necessarily translate to every situation, but he really presents them as tools to use in a toolbox, not as gospel.

One interesting thing is that we generally view comb filtering as universally 'bad', when in reality our ears do an excellent job of sorting out comb filtering when it comes to natural sounds. In fact, comb filtering is how we can locate a sound in 3D space with only two reference points (and, if you try, only one reference point moved around a little). That's remarkable, and it points to the real issue being how speakers comb filter, rather than comb filtering itself.

In practice, say you have a rock band, and your sound system has two arrays with subs spaced 40ft apart. You're going to get a less than ideal pattern from that in the ranges where the bass guitar and kick drum live. How do you fix it? The answer is fairly simple: run bass/kick in stereo, delay the bass on one side by a little and the kick on the opposite side by a little, add some kind of EQ difference to the delayed side of each, then play with the delays until it sounds right.

Why does that work? Or does it really work? It's odd, because the math says "no no, it'll sound bad," but in reality it can tighten up that comb to the point you don't really hear it, and you can get good bass coverage out of a fairly poor system design.


The opposite delays are an interesting touch. Basically you're steering the "power alley / valleys" (which are of course a gradient of lobes whose actual positions vary by frequency, but when you're in a 60-80Hz valley, you're in a bad spot) of each LF instrument so that instead of being uniform, the one with kick is angled a little left while the one with bass is angled a little right, so any given member of the audience finding themselves in a valley for one is in the alley of the other. And then the chaotic smearing from the EQ tweaks makes the valleys even less stark. Hmm! Definitely one of those "it's good if it works" type things, I suppose.
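For the curious, here's a toy model of that steering (my own sketch, not from the thread; the listener distances and the 4ms delay are invented for illustration): two stacks playing the same LF signal, where a few milliseconds of electronic delay on one feed moves a null away from a given listener.

```python
# Toy model (my numbers): two subwoofer stacks playing the same LF signal,
# idealized as equal-amplitude point sources.
import cmath
import math

C = 343.0  # speed of sound in air, m/s

def level_db(f, path_l, path_r, extra_delay_r=0.0):
    """Relative level (dB) at frequency f for a listener path_l / path_r
    metres from the left/right stacks, with extra_delay_r seconds of
    electronic delay on the right feed. 0 dB = perfectly coherent sum."""
    t_l = path_l / C
    t_r = path_r / C + extra_delay_r
    s = cmath.exp(-2j * math.pi * f * t_l) + cmath.exp(-2j * math.pi * f * t_r)
    return 20 * math.log10(max(abs(s) / 2, 1e-12))

# A listener 2.45m closer to the left stack sits in a deep 70Hz null:
print(level_db(70, 15.0, 17.45))
# ...but 4ms of delay on the right feed steers that null somewhere else:
print(level_db(70, 15.0, 17.45, extra_delay_r=0.004))
```

Running kick and bass through this with delays on opposite sides would put their nulls in different places, which is the "alley of the other" effect described above.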


One of the problems with modern PA is that it is hung in stereo. It'd be better if it were a mono stack right in the middle of the stage, but we can't do that because that's where the performers are and two objects can't occupy the same space at the same time.

So we do the obvious thing: We put speakers at each side of the stage, instead. This at least solves the obvious Newtonian physics issue.

But there's a problem with that. Now we have two sources of audio in one room, and the sounds that many in the audience hear are a sum of these two sources, mixed together acoustically -- even though those two sources arrive at the listeners' ears with different amounts of delay (due to relative distance).

And when those two sources are playing exactly the same thing (an instrument or vocalist panned to the middle, say, as is quite common), that always results in predictable comb filtering for many people in the room. And it's the same comb filter across the whole spectrum, for all of the sounds (every instrument, every vocalist, every everything) that come from both sets of speakers.
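To put a number on "predictable": for a listener standing off-center, the path-length difference to the two stacks fixes exactly where the cancellations fall. A quick back-of-the-envelope sketch (my numbers, not from the thread):

```python
# Null frequencies for a listener whose path to one stack is longer than
# to the other, with both stacks playing the identical signal.
C = 343.0  # speed of sound, m/s

def comb_nulls(path_diff_m, f_max=2000.0):
    """Frequencies (Hz) fully cancelled when two identical sources arrive
    with a path-length difference of path_diff_m metres."""
    dt = path_diff_m / C  # extra travel time of the farther arrival
    nulls = []
    k = 0
    # cancellation when the delay is an odd number of half-cycles
    while (f := (2 * k + 1) / (2 * dt)) <= f_max:
        nulls.append(round(f, 1))
        k += 1
    return nulls

# A listener 3m closer to one stack than the other:
print(comb_nulls(3.0))  # first null near 57Hz, then one every ~114Hz
```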

One approach that Dave uses is to try to make sure that each side of the PA is always playing something different from the other one. This might mean using two kick drum mics (placed differently), and two microphones in front of a guitar cabinet (also placed differently), and so on. One microphone is for the left speakers, and one microphone for the right speakers.

They still mix acoustically, but because they're different signals to begin with, they don't comb filter as efficiently as they would if they were identical signals.
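A crude way to see that numerically (my sketch; the delay and signal choices are arbitrary): identical feeds cancel completely at certain frequencies, while two independent signals just add in power, with no deep null anywhere.

```python
# Identical feeds vs. decorrelated feeds summed with a fixed delay.
import math
import random

random.seed(0)
fs = 48000     # sample rate
n = 1 << 15    # analysis length
delay = 60     # samples of extra acoustic delay for the far arrival

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Identical feeds: a tone whose half-period equals the delay cancels.
f = fs / (2 * delay)  # 400 Hz here
tone = [math.sin(2 * math.pi * f * i / fs) for i in range(n + delay)]
identical_sum = [tone[i] + tone[i + delay] for i in range(n)]

# "Different" feeds (stand-ins for two differently placed mics):
a = [random.gauss(0, 1) for _ in range(n + delay)]
b = [random.gauss(0, 1) for _ in range(n + delay)]
different_sum = [a[i] + b[i + delay] for i in range(n)]

print(rms(identical_sum))   # essentially zero: total cancellation
print(rms(different_sum))   # about sqrt(2): powers add instead
```

Real mic pairs on the same instrument are correlated, of course, so the truth sits between these two extremes; that's the "don't comb filter as efficiently" part.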

And because each of these sources is mic'd differently (with one mic per speaker-stack), one instrument's inevitable comb filtering will be different from another instrument's inevitable comb filtering. This randomizes (ish -- it can be predicted and measured) the comb filtering involved in the sum of the entire PA, and that can make for a better listening experience for most people.

(And where two microphones can't be used, like for a vocalist, then the idea is to do something, anything different between the left and the right output channels for that singular source. It could be different EQ (what a wonderful phase-scrambler conventional EQ is), or delay, or different reverb settings -- but it should be different somehow coming from left and right speakers.)

The math works fine. If comb filtering is inevitable, and it is, then it is better to have a random-ish mix of comb filters of diminished depth and that each affect individual musical elements differently than to have one singular and very brazen comb filter that affects everything uniformly for any given listener.
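One way to quantify the "diminished depth" part (my math, not quoted from Dave): the deepest possible comb notch depends on how closely the two arrivals match, and to the extent the two feeds carry partly different material, the coherent portion behaves like a quieter copy of the louder arrival.

```python
# Worst-case notch depth as a function of the amplitude ratio between
# the two arrivals. Identical signals (ratio 1.0) can cancel completely;
# a mismatched pair bottoms out at a finite depth.
import math

def null_depth_db(ratio):
    """Worst-case notch depth (dB) when the quieter arrival is `ratio`
    times the amplitude of the louder one (0 < ratio <= 1)."""
    if ratio >= 1.0:
        return float("-inf")  # full cancellation is possible
    return 20 * math.log10((1 - ratio) / (1 + ratio))

for r in (1.0, 0.7, 0.5, 0.25):
    print(r, null_depth_db(r))
# ratio 1.0 -> unbounded notch; 0.5 -> only ~-9.5dB; 0.25 -> ~-4.4dB
```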

The problems with Dave's ideas here are these: it doubles the channel count at the board (which can be worked around just by throwing more money at it), it more-or-less doubles the expenditure on microphones and snakes, and it more than doubles the complexity of getting an initial mix working well.

But most importantly: It defies the conventional workflow of trying to mix sounds on a stereo PA system in a huge room like it is the same as listening to a high-end stereo system at home and sitting in the sweet spot. Dave Rat's ideas generally don't/can't combine to make a central sweet spot, or to produce a perfect stereo image: There's a left mix and a right mix, and they're both very deliberately different mixes that are each made from sources that are split up to be as different as is reasonably possible.

So the biggest problem with Dave's ideas isn't that they don't make sense somehow (they do make sense), but that they go against the conventional target.

(But the conventional workflow that aims for the idea of precise stereo image as a target is kind of inherently bullshit anyway, because only a small percentage of attendees can stand in that sweet spot to hear that perfect stereo image at one time, while the other 90+% of concert-goers simply can't physically be there (because Newton, again). Most listeners aren't anywhere near centered on the stereo PA, and will hear what psychoacoustically seems to come chiefly from either the right or the left stack (because Haas effect), and will have no ability to be in a place where a good stereo image is possible to begin with. The conventional target sacrifices the auditory experience of many for the benefit of few.)



