Last year, when 3090 GPUs were astronomically priced, I thought "screw it, I'll just buy an RTX A5000 for a couple hundred bucks more." Which begat a second A5000 for "reasons." It turned out to be almost prescient: now all these models are coming out requiring slightly more VRAM than a 3090 offers, i.e. more in the range of the A5000, and I get to run them. I've been a kid in a candy store these past couple of weeks.
I should have mentioned linking the GPUs. I meant requiring slightly more VRAM than a single 3090 can handle. Or a 3080, as others have pointed out. The main difference between the way the 3090 and the A5000 work is SLI vs. NVLink/NVSwitch. I believe the 3090 uses NVLink, but not quite in the same way the A5000 does. I can chain together far more A5000s than 3090s: eight A5000s vs. four 3090s, IIRC. And with the right hardware I could even chain them across machines, though that's probably a bit of a stretch for my budget. Also, the A5000s will share VRAM, giving me a total usable heap of 48GB with two cards, whereas the 3090s are limited to 24GB each. I can also share the A5000s with multiple VMs simultaneously, whereas with the 3090s I'd be stuck doing GPU passthrough. All that for only a couple of percentage points' drop in video game performance.
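To make the pooling point concrete, here's a back-of-envelope sketch (my numbers, not anyone's benchmark: it assumes fp16 weights at 2 bytes per parameter and ignores activations/KV cache, and the model sizes are just illustrative):

```python
def fp16_weights_gb(n_params):
    """Rough VRAM needed just to hold fp16 weights (2 bytes/param)."""
    return n_params * 2 / 1e9

# One 24 GB card vs. two 24 GB cards pooled over NVLink (48 GB heap).
for n_params in (6e9, 13e9, 20e9):
    gb = fp16_weights_gb(n_params)
    print(f"{n_params / 1e9:.0f}B params: ~{gb:.0f} GB "
          f"(fits 24 GB: {gb <= 24}, fits 48 GB: {gb <= 48})")
```

So a ~13B or ~20B model that blows past a single 24GB card still drops comfortably into the pooled 48GB heap, which is exactly the "slightly more VRAM than one 3090" band.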
Yup! I've never splurged on a GPU before, but a 3090 Ti lets you do textual inversion and fine-tune GPT-J (neither of which you can do with less than 24GB of VRAM), so now I've got one sitting on my study floor waiting to be installed :)
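The 24GB floor roughly checks out with rule-of-thumb training arithmetic (the per-parameter byte counts below are my assumptions, and activations are ignored): a naive fine-tune of a ~6B-parameter model like GPT-J with fp16 weights, fp16 gradients, and standard fp32 Adam state wants way more than 24GB, which is why even on a 24GB card you lean on memory-saving tricks like 8-bit optimizer states.

```python
def finetune_mem_gb(n_params, w_bytes=2, g_bytes=2, opt_bytes=8):
    """Rough training-memory estimate: weights + gradients + optimizer
    state (fp32 Adam momentum + variance = 8 bytes/param), no activations."""
    return n_params * (w_bytes + g_bytes + opt_bytes) / 1e9

N = 6e9  # ~GPT-J-sized model
print(finetune_mem_gb(N))               # fp16 weights/grads + fp32 Adam
print(finetune_mem_gb(N, opt_bytes=2))  # same, but 8-bit optimizer state
```

Either way, anything under 24GB is clearly out, and the gap between the two estimates is where the memory-saving tricks live.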