Hacker News

Probably any hardware that requires you to use some intermediate code. Nowadays probably some TPU or NPU or whatever.

This made me think of what "accelerators" I've come across (just as a regular consumer):

In the late 90s, that's what we called GPUs: "3D accelerators". For a short time, they were cards separate from the actual graphics card, and using one often involved routing your VGA cable through it, before it all merged into one device. As a kid I was very slightly disappointed that I narrowly missed that era, but bilinear filtering and higher resolutions and framerates helped me get over the fact that I couldn't cram more cool weird tech into my PCI slots.

Then you had sound cards with "audio accelerators" using technologies like EAX. All of that eventually migrated back to the CPU, I think.

For a while you could buy a "physics accelerator" for PhysX; the company was then acquired by Nvidia and PhysX moved to the GPU via CUDA. I never had one of the dedicated cards, but at one point I kept an older GPU around after upgrading as a dedicated PhysX processor. Now that's the only way to run older 32-bit games with PhysX turned up, since 32-bit CUDA isn't supported on 5000-series GPUs.

Finally, I got this "Coral TPU", a USB device (around 4 TOPS, I think?) for very efficient inference. There are some open source projects supporting it, like Frigate, which lets you run object detection on multiple surveillance camera streams on a Raspberry Pi. I never really used it, though.
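For reference, hooking the USB Coral up to Frigate is just a detector entry in its YAML config. This is a sketch based on Frigate's documented `edgetpu` detector; the camera name, stream URL, and detector key here are placeholders, not a working setup:

```yaml
# Frigate config sketch: offload object detection to a USB Coral Edge TPU.
# "type: edgetpu" and "device: usb" follow Frigate's docs; the rest is
# illustrative only.
detectors:
  coral:              # arbitrary detector name
    type: edgetpu
    device: usb

cameras:
  front_door:         # hypothetical camera
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.50:554/stream   # placeholder RTSP URL
          roles:
            - detect  # this stream is fed to the Coral for detection
```

With the heavy inference offloaded to the accelerator, the Pi's CPU mostly just handles video decoding, which is what makes multi-camera setups feasible on such a small board.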

And of course now we have NPUs as sub-systems in CPUs and GPUs. I'd love to have a dedicated PCIe card with that, but of course having yet another computer architecture with dozens/hundreds of GB of redundant RAM is kind of a bummer.



A bummer, yes, but two NPUs, one on the CPU and another on the GPU, just sounds silly.



