
Really cool project. I love seeing HW projects like this in the open. But I'd argue that this is a SIMD coprocessor. For something to be a GPU, it should at least have some sort of display output.

I know the terminology has gotten quite loose in recent years with Nvidia & Co. selling server-only variants of their graphics architectures as GPUs, but the "graphics" part of GPU designs makes up a significant part of the complexity to this day.



If it processes graphics, I think it counts, even if it has no output. There's still use for GPUs even if they're not outputting anything. My place of work has around 75 workstations with mid-tier Quadros, but they only have mini-DisplayPort and my employer only springs for HDMI cables, so they're all hooked into the onboard graphics. The cards still accelerate our software, they still process graphics, they just don't output them.


It's the shader core of a GPU. There are no graphics-specific pipelines, e.g. vertex processing, culling, rasterization, color buffer, depth buffer, etc. That's like saying a CPU is also a GPU if it runs graphics in software.
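To make that concrete, here's a rough sketch (plain Python, purely illustrative, function names and structure made up, not from the project) of what those fixed-function stages do when done in software: vertex transform, triangle rasterization, and a depth test into a framebuffer.

    import numpy as np

    def edge(a, b, p):
        # Signed area test: which side of edge a->b the point p lies on.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def draw_triangle(tri, mvp, color, color_buf, depth_buf):
        h, w = depth_buf.shape
        # "Vertex processing": model-view-projection transform + perspective divide.
        clip = (mvp @ np.c_[tri, np.ones(3)].T).T
        ndc = clip[:, :3] / clip[:, 3:4]
        scr = (ndc[:, :2] * 0.5 + 0.5) * [w - 1, h - 1]   # viewport transform
        area = edge(scr[0], scr[1], scr[2])
        if area == 0:
            return                                        # degenerate triangle
        # "Rasterizer" + "depth buffer": cover pixels, keep the nearest fragment.
        for y in range(h):
            for x in range(w):
                w0 = edge(scr[1], scr[2], (x, y))
                w1 = edge(scr[2], scr[0], (x, y))
                w2 = edge(scr[0], scr[1], (x, y))
                if min(w0, w1, w2) >= 0 or max(w0, w1, w2) <= 0:
                    z = (w0 * ndc[0, 2] + w1 * ndc[1, 2] + w2 * ndc[2, 2]) / area
                    if z < depth_buf[y, x]:
                        depth_buf[y, x] = z
                        color_buf[y, x] = color

(depth_buf would start filled with np.inf and color_buf as an (h, w, 3) array.) A real GPU has dedicated hardware for roughly this work around its shader cores; a shader core alone doesn't give you it.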


> If it processes graphics, I think it counts, even if it has no output.

That's not a good definition, since a CPU or a DSP would count as a GPU. Both have been used for such purpose in the past.

> There's still use for GPUs even if they're not outputting anything.

The issue is not their existence; it's about calling them GPUs when they have no graphics functionality.


Graphics functionality != display output. What about laptop GPUs, which don't necessarily output to the screen at all times? Sometimes they don't even have the capability to do so. If it's a coprocessor working alongside the general-purpose processor for the primary purpose of accelerating graphics workloads, it seems appropriate to call it a GPU.

Edit: perhaps your point is that it doesn't make sense to call a device designed primarily to accelerate ML workloads, or just general-purpose vector calculations, a GPU. In that case I'd agree that GPU isn't the right name.


> Graphics functionality != display output

Exactly. Graphics functionality also includes graphics-specific hardware like vertex and fragment processing, which this does not have. It has no graphics-specific hardware, ergo not a GPU.


If it looks like a duck and it walks like a duck, why is it not a duck? If you are using a DSP to process graphics, then at least in the context of your system it has become your graphics processor.

Plenty of GPUs don't have (or aren't used for their) display output. It's a GPU because of what it does: graphics processing. Not because of what connectivity it has.


But it doesn't do graphics, so it shouldn't be called a GPU. That's the whole point of this thread.


But it does; it just needs an application to retrieve the buffer and do something with it, for example pushing it to storage.


It does do graphics. Calculating graphics is different from handling display output. You can separate the two.

Like someone else mentioned, laptops often have discrete graphics cards that are not wired to display hardware at all, needing to shuffle framebuffers through the onboard graphics when something needs to make its way to a screen.


> Like someone else mentioned, laptops often have discrete graphics cards that are not wired to display hardware at all, needing to shuffle framebuffers through the onboard graphics when something needs to make its way to a screen.

Those are GPUs even if they aren't connected to a display because they still have graphics components like ROPs, TMUs and whatnot.


You're free to define it that way, but that's substantially different from GP's "if it's not a display adapter, it's not a GPU" that I was pushing against. It does seem pretty fragile to define a GPU in terms of the particular architecture of the day, though. There are plenty of things called GPUs that don't, or didn't, have TMUs, for example.


CPUs and DSPs are not primarily designed for graphics work, therefore they don't count as GPUs. CPUs are general-purpose, and DSPs might be abused for graphics work.

The "G" in GPU doesn't imply that they have to render directly to a screen. In fact, professional graphics cards are commonly used for bulk rendering for animating videos.

Datacenter GPUs are mostly used for AI these days, but they can nevertheless do graphics work very well, and if they are used for generative AI, or if their built-in supersampling capability is used, the distinction becomes rather blurry.


But this particular one isn't designed for graphics work either, so it shouldn't be called a GPU.


It's in the very name: "Tiny-GPU". Since it's a demonstration project by a hobbyist, the author probably didn't want to implement the whole optimized rendering stack yet.

On the other hand, they also left out some features that you'd expect to find on a general-purpose compute accelerator.

For example, they focus on tensor math: no support for bit wrangling or other integer math, no exotic floating-point formats, and minimal branching capabilities.


The name is what I'm contesting. It's called Tiny-GPU but there's no mention of graphics functionality anywhere in the project.


Graphics pretty much boils down to matrix multiplication, and that's exactly what this thing accelerates. If it were a generalized accelerator, it would have to support other kinds of arithmetic as well.
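For instance (a toy NumPy sketch with made-up names and data, not the project's code), transforming a whole batch of vertices to clip space is literally one matrix multiplication:

    import numpy as np

    mvp = np.eye(4)                              # stand-in model-view-projection matrix
    verts = np.random.rand(10_000, 3)            # 10k model-space vertices
    homog = np.c_[verts, np.ones(len(verts))]    # append w = 1
    clip = homog @ mvp.T                         # the bulk matmul a GPU-style core accelerates
    ndc = clip[:, :3] / clip[:, 3:4]             # perspective divide per vertex

That per-vertex (and per-pixel) bulk arithmetic is the workload a matmul-focused core is built for.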


Agree to disagree. I'll stop here because we're just wasting time running in circles.




