
Not my area of expertise, but what exactly is the difference between RISC-V and PowerPC? Didn't PowerPC get a good run in the '90s and 2000s? Just wondering why there's renewed interest in RISC-like architectures when the industry already explored that area pretty thoroughly.

The interest is BECAUSE it's well-explored territory. The concept is proven and works fine.

On the low end where RISC-V currently lives, simplicity is a virtue.

On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink money into it, as Apple, Qualcomm, etc. have done with ARM.


ARM is RISC and dominates x86 in most markets.

In 2026, RISC-V is not what I would call “low end”. Look up the P870-D, Ascalon, or the C950.

Do you think Apple spends more money than Intel on chip design?


> Do you think Apple spends more money than Intel on chip design?

Absolutely. Apple's R&D budget for 2025 was ~$34 billion to Intel's ~$18 billion. And the majority of Intel's R&D budget goes to process and fabs rather than architecture, while for Apple that side is all TSMC R&D: Apple pays TSMC another ~$20 billion a year, of which something like $8 billion is probably TSMC R&D that goes into Apple's chips.

Sure, not all of Apple's $34B is chip R&D, but on a like-for-like basis Apple probably has at least 50% more chip-design budget (and they only make ~10-20 different chips a year, compared to Intel, which makes ~100-200).


ARM is mostly RISC, and it doesn't dominate x86 in desktops and servers.

Apple's business is vertical integration; they have zero presence in the merchant chip market.


It is Chinese companies looking for an ARM alternative that are pushing this otherwise mediocre ISA.

It is possible that ARM-based CPUs will start slowly eating the x86 market. See the Snapdragon X2 and the upcoming Nvidia CPU. Maybe in 10 years new computers will be ARM-based and a lot of IoT will run on RISC-V.


Why "mediocre"? I've written production assembly language for a half-dozen different processor architectures and RISC-V is my favorite by far.

You should write an article explaining to the layperson why you like it.

Silly opinion that has no relevance to building competitive CPUs, but I like that RISC-V is modular and you can pick and choose which extensions to adopt.

It makes writing a simulator so easy (you just have to focus on RV32I to get started), and it also makes RISC-V a great bytecode alternative for a homegrown register-based virtual machine: chances are RV32I covers all the operations you will need in any Turing-complete VM. No need to reinvent the wheel. In a weekend I implemented all of RV32IM, passing all the official tests, and now I can target my VM with any major compiler (GCC, Rust) with no effort.
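
To give a flavor of why: below is a minimal sketch of the fetch/decode/execute core, handling just two RV32I instructions. This is my own toy code, not the implementation mentioned above; the remaining base instructions all follow the same handful of fixed 32-bit formats.

    # Toy RV32I stepper: decodes and executes ADDI and ADD only. The rest
    # of the ~40 base instructions follow the same few formats, which is
    # what makes a weekend implementation viable.

    def sign_extend(value, bits):
        # Interpret the low `bits` bits of value as a signed integer.
        mask = 1 << (bits - 1)
        return (value & (mask - 1)) - (value & mask)

    def step(regs, mem, pc):
        inst = int.from_bytes(mem[pc:pc + 4], "little")
        opcode = inst & 0x7F
        rd     = (inst >> 7)  & 0x1F
        funct3 = (inst >> 12) & 0x07
        rs1    = (inst >> 15) & 0x1F
        rs2    = (inst >> 20) & 0x1F

        if opcode == 0x13 and funct3 == 0:      # ADDI rd, rs1, imm
            regs[rd] = (regs[rs1] + sign_extend(inst >> 20, 12)) & 0xFFFFFFFF
        elif opcode == 0x33 and funct3 == 0:    # ADD rd, rs1, rs2
            regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
        else:
            raise NotImplementedError(hex(inst))

        regs[0] = 0                             # x0 is hardwired to zero
        return pc + 4

    # addi x1, x0, 5 ; addi x2, x0, 7 ; add x3, x1, x2
    mem = bytes.fromhex("93005000" "13017000" "b3812000")
    regs, pc = [0] * 32, 0
    while pc < len(mem):
        pc = step(regs, mem, pc)
    print(regs[3])    # -> 12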

If there is any architecture that scales linearly from the most minimal of low-energy cores to advanced desktop hardware, it is RISC-V.

Disclaimer: I don't know much about ARM, but 1) it isn't as open, and 2) it's been around long enough to have accumulated as much historical cruft as x86.


"It is Chinese companies looking for ARM alternative"

The V in RISC-V represents the fifth iteration of the ISA over the last 46 years, most of which occurred in the US, mainly at Berkeley.


They push it to save a couple of nickels per core on ARM licenses, not out of nationalistic fervor.

And it is the Chinese doing it because virtually 100% of all chips are made in China and Taiwan.


That's not really how it works. Only a few companies on the planet are licensed to create their own cores that can run ARM instructions. This is an artificial constraint, though, and at present China is (as far as I know) cut off from those licenses. Everyone else that makes ARM chips takes the core design directly from ARM and integrates it with other pieces (called IP) like IO controllers, power management, GPUs, and accelerators like NPUs to make a system on a chip. But with RISC-V, lots of Chinese companies have been making their own core designs, which leads to a flexibility in design that is not generally available (and certainly not cost-effective) with ARM.

Exactly. Without billions in investment this would be yet another experimental ISA.



Maybe. People are free to partake in whatever cognitive misadventures they wish. I merely cite the incontrovertible fact that Berkeley RISC predates essentially all of the modern economic history of China, and also the rise of ARM. It came from academe in the US, for better or worse, whether it's crap or the finest ISA ever, and for whatever purpose these US academics had or have. That is all anyone can truthfully say about its pedigree. The rest is just bullshit from the internet.

SiFive, Tenstorrent, and other big RISC-V firms are not Chinese.

Really? Didn't China pirate the entire ARM China company and start spamming cores like the Star1?

You realize that every WD HDD and every Nvidia GPU from the past couple of years has a RISC-V core in it?

There are many more RISC chips than not. Apple Silicon is RISC. All ARM is RISC (e.g. Raspberry Pi).

x86_64 machines are RISC under the hood and have been for ages, I believe; microcode translates your x64 instructions into RISC instructions that run on the real CPU, or something akin to that. RISC never died; CISC did, but it's still presented as the front-facing ISA because of compatibility.

That's a common factoid that gets bandied about, but it's not really accurate, or is at least overstated.

To start, modern x86 chips are more hard-wired than you might think; certain very complex operations are microcoded, but the bulk of common instructions aren't (they decode to single micro-ops), including ones that are quite CISC-y.

Micro-ops also aren't really "RISC" instructions that look anything like most typical RISC ISAs. The exact structure of the microcode is secret, but as an example, the Pentium Pro used 118-bit micro-ops when most contemporary RISCs were fixed at 32 bits. Most microcoded CPUs, anyway, have microcode that is in some sense simpler than the user-facing ISA but also far lower-level and more tied to the microarchitecture.

But I think most importantly, this idea itself - that a microcoded CISC chip isn't truly CISC, but just RISC in disguise - is kind of confused, or even backwards. We've had microcoded CPUs since the '50s; the idea predates RISC. All the classic CISC examples (8086, 68000, VAX-11) are microcoded. The key idea behind RISC, arguably, was just to get rid of the friendly user-facing ISA layer and expose the microarchitecture, since you didn't need to be friendly if the compiler could deal with the ugliness. This then turned out in places to be a bad idea (e.g. branch delay slots) that was backtracked on, and you could argue that RISC chips have thus actually become more CISC-y! A chip with a CISC ISA and a simpler microcode underneath isn't secretly a RISC chip... it's just a CISC chip. The definition of a CISC chip is to have a CISC layer on top, regardless of the implementation underneath; the definition of a RISC chip is to not have one.


I think you are conflating microcode with micro-ops, and the distinction matters for understanding the fundamental workings of the CPU. Microcode is an alternative to a completely hard-wired instruction decoder: it allows tweaking the behavior of the rest of the CPU for a given instruction without re-making the chip. Micro-ops are a way to break complex instructions into multiple independently executing operations, and in the case of x86 I think comparing them to RISC is completely apt.

The way I understand it, back when the RISC vs. CISC battle started, CPUs were being pipelined for performance, but the complexity of the CISC instructions most CPUs had at the time directly limited how fast that pipeline could be made. The RISC innovation was changing the ISA: breaking complex instructions with sources and destinations in memory into sequences of simpler loads and stores, and adding a lot more registers to hold the temporary values for computation. RISC allowed shorter pipelines (lower cost for branches and other pipeline flushes) that could also run at higher frequencies because of the relative simplicity.

What Intel did went much further than just microcode. They broke the loads and stores up into micro-ops, using hidden registers to store the intermediates. This allowed them to profit from the innovations that RISC represented without changing the user-facing ISA. That internal load/store architecture is what people typically mean by the RISC hiding inside x86 (although I will admit most of them don't understand the nuance). Of course, Intel also added out-of-order execution to the mix, so the CPU is no longer a fixed-length pipeline but more like a series of queues waiting for their inputs to be ready.
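
To make that concrete, here is a toy sketch of the cracking step. The micro-op names and the hidden temporary register are my own invention; real micro-op formats are proprietary and look nothing like this.

    # Conceptual model of cracking a memory-operand x86 instruction into
    # load/store-style micro-ops via a hidden temporary register.
    from dataclasses import dataclass

    @dataclass
    class MicroOp:
        op: str       # "load", "add", ...
        dst: str      # destination register
        srcs: tuple   # source registers

    def crack(instr):
        # 'add eax, [rbx]' reads memory and adds in one x86 instruction;
        # internally it becomes a load into a hidden temp, then an ALU op.
        if instr == "add eax, [rbx]":
            return [
                MicroOp("load", "tmp0", ("rbx",)),         # tmp0 <- mem[rbx]
                MicroOp("add",  "eax",  ("eax", "tmp0")),  # eax <- eax + tmp0
            ]
        raise NotImplementedError(instr)

    for uop in crack("add eax, [rbx]"):
        print(uop)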

These days, high-performance RISC architectures contain all the same architectural elements as x86 CPUs (including micro-ops and extra registers), and the primary difference is the instruction decoding. I believe AMD even designed (but never released) an ARM CPU [1] that put a RISC instruction decoder in front of what I believe was the Zen 1 backend.

[1]: https://en.wikipedia.org/wiki/AMD_K12


That's an excellent rebuttal to this common factoid.

Recently I encountered a view that has me thinking. They characterized the PIO "ISA" in the RPi MCU as CISC. I wonder what you think of that.

The instructions are indeed complex, having side effects, implied branches, and other features that appear to defy the intent of RISC. And yet they're all single-cycle, uniform in size, and few in number, likely avoiding any microcode, and certainly any pipelining or other complex evaluation.

If it is CISC, then I believe it is a small triumph of CISC. It's also possible that even characterizing it as an ISA at all is folly, in which case the point is moot.


Thanks for the detail; that's very clarifying.

I think that this is something of a misunderstanding. There isn't a literal RISC processor inside the x86 processor with a tiny little compiler sitting in the middle. It's more that the out-of-order execution model breaks instructions up into μops so that the μops can separately queue at the core's dozens of ALUs, multiple load/store units, virtual-to-physical address translation units, etc. The units all work together in parallel to chug through the incoming instructions. High-performance RISC-V processors do exactly the same thing, despite already being "RISC".

Ah, PowerPC. For a RISC processor it sure had a lot of instructions, most of them quite peculiar. But hey, it had fixed-length instruction encoding and couldn't address memory in instructions other than explicit loads and stores, so it was RISC, right?

Also byte-reversed loads and stores, but no instruction to byte-reverse a register.

> For those of us who are unlikely to make time to watch a 4-part documentary, are there any particular lessons about social/political dynamics that you learned that stuck out to you or felt particularly prescient?

I watched the entire 4-part documentary and loved it. In general, the series gives you a raw look into the ABCs of primate politics. Chimps, just like us and the rest of our ape cousins, are preoccupied with hierarchy, status, and the accumulation of resources, which guides every single action they take from birth until death.

What is different about Chimp Empire is that it presents all this in a much more compelling way than the standard (dry) academic literature or popular-science texts (e.g. Chimpanzee Politics by Frans de Waal).

Even after finishing the documentary, I've found myself connecting events in the series with current geopolitical issues. One event in the show that stuck out to me was a battle between two rival camps over a single fruit tree. Gaining control over that tree was a critical factor in determining the survival of the two rival groups. To us, past the Neolithic age and the Industrial Revolution, it's an amusing watch. But to chimps, a single fruit tree in their territory is everything. It is life and death. While there's a difference in scale, the same underlying motivations, in my mind, currently explain what is going on in the Middle East and Eastern Europe.

Also, the documentary is a great case study in how loneliness and introversion can be absolutely lethal in the wild. The politics in each chimp community can get quite toxic, but participation isn't really optional. You either play the game or quite literally die.

If you really want a good intellectual exercise, I recommend watching Chimp Empire in its entirety and then The Expanse right after. Try to tell me they are not the same show :P


To be honest, we are fighting now over a 30 km wide strait... also critical for a certain kind of political survival.

In the chimps’ defense, they don’t have the technical ability to make the fruit tree obsolete, or the tactical framework to identify it as a chokepoint.

Was the fruit tree important for its fruit? Surely there are other fruit sources, no?

It's a forest, not an orchard, and most species fruit only once a year. The most important is the strangler fig tree as it produces fruit multiple times a year.

A clear demonstration of the value of knowledge.

> It's viewing the situation through the lens of Anglo capitalist opinions.

Yes, and while I find the article quite insightful on the whole, I can't take it seriously as an anthropological study.

There is a strong ethnocentric bias that the author failed to declare or acknowledge, which reduces the credibility of his claims. There is also little supporting data.


My G4 succumbed to this issue, and I was never able to revive it. I had some important documents and images on it that I hadn't yet backed up to the cloud, and they disappeared along with it. Still very sour about that. Other than that, I enjoyed the phone: the dimensions felt perfect and the camera was good for its time. But a defect of that nature is too serious to overlook, so that was the last LG phone I ever owned.

> So timed that all pretty great. What worries me is my desktop is up for a full new buy somewhere around early '28

That's a very specific date / timeline. How do you decide to do a full new buy? I ask because I own a desktop that I built 15 years ago which I was flirting with replacing completely last year, but unfortunately I didn't pull the trigger ... oops :(

My old rig is still going strong. The motherboard can only take up to 32GB of DDR3, though. The CPU is an Intel i7-4790K, which still holds up today if you are not running a resource-hog OS (looking at you, Windows). Overall it is completely serviceable for my needs. Being honest with myself, the only reason I wanted to upgrade was nerd cred, but I don't game much anymore and don't do any ML tasks that require lots of local compute.


My PC is similar. I upgraded it to a 4790k a few years ago (best CPU on the socket). What's funny is I also maxed out the RAM as well because I realised two more 8GiB sticks were like £30 so why not. I thought it was a funny thing to do at the time as I didn't really need that much, but glad I did now. It's going to have to do me for many more years to come, but I'm fine with that. I don't game at all. Just have to hope nothing fails. I did build it with solid foundations: good and overprovisioned PSU, Asus mobo, so here's hoping.

Unfortunately I do also have server gear now as well. I'm going to have to really think about what I actually need now...


> That's a very specific date / timeline.

It's aimed at roughly hitting Zen 6 and switching to high-refresh-rate 4K gaming.

Was really hoping I could hop to DDR6 and PCIe 6 too, but that seems less plausible.

> How do you decide to do a full new buy?

Gut feel; it's a 2019 build that had a mid-life refresh, so 2028 is near a decade, which for a gaming rig is good going. Gaming also means a big-VRAM GPU makes sense for toying with LLMs, since I get dual use out of it. Plus I don't think the 3090, much as I like it, will drive 4K at a high refresh rate.

...If it wasn't for gaming I could definitely keep this rig till 2030+. It's comically overspec'd for browsing and casual dev stuff. Hell, it's even got an Optane boot drive, so that'll last till the end of time.


I suspect there are a lot of people on the 2028 refresh train. If you bought a 1700 in 2017/2018 - which a lot of people did, because it was such good $/perf - you could have ridden the AM4 platform to a 5900(X/XT) by now and still be pretty happy, but AM4 is a dead end and X570 motherboards are hard to find. So if you want more PCIe, DDR5, etc., it'll be time to jump once things start to feel sluggish for high-end tasks (gaming, etc.) around that time.

> X570 motherboards are hard to find

They're alive and well in eBay land. The SSD NAS I mentioned was an X570 eBay build, because they can do full ECC and eBay was full of old AM4 gaming builds aging out.

> more PCIe,

Yeah, that's the sticking point, though a good X570 board can do 7x NVMe and 8x SATA... so plenty for a NAS if you're up for colouring outside the lines a bit.


> I'd like to say a brief thank you to what the brief, golden period of globalisation was able to bring us.

Not everyone benefited. Market globalism wasn't particularly kind to the Global South, and the specific mandates that the WTO enacted on countries in Latin America and Africa (the Washington Consensus) greatly increased local wealth disparities despite visibly growing GDP for a time.

America profited handsomely because, for most of the past 30 years, it was where the (future) transnational conglomerates were based. These companies stood to benefit from the opening up of international markets. Now that they are being out-competed by their Asian counterparts, instead of going back to the drawing board and innovating they are playing the "unfair trade practices" card, and of course the current administration is on board with it.

Globalisation is not going anywhere, but America is increasingly alienating itself from allies it could stand to benefit from.


> but America is increasingly alienating itself from allies who it could stand to benefit from.

We're a clown show and we don't deserve to have friends until we get our shit together.


And "your shit" is spreading further and further, press conference by press conference. In the last hour or two the great man dropped this quote regarding NATO:

"We would have always been there for them, but now, based on their actions, I guess we don't have to be, do we?" Trump told the audience.

"That sounds like a breaking story? Yes, sir. Is that breaking news? I think we just have breaking news, but that's the fact. I've been saying that. Why would we be there for them if they're not there for us? They weren't there for us."

I can't imagine how long it will take to get this shit back together enough that the US can be trusted again by the international community. One responsible government just means that everything built within the four years of its administration can be torn up, burnt, shat on, and buried within two weeks of a new one.

And yet there seems to be a base 30% support for the current behaviour.

I think the first thing to try and fix is the education system.


What is the best way to archive a JS-heavy site like this? I reviewed the OP's GitHub and they haven't open-sourced these visualizations, probably because they are tied to his employer.
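
The best I've managed for pages like this is to let a headless browser execute the JS and snapshot the result. A rough sketch with Playwright (the URL below is a placeholder, not the article in question); it keeps the rendered content but loses interactivity:

    # Render the page in a headless browser, then save the resulting DOM
    # and a full-page screenshot. Interactivity is lost.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/visualization", wait_until="networkidle")
        with open("snapshot.html", "w", encoding="utf-8") as f:
            f.write(page.content())    # DOM after scripts have run
        page.screenshot(path="snapshot.png", full_page=True)
        browser.close()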


An old classic. Thanks for reminding me about this article.


> Most of Nvidia's "strength" is the heavy lifting done by the folks over at TSMC.

The software side is also important. No discussion of Nvidia's moat is complete without also mentioning CUDA.


I don't really see it worth mentioning CUDA.

It does nothing that other compute APIs can't do, and the majority of enterprise compute software doesn't use it and/or works on ROCm HIP with minimal performance loss.

A lot of research projects (such as all the early LLM research, given the topic) are written in Python and use libraries that shim all of that away as well; PyTorch and ONNX both run natively on AMD and are covered under AMD's commercial support.
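
For what it's worth, the shim really is invisible at the user level: the ROCm build of PyTorch exposes HIP devices through the same torch.cuda namespace, so a sketch like this (sizes arbitrary) should run unchanged on either vendor's build:

    import torch

    # On an NVIDIA build this targets CUDA; on a ROCm build the same
    # "cuda" device string is mapped onto HIP, so nothing changes.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b                  # dispatched to cuBLAS or rocBLAS
    print(c.device, float(c.sum()))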

And then we come to the case of llama.cpp, which supports more APIs than any other inference engine: not only does it run on Nvidia/CUDA, it runs on AMD/HIP, Vulkan on at least 4 different vendors, SYCL on at least Intel Arc, BLAS/BLIS, Apple/Metal, Snapdragon's quasi-NPU, and Moore Threads (the new Chinese startup for domestic GPUs).

There is no reason to write greenfield code with CUDA today, and most people aren't.


> One of the most challenging aspects of being a parent (at least for me) is dealing with the kids not interested in cooperating or listening/learning.

> And just in general, giving guidance and seeing that look on their faces that means they're just waiting for you to stop talking so they can go on with their lives.

Instead of seeing this as a challenge, perhaps reflect on it and take it as an opportunity to learn more about your kids. Not everybody is the same, and not everyone will have the same aspirations.

I know of many people who got the full ride in terms of music lessons but have no intrinsic passion for it. Overall it was a total waste of time in terms of their life satisfaction and fulfillment. Me, on the other hand: I was very into music but never had the option of lessons.

FWIW, I was one of those kids who was very compliant and cooperative, but in retrospect I can see that it harmed my development and self-concept. It's only now, as an adult, that I'm able to grapple with this.


I do try to keep abreast of where they're at. It's pretty much a one-way street. C'est la vie.

