"Permissionless development" - want to develop a RISC-V core? Just download one from github or design your own from the freely available specs. No legal agreements to enter or licensing fees to pay. Very different model from Arm (sign a legal agreement before you start and pay for licensing) or x86 (you want a license? LOL)
RISC is hardly dead. It's more accurate to say it won so comprehensively that it's now pervasive. Its design principles are used in every significant chip (including x86).
Licensing innovation - the open licensing model means anyone can make a RISC-V implementation. There are some rules/costs involved if you want to use the RISC-V trademark with it: https://riscv.org/about/risc-v-branding-guidelines/ but it's certainly a lot cheaper and more permissive than Arm's architectural licensing model.
This enables far more competing RISC-V implementations, with more choice and more ability to modify things to your needs.
On the downside you could see ecosystem fragmentation, with lots of subtle incompatibilities causing issues for broad software support.
Ultimately anything technical is secondary to this: the licensing model is the reason RISC-V is taking off, not technical superiority (it simply needs to be good enough).
RISC-V doesn't have to displace M2/Epyc/Xeon to be important. There are a trillion cores a year being shipped in embedded devices where "cheap and configurable" is more important than raw performance.
What surprises me is that we haven't seen more "built to emulate" designs yet. Many of those "trillion cores" are zombie architectures with terrible price/performance ratios, but nobody wants to rewrite their software or retool things (anything 8051, AVR or PIC seems vulnerable). I could see replacement modules using a cheap-due-to-scale RISC-V core that just emulates the old chip in baked-in firmware.
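A minimal sketch of what that baked-in firmware could look like: a classic fetch/decode/execute interpreter loop for a legacy 8-bit core. The opcodes and register layout here are made up for illustration, not the real 8051/AVR/PIC encoding:

```c
// Hedged sketch: interpreter core for a legacy 8-bit MCU running on RISC-V.
// Opcode values and machine state are illustrative; a real drop-in module
// would match the original chip's encoding, timing, and peripherals.
#include <stdint.h>

enum { OP_NOP = 0x00, OP_LDI = 0x01, OP_ADD = 0x02, OP_JMP = 0x03 };

static uint8_t  mem[65536];  // emulated code/data space
static uint8_t  acc;         // emulated accumulator
static uint16_t pc;          // emulated program counter

void emulate_step(void) {
    uint8_t op = mem[pc++];                          // fetch
    switch (op) {                                    // decode + execute
    case OP_NOP:                                       break;
    case OP_LDI: acc = mem[pc++];                      break; // load immediate
    case OP_ADD: acc += mem[pc++];                     break; // add immediate
    case OP_JMP: pc = mem[pc] | (mem[pc + 1] << 8);    break; // absolute jump
    default: /* trap to firmware for unimplemented opcodes */ break;
    }
}
```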
> The only pre-RISC legacy ISA in use is x86, and it is only losing market share.
And for many generations now, x86 machines are basically RISC processors with a CISC frontend.
Empirically it seems that CISC has 'failed' as a way to design processors, and it's better to let the compiler do that job when you're building a general purpose computer.
That is a meme. Even RISC processors use uops, and uops are often wider and more complex than the ISA instructions they are derived from.
The reason for that is that a lot of features in a CPU instruction are just the result of toggling some part of the CPU on or off, so having four one-bit flags is better than encoding the same information in two bits. What this means is that you can have more possible instructions at the uop layer than at the ISA layer. When that is the case you can hardly call the internal design a "RISC" processor, especially when the ISA wars were specifically about ISAs and not microarchitecture. Even if we say that uops are RISC instructions, that is still an argument against RISC ISAs: why bother with RISC as an external interface if you can just emulate it? Your comment seems rather one-sided.
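To make the width point concrete, here's a toy illustration (all names are made up; no real core looks like this): a densely encoded ISA field versus the decoded, one-wire-per-control-point form a uop might carry:

```c
// Hedged illustration of "uops are wider than ISA instructions": the ISA
// packs a choice into 2 bits, while the decoded uop fans the same choice
// out into individual control bits. Field names are invented for the sketch.
#include <stdint.h>

// 2-bit encoded field in the ISA: four possible ALU operations
enum alu_op { ALU_ADD = 0, ALU_SUB = 1, ALU_AND = 2, ALU_OR = 3 };

// Decoded uop: each control point gets its own bit, so the same information
// occupies more, wider bits and permits combinations the ISA can't express
struct uop_ctrl {
    unsigned alu_invert_b  : 1;  // SUB = ADD with inverted B + carry-in
    unsigned alu_carry_in  : 1;
    unsigned alu_logic_and : 1;
    unsigned alu_logic_or  : 1;
    unsigned writeback_en  : 1;
    unsigned flags_update  : 1;
};
```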
Also, x86 was the most RISC-ish of its cohort (at most one memory operand per instruction, comparatively simple addressing modes, etc). Whether that was uncanny design looking forward decades, a result of Intel's internal architectural history, or pure luck, people still argue.
Other designs from the same time followed on from the wunderkind of the era, the VAX, which everyone loved and wanted to emulate.
The big change of the time, though, was in the memory hierarchy: caches were pushed closer to CPUs (and eventually pulled on-die), which favoured less densely encoded ISAs, more registers, and instruction sets that didn't require a full memory access for every instruction.
In my professional lifetime we've gone from 'big' mainframes with 1MHz core cycle times (memory cost more than $1M/megabyte), to those VAXes (actual silicon DRAM!), to what's sitting on my lap at the moment (5GHz, 8/16 cores, 64GB DRAM, etc).
I don't think CISC 'failed'; it was simply a child of its time, and the constraints changed as we moved from giant wirewrapped mainframes with microcode to minimise ifetch bandwidth, to LSI minicomputers, to VLSI SoCs with multimegabyte on-chip caches.
Well, to be fair, this was easy to predict. RISC was only created because CISC had already failed at that time; people were already letting their compilers do the job and leaving the specialized instructions basically unused.
In hindsight, sure, it seems obvious, but I don't think it was that obvious that CISC performance wouldn't end up scaling. What wasn't obvious, to me anyway, was that even with all of that complicated decode, register renaming and whatnot, these CISC processors would manage to stay competitive for so long. Maybe it's just the force of inertia, though, and if there'd been a serious investment in high-performance RISC machines in the 2000s, x86 would've been left in the dust.
Oh, I'd say it's still not really obvious. Sure, RISC has won completely for now, but there is no reason to think that the ultimate architecture for computers won't be based on complex instructions.
We even had some false starts: when vector operations were first created, they were quite complex, then they were simplified into better reusable components; likewise, when encryption instructions started to appear, they were very complex, then they were broken into much more flexible primitives. There is nothing really saying that we will be able to break down all kinds of operations forever.
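The encryption case is a nice concrete one: x86's AES-NI doesn't expose a monolithic do-everything AES instruction, it exposes one-round primitives that software composes. A minimal sketch, assuming the round keys are already expanded:

```c
// Hedged sketch: AES-128 block encryption via x86 AES-NI round primitives.
// Instead of one "do AES" instruction, hardware exposes per-round building
// blocks. Assumes rk[] holds the 11 expanded round keys (key expansion
// omitted). Compile with -maes.
#include <immintrin.h>

__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11]) {
    block = _mm_xor_si128(block, rk[0]);         // initial AddRoundKey
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);  // one full round per instruction
    return _mm_aesenclast_si128(block, rk[10]);  // final round (no MixColumns)
}
```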
But still, I wouldn't bet on any CISC architecture in this decade.
Anyway, the reason x86 lasted so long was Moore's law. This is patently clear in retrospect, and obvious enough that a lot of people predicted it as far back as the '90s. Well, Moore's law is gone now, and we are watching the consequences.
Won't CISC architectures always have the benefit of being able to dedicate silicon to those complex instructions, and thus do them faster than many smaller instructions? I understand RISC-V does instruction fusion, which provides a lot of the same benefits (see the sketch below), but I'm surprised ARM gets around this.
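To make the fusion part concrete, a sketch of the canonical RISC-V fusion candidate; whether any particular core actually fuses this pair is implementation-dependent:

```c
// Hedged sketch: the classic RISC-V macro-op fusion pair is add+load, since
// the base ISA has no indexed addressing mode. A fusing decoder can treat
// the two adjacent instructions as one internal indexed-load uop.
#include <stdint.h>

int64_t load_at(const char *base, long byte_off) {
    // On RV64 this typically compiles to:
    //   add a0, a0, a1    ; compute the address
    //   ld  a0, 0(a0)     ; load from it
    // (assumes the data is suitably aligned and aliasing-safe)
    return *(const int64_t *)(base + byte_off);
}
```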
Dedicated silicon for custom instructions is now quite favored actually, because of the well known "dark silicon" problem. I.e. most of the die actually has to be powered down at any given time, to stay within power limits. Hence why RISC-V is designed to make custom, even complex instructions very easy to add. ("Complex" is actually good for a pure custom accelerator, because it means a lot of otherwise onerous dispatch work can be pulled out of the binary code layer and wired directly into the silicon. The problem with old CISC designs is that those instructions didn't find enough use, and were often microcoded anyway so trying to use them meant the processor still had to do that costly dispatch.)
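For a flavor of how such a custom instruction reaches software: RISC-V reserves major opcode space (custom-0 through custom-3) for exactly this, and the GNU assembler's .insn directive can emit instructions it has no mnemonic for. Everything about the operation below is hypothetical:

```c
// Hedged sketch: exposing a hypothetical RISC-V custom instruction to C.
// The custom-0 major opcode (0x0B) is reserved for vendor extensions; the
// funct3/funct7 values and the operation itself are made up for illustration.
#include <stdint.h>

static inline uint64_t accel_op(uint64_t a, uint64_t b) {
    uint64_t result;
    // .insn r opcode, funct3, funct7, rd, rs1, rs2  (GNU as, RISC-V target)
    asm volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                 : "=r"(result)
                 : "r"(a), "r"(b));
    return result;
}
```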
> The only pre-RISC legacy ISA in use is x86, and it is only losing market share.
The only pre-RISC legacy ISA in wide use is x86, but AFAIK the legacy ISA from IBM mainframes (s390/s390x), which is still supported by mainstream enterprise Linux distributions, is also a pre-RISC one.
Dead simple base ISA and easily extensible means it's an obvious choice if you want a custom processor with your own custom instructions.
Scalable, so the same ISA (with/without certain extensions) can be used from the smallest microprocessor to high-end CPUs.
Compressed instructions give very high code density (and unlike ARM Thumb they don't require weird mode switching, and they're available in the 64-bit ISA)
It's not the first popular open ISA. There was OpenRISC before it, but it was fatally flawed (branch delay slots). So RISC-V is arguably the first good popular open ISA.
I have several OpenPOWER systems, including the POWER9 I use as my usual desktop. Besides IBM and other server manufacturers like Tyan and Wistron, you can get them as Raptor workstations and servers.
Last I checked, most hard drives contain ARM cores, so some x86 machines actually contain more ARM cores than x86 cores. An ICE car likely has one or more ARM cores managing the engine and one or more ARM cores running the entertainment system.
Not dead at all, it's now a (conceptual) rocket ship
No ISA/architecture licensing fees, no restrictions on what you can do with it - you can build an open source core or you can sell your design to others. ARM/x86 doesn't have this; it's all locked up behind lawyers.
In addition to the permissionless model as mentioned, another consideration is standardizing all the boring common bits. RISC-V incorporates a lot of industry learning about how to handle extensions, compression, etc. If a company is considering RISC-V vs a proprietary design for an accelerator or such, getting debugged versions of those features in the ISA vs whatever gets cooked up in-house in crunch mode could be pretty big.
ARM, MIPS, and Power are all RISC. The only real CISC processor in mainstream usage seems to be the PIC; and of course, x86 and amd64 are these days RISC processors with a CISC interpreter in front of them.
This is a lost battle. People don't even remember what the word ISA means. It is an external interface. Whatever happens inside the processor has nothing to do with the interface. You can run x86 code on Arm Macs; does that mean they are x86 processors that are internally implemented with RISC? If yes, then the word has lost its meaning. If no, then you have to explain why these are distinct cases.
RISC is not about the number of instructions but rather what the instructions do. The famous example of CISC taken to its logical extreme is the VAX's polynomial evaluation instruction (POLY), which was almost a full program in a single instruction. RISC tends to go the other way, focusing on things that are easy for hardware to do and leaving anything else to software.
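For a sense of scale: POLY evaluated an entire polynomial via Horner's method in one instruction. On a RISC machine the same work is just an ordinary software loop, along these lines:

```c
// Hedged sketch: roughly the work VAX's POLY packed into a single
// instruction. On a RISC machine this is a plain loop the compiler
// schedules like any other code.
double poly_eval(double x, const double coeff[], int degree) {
    double result = coeff[0];
    for (int i = 1; i <= degree; i++)
        result = result * x + coeff[i];  // one Horner step per iteration
    return result;
}
```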
ARM's JavaScript instruction (FJCVTZS) is actually a pretty good example of RISC philosophy. The instruction is a single-cycle floating-point-to-integer conversion using the x86 ABI's default rounding behaviour rather than the rounding mode in the flags register. It's a great example of the RISC litmus test: what's a single-cycle instruction that shows up in instruction traces as something that will give an actual perf improvement?
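For context, the semantics being accelerated are roughly JavaScript's ToInt32. Without FJCVTZS, an engine needs a software fallback along these lines (a simplified sketch, not any particular engine's code):

```c
// Hedged sketch: approximate JS ToInt32 semantics that FJCVTZS performs in
// hardware. Simplified; real engines split this across fast/slow paths.
#include <stdint.h>
#include <math.h>

int32_t js_to_int32(double d) {
    if (!isfinite(d)) return 0;        // NaN and +/-Inf map to 0
    double t = trunc(d);               // round toward zero
    double m = fmod(t, 4294967296.0);  // wrap modulo 2^32...
    if (m < 0) m += 4294967296.0;      // ...into [0, 2^32)
    // uint32 -> int32 wrap; two's-complement on all relevant targets
    return (int32_t)(uint32_t)m;
}
```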