As someone who has followed tech for almost 40 years, I've watched a lot of things come and go. I've seen "the next big thing" go nowhere countless times, while some obscure things surprised me. Well-intentioned ideas didn't make it because of external factors, or they lacked enough momentum to push through developmental barriers. Having seen all of that, my assessment of RISC-V is simple: If they complete the privileged spec as planned this year (and hopefully the vector extensions too) it's going to explode onto the stage like a newly formed star. The ramp will appear slow to people with high hopes and unrealistic expectations, but in the context of my life it will be a sudden shift the likes of which is rarely seen.
That is IFF they get the specs finalized, and I hope they support Google's VM threads while they're at it.
I'm not sure about that. What matters most is the ecosystem; that's why people pay for x86 and ARM. The ecosystem RISC-V offers will need to be at least as good, or at least good enough for its target users. They'll also have to succeed at lower volumes despite the high costs of ASIC development. Finally, remember that OpenSPARC was around for a long time, with open-source implementations like Leon3 that mostly went nowhere outside the established companies already using them.
I'm not so optimistic. I wish them the best, but the precedents go against it.
OpenSPARC suffers from the same problem as most of the other alternate chips: I cannot buy a motherboard for a PC case and start going. Dev boards are cool, but a $1,000 motherboard[1] is not going to fly. When these alternate architectures get to the point where I can buy a motherboard for $500 or less, I'll believe they are golden.
I really want it to succeed, but PowerPC and POWER have left their scars.
The reason those boards you buy are cheap is that there's a boatload of people who pay for them. You need quite a bit of volume to get a custom, high-performance ASIC and board down to a few hundred dollars. The problem is on the demand side. The suppliers would have to come up with millions of dollars for development and advertising, then risk losing most of it. Most that do go bankrupt or get acquired after they fail (especially in the FPGA or accelerator space).
The thing that has delayed Linux on RISC-V is simple enough: they keep changing and breaking ABIs. There was even one massive source break (changing #ifdef __riscv64 to __riscv) that broke several projects I'd contributed RISC-V support to.
The reason that the Fedora port is on hold is because I'm waiting for everything to go upstream so that this sort of thing stops happening.
IMO this was entirely avoidable, and avoiding it would have meant we could have had full RISC-V ports of both Fedora and Debian one or two years ago.
The way I understand it, those breakages happened because problems were found during porting. So it is a bit of a chicken-and-egg problem. Without porting, problems cannot get fixed and things never reach a stable state. Without a stable state, people do not want to port things.
They could have maintained backwards compatibility (e.g. by keeping #define __riscv64 as a deprecated definition), but chose not to. The bigger problem is that there was no real process for making the changes or informing anyone about them; they "just happened" and we only noticed later.
Anyway, once glibc & kernel changes are upstream, we'll have the stable ABIs we need. I'm very much hoping that upstreaming will be completed this year.
>> The reason that the Fedora port is on hold is because I'm waiting for everything to go upstream so that this sort of thing stops happening.
Fortunately that is happening. GCC 7 now has RISC-V support and will be the compiler for Fedora 26. They really do need to get the priv spec finalized so kernel patches can go upstream as well. It's falling into place more slowly than people (both of us included) would like, but it is happening, and far more so than it did for POWER, SPARC, or OpenRISC.
> I hope they support Google's VM threads while they're at it.
I saw the slides on the VM threads concept, and I'm unsure as to the point. A process essentially sees a virtual CPU, multiplexed by a kernel. The point of a hypervisor could be understood as multiplexing things (i.e. OSes) that weren't written to share hardware.
What exactly is the point of a VM thread, which I understand as something which makes a VM look to a kernel as a process?
There doesn't seem to be any "win", apart from perhaps making Type II hypervisors, like KVM, easier to implement. My critique of those is that their TCB is far larger than that of the Type I kind.
And all this for an architecture that doesn't have any legacy binary software... very strange.
I say that not because I'm at all convinced about the concept, but because it seems interesting enough that I'd want to make sure it isn't excluded. I'd like them to be able to run with it a bit and show us more evidence of the advantages. OTOH, if supporting it is a bigger deal than I thought, then no.
> If they complete the privileged spec as planned this year (and hopefully the vector extensions too) it's going to explode onto the stage like a newly formed star.
Why do you think that? In which market? Who's going to pay to market it to SoC vendors and provide the very necessary pre-sales work? You aren't expecting it to put a dent in Intel, are you?
Which market gets it first is not something I can predict, but the groundswell is deep and wide. It's being designed into products where the architecture will not matter to the consumer. nVidia is very interested and contributing; I would not be surprised if they made something like Tegra with RISC-V in a year or two. The government of India is funding several designs (because they want their own processor IP), as are students around the world. Some of the vector implementations are power- and area-efficient enough to compete with the best. Don't be surprised if it shows up in the Top 500 in the next few years.
On the software side you've got GCC, LLVM, coreboot, Rust, and Google Go all going there. There are thousands of Fedora and Debian packages already ported.
Several companies exist to support real-world implementation by others; they'll help you put RISC-V into whatever SoC or system you want.
This is all falling into place for an instruction set that has no production silicon yet. It's going to go somewhere. All it needs is finalized specs and some actual hardware.
Having said all that, one thing desperately missing is a free GPU. If SoCs for the public do arrive soon they will have to come from someone with graphics IP.
I don't expect Intel to be directly affected for a good while, but they will be missing out on whatever markets it takes for the next few years. I do see ARM in grave danger, but not immediately.
I wonder if the Larrabee approach would work here: many simple RISC-V cores running in parallel, with wide vector units and a little specialized hardware (eg texture sampling).
I've thought about that. Just 8 or 16 cores running llvmpipe should be able to do quite a bit of basic graphics, certainly enough for a Raspberry Pi-type SoC.
For 32-bit microcontrollers, you can buy the HiFive1 today.
Some companies are designing RISC-V into their products (nVidia, Samsung), but not for user-facing applications yet. There are designs in process to bring about something like the Raspberry Pi, which will bring it to the masses (if not the consumer masses). There is a ton of corporate and academic support behind the effort.
But let me offer some slightly plausible scenarios: What if Google ported Android to it? What if Apple brought iOS to it? What if some other new thing that takes off is based on it?
What if my grandmother had wheels? Then she'd be a wagon.
> What if Google ported Android to it?
OK. So... what advantages does this offer? Why would they do that? Why would manufacturers choose the unknown option?
> What if Apple brought iOS to it?
Why would they do that? What advantages does it offer?
I'm just asking for the really basic features/advantages/benefits stuff. Preferably including an explanation of why said advantages can't be cloned or overtaken by Intel or ARM.
I think it's low probability, but look at what Google has done in the past. They bought/created Android and made it free or low cost to get onto phones. They bought On2 to make codecs free, and they give away hardware designs for VP8 and VP9. It would be right in line with that to port the OS to a free ISA so hardware makers could lower their costs by whatever the ARM license is costing them, and perhaps reduce power (for increased battery life). It just feels like something they might do.
Having said that, RISC-V doesn't need something like that to happen. It's going places already because it's better in a number of ways.
Isn't RISC-V (or MIPS) the only CPU with a chance of being fully open sourced now? There is POWER as well, but IBM has a funny way of "open sourcing" things.
The patents on the older ARM processors have expired in the past few years, so there are open implementations of the first few generations of architecture. AMBER [1] is one such project.
Does anyone know of a good introduction to RISC-V, and why I should care about it from a technical perspective?
I get the open-source argument. And I get that x86 is full of warts and weird decisions. But does RISC-V learn from those mistakes and make something usable?
I think the design is technically more elegant. The instruction format is better: there are only a few formats, and register numbers are always in the same place, which makes instruction decoding easier. The instruction set is also properly extensible; you can add extensions in an official way. They've avoided making lots of special-case instructions, claiming that micro-op fusion can be used to the same effect.
The financial argument is: You want to license an ARM-like chip design, without paying any license fees. Lots of companies currently license ARM and use it to make everything from embedded controllers to tablet computers. They pay ARM both a one-off license fee and per core manufactured (actually the fees are not that large in the grand scheme of chip design).
There are several RISC-V designs, which are BSD licensed, or you can make your own without any licensing fees. However if you want to call it "RISC-V" or use the logo you have to join the Foundation. If you don't want to call it "RISC-V" then the license is completely free.
Of course what's missing is the ecosystem: Linux, Android, and massive numbers of tools. That's why this announcement of the Debian on RISC-V project restarting is interesting.
(Forget about licensing an x86 design, that is basically impossible).
The one that's most interesting to me is lowRISC, mostly because it has something tangible to offer as a differentiator other than the open ISA: specifically, the "minion cores"[1]. Their initial use case for them sounds a lot like the PRUs on a BeagleBone board.
Basically, they enable you to do hard real-time tasks like signal generation (controlling LED matrices, audio) or fast data/signal acquisition, without running some specialty OS. The minion cores can read and write memory shared with the main CPU that's running normal Linux.
That feature also gives them a little breathing room on initial pricing. Their SoC board would be viewed as competition for a $70 board, instead of competition for a cheaper Raspberry Pi or similar, so initial pricing wouldn't necessarily hurt its popularity.
They seem to plan to have something out either this year, or sometime next year: "We are expecting to crowdfund an initial instantiation of the lowRISC platform during the course of 2017."
SiFive has been late with their U500, or it would exist already; I suspect they're waiting for finalization of the privileged instruction spec. Since it will have PCIe support, I'm wondering if you could actually use it with commercial graphics cards and open-source drivers. That would put real, usable hardware out in the world late this year.
There are other companies on the verge of releasing hardware implementations. But in every case, I would wait for the final specs before jumping on any particular hardware, even if it shipped now. Other than the microcontrollers, of course.
> What's to stop you connecting up an old SIMM card? The memory won't be fast, but doable.
There aren't enough pins, and certainly not enough pins accessible for single-cycle reads/writes. Maybe you could multiplex them with external hardware to talk to a SIMM, but you'd effectively be talking to memory at single-digit megahertz.
An SDRAM controller in an FPGA on the other side of the QSPI link would be faster, but it'd still be slow. You could get it to work to say you've done it, but it'd run at literal-days-to-boot speeds.
The board (and chip) is meant and marketed for freetards who like Arduinos - so I bought one and so did several of my friends. LEDs were blinked and better futures dreamt of. It's cool. But it's not a workstation chip by any stretch of the imagination.
Hopefully the next version will have a real external memory bus.
I reckon it would be more in the range of minutes (depending on how complete a Linux system we are talking about here).
The QSPI peripheral can run at 100 MHz if I read the datasheet correctly, so you can read/write data at about 400 Mbit/s minus overhead.
For reference, booting Linux on an ATmega@24MHz running an ARM emulator with external SDRAM and a memory bandwidth of about 300 kB/s seems to take about 6 hours [1].
Assuming the SiFive is just as efficient as the ATmega, it would take on the order of a couple of minutes (assuming the memory bandwidth is the only bottleneck and not the slow CPU; the SiFive also has 16 kB of L1 cache and a clock rate 10x faster).
Just for the record, you can run ELKS, a Linux variant for small processors, on 8086 hardware. Ages ago I successfully booted it on an 8088-based ancient industrial system with less than one megabyte of RAM and two floppy drives (hard disk? what hard disk?!? :). But the Linux a "normal" user would run is a very different thing, which would require some more power.