The problem is that Verilog and VHDL aren't "programming languages" in the sense that C, Lisp, Haskell, or Python are. Approaching them with a programming-language mindset is asking for a lot of pain and misunderstanding.
HDLs like Verilog and VHDL describe digital circuits, not algorithms and instructions for manipulating data. If C code is akin to instructions for getting to a grocery store and shopping for vegetables, HDL code is a textual blueprint of a house. Maybe the solution is building some ultra-high-level abstraction that can somehow encompass both problem domains, but given how difficult hardware synthesis with existing HDLs is right now, I don't know if that'll happen anytime soon. And the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies; if anything, it's a limitation of the register-transfer-level abstraction that's currently used to design digital hardware.
"To write Verilog that will produce correct hardware, you have to first picture the hardware you want to produce."
I think that's the crux of the issue. Most digital designers do have a picture of the actual hardware, as a block diagram, in their heads. When I write RTL, the process is very front-loaded: I spend hours with pen and paper before I even sit down at the keyboard. The algorithms in question are only a small part of the work of building functioning hardware. In software, I would let the compiler decide how long it expects certain operations to take and what to inline where; in hardware, those are all things I plot by hand before I even open a text editor.
I think, then, that the author kind of misses the point when he goes on to say that "you have to figure out how to describe it in this weird C-like [...] language" -- to be honest, that's the same for all types of programming: when I go home and write C, I have to take abstract concepts and express them in this weird C-like language, too! Arcane syntax is irritating, but is not something fundamentally 'hard' (unless it's too arcane, anyway).
By the way -- I also often wondered "why the hell does synthesis take so long?". I originally assumed it was because Xilinx's (and Synopsys's, and ...) software engineers were terrible and had no idea how to write efficient programs. That might be true, but I now believe it's probably not the majority of it; if "why's it taking so long?" is a question that interests you, I recommend looking into the VLSI CAD tools class on Coursera.
By far, most of my time spent in FPGA dev so far has been envisioning the state machines and timing details involved. I use VHDL, so it's never been a question of "how can I make VHDL output what I want?"---for me, it's always the struggle of "why can't I just ask VHDL to make n of these units?", where n is some upstream-provided number.
I think the author might need to step away and look at it from the other side: how can we take a working, but overly-verbose language like VHDL and make it more powerful? At least VHDL was envisioned from the beginning as a hardware description language, and it definitely shows.
VHDL generics and generate work nicely for 1d cases, but for 2d cases (systolic arrays), it's difficult to make the scripting really work without hard-coding a bunch of corner cases.
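For reference, the 1-D case the parent describes really is pleasant enough. A minimal sketch of a generic-length register chain built with a for-generate (the entity, port, and signal names here are my own invention for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical 1-D example: a delay chain of N flip-flop stages,
-- parameterized entirely through a generic and a single for-generate.
entity delay_chain is
  generic (N : positive := 4);
  port (clk : in  std_logic;
        d   : in  std_logic;
        q   : out std_logic);
end entity;

architecture rtl of delay_chain is
  signal taps : std_logic_vector(0 to N);  -- taps(0) is the input
begin
  taps(0) <= d;

  gen_stages : for i in 0 to N-1 generate
    stage : process (clk)
    begin
      if rising_edge(clk) then
        taps(i+1) <= taps(i);  -- one flip-flop per generate iteration
      end if;
    end process;
  end generate;

  q <= taps(N);
end architecture;
```

The 2-D systolic-array case is where this style starts to strain, since the boundary rows and columns usually need their own wiring.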
Another example: barrel shifters are impossible to parameterize, because you need to hardcode the mux cases (see the Xilinx datasheet[1]). That's kind of insane, considering that bit-shifting is a very common and basic operation. It's particularly problematic if you're trying to describe something without resorting to platform-specific instantiation blocks.
It's a little frustrating that VHDL doesn't have a higher-level standard instantiation library, because you're chained to a platform the moment you start doing anything other than basic flip-flops.
Well, I suppose you could say assembly is the software analog of writing very structural HDL.
I haven't done it myself, but generates can be nested. You'd have to check whether your tools support it, though.
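To make the nesting concrete, here's a minimal sketch of what a 2-D array built from nested for-generates might look like (the entity, generics, and the AND-plane contents are invented for illustration; a real systolic array would instantiate a component here instead):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical 2-D example: a ROWS x COLS AND plane described with one
-- for-generate nested inside another.
entity and_grid is
  generic (ROWS : positive := 4;
           COLS : positive := 4);
  port (a : in  std_logic_vector(ROWS-1 downto 0);
        b : in  std_logic_vector(COLS-1 downto 0);
        y : out std_logic_vector(ROWS*COLS-1 downto 0));
end entity;

architecture rtl of and_grid is
begin
  gen_rows : for r in 0 to ROWS-1 generate
    gen_cols : for c in 0 to COLS-1 generate
      -- each (r, c) cell is one AND gate; a systolic array would
      -- place a PE instance and its neighbor wiring here
      y(r*COLS + c) <= a(r) and b(c);
    end generate;
  end generate;
end architecture;
```

Edge cells (first row, last column, etc.) can be peeled off with if-generates inside the same loops, which is where the corner-case hard-coding the parent mentions tends to creep in.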
With the xilinx example, I'm not sure what you mean. Is it choosing to do multi-level muxing vs a naive muxing solution? I'd start by just writing a simple behavioral version, and only if that didn't meet performance constraints would I bother doing anything structural about it.
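In the spirit of "write the behavioral version first": a minimal sketch of a behavioral rotate, leaving the mux tree entirely to the synthesizer (the entity and port names are my own, and the shift-amount width assumes a 32-bit word):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical behavioral barrel shifter (left rotate). No mux cases
-- are hardcoded; synthesis infers the whole network from rotate_left.
entity barrel_shift is
  generic (W : positive := 32);
  port (d     : in  std_logic_vector(W-1 downto 0);
        shamt : in  unsigned(4 downto 0);  -- assumes W = 32
        q     : out std_logic_vector(W-1 downto 0));
end entity;

architecture rtl of barrel_shift is
begin
  q <= std_logic_vector(rotate_left(unsigned(d), to_integer(shamt)));
end architecture;
```

Only if this missed timing would I reach for an explicit multi-level mux structure or a platform-specific primitive.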
It's late and maybe I'm just not thinking it through. I'll take a stab at some of this and maybe it'll be clearer to me.
Thanks for the course pointer. It seems there's no new offering of this course in the near future. Do you have any pointers to similar online courses related to VLSI CAD tools?
> given how difficult hardware synthesis with existing HDLs is right now I don't know if that'll happen anytime soon
Synthesis sometimes feels like a great blind spot in the hierarchy of abstractions. It is hard, critical, and yet appears to be developed only by niche players.
> the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies
IMO it has everything to do with the open-ended nature of synthesis. When you compile software, it's very procedural: you have a linear chain or network of paths, you construct it, and you improve on it where you can. Hardware, on the other hand: you have a cloud described in RTL, and you construct it. That's not hard. But when you get to improving it? It's like the packing problem with N elements, where to make things better every element can be substituted with a variety of different shapes!
I think the issue here is that Synthesis and Place and Route tools are squarely in the Computer Science Algorithms domain. Hardware engineers in general don't have the background for that kind of work.
And software engineers don't crossover to the hardware side often.
So the people suffering with the "slow" tools etc, are usually not in a very good position to do anything about it.
But really, the slow part is place and route. If you don't over-constrain your design, it can actually go pretty quickly. It's when timing is tight, and first-pass guesses aren't coming up with a satisfactory solution, that things slow down.
IIRC, most of the time these just end up boiling down to 3-SAT, which will make the average computer science person throw up their hands and say "it's NP-hard, you can't make it more efficient" (even though P vs NP is still an open problem).
I think there's one EE/CE professor at my university working on the SAT solvers that form the crux of the optimizers in most of these tools, but at the end of the day it's still a bunch of heuristics that, worst case, run in O(2^n) time.
> And the fact that logic takes so long to synthesize and simulate really has little to do with Verilog's deficiencies; if anything it's a limitation of the register-transfer level abstraction that's currently used to design digital hardware.
If that's the case, then why are Chisel and Bluespec much faster to simulate despite having less investment in tooling?
> HDL code is describing the blueprint of a house textually
... combined with features for simulating dynamic loads, say by modeling a party full of people jumping around.
From what I've seen of SystemC, I thought it was basically the same idea but with a different syntax - the entirety is available for simulation, but only a subset of the language constructs are synthesizable.
Much agreed. Verilog/VHDL are simply not programming languages. They are Hardware Description Languages. They describe parallel components that will actually be "wired" together.
My advice, if you are a programmer or computer scientist and you get tasked with writing Verilog or VHDL "code": you need to be able to explain the difference -- you've just been offered a job as a hardware designer and engineer. Having spent 5 years doing hardware engineering and a lot longer doing software consulting, I can say it's an entirely different set of skills, if not an entirely different career path.