I don't agree.
If it's non-trivial, prototyping via FPGA means I don't have the more advanced verification tools such as UVM.
The ability to perform constrained-random verification is only workable via UVM or something like it, and for large designs that is arguably the best verification methodology. Without visibility into the design to observe and record the possible corner cases of transactions, you can't be assured of functional coverage.
While FPGAs can run a lot more transactions, the ability to observe coverage of them is limited.
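The constrained-random-plus-coverage idea can be sketched in software. This is a toy illustration of the concept only, not UVM itself (real flows use SystemVerilog constraint blocks and covergroups); all names and constraints here are invented:

```python
import random

# Toy sketch of constrained-random stimulus with functional-coverage
# tracking. BURST_LENGTHS, the address window, and the transaction
# fields are all invented for illustration.

BURST_LENGTHS = [1, 4, 8, 16]

def random_transaction(rng):
    """Generate one bus transaction under simple constraints."""
    return {
        "kind": rng.choice(["read", "write"]),
        "burst": rng.choice(BURST_LENGTHS),
        # Constraint: addresses are word-aligned and in a legal window.
        "addr": rng.randrange(0x1000, 0x2000, 4),
    }

def run(num_txns=1000, seed=1):
    rng = random.Random(seed)
    # Coverage bins: which (kind, burst) corner cases did we actually hit?
    coverage = {(k, b): 0 for k in ("read", "write") for b in BURST_LENGTHS}
    for _ in range(num_txns):
        txn = random_transaction(rng)
        coverage[(txn["kind"], txn["burst"])] += 1
    hit = sum(1 for count in coverage.values() if count > 0)
    return hit / len(coverage)  # fraction of bins covered

print(run())  # fraction of coverage bins hit by the random stimulus
```

The point is the coverage dictionary: randomisation alone isn't enough; you need to record which corner cases were actually exercised, which is exactly the visibility that's hard to get on an FPGA.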
I have worked on multiple SoCs for Qualcomm, Canon and Freescale. FPGAs don't play a role in any SoC verification that I've worked on.
This was also my experience working on SoCs at Broadcom, where we didn't really use FPGAs at all.
But at another employer that did not work on consumer designs, I did use a lot of large FPGAs in final shipped products, and in those cases we did some of our heavy testing and iterating on the real FPGA(s). For example I built a version of the FPGA with pseudo-random data generation to test an interface with another FPGA. When I found a case that failed I could then reproduce it in simulation much more quickly.
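The standard hardware building block for that kind of reproducible pseudo-random data generation is an LFSR (linear-feedback shift register): the transmitting FPGA runs one, and the checker runs an identical copy so it knows what to expect. Here's a software model of a 16-bit Galois LFSR; the tap constant is one well-known maximal-length choice, not necessarily what was used in that project:

```python
def lfsr16(state):
    """One step of a 16-bit Galois LFSR with tap mask 0xB400
    (taps at bits 16, 14, 13, 11 -- a maximal-length polynomial)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

# A maximal-length 16-bit LFSR visits every non-zero state exactly
# once before repeating, so the sequence period is 2**16 - 1.
seed = 0xACE1
state = seed
period = 0
while True:
    state = lfsr16(state)
    period += 1
    if state == seed:
        break
print(period)  # 65535
```

In hardware this is just a shift register and a few XOR gates, which is why it's cheap to drop into an FPGA at full line rate, and the same seed reproduces the exact failing sequence in simulation.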
That employer also built some ASIC designs and I remember some discussions about using FPGA prototyping for the ASICs to speed up verification or get a first prototype board built faster that would later get redesigned with the final ASIC. I don't know if they ever went down that route but it would not surprise me if they did. These were $20k PCB boards once fully assembled, and integration of the overall system was often a bigger stumbling block than any single digital design.
There are a lot of different hardware design niches so I'm sure there are many other cases.
All my information is also about 10 years out of date.
This reflects my experience. Many/most of the "nontrivial" issues nowadays are rooted in physical issues, not logical issues. And in those cases, simulation is often superior to dealing with the FPGA software layer. FWIW, I asked my co-founder, formerly at Intel, and he said that FPGA involvement was "almost zero".
That's a false dichotomy -- you can do FPGA verification in addition to simulation-based verification. And yes, there are ASIC teams that have successfully done that.
The reasons are numerous. I already gave a few. I will give another. Once you have to integrate hard IP from other parties, you cannot synthesise it to an FPGA, which means you won't be able to run any FPGA verification with that IP in the design. You can get a behavioural model that works in simulation only. In fact, it is usually a requirement for hard IP to be delivered with a cycle-accurate model for simulation.
I'll give another reason. If you are verifying on an FPGA you will be running a lot faster than simulation. The Design Under Test requires test stimulus at the speed of the FPGA. That means you have to generate that stimulus at speed and then check all the outputs of the design against expected behaviour at speed. This means you have to create additional hardware to form the testbench around the design. That is a lot of additional work to gain verification speed, and none of it is reusable once the design is synthesised for ASIC.
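The shape of that extra testbench hardware is a stimulus generator plus a checker that compares the DUT's outputs against a predicted result, cycle by cycle. A toy software model of the structure (the DUT and the reference model here are trivial stand-ins; on an FPGA every piece of this loop would itself have to be synthesisable RTL):

```python
# Sketch of the "synthesisable testbench" pattern: generator, DUT,
# reference model, and checker all running in lockstep. The specific
# function (x*3 + 1 mod 256) is an arbitrary stand-in.

def golden_model(x):
    """Reference behaviour the checker predicts."""
    return (x * 3 + 1) & 0xFF

def dut(x):
    """Stand-in for the design under test."""
    return (x * 3 + 1) & 0xFF

def run_at_speed_check(num_cycles=256):
    """Drive stimulus and check outputs every cycle.
    Returns a list of (cycle, stimulus, expected, actual) mismatches."""
    mismatches = []
    for cycle in range(num_cycles):
        stimulus = cycle & 0xFF          # at-speed stimulus generation
        expected = golden_model(stimulus)
        actual = dut(stimulus)
        if actual != expected:           # at-speed output checking
            mismatches.append((cycle, stimulus, expected, actual))
    return mismatches

print(len(run_at_speed_check()))  # 0 -- no mismatches for this stand-in
```

In simulation the generator and checker are just ordinary testbench software; on an FPGA each of them is additional hardware you design, debug, and then throw away at tapeout.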
I can go on and on about this stuff. Maybe there are reasons for a particular product, but I am talking about general ASIC SoC work. I've got nothing against FPGAs. I am working on FPGAs right now. But real ASIC work uses simulation first and foremost. It is a dominant part of the design flow and FPGA validation just isn't. On an "Ask HN", you would be leading a newbie the wrong way to point to FPGAs. It is not done a lot.
As another veteran in the ASIC industry: we are using FPGAs to verify billion transistor SOCs before taping out, using PCBs that have 20 or more of the largest Xilinx or Altera FPGAs.
It's almost pointless to make the FPGA run the same tests as in simulation. What you really want is to run things that you could never run in simulation. For example: boot up the SOC until you see an Android login screen on your LCD panel.
A chip will simply not tape out before these kinds of milestones have been met, and, yes, bugs have been found and fixed by doing this.
The hard macro IP 'problem' can be solved by using an FPGA equivalent. Who cares that, say, a memory controller isn't 100% cycle accurate? It's not as if that makes it any less useful in feeding the units that simply need data.
I find the above pair of comments really interesting. I'm guessing there are parallels with differences of opinion and approach in other areas of engineering. There are always reasons for the differences, and those are usually rooted in more than just opinion or dogma.
In this case, I'd guess it's got a lot to do with cost vs. relevance of the simulation. If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because it doesn't capture a whole host of physical effects at the bleeding edge. OTOH, for simpler designs on older processes, one might get a lot of less-formal verification done by demonstrating functionality on an FPGA. But this is speculation on my part.
"If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because it doesn't capture a whole host of physical effects at the bleeding edge."
Exactly.
When you verify a design via an FPGA you are essentially only testing the RTL for correctness. Once you synthesise for FPGA rather than the ASIC process, you diverge. In ASIC synthesis I have a lot more ability to meet timing constraints.
So given that FPGA validation only proves the RTL is working, ASIC projects don't focus on FPGA. We know we have to get a back-annotated gate-level simulation test suite passing. This is a major milestone for any SoC project. So planning backwards from that point, we focus on building simulation testbenches that can work on both gate level and RTL.
I am not saying FPGAs are useless but they are not a major part of SoC work for a reason. Gate level simulation is a crucial part of the SoC design flow. All back end work is.
Let me try to summarize part of this: When you're building an ASIC, you have to care about the design at the transistor level because you're going for maximum density, maximum speed, high volume, and economies of scale. When you're building an FPGA, you are only allowed to care about the gates, which is one abstraction level higher than transistors.
In an FPGA, you cannot control individual transistors. (FPGAs build "gates" from transistors in a fundamentally different way than ASICs do, because the gates have to be reprogrammable.) And that's okay because FPGA designs aren't about the highest possible speed, highest density, highest volumes or lowest volume cost.
Nobody in their right mind would produce an ASIC without going through simulation as a form of validation. For anything non-trivial, that means FPGA.