
I found this to be a pretty confusing article. I get that they are analyzing the new node, which is great, but the editorializing seems a bit premature to me.

I also don't think the author understood the TSMC presentation. TSMC clearly said it used a "model" of a typical SoC of 60% logic, 30% SRAM, and 10% analog. Then they said that for each category you could expect 1.8x, 1.35x, and 1.2x of shrink, respectively. If you do the math, that means the overall shrink for a 'typical' SoC that conforms to their model would be approximately 1.57x.

That Apple achieved 1.49x would suggest they got about 95% of the theoretical process shrink.
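For anyone who wants to check the arithmetic, here's a back-of-envelope sketch (the 60/30/10 mix is TSMC's model, not Apple's actual die layout):

```python
# Overall shrink for a mixed die is the area-weighted harmonic mean of the
# per-category shrink factors, since area scales as 1/shrink.
mix = {"logic": 0.60, "sram": 0.30, "analog": 0.10}   # TSMC's "typical SoC" model
shrink = {"logic": 1.8, "sram": 1.35, "analog": 1.2}  # claimed N5-vs-N7 scaling

new_area = sum(frac / shrink[k] for k, frac in mix.items())
overall = 1 / new_area
print(f"theoretical SoC shrink: {overall:.2f}x")            # ~1.57x
print(f"Apple's 1.49x as a fraction: {1.49 / overall:.0%}")  # ~95%
```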

Then there is the cost per die, and thus cost per transistor, discussion. It is important to remember that this is likely the most expensive these wafers will ever be. During a process node's life-cycle, the cost per wafer is set initially to capture "early movers" (who value density over cost). Much like any product where competition will emerge later, there is a window early on to recapture value, which can pay back your R&D and capital equipment investments. As a result, the vendor sets the price as high as possible to make that repayment happen as quickly as possible. Once paid back, the price provides profit as long as it can be supported in the presence of other competitors (in this case, I would guess that role is played by Samsung). The GSA used to publish price surveys of wafers on various nodes over time, but they don't seem to do that any more. So the cost per transistor will go down from this point, but how much depends on how much margin is in the current wafer price.

So I agree that the cost per transistor is not going down as quickly as it has in the past, and it's possible that this node may not get as low as the previous node. I'm curious how it compares when you look at 7nm introduction price per transistor vs today's price per transistor, and what the same ramp on the 5nm node would mean.



Apple is also probably using more SRAM than the "typical SoC"; for example, the A13 has much more cache than, say, the Snapdragon 855. I don't know the relative die areas, but that would let you make a decent estimate.


TSMC has a monopoly on the highest-performance processes, so the price won't drop till there is competition. Seeing how the competition is falling behind, that will be a while...


I was referring to this: https://www.anandtech.com/show/15538/samsung-starts-mass-pro...

Which has Samsung in production of their 7nm node this year as well.


Samsung's process seems to be consistently DOA... by the time it is stabilized, it's already behind everyone else.

Consequently, almost all of Nvidia's current troubles stem from being unable to do all of their fabrication at TSMC and from the very poor yields on Samsung 8nm.

Nvidia recently canceled the 2x-RAM variants of their 3070 and 3080 cards (and not because of insufficient GDDR6X; only the 3090 takes that), has stopped shipments of GPUs for new cards (they are only continuing to ship to fill existing orders), and pushed the 3070 release date until after the RDNA2 announcement. Most of this stems from Samsung's poor yields, no matter how inexpensively Samsung is selling those wafers.

This also isn't the first time Nvidia used the Samsung footgun.


The 3080 uses GDDR6X too. Yes, the 20GB 3080 is cancelled, but there is a new rumor about a 12GB 3080 Ti. So maybe GDDR6X scarcity is the culprit, not Samsung's process.


Surely the RAM variants use the same chips. Why cancel those because of yield? I wouldn't expect total demand to change by a significant amount...


If the GPU die yield was really that much of a bottleneck wouldn’t they push even harder to launch the VRAM variants to boost margins?


The 3080 also uses GDDR6X. The 3080/3090 shortage is because of GDDR6X availability, from what I've been reading. We'll know when we see 3070 sales/deliveries.


If I understand correctly, the current GPU supply shortage for Nvidia is caused by Samsung, and they are forced to return to TSMC. Meanwhile, TSMC is supplying AMD and Apple just fine, so is Samsung really a viable alternative?


TSMC is about to win on packaging too

https://www.digitimes.com/news/a20201026VL200.html


And the economic model now requires leading-edge fabs to capture that value over a longer period of time. What used to be two years will lengthen to closer to three.

>I'm curious how it compares when you look at 7nm introduction price per transistor vs todays price per transistor. And if you get the same ramp with the 5nm node what that would mean.

First-gen N7 was ~$10K+ per wafer, while N5 is around ~$13K with higher yield compared to N7 at the same stage. So N5 should still be cheaper per transistor.
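A rough sanity check of that claim, using the wafer prices above and an assumed ~1.8x N5/N7 logic-density ratio (real yields and logic/SRAM mixes would move the number):

```python
# Back-of-envelope cost per transistor: wafer price divided by transistors
# per wafer. Same wafer area, so transistors per wafer scale with density.
n7_wafer, n5_wafer = 10_000, 13_000  # USD, figures from the comment above
density_ratio = 1.8                  # assumed N5/N7 density (logic-heavy die)

relative_cost = (n5_wafer / n7_wafer) / density_ratio
print(f"N5 cost per transistor ~ {relative_cost:.2f}x of N7's")  # ~0.72x
```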


The article quotes ~$17K for N5; do you have another price reference, or was it just a typo?


The column is explicitly labelled "effective vs theoretical" though. There is nothing misleading there.


This is where I believe the author goes astray: "Despite TSMC claiming a 1.8x shrink for N5, Apple only achieves a 1.49x shrink."

TSMC does NOT say they have a 1.8x shrink for N5; they say for LOGIC you can get that, but for SRAM and analog the results are 1.35x and 1.2x. Had they summed those together for a "typical SoC," which they also discuss (and one presumes that Apple makes typical SoCs), then the "theoretical" shrink is about 1.57x for SoCs.

The challenge is that whereas at one time the node size was that of a "gate" (which could be 4 transistors), in a marketing race for smaller numbers fabs started emphasizing "feature" size.

Because of this change, the "theoretical shrink" is a function of what kinds of circuits you're putting down. Pure logic? You get one number. Two gates connected together as a flip-flop? Another number. A voltage regulator or ADC filter? Yet another.

So the analysis the author claims to do can ONLY be done if you know what percentage of the part you are making on the new process is which kind of circuit. I am under the impression that they missed that.


Wait, when node sizes were based on a gate, did that mean a logic gate? I always thought it was the transistor gate width.


My experience with the marketing speak around process nodes:

1) In the way back times, (think Intel 8080A) the complexity of the chips was advertised in "logic gates". More gates = more impressive chip.

2) But logic gates weren't equivalent from one process to another, and so it switched from "logic gates" to "transistors." More transistors => more impressive chip. (this is when I left Intel for Sun Microsystems)

3) But not all transistors are created equal, and there were things (like copper metal layers) that made chips better even if it meant you couldn't fit as many transistors, so "line size" was what was important. Smaller line size => more impressive chip.

4) But now people had redesigned transistors so that they could be packed more densely and the limiting factor was how much silicon you needed for the gate (NMOS/CMOS) and since that wasn't a whole transistor, it was just a "feature" of the transistor, "feature size" became the new marketing term. Feature size was measured in nanometers and so the smaller nanometers implied more features per unit area.

It has all evolved over time such that any sort of comparative analysis between processes hardly seems to make sense at all these days.

These days, much like the TSMC presentation that is excerpted in the original article, semiconductor fabs rely on comparative measures like "same stuff would be size <x> on this process vs size <y> on the previous process." All the really interesting parameters to me are things like how that affects leakage (thus idle power) and voltage thresholds (thus idle power and maximum frequencies).

I'd love it if there was some sort of SI unit you could demand which would give you a better comparison metric but I don't think we'll see that. Everybody wants to be "the best" and that is most easily achieved when you can dynamically define the metric for "best."


Thank you for this elucidating comment.



Was I the only one disappointed in the decreasing die size? In our solid-state past, it was about cramming more in there. All this cost-cutting leaving gaps... it wouldn't pass Jobs' fish tank test.




