
What is non-unlockable secure boot?


State of the art SoC with silicon baked private keys laser fused for production configuration


"Silicon baked private keys" are really vague buzzwords. That's pretty much standard and it can be implemented in a variety of ways.

Not sure why you're calling this non-unlockable. Everything is unlockable with enough money.


You're not going to be decapping and recapping SoCs at scale... Lots of eFuse implementations are just writable ROMs with erasing lines disabled, and people aren't taking out particle accelerators to wipe them.


Whatever silly OTP implementation is involved is 99.9% irrelevant to unlocking a phone, and OTP for root-of-trust has been in use in phones for 15+ years anyway.

Maybe we use some hardware-level trick to get at some protected firmware initially to reverse engineer it, but almost universally it's the code that reads the state of the fuses (or something after it) that actually gets exploited. That's changing, too, but in general very slowly and at the pace of hardware manufacturers learning how to make software (aka glacial, with a few notable exceptions).
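
To make that concrete, here is a toy, host-runnable C sketch of the kind of logic that sits between the fuses and the rest of the boot chain. Everything in it (names, fuse layout, the stubbed signature check) is made up for illustration, not any vendor's real boot ROM; the point is just that the OTP bit is trivial to read, and it's the surrounding parsing/verification code and its error paths that attacks usually land on.

    /* Toy sketch only: names and fuse layout are invented, and the
     * signature check is stubbed out. Not any vendor's real boot ROM. */
    #include <stdint.h>
    #include <stdio.h>

    #define FUSE_SECURE_BOOT_EN (1u << 0)

    /* Stand-in for real signature verification (e.g. RSA/ECDSA against a
     * key hash burned into the fuses). */
    static int verify_image(const uint8_t *img, size_t len,
                            const uint8_t *fused_key_hash)
    {
        (void)img; (void)len; (void)fused_key_hash;
        return 1;  /* pretend the signature checks out */
    }

    int main(void)
    {
        uint32_t fuse_bank = FUSE_SECURE_BOOT_EN;  /* simulated OTP word */
        uint8_t  image[64] = {0};                  /* simulated next-stage image */
        uint8_t  key_hash[32] = {0};               /* simulated fused key hash */

        /* Reading the fuse bit is the easy part; the branchy logic around
         * it is the actual attack surface (glitching, parser bugs, etc.). */
        if ((fuse_bank & FUSE_SECURE_BOOT_EN) &&
            !verify_image(image, sizeof image, key_hash)) {
            puts("refusing to boot unsigned image");
            return 1;
        }
        puts("jumping to next stage");  /* a real boot ROM would jump here */
        return 0;
    }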


> A tax on the mistaken belief that NVidia has a monopoly on putting transistors in a particular configuration, which they obviously don't

NVIDIA doesn't place transistors in particular configurations. Foundries do that for them. And it is currently common sense that the software is the moat, not the hardware design.

Good luck changing the ecosystem to use AMD.


> that the software is the moat, not the hardware design.

For inference that’s hardly relevant, though?

For training it's not exactly insurmountable either.


On huge GPU clusters running inference, GPU utilization is key.

Imagine you have 1 million GPUs and you achieve 99% of theoretical performance across the system while running inference. That still means the equivalent of 10k GPUs is basically idle while drawing power. You could try to identify which ones are idle, but you won't find them, because utilization is a dynamic process: all GPUs are under load, but not all of them run at 100% performance, because interconnects and networking don't deliver data fast enough, so the network becomes the bottleneck.
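
As trivial arithmetic (using the example numbers above; they're illustrative, not measurements):

    #include <stdio.h>

    int main(void)
    {
        const double total_gpus  = 1e6;   /* example fleet size from above */
        const double utilization = 0.99;  /* fraction of theoretical perf achieved */

        /* The missing 1% is spread across the whole fleet, but it wastes as
         * much capacity as this many GPUs sitting completely idle. */
        double idle_equivalent = total_gpus * (1.0 - utilization);
        printf("idle-equivalent GPUs: %.0f\n", idle_equivalent);  /* 10000 */
        return 0;
    }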

So what you need is a very smart process for routing computation across the whole cluster. That is purely a SW issue, not a HW issue. This is the SW Nvidia has been working on for years and where AMD is years behind.
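
As a very rough illustration of what "routing" means here, a toy greedy dispatcher that always sends work to the least-loaded GPU. Real cluster schedulers additionally have to deal with batching, model and memory placement, and interconnect topology, which is where the genuinely hard software lives; this is just the skeleton of the idea, with made-up request costs.

    #include <stdio.h>

    #define NUM_GPUS 8

    /* Pick the least-loaded GPU (greedy load balancing). */
    static int pick_gpu(const double load[NUM_GPUS])
    {
        int best = 0;
        for (int i = 1; i < NUM_GPUS; i++)
            if (load[i] < load[best])
                best = i;
        return best;
    }

    int main(void)
    {
        double load[NUM_GPUS] = {0};
        /* Made-up per-request costs standing in for inference work. */
        double cost[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
        int n = (int)(sizeof cost / sizeof cost[0]);

        for (int i = 0; i < n; i++)
            load[pick_gpu(load)] += cost[i];

        for (int i = 0; i < NUM_GPUS; i++)
            printf("gpu %d load %.1f\n", i, load[i]);
        return 0;
    }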

This is also why Jensen is absolutely right to say that competitors could offer their chips for free: Nvidia's edge in TCO is the idea of "one giant GPU", i.e. the SW and networking that allow the highest utilization of a data center. You can't build a single GPU the size of 1 million GPUs, so you have to solve the utilization problem of a network of GPUs.

In the real world utilization rates are way below 100%, so every extra % of utilization is worth far more than the price of individual GPUs. The idea is that the company providing 2-3x higher utilization can easily ask for something like 5x higher pricing per chip and still deliver a better TCO.
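
A back-of-the-envelope version of that TCO argument. Every number below is a made-up assumption, chosen only to show the shape of the calculation; whether the conclusion holds in practice depends entirely on how large the non-chip costs (power, networking, facility) really are.

    #include <stdio.h>

    int main(void)
    {
        /* Per-GPU costs in arbitrary units relative to the cheaper chip. */
        const double chip_price_a = 5.0, chip_price_b = 1.0;  /* 5x price gap */
        const double util_a = 0.75,      util_b = 0.30;       /* 2.5x utilization gap */
        const double slot_cost = 3.0;  /* assumed power/network/facility per GPU slot */

        /* Cost per unit of delivered work = (chip + slot cost) / utilization. */
        double cost_a = (chip_price_a + slot_cost) / util_a;   /* ~10.7 */
        double cost_b = (chip_price_b + slot_cost) / util_b;   /* ~13.3 */

        printf("cost per unit of work: A=%.2f  B=%.2f\n", cost_a, cost_b);
        /* Under these assumptions the 5x-priced chip still wins on TCO,
         * but only because the non-chip slot costs dominate. */
        return 0;
    }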


GPUs are also used to speed up inference (the math is virtually the same). You think your ChatGPT queries are running on x86 servers?


But do you think that, with NVidia's profit margins, others won't be offering competing chips? Google already has their own, for example.

From that perspective, the notion that NVidia will own this AI future while others such as AMD and Intel stand by would be silly.

I'm already surprised it took this long. The NVidia moat might be software, but not anything that warrants this kind of margin at this scale. It is likely there will be strong price competition on hardware for inference.


> You think your ChatGPT queries are running on x86 servers

What makes you think that? Or are all non-Nvidia GPUs x86?


...while selling you crap you don't need because they follow you everywhere.


> rather than to wave at the street (which is what I used to do to get a ride.)

Ummm, taxis aren't everywhere the way they are in NYC or something. Broadly speaking, Uber will pick you up at an arbitrary place and take you anywhere.


It definitely exists in Amsterdam, no? When I visit I just tap my card/phone on NFC-enabled card readers (which exist both inside and outside the stations).


Train your own brain first. Or good luck with dementia.


What are the benefits of those videos?


What are the benefits of producing any video?


What are the benefits of this comment?


Challenging the value of AI-generated "art"


Then the purpose of those videos is to challenge the value of non-AI-generated "art"

(half sarcastic, but you could make the argument that most art has no benefit other than to the person who made it)


Nice! I enjoyed this sub thread. I'm not sure what I conclude but I enjoyed thinking about this.


Android is a Linux distribution. You can absolutely build C/C++ binaries and run them directly, like fairly regular Linux programs.

Executables, daemons, CLIs, sockets, whatever you want. Rust is no problem. I have even run Python and Node.js.
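
For example, a plain C program cross-compiled with the NDK's clang wrapper runs fine from a shell on the device. The paths and API level in the comment are just an example setup; adjust for your NDK install and device ABI.

    /* hello.c -- ordinary C, nothing Android-specific.
     *
     * Example cross-compile and run (paths/API level are illustrative):
     *   $NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android29-clang \
     *       -static hello.c -o hello
     *   adb push hello /data/local/tmp/
     *   adb shell /data/local/tmp/hello
     */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from a native binary on Android\n");
        return 0;
    }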


I run JetBrains remote development in my chroot (the backend). It's important not to take Termux seriously and to go the chroot way.


I do exactly the same!


What are you implying about people that go to Berghain?


They don't look like they can solve some obscure optimisation challenge.


This is very cool

