kangs's comments

hello faster horses

it's actually just trust but verify type stuff:

- verifying isn't asking "is it correct?" - verifying is "run requests.get, does it return blah or no?"

just like with humans but usually for different reasons and with slightly different types of failures.
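A minimal sketch of that kind of verification: the check is mechanical ("call it, compare the observable output"), not a judgment call about whether the code looks right. `check_response` and the expected response shape here are hypothetical examples, not any real API.

```python
# Verify by running, not by asking "is it correct?".
# check_response and the "id" field are made-up stand-ins for whatever
# observable behavior you actually care about.

def check_response(status: int, body: dict) -> bool:
    # mechanical check: did the request succeed and does the field exist?
    return status == 200 and "id" in body

# verifying means executing and comparing, not reasoning about intent:
assert check_response(200, {"id": 42})
assert not check_response(500, {})
```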

The interesting part, perhaps, is that verifying pretty much always involves code, and code is great pre-compacted context for humans and machines alike. Ever tried to get an LLM to do a visual thing? why is the couch in the wrong spot, with a weird color?

if you make the LLM write a program that generates the image (eg a game engine picture, or a 3d render), you can enforce the rules with code it can also write for you - now the couch color uses a hex code and it's placed at the right coordinates, every time.
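A tiny sketch of the idea: instead of asking a model for pixels, have it emit a scene description that code can validate before rendering. `Couch`, the room bounds, and the hex rule below are hypothetical, not any real engine's API.

```python
# Rules enforced by code, so they hold on every render.
# Couch, the 0..10 room bounds, and the hex-color rule are made-up examples.
import re
from dataclasses import dataclass

HEX = re.compile(r"^#[0-9a-fA-F]{6}$")

@dataclass
class Couch:
    color: str   # must be a hex code, not "a weird color"
    x: float
    y: float

def validate(c: Couch) -> None:
    if not HEX.match(c.color):
        raise ValueError(f"color must be a hex code, got {c.color!r}")
    if not (0 <= c.x <= 10 and 0 <= c.y <= 10):
        raise ValueError("couch is in the wrong spot: outside the room bounds")

validate(Couch(color="#8B4513", x=3.0, y=4.5))  # passes silently
```

The LLM can write both the scene and the validator; the checks, not the prompt, are what keep the couch in place.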


just run Bazzite already


hello b/Googler :)


or, solid state batteries, graphene, fusion, quantum computers, agi =)


bicycle weight ratios are completely different from even motorcycles': a bike wheel can quickly become heavier than the frame, for example.


Google even has specially signed firmware that lets you root the device and unlock anything that doesn't rely on the passcode - Secure Boot passing and all. I can't imagine the NSA doesn't have it. After that you just gotta crack the usually very simple passcode. Wouldn't be surprised if that's what Cellebrite has lol.



That has no bearing on the truth of what I wrote.


hey you aren't supposed to notice :)


you seem to believe that LLMs are a neutral engine with bias applied on top. That's not the case: the majority of the bias is in the model training data itself.

just like humans, actually. e.g.: grow up in a world where chopping one of people's fingers off every decade is normal and happens to everyone, and most will think it's fine, that it's how you keep the gods calm, and some crazy stuff like that.

right now, news, Reddit, Wikipedia, etc. have a strong authoritarian and progressive bias, and so do the models - and a lot of the humans who consume daily news, TikTok, and Instagram.


No, that's not what I believe. I said it was one option, the other option being that the bias is in the training data.


