Hacker News

> "If you read nothing else, read this: do not ever use an AI or the internet for medical advice. Go to a doctor."

Yeah, no shit, Sherlock? I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine that guesses its outputs based on whatever text it has been fed" to freaking diagnose yourself. Who would have thought that an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now), plus tons of experience, is more trustworthy, reliable, and qualified to deal with something as serious as the human body? Plus, there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only ever given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this, at least he did in "Zero to One".





To be honest, I am pretty embarrassed about the whole thing, but I figured I'd post my story because of that. There are lots of people who misdiagnose themselves doing something stupid on the internet (or teenagers who kill themselves because they fell in love with some waifu LLM), but you never hear about it because they either died or were too embarrassed to talk about it. Better to be transparent that I did something stupid, so that hopefully someone else reads about it and doesn't do the same thing I did.

That's my feeling, but I have a friend who is an MD and an enthusiastic supporter of people getting medical info from LLMs...

Getting medical info from LLMs is great and can save your life.

Blindly trusting medical info from LLMs is idiotic and can kill you.

Pretty much any tool will be dangerous if misused.


> Getting medical info from LLMs is great and can save your life.

No, it's not - LLMs are not medical experts. Nor are they experts in pretty much anything. They just statistically extrapolate the next token. If you fed them anti-vaxxer information, they'd start recommending you not get vaccinated so as not to get autism, or something like that. We should not use them as experts on anything, much less for medical information.
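To make the "next token" claim concrete, here's a toy sketch (my own illustration, not anything a real LLM literally runs) of statistical next-token prediction using a bigram frequency table. The point it demonstrates is the one above: the model simply continues with whatever is most common in its training text, so skewed input yields skewed output.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def next_token(counts, prev):
    """Greedily pick the statistically most frequent continuation."""
    followers = counts.get(prev)
    return followers.most_common(1)[0][0] if followers else None

# A deliberately skewed toy corpus: the majority claim dominates.
corpus = [
    "vaccines cause autism",
    "vaccines cause autism",
    "vaccines save lives",
]
model = train_bigrams(corpus)
print(next_token(model, "cause"))  # → autism (the majority continuation wins)
```

Real LLMs use learned neural networks over long contexts rather than raw bigram counts, but the failure mode is the same in kind: the output reflects the statistics of the training text, not any notion of medical truth.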

On the other hand, if you want to use them to generate large amounts of text and images, sure, go do that. They can do that, I guess.


> No, it's not - LLMs are not medical experts

So what? That doesn't mean they aren't very good at searching the internet, and they often surface useful information.

> If you fed them anti-vaxxer information, they'd start recommending you not get vaccinated so as not to get autism, or something like that.

I specifically addressed this in the very short comment you're replying to, but I will repeat:

>Blindly trusting medical info from LLMs is idiotic and can kill you.


What does being good at searching the internet (I'll let slide the obvious issue of hallucinated URLs) have to do with medical expertise? At all? At least with WebMD, the content didn't change every time you visited the same page.

What does medical expertise have to do with being able to provide useful information? AI has no expertise.

> I'll let slide the obvious issue of hallucinating URLs

That indeed was an issue, but I can't remember the last time I encountered it now that agentic web browsing is everywhere. I guess the cheapest models might still be affected?


Well, I guess since you haven't encountered it recently, it's fine... Everyone else is just using the "cheapest" models ;)

No matter how badly the AI gets things wrong, it's fine. You're not supposed to trust it.

No, it's bloody not fine! We were told these were "pocket PhDs". The investment so far could have paid for a manned mission to Mars and back! The massive advertising campaigns absolutely try to subliminally influence us into believing these are trustworthy tools, while at the same time keeping that sneaky small print about "checking the answers". What's the goddamn point then? If I can't trust them every single time, like I can with deterministic software, and they create more work for me, because now we have to double-check the shit they spit out, how are they actually useful, beyond trivial use cases like producing stupid brainrot videos?


