
I personally don’t care who has access to my health data, but I understand those who might.

Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.

Today I have a ChatGPT project with my health history as a system prompt, and it's been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.). ChatGPT was able to reference papers and do data analysis, which was pretty amazing, right from my phone (e.g., fitting my data to a model from a paper and spitting out a plot).
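
For the curious, the fitting step itself is mechanically simple. Here's a minimal sketch of the sort of thing it produced; the bell-curve model and all the numbers are made-up placeholders, not the actual paper's formula:

    import numpy as np
    from scipy.optimize import curve_fit
    import matplotlib.pyplot as plt

    # Hypothetical readings transcribed from the instrument screen
    t = np.array([0, 5, 10, 15, 20, 25, 30])             # time (s)
    y = np.array([0.0, 8.2, 14.1, 11.6, 7.3, 3.1, 0.4])  # flow (mL/s)

    # Placeholder model (Gaussian-shaped curve); a real paper would
    # specify its own functional form
    def model(t, a, mu, sigma):
        return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

    params, _ = curve_fit(model, t, y, p0=[10, 12, 6])
    plt.scatter(t, y, label="measured")
    plt.plot(t, model(t, *params), label="fitted model")
    plt.xlabel("time (s)")
    plt.ylabel("flow (mL/s)")
    plt.legend()
    plt.show()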



If an insight led you or a family member to be misdiagnosed and crippled, would you just say it's their fault, or your own? If it were a doctor, would you have the same opinion?


I understand enough about these systems to know they’re not perfect but I agree some people might be misled.

But I don’t know if I should be denied access because of those people.


I just had déjà vu; I'm sure I read this some months ago.

Did you write this exact comment before?


Had a look and it does not show up anywhere else: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


> But I don’t know if I should be denied access because of those people.

That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licenced to have this kind of access, right?


Depends. If you're talking about a free online test I can take to prove I have basic critical thinking skills, maybe, but that's still a slippery slope. As a legal adult with the right to consent to all sorts of things, I shouldn't have to prove my competence to someone else's satisfaction before I'm allowed autonomy to make my own personal decisions.

If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitution, and alcohol.


Yes, the threshold for restricting freedom should be harm to others, not harm to oneself.


Are we at the level of needing a license to read a medical textbook too?


A medical textbook doesn't engage in trying to diagnose you.


So just diagnostic manuals then?


A diagnostic manual doesn't engage in trying to diagnose you.


If they pepper it with warnings and add safeguards, then I'm fine.

I think they can design it to minimize misinformation or at least blind trust.


People are very good at ignoring warnings; I see it all the time.

There's no way to design it to minimise misinformation; the "ground truth" problem of LLM alignment is still unsolved.

The only system we currently have for verifying that people know what they are doing is licencing: you go to training, you are tested on whether you understood it, and then you are allowed to do the dangerous thing. Are you OK with needing this to access a tool that is potentially dangerous for the untrained?


There is no way to stop this at this point. Local and/or open models are capable enough that there is only a short window before attempts at restricting this kind of thing just lead to a proliferation of services outside the reach of whichever jurisdiction decides to regulate this.

If you want working regulation for this, it will need to focus on warnings and damage mitigation, not denying access.


> I personally don’t care who has access to my health data

There's a reason this data is heavily regulated. It's deeply intimate and gives others enormous leverage over you. This is also why the medical industry can charge premium rates while often providing poor service. Something as simple as knowing whether you need insulin to survive might seem harmless, but it creates an asymmetric power dynamic that can be exploited. And we know these companies will absolutely use this data to extract every possible gain.


The medical industry doesn't use your medical data to overcharge for insulin. That's more a question of your financial and insurance data.


> Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).

If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?

I'm mildly interested in this kind of thing, but I have severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.


In one case it was a urinary flow test (uroflowmetry). The results go to a lab and then the doctor gets the summary. I was able to diagnose the issue, learn its prevalence, etc., and educate myself about treatment and risks before seeing a doctor. Papers gave me distributions of flow by age, sex, etc., so I knew my result was out of range.

In another case I uploaded a CSV of CGM (continuous glucose monitor) data, had it analyzed, and identified trends (e.g. Saturday morning blood sugar spikes), all in five minutes on my phone.
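
For anyone curious, the trend-finding part is only a few lines of pandas. A minimal sketch, assuming a CSV with "timestamp" and "glucose_mg_dl" columns (the column names are my guess; real CGM exports vary):

    import pandas as pd

    # Assumed columns: "timestamp", "glucose_mg_dl" (real exports vary)
    df = pd.read_csv("cgm.csv", parse_dates=["timestamp"])
    df["day"] = df["timestamp"].dt.day_name()
    df["hour"] = df["timestamp"].dt.hour

    # Mean glucose per (day-of-week, hour) bucket; a Saturday-morning
    # spike shows up as elevated values in the Saturday row, hours ~6-11
    pivot = df.pivot_table(values="glucose_mg_dl", index="day",
                           columns="hour", aggfunc="mean")
    print(pivot.round(1))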


Are you giving your vitals to Sam Altman just like that? What instrument?


Yes, if it will help me and others. Again, I understand those who disagree.


But this is a 'cart before the horse' situation.

What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)?

There is a very real, near definite, chance that giving your, and others', health data to this company will hurt you and others.

Will you still hold this, "I personally don’t care who has access to my health data", position?


I'm definitely a privacy-first person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever? Like, I'm a fitness-conscious 30-something white male; what risk is there in my appendix operation being common knowledge, or that I need more iron or something?


Well maybe your health data picks up a heart condition you didn't know about.

Maybe you don't know about it yet, but your car insurance drops you due to the risk you'll have a cardiac event while driving. Their AI flagged you.

You need a new job but the same AI powers the HR screening and denies you because you'll cost more and might have health problems. You'd never know why.

You try to take out a second mortgage on the house to pay for expenses, just to get back on your feet, but the AI-powered risk officer judges your payback potential to be 0.001% below the target, and you are denied.

The previously treatable heart condition is now dire due to the additional stress of no job, no car, and no house, and the financial situation continues to erode.

You apply for assistance but are denied: the heart condition is treatable, so you're obviously capable of working and don't meet the standard.


Is your point 'I have no major health conditions, so nobody could be hurt by releasing health data'? If so, I don't think I need to point out the gap in this logic.


Actually, you maybe do. I am extremely privacy conscious, so I'm on your side on this one, but health data is a bit different from handing over all your email and purchase information to Google. In that scenario the danger is that the political or religious or whatever attributes I may have could be exposed to a future regime that considers what is acceptable today to no longer be so, and uses them to profile and ... whatever me, right? What actual danger is there from a government or a US tech company having my blood work details when I actually have nothing to hide, like drug abuse or alcohol etc.? Health data seems much less risky than my political views, religion, sexuality, minor crimes committed, and so on.


Something that is not yet known to be an indicator that you’re at risk of a condition.

Perhaps you were given some medication that is later proven harmful. Maybe there’s a sign in your blood test results that in future will strongly correlate with a condition that emerges in your 50s. Maybe a study will show that having no appendix correlates with later issues.

How confident are you that the data will never be used against you by future insurance, work screening, dating apps, immigration processes, etc.?


Absolutely not confident at all; thanks, i hadn’t considered some of those.


Depends on the data. If you had genetic data, they might run PGS (polygenic scores) and infer that even though you are healthy now, your genes might predispose you to something bad, and deny insurance based on that. If you truly do not see the dangers of health data access, remember that they could genotype you even when you came in just for ordinary bloodwork.


Fortunately I live in a country where one cannot be denied insurance, but yeah, I didn't think of these really. It was a bit of a “typed before I really thought” moment; maybe I should put the keyboard down ;).

It seems like an easy fix with legislation, at least outside the US, though. Mandatory insurance for all with reasonable banded rates, and maximum profit margins for insurers?


Isn't it more productive to regulate health insurance and make health a protected attribute of a person, like disability, etc.?


Not danger as in being kidnapped by government agents, danger in terms of being denied a job or insurance or anything else.

Your comment is extraordinarily naive.


I wasn't saying there is no danger, just that I didn't really think about it or see the problem; your sibling comments have changed that. Maybe I am naive, but I was asking genuinely, not stating that I think otherwise. Unfortunately I have family members in the US, and pretty much all of them happily sent their DNA off to various services, so I'm fucked either way at this point...


Good point; you did ask in good faith for an explanation, and I just fired off a quick comment that didn't serve to further the discussion!


When your health data can say you are trans, and the government decides to persecute you, then yes, it is important to maintain privacy.


I find it really, really, really hard to believe that there exists a person on this planet who:

1. Is transsexual but does not tell anybody, and it is also not blatantly obvious

2. Writes down in a health record that they are transsexual (instead of whatever sex they are now)

3. Has their medical records doxxed

4. Because of 3, and only because of 3, people find out that said person is transsexual

5. And then ... the government decides to persecute them

Let's be real, you're really stretching it here. You're talking about a 0.1% of a 0.1% of a 0.1% of a 0.1% of a 0.1% situation here.


If they're an athlete this situation could literally be happening right now.


@cyberpunk's question is pretty clear.

You could try to answer that instead of making up a straw man.

That's Dialogue 101, but some people still ignore it.


What if you were a woman seeking medical treatment for an ectopic pregnancy?

‘Being able to access people’s medical records is just another tool in law enforcement’s toolbox to prosecute people for stigmatized care.’

They are already using the legal system in order to force their way into your medical records to prosecute you under their new 'anti-abortion' rulings.

https://pennsylvaniaindependent.com/reproductive_rights/texa...


> i’m a fitness conscious 30 something white male

Right. So able bodied, and the gender and race least associated with violence from the state.

> being discriminated against for insurance if you have a drug habit

“Drug habit”: why choose an example that is often admonished as a personal failing? How about we say the same, but have something wholly, inarguably outside of your control, like race, be the discriminating factor?

Your medical records may include your DNA.

The US once had a racist legal principle called the "one drop rule": https://en.wikipedia.org/wiki/One-drop_rule

Now imagine that an administration, let's say one 'sympathetic to the Nazi agenda', takes control of the US government's health and state-sanctioned violence services. They decide to use those tools to address all of what they consider the 'undesirables'.

Your DNA says you have "one drop" of the undesirables' blood, from some ancient ancestor you were unaware of, and this administration tells you they are going to discriminate against you on insurance because of it, based on some racist pseudoscience.

You say, "but I thought I was a 30-something WHITE male!!" and they tell you, "welp, you were wrong, we have your medical records to prove it". You get irate that somehow your medical records left the datacenter of that LLM company you liked to have make funny cat pictures for you and got into their hands, and they claim your behavior caused them to fear for their lives, and now you are in a detention center or a shallow grave.

"That's an absurd exaggeration," you may say, but the current administration is already removing funding, or entire agencies, based on policy (DEI etc.) and race (singling out Haitian and Somali immigrants). How is this much different from Jim Crow era policies like redlining?

If you find yourself thinking, "I'm a fitness-conscious 30-something white male, why should I care?", it can help to develop some empathy and stop to think, "what if I were anything but a fitness-conscious 30-something white male?"


These points seem to be arguments against giving your health data to anybody, not just to an AI company.


If there's no evidence that it will help you or others, then that's a pretty hard position to argue against. The parent commenter asked about this, and the response basically was that it didn't seem likely to be harmful, and now you're responding to that.


Yes, of course. "Assuming it's entirely useless, why give your data to anyone" is a hard position to argue against, but unfortunately it's also completely pointless because of the unproven assumption. Besides, there are enough indications in this thread alone that it is already very useful to many.


That's a pretty disingenuous take on what I said. To quote from the discussion I responded to:

>>>>>> Are you giving your vitals to Sam Altman just like that?

>>>>> Yes, if it will help me and others

>>>> What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)

>>> I’m definitely a privacy fist person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever.

>> [explanation of why it might be worrisome]

> These points seem to be arguments against giving your health data to anybody, not just to an AI company.

I did not make any claims that it was useless; the context I was responding to was someone being dubious that there were risks after being asked whether they had any reason to assume that it would be beneficial to share specific info, and following that, a conversation ensued about why it might make sense to err on the side of caution (independently of whether the company happens to be focused on AI).

To be explicit, I'm not taking a stance on whether the experiences cited elsewhere in the thread constitute sufficient evidence. My point isn't that there is no conceivable benefit, but that the baseline should be caution about sharing medical info, and then figuring out if there's enough of a reason to choose otherwise.


OK, I might have been too hasty in commenting on your last recap. Your baseline is sound. In any case, we're talking about a medical help/advice tool. If it's not providing any value, any interaction with it (let alone sharing medical data) is pointless and a waste of time. So I think any convincing argument against sharing private data with it should take into consideration at least a minimum of potentially missed valuable information. Otherwise it's an easy argument to make, but also an empty one.

In this case, I suspect that the classic biases of HN (pro-privacy and anti-AI) might interact to dismiss the value that can be provided by a specialized medical LLM/agent (despite indications that an unspecialised one is already helpful!) while rightly pointing out the risks of sharing sensitive data.


Quite - personal data should remain under your control so it's always going to be a bad deal to "give" your data to someone else. It may well make sense to allow them to "use" your data temporarily and for a specific purpose though.


I personally have been helped by talking to ChatGPT about my healthcare. That's the evidence. I will take concrete positive health outcomes now, over your fears of the future.


And what if it harms you?

What if you have to pay more for health insurance because of the collected data, or what if you can't get certain insurance at all?

Most people don't have a problem with someone getting their medical data, but with that information being used to their disadvantage.


"besides obvious things like being discriminated against for insurance"


Unless you're a doctor, you don't know what it has made up, though.

That's the trouble with AI. You can only be impressed if you know a subject well enough to tell it's not just bullshitting like usual.


This is exaggerated. AI is accurate enough that our sniff tests will get us far. ChatGPT just doesn't hallucinate all that often.

You can have the same problem with doctors who don't give you even 5 minutes of their time and who don't have time to read through all your medical history.


AI-guided self-medication is certainly problematic. Rubber-ducking your symptoms for free for as long as you need and then asking a doctor for their 2-minute opinion is IMHO the best way to go about healthcare in 2026.

I live in a place where I can get anything related to healthcare and even surgery within the same day at an affordable price, and even here I've wasted days going to various specialists who just tried to give me useless meds.

Imagine if one lives in a place where you need an appointment 3 months in advance; you most certainly will benefit from going there and showing your latest ChatGPT summary.


Your healthcare situation seems pretty darn good. What country is this?


Thailand is my go-to for healthcare in private hospitals. I heard good things about Singapore too. Taiwan's public hospitals were great too, albeit not as flashy.


I was going to be mad at this, but then I remembered our doctors are already using it without our consent.


Your doctor is bound by HIPAA, so you could do something about it. OpenAI may not be bound by HIPAA, so your available recourse is lesser.


My doctors are in the EU, and none of them care; it's been documented heavily in recent months.


I'm sorry, but seriously? How could you not care who has your health data?

I think the more plausible comment is "I've been protected my whole life by health data privacy laws, so I have no idea what the other side looks like".

Quite frankly, this is even worse, as it can and will override doctors' orders and feed into people's delusions as an "expert".


I’d rather have all my health data be used in a way that can actually help me, even with a risk of a breach or misuse, than having it in a folder somewhere doing nothing.


It can also help you in not getting a job because your health data says you'll be sick in 6 months.


It would be absolutely amazing if any sort of tech could say that I'm going to have a serious health problem 6 months ahead of time.


How do you think insurance premiums are calculated?


In general, health insurance companies (at least in the US) are pretty much prevented from using any health data to set premiums. In fact, many US states prevent insurers from charging smokers higher premiums.

(Life insurance companies are different.)


How are they calculated? Based on what data? Your Google searches? If they don't use Google search history, why would they use ChatGPT history?


Yeah man, when would technology ever be abused to monitor health data? https://www.mirror.co.uk/news/health/period-tracking-apps-ou...


How do you think that can happen, realistically? Like, seriously, can you explain clearly how the data from ChatGPT gets to your employer?


It doesn't have to get to your employer, it just has to get to the enormous industry of grey-market data brokers who will supply the information to a third-party who will supply that information to a third-party who perform recruitment-based analytics which your employer (or their contracted recruitment firm) uses. Employers already use demographic data to bias their decisions all the time. If your issue is "There's no way conversations with ChatGPT would escape the interface in the first place," are you... familiar with Web 2.0?

Edit: Literally on the HN front page right now. https://news.ycombinator.com/item?id=46528353


You're supposed to share it with a doctor you trust; if nobody qualified asked for it, it's probably because it's no longer relevant.


I've had mixed experiences with doctors. Oftentimes they're glancing at my chart for two minutes before an appointment, and that's the extent of their concern for me.

I’ve also lived in places where I don’t have a choice in doctor.


What is it with you people and privacy? Sure, it is a minor problem, but to be _this_ affected by it? Your hospitals already have your data. Google probably has whatever health data you have googled.

What's the worst that can happen with OpenAI having your health data? Versus the best case? You all are no different from AI doomers who claim AI will take over the world... really nonsensical predictions giving undue weight to the worst possible outcomes.


> What is it with you people and privacy?

There are no doubt many here that might wish they had as consequence-free a life as this question suggests you have had thus far.

I'm happy for you, truly, but there are entire libraries written in answer to that question.


I don't care either. Why should I? I go to the doctor once a year and it's always the same. Not much to do with that data.


Your health data could be used in the future, when technology is more advanced, to infer things about you that we don't even know about, and target you or your family for it.


Health data could also be used now to spot trends and problems that an assembly-line health system doesn't optimize for.

I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.

Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.


It feels like everyone is ignoring the major part of the other side's argument. Sure, shared health data can be used against you in the future, but it can also help you right now. Anyone with any sort of pain in the past will try any available method to get rid of it. And that's fair when those methods, even with a 50% success rate, are useful.


I'm in the same boat as them, I honestly wouldn't care that much if all my health data got leaked. Not saying I'm "correct" about this (I've read the rest of the thread), just saying they're not alone.

It's always been interesting to me how religiously people manage to care about health data privacy, while not caring at all if the NSA can scan all their messages, track their location, etc. The latter is vastly more important to me. (Yes, these are different groups of people, but on a societal/policy level it still feels like we prioritize health privacy oddly more than other sorts of privacy.)


>23andme was mostly useless.

23andme was massively successful in their mission.

Sidenote: their mission was not about helping you understand your genomic information.


Yeah, their mission was to make money and collect data for AI training and other uses.



