HIPAA only applies to covered healthcare entities. If you walk into a McDonald's and talk about your suicidal ideation with the cashier, that's not HIPAA-covered.
To become a covered entity, a business has to be a healthcare provider, a health plan, or a health-data clearinghouse (a transmitter of health data), or act as a business associate of one.
Notably, even then, HIPAA only applies to the healthcare part of the entity. So if McDonald's co-located pharmacies inside its restaurants, HIPAA would only apply to the pharmacists, not the cashiers.
That's why, in convenience stores with pharmacies, the registers are separated: so healthcare data doesn't go to someone who isn't covered by HIPAA.
**
As for how ChatGPT gets these stats: when you talk about a sensitive or banned topic like suicide, their backend flags and logs it.
Originally, they used those flags to cut off your access so you couldn't find a way to cause a PR incident.
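The flag-then-cut-off flow described above can be sketched roughly like this. To be clear, OpenAI's actual pipeline is not public: the keyword list, the strike threshold, and the function names here are all made up for illustration (a real system would use a trained classifier, not substring matching).

```python
import logging
from collections import defaultdict

# Hypothetical keyword list -- a real moderation backend would use a
# classifier model, not substring matching. Illustrative only.
SENSITIVE_KEYWORDS = {"suicide", "self-harm"}
CUTOFF_THRESHOLD = 3  # hypothetical strike limit before access is cut

flag_counts = defaultdict(int)  # per-user flag counter (the "stats")
log = logging.getLogger("moderation")

def check_message(user_id: str, text: str) -> bool:
    """Log a flag for sensitive content; return False once the user
    passes the strike threshold (i.e., access is cut off)."""
    lowered = text.lower()
    if any(kw in lowered for kw in SENSITIVE_KEYWORDS):
        flag_counts[user_id] += 1
        log.warning("flagged user=%s count=%d", user_id, flag_counts[user_id])
        if flag_counts[user_id] >= CUTOFF_THRESHOLD:
            return False  # access revoked
    return True
```

The point of the sketch is just that the flag events are logged whether or not access is ever cut, which is exactly the kind of backend counter such stats could be read from.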
Under the Medical Device Regulation (MDR) in the EU, the intended purpose of the software has to be medical for it to count as a medical device. In ChatGPT's case, that is not the primary use case.
Same with fitness trackers: they aren't medical devices, because that's not their intended purpose, even though some users might use them to track medical conditions.
By that logic, the McDonald's cashier also becomes a medical practitioner the moment they tell you that killing yourself isn't the answer. And if I tell my friend via SMS that I'm thinking about suicide, do both our phones now become HIPAA-covered medical devices?