this post was submitted on 25 Mar 2024
1 points (57.1% liked)

BecomeMe

846 readers

Social Experiment. Become Me. What I see, you see.

founded 2 years ago
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I just want to make the distinction that, in the US at least, any business that handles medical data should be bound by HIPAA. That covers not only doctors but also insurance companies, billing providers, pharmacies, and so on, and it extends to data processors. (That is where AI companies would fall.)

The issue is that once a company collects medical data, or works with a partner that shares unredacted medical data with it, compliance regulations kick in.
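
To give a rough idea of what "unredacted" means here, this is a toy Python sketch of stripping direct identifiers from a record before handing it to a processor. The field names are hypothetical, and real de-identification under HIPAA's Safe Harbor method removes 18 categories of identifiers, not just these few:

```python
# Toy sketch of redacting identifiers before sharing data with a processor.
# Field names are hypothetical; HIPAA's Safe Harbor method actually requires
# removing 18 categories of identifiers, not just this short list.

IDENTIFIER_FIELDS = {"name", "email", "phone", "address", "ssn", "dob"}

def redact(record: dict) -> dict:
    """Return a copy of the record with direct identifiers stripped."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

shared = redact({"name": "Jane Doe", "dob": "1990-01-01", "symptom": "migraine"})
print(shared)  # {'symptom': 'migraine'} -- safer to hand to a partner
```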

This system is fallible, absolutely. I don't want to give the impression that medical data is never lost.

AI is still just an artificial neural network running on a computer, and that computer is owned by someone. The liability (and the ethical responsibility) falls squarely on the company that owns the software and collects the sensitive data.

There is a gap, though. If people willingly sign away their rights for trials before a system is approved for medical use, that may cause a couple of problems. However, medical data is still medical data, and the laws around it might still apply. (Basically, even if a person signs their rights away, I am not sure that changes the actual classification of the data or how it should be handled.)

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I understand what you're saying with regard to medical data, but I'm saying this isn't actual medical data because it's not being handled by doctors. It's like using the symptom checker on WebMD or taking a personality test. I could be wrong, but I think that's how they're presenting it.

[–] [email protected] 2 points 1 year ago (1 children)

Your point is logical, absolutely. However, logic is difficult to apply to compliance regulation. I'll explain some of how this process works, and then explain how you are correct, but in a different scenario than the one you are thinking of. (I may use some very specific wording, so buckle up.)

I just read over a sampling of privacy policies and security statements from Wysa and Woebot, and it seems that these companies are being treated as medical companies. Woebot is HIPAA compliant, and Wysa is under GDPR. (I believe medical data handling is baked into GDPR; health data is a special category there.)

They are likely being treated as medical companies because of their partners. If an AI company claims to provide actual treatment, it is going to be a slave to large insurance companies. Those insurers will likely demand HIPAA, SOC 2, and ISO 27001 audits, and that is a ball that rolls really far downhill.

Doctors are irrelevant when it comes to the definition of PHI. The second "standard" personal data (PII) is linked with anything medical-related, it may be considered PHI at that point. (Honestly, that is determined by the auditors during their compliance assessment cycles.) In some of the most benign cases, I have seen data containing the word "Tylenol" get classified as PHI, both automatically and by auditors.
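
To make that "Tylenol" example concrete, here is a toy Python sketch of the kind of rule-based flagging an automated classification tool might do. The field names and keyword list are made up for illustration; real tools are far more sophisticated:

```python
# Toy illustration of rule-based PHI flagging (hypothetical field names
# and keyword list; real classification tools are far more sophisticated).

# Identifiers that make a record linkable to a person (PII).
PII_FIELDS = {"name", "email", "phone", "address", "dob"}

# Medical-related keywords; even a drug name like "Tylenol" can trip this.
MEDICAL_KEYWORDS = {"tylenol", "diagnosis", "prescription", "therapy", "dosage"}

def is_phi(record: dict) -> bool:
    """Flag a record as PHI if it contains PII linked with medical content."""
    has_pii = any(field in record for field in PII_FIELDS)
    text = " ".join(str(v).lower() for v in record.values())
    has_medical = any(kw in text for kw in MEDICAL_KEYWORDS)
    return has_pii and has_medical

# A name plus "Tylenol" gets flagged, matching the auditor example above.
print(is_phi({"name": "Jane Doe", "notes": "took Tylenol for headache"}))  # True
print(is_phi({"notes": "took Tylenol for headache"}))                      # False
```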

Now, there are going to be (if there aren't already) companies that try to skirt around compliance requirements. That's a thing. WebMD-type companies that are set up as a free service and legally cannot provide treatment are likely going to be the culprits for data selling, like you pointed out. This is actually why I took the time to read through some legal stuff.

I currently work on a security team for a medical insurance company. We are basically mixed up with the compliance team, and we are gearing up for our annual HIPAA assessment. Reading privacy policies and security agreements is an unfortunate part of my job. Working with auditors directly is also a shit part of my job.

[–] [email protected] 1 points 1 year ago (1 children)

Thanks for explaining all that so thoroughly. I would be happy if they were under HIPAA; people really need the help. What I kind of like about the idea of AI therapy is that it could take what has worked and learn from it in a crowd-sourced way. Like anything else, if they do everything with good intentions and honor patients' privacy while keeping it protected, I could see it being a good thing. I haven't seen a lot of great things like that coming from AI, since there is so much VC money behind it and those investors need their money to make money. I like being wrong when I'm being pessimistic, though.

[–] [email protected] 2 points 1 year ago

Cool. IMHO, your concerns were still valid. There are a ton of shady companies out there, and they're hard to avoid. Stay skeptical, friend!

Oh, I didn't mean to come off as defending AI. I have my own reservations about the bulk of the AI vaporware on the market now. If anything, I just wanted to explain the challenge of selling medical data.

And yeah, any technology can be used for good things. If AI treatments work, that is OK with me.