this post was submitted on 26 Mar 2024
398 points (100.0% liked)

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

exactly how hard did beer person have to try to miss the point? they read a thread about an AI confidently providing a wrong diagnosis, with a warning that we shouldn't always trust AI, and then proceeded to write a reply accusing Misha Saul of being a tech bro who believed an AI over a human doctor

[–] [email protected] 11 points 7 months ago* (last edited 7 months ago) (1 children)

Unpopular opinion incoming:

I don't think we should ignore AI diagnoses just because they are wrong sometimes. The whole point of AI diagnosis is to catch things physicians miss, and no AI diagnosis comes without a physician double-checking anyway.

For that reason, I don't think it's necessarily a bad thing that the AI got it wrong. Suspicion was still there, and the physicians double-checked. To me, that means the tool is working as intended.

If the patient was insistent enough that something was wrong, they would have had the physicians double-check or gotten a second opinion anyway.

Flaming the AI for not being correct is missing the point of using it in the first place.
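
A minimal sketch of the "AI flags, physician confirms" workflow this comment describes; the model score, threshold, and queue names are all hypothetical, not from any real system:

```python
from dataclasses import dataclass

# Hypothetical screening pipeline: the model never issues a final
# diagnosis on its own; anything it flags is escalated to a physician.

@dataclass
class ScanResult:
    patient_id: str
    model_score: float  # hypothetical tumor-probability output

FLAG_THRESHOLD = 0.2  # deliberately low: screening tolerates false positives

def route(result: ScanResult) -> str:
    """Return the review queue for this scan."""
    if result.model_score >= FLAG_THRESHOLD:
        return "physician_review"   # AI suspicion -> human double-checks
    return "routine_review"         # still read by a human, just not escalated

print(route(ScanResult("anon-1", model_score=0.87)))  # physician_review
```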

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago) (1 children)

I don't think it's necessarily a bad thing that an AI got it wrong.

I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model, and a language model is fundamentally not fit for use as a diagnostic tool, not even as a screening aid for physicians.

There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.
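
For contrast, purpose-built diagnostic tools are typically image classifiers trained on radiologist-labeled scans rather than next-token predictors. A minimal sketch of that kind of model, assuming PyTorch; the architecture, input size, and two-class labeling are illustrative only, not a real clinical model:

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny CNN of the sort dedicated medical-imaging
# tools are built on. It maps pixels to class probabilities learned
# from labeled scans -- unlike a language model, which predicts text.

class TinyScanClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):  # e.g. tumor / no tumor
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyScanClassifier()
scan = torch.randn(1, 1, 256, 256)    # stand-in for one grayscale MRI slice
probs = model(scan).softmax(dim=-1)   # class probabilities (untrained here)
print(probs)
```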

[–] [email protected] 2 points 7 months ago

Fair enough

[–] [email protected] 37 points 7 months ago (4 children)

I'm not following this story...

a friend sent me MRI brain scan results and I put it through Claude

...

I annoyed the radiologists until they re-checked.

How was he in a position to annoy his friend's radiologists?
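
For context, "put it through Claude" presumably means sending the scan image to a Claude 3 vision model, which had just launched when this thread was posted. A sketch using the Anthropic Python SDK; the file name, prompt, and model choice are assumptions, not from the screenshot:

```python
import base64
import anthropic  # official Anthropic SDK; expects ANTHROPIC_API_KEY in the environment

# Hypothetical reconstruction: send one scan image plus a question to Claude.
with open("mri_slice.png", "rb") as f:  # assumed file name
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = anthropic.Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text",
             "text": "Describe any abnormalities visible in this MRI slice."},
        ],
    }],
)
print(message.content[0].text)
```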

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

I think the guy posting the screenshot is framing it misleadingly to fit his narrative.

A friend sent me MRI brain scan results

Without more context, I have to assume the guy was still convinced of his brain tumor, knew a friend who knew and talked about Claude, and had said friend run the results through Claude; the friend told the guy whose brain was scanned that Claude gave a positive result, and the friend went to multiple doctors for a second, third, and fourth opinion.

In America we have to advocate hard when there is an ongoing, still unsolved issue, and that includes using all tools at your disposal.

[–] [email protected] 1 points 7 months ago

maybe his friend is also a radiologist and sent OP a picture of his own head

[–] [email protected] 9 points 7 months ago

Money. Guy is loaded, he can annoy anyone he wants.

[–] [email protected] 33 points 7 months ago (1 children)
[–] [email protected] 4 points 7 months ago (1 children)
[–] [email protected] 7 points 7 months ago

His friend? Albert Einstein.

[–] [email protected] 4 points 7 months ago

I feel like the book I, Robot provides some fascinating insight into this... specifically the story "Liar!"
