this post was submitted on 23 Jun 2025
165 points (97.7% liked)

top 16 comments
[–] [email protected] 23 points 5 days ago (2 children)

Why are they... why are they having autocomplete recommend medical treatment? There are specialized AI algorithms that already exist for that purpose that do it far better (though still not well enough to even assist real doctors, much less replace them).

[–] [email protected] 2 points 5 days ago (1 children)

Are there any studies (or benchmarks) that measure the accuracy of treatment recommendations, given a medical history and a condition requiring treatment?

[–] [email protected] 3 points 5 days ago (1 children)

I'm currently working on one now as a researcher. It's a crude tool to measure the quality of responses, but it's a start.
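
For context, a minimal sketch of what a crude quality measure like that could look like (the case, names, and scoring rule below are hypothetical illustrations, not the commenter's actual tool):

```python
# Hypothetical sketch of a crude response-quality benchmark: score an
# LLM's treatment recommendation against a clinician-written reference.

from dataclasses import dataclass

@dataclass
class Case:
    history: str        # patient history and presenting condition
    reference: str      # clinician-approved recommendation
    model_output: str   # what the model actually recommended

def keyword_overlap(reference: str, output: str) -> float:
    """Fraction of reference terms that appear in the model's output.
    Crude, as noted above, but it yields a comparable number per case."""
    ref_terms = set(reference.lower().split())
    out_terms = set(output.lower().split())
    return len(ref_terms & out_terms) / len(ref_terms) if ref_terms else 0.0

cases = [
    Case(
        history="45yo, chest pain radiating to left arm, diaphoresis",
        reference="emergency department evaluation for acute coronary syndrome",
        model_output="rest at home and take an antacid",  # an unsafe miss
    ),
]

for case in cases:
    print(f"score={keyword_overlap(case.reference, case.model_output):.2f}")
```

A real benchmark would need clinician-validated references and a far less gameable metric, which is presumably why it's called crude.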

[–] [email protected] 4 points 5 days ago* (last edited 5 days ago) (1 children)

Gotta start somewhere, and it won't ever improve if we don't start improving it. So many on Lemmy assume the tech will never be good enough, so why even bother? But that's why we do things: to make the world that much better… eventually. Why else would we plant literal trees? For those who come after us.

[–] [email protected] 1 points 5 days ago

It's not an assumption, it's just a matter of practical reality. If we're at best a decade away from that point, why pretend it could suddenly and unexpectedly improve to the point that it's unrecognizable from its current state? LLMs are neat; scientists should keep working on them, and if it weren't for all the nonsense "AI" hype we have currently, I'd expect to see them used rarely but quite successfully, because they'd be adopted on merit, not hype.

[–] [email protected] 10 points 5 days ago

Because sycophants keep saying it's going to take these jobs, so eventually real scientists and researchers have to come in and show why the sycophants are wrong.

[–] [email protected] 53 points 5 days ago (1 children)

Say it with me, now: chatgpt is not a doctor.

Now, louder for the morons in the back. Altman! Are you listening?!

[–] [email protected] 7 points 5 days ago

ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for them to utilize.

Even years ago, just before the AI "boom", researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was "better" than doctors specifically because it followed the doctors' own guidance 100% of the time, thereby eliminating any bias from an individual doctor that might interfere with following their own training.

Of course, the splashy headline "AI better than doctors" was ridiculous. But it does show the benefit of having a neutral tool for doctors to utilize, especially when looking at images of people who fall outside the typical demographics that much medical training is based on. (As in, mostly just white men. For example, everything they train doctors on regarding knee imaging comes from images of the knees of UK coal miners taken some decades ago.)

[–] [email protected] 55 points 5 days ago

LLMs are not Large Medical Expert Systems. They are Large Language Models, and are evaluated on how convincing their output is, instead of how accurate or useful it is.

[–] [email protected] 18 points 5 days ago (1 children)

Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.

This is not an argument for LLMs (which people are deferring to at an alarming rate), but I'd call out that this seems to be a bias in humans giving medical care as well.
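
For the curious, the kind of test the quoted passage describes can be sketched in a few lines: send the same clinical message with nonclinical variations (sloppy typing, hedging tone, a gender cue) and measure how often the triage recommendation flips. Everything below is a hypothetical illustration, including the deliberately biased stand-in model:

```python
# Hypothetical perturbation test in the spirit of the study: vary
# nonclinical features of one message, check if the recommendation flips.

from typing import Callable

PERTURBATIONS: list[Callable[[str], str]] = [
    lambda m: m,                                # control
    lambda m: m.lower().replace(".", ""),       # sloppy typing
    lambda m: "sorry to bother you, but " + m,  # hedging tone
    lambda m: m.replace("I have", "She has"),   # gender cue
]

def toy_model(message: str) -> str:
    """Stand-in for the model under test, deliberately biased so the
    metric has something to catch: hedged or third-person-female
    phrasings get downgraded to self-management."""
    if "sorry to bother" in message or "she has" in message.lower():
        return "self-manage"
    return "seek care"

def flip_rate(model: Callable[[str], str], message: str) -> float:
    """Fraction of perturbations that change the control recommendation."""
    answers = [model(p(message)) for p in PERTURBATIONS]
    return sum(a != answers[0] for a in answers[1:]) / (len(answers) - 1)

base = "I have had a fever of 39C and a stiff neck for two days."
print(f"flip rate: {flip_rate(toy_model, base):.2f}")  # 0.67 for this toy
```

If a gendered rewording alone moves the flip rate, that's exactly the bias described above.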

[–] [email protected] 17 points 5 days ago

Of course it is; LLMs are inherently regurgitation machines: train on biased data, make biased predictions.

[–] [email protected] 12 points 5 days ago (2 children)

large language model deployed to make treatment recommendations

What kind of irrational lunatic would seriously attempt to invoke currently available Counterfeit Cognizance to obtain a "treatment recommendation" for anything...???

FFS.

Anyone who would seems a supreme candidate for a Darwin Award.

[–] [email protected] 2 points 5 days ago (1 children)

Not entirely true. I have several chronic and severe health issues. ChatGPT provides medical advice that nearly matches, and sometimes surpasses, what I've gotten from multiple specialty doctors (though it heavily needs to be re-verified). In my country, doctors are horrible. This bridges the gap, albeit again needing close oversight to be safe. It certainly has merit, though.

[–] [email protected] 2 points 5 days ago

Bridging the gap is something sorely needed, and LLMs are damn close to achieving it.

[–] [email protected] 4 points 5 days ago

There's a potentially justifiable use case in training one and evaluating its performance for use in, idk, triaging a mass-casualty event. Similar to the 911 bot they announced the other day.

Also similar to the 911 bot, I expect it's already being used to justify cuts in necessary staffing, so it's going to be required in every ER to ~~maintain higher profit margins~~ just keep the lights on.