this post was submitted on 18 Sep 2024

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago) (3 children)

Indeed. GPs have been doing this for a long time. It's nothing new, and expecting every GP to know every ailment humanity has ever experienced, recall it quickly, and immediately know the right course of action is unreasonable.

Like you say, if they're blindly following a generic ChatGPT instance, then that's bad.

If they're aiding their search with an LLM trained on a good medical dataset, then taking its output and looking into it further, there's no issue.

People have become so reactionary to LLMs and other 'AI' stuff. It seems there's an "omg it's so cool, everybody should use it to the max. Let's blindly trust it!" camp and an "it's awful and shouldn't exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!" camp. Both camps are just as stupid. There's zero nuance in the discussion about this stuff, and it's tiring.

[–] [email protected] 0 points 1 day ago

People have become so reactionary to LLMs and other AI stuff. It seems there's an "omg it's so cool, everybody should use it to the max. Let's blindly trust it!" camp and an "it's awful and shouldn't exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!" camp.

Both camps are just as stupid. There's zero nuance in the discussion about this stuff, and it's tiring.

Well said.

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago) (1 children)

You can build excellent expert systems that will definitely help a doctor remember all the illnesses, know what questions to ask to narrow things down or double-check it's not something weird, and provide options for treatment.

These exist and are good

ChatGPT isn't an expert system, and doctors using it like one need a serious warning from the GMC, and eventually to be struck off, the same as if they were using ouija boards or bones to diagnose illnesses.
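The expert-system idea above can be sketched in a few lines: a rule base maps conditions to symptoms, candidates are ranked by how many observed symptoms they explain, and the system suggests which symptom to ask about next. This is a toy illustration with invented conditions and rules, nothing like a real clinical knowledge base.

```python
# Toy rule-based diagnostic sketch (illustration only, not medical advice).
# The conditions and symptom sets below are invented for the example.
KNOWLEDGE_BASE = {
    "common cold": {"runny nose", "sneezing", "sore throat"},
    "flu": {"fever", "aches", "fatigue", "sore throat"},
    "hay fever": {"runny nose", "sneezing", "itchy eyes"},
}

def candidates(observed):
    """Rank conditions by how many observed symptoms each one explains."""
    scores = {c: len(syms & observed) for c, syms in KNOWLEDGE_BASE.items()}
    return sorted((c for c, n in scores.items() if n), key=lambda c: -scores[c])

def next_question(observed):
    """Suggest the not-yet-observed symptom shared by the most live candidates."""
    counts = {}
    for c in candidates(observed):
        for sym in KNOWLEDGE_BASE[c] - observed:
            counts[sym] = counts.get(sym, 0) + 1
    return max(counts, key=counts.get) if counts else None

observed = {"runny nose", "sneezing"}
print(candidates(observed))      # ['common cold', 'hay fever'] — flu explains nothing
print(next_question(observed))   # asks about a symptom that would split the tie
```

Real systems of this kind weight symptoms by prevalence and pick questions by information gain rather than a raw count, but the shape — explicit, auditable rules instead of a free-text language model — is the point being made above.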

[–] [email protected] 0 points 1 day ago

These exist and are good

Any examples off the top of your head? I would assume/speculate they are fairly expensive?

[–] [email protected] 0 points 1 day ago

Exactly. Love the username BTW.