this post was submitted on 02 Jul 2025
39 points (95.3% liked)

World News

Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

you are viewing a single comment's thread
[–] [email protected] 4 points 1 day ago (2 children)

what makes the checker models any more accurate?

[–] [email protected] 1 points 21 hours ago (1 children)

The checker models aren’t trying to give you a correct answer with confidence. Their purpose is to find an incorrect answer. They’ll both do their task with confidence.
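
Roughly, the split looks something like this. Just a sketch of the idea, not anything from the article, and `ask_llm` is a made-up stand-in for whatever model client you'd actually wire in:

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model client; returns a canned
    # string so the sketch runs without any API key.
    return "placeholder model output"

def generate_answer(question: str) -> str:
    # Generator: prompted to be helpful and fluent.
    return ask_llm(f"Answer this health question:\n{question}")

def check_answer(question: str, answer: str) -> str:
    # Checker: prompted adversarially; its only job is to find problems.
    return ask_llm(
        "You are a fact-checker. List every factual error or fabricated "
        "citation in the answer below, or reply PASS if you find none.\n"
        f"Question: {question}\nAnswer: {answer}"
    )

question = "Does sunscreen cause skin cancer?"
answer = generate_answer(question)
verdict = check_answer(question, answer)  # the checker never "helps", it only audits
print(verdict)
```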

[–] [email protected] 1 points 19 hours ago (1 children)

the first one was confident. But wrong. The second one could be just as confident and just as wrong.

[–] [email protected] 1 points 7 hours ago (1 children)

Sure, but they're doing opposite tasks. You're absolutely right that they could be wrong sometimes. So are people. Over time it gets better, especially with more regulation and smarter models.

[–] [email protected] 1 points 1 hour ago

Opposite or not, they are both tasks that the fixed matrix multiplications can utterly fail at. It's not a regulation thing. It's a math thing: this cannot possibly work.

If you could get the checker to be correct all of the time, then you could just do that on the model it's "checking", because it is literally the same thing, with the same failure modes, and the same lack of any real authority in anything it spits out.
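
To put rough numbers on the "same failure modes" point (figures made up, just to show the arithmetic):

```python
p_generator_wrong = 0.10  # made-up: generator answers wrongly 10% of the time

# If the checker's mistakes were independent of the generator's,
# stacking it would filter most of those out:
p_checker_miss_independent = 0.10
print(p_generator_wrong * p_checker_miss_independent)  # 0.01 -> 1% slips through

# But if both models share the same blind spots (same kind of training data,
# same architecture), the checker misses exactly the answers the generator
# gets wrong, and stacking buys you next to nothing:
p_checker_miss_shared_blind_spots = 1.0
print(p_generator_wrong * p_checker_miss_shared_blind_spots)  # 0.10 -> still 10%
```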

[–] [email protected] 2 points 1 day ago (1 children)

Possibly reverse motivation: the training goal of such an agent would not be nice and smooth output, but shooting down misinformation (rough sketch below).

But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.
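
If it were attempted, one guess at what that objective could look like (assumed, not from the paper): train the checker as a plain binary classifier over labeled claims, so it is scored on catching misinformation rather than on producing fluent text. A toy PyTorch sketch:

```python
import torch
import torch.nn as nn

# Toy stand-in for a frozen encoder plus a small classification head;
# a real setup would fine-tune an actual language model on (claim, verdict) pairs.
encoder = nn.Sequential(nn.Linear(768, 128), nn.ReLU())
head = nn.Linear(128, 1)  # one logit: probability the claim is false

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4
)

# Fake batch: 8 claim embeddings, labels 1 = misinformation, 0 = accurate.
claim_embeddings = torch.randn(8, 768)
labels = torch.randint(0, 2, (8, 1)).float()

logits = head(encoder(claim_embeddings))
loss = loss_fn(logits, labels)  # penalizes missed misinformation directly
loss.backward()
optimizer.step()
```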

[–] [email protected] 2 points 19 hours ago

How does that stop the checker model from "hallucinating" a "yep, this is fine" when it should have said "nah, this is wrong"?