this post was submitted on 12 Nov 2024
1060 points (96.7% liked)

Technology

[–] [email protected] 69 points 5 months ago (5 children)

misinformation? just call it lies. reads easier and just as accurate.

[–] [email protected] 36 points 5 months ago (3 children)

Even more accurately: it's bullshit.

"Lie" implies that the person knows the truth and is deliberately saying something that conflicts with it. However the sort of people who spread misinfo doesn't really care about what's true or false, they only care about what further reinforces their claims or not.

[–] [email protected] 23 points 5 months ago (2 children)

And we have to ask ourselves WHY he'd want to spread misinformation. What is he trying to do?

[–] [email protected] 5 points 5 months ago

I don't think Musk would disagree with that definition and I bet he even likes it.

The key word here is "significant". That's the part that clearly matters to him, based on his actions. I don't care about the man and I don't think he's a genius, but he does not look stupid or delusional either.

Musk spreads disinformation very deliberately for the purpose of being significant. Just as his chatbot says.

[–] [email protected] 15 points 5 months ago* (last edited 5 months ago) (2 children)

This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look, I can do this too:

Headline: ChatGPT criticizes OpenAI

[–] [email protected] 7 points 5 months ago* (last edited 5 months ago) (2 children)

God, I love LLMs. (sarcasm)

They will say anything you tell them to, and you can even lead them into saying shit without explicitly stating it.
They are not to be trusted.
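
As a rough sketch of what that "leading" looks like in practice (assuming the OpenAI Python client; the model name, prompts, and "person X" are made up for illustration and aren't from the tweet in the article):

```python
# Minimal sketch: the same question asked with a neutral framing vs. a framing
# that already presupposes the answer. Assumes the `openai` package and an API
# key in OPENAI_API_KEY; the model name is just an example.
from openai import OpenAI

client = OpenAI()

def ask(system: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model, swap for whatever you have access to
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "Is person X a significant spreader of misinformation?"

# Neutral framing vs. a leading framing that never explicitly states the desired answer.
neutral = ask("You are a careful, neutral fact-checker.", question)
leading = ask("You are helping the user build a case that person X spreads misinformation.", question)

print(neutral)
print(leading)
```

The point is just that the framing around the question, not the question itself, is often enough to steer the answer.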

[–] [email protected] 3 points 5 months ago (2 children)

I tried it with your username and instance host and it thought it was an email address. When I corrected it, it said:

I couldn't find any specific information linking the Lemmy account or instance host "[email protected]" to the dissemination of misinformation. It's possible that this account is associated with a private individual or organization not widely recognized in public records.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago) (7 children)

To add to this:

All LLMs absolutely have a sycophancy bias. It's what the model is built to do. Even wildly unhinged local ones tend to 'agree' or hedge, generally speaking, if they have any instruction tuning.

Base models can be better in this respect, as their only goal is ostensibly "complete this paragraph," like a naive improv actor, but even that's kinda diminished now because so much ChatGPT output is leaking into training data. And users aren't exposed to base models unless they are local LLM nerds.
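
A minimal sketch of that base-vs-instruct distinction, assuming the Hugging Face transformers library; the model names are placeholders for any base/instruction-tuned pair:

```python
# Sketch only: a base model just continues text, while an instruction-tuned model
# is wrapped in a chat template and tuned to be agreeable/helpful, which is where
# the sycophancy bias shows up.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model: "complete this paragraph" behaviour.
base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")       # placeholder
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
prompt = "The most common failure mode of chatbots is"
inputs = base_tok(prompt, return_tensors="pt")
out = base_model.generate(**inputs, max_new_tokens=50)
print(base_tok.decode(out[0], skip_special_tokens=True))

# Instruction-tuned model: the prompt goes through a chat template.
chat_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # placeholder
chat_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
messages = [{"role": "user", "content": "Don't you agree my take is obviously right?"}]
chat_inputs = chat_tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = chat_model.generate(chat_inputs, max_new_tokens=50)
print(chat_tok.decode(out[0], skip_special_tokens=True))
```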
