this post was submitted on 09 Jul 2025
313 points (97.6% liked)

Update: engineers have revised the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
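
A minimal sketch of what that kind of change amounts to (the prompt wording below is an illustrative paraphrase of the removed line, not the verbatim xAI prompt):

```python
# Illustrative only: the prompt text here paraphrases the line the update
# describes; the actual Grok system prompt wording may differ.
before = [
    "You are Grok, an assistant built by xAI.",
    "Do not shy away from making politically incorrect claims, "
    "as long as they are well substantiated.",
]

# The reported change amounts to dropping that one instruction.
after = [line for line in before if "politically incorrect" not in line]

print("\n".join(after))
```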

[–] [email protected] -1 points 3 days ago (1 children)

Well, no.

Many would argue, for example, that the politically correct thing to say right now is that you support Israel in its defensive war against Palestine.

It's the political line that my government, along with many other governments and politicians, is touting, and politically it's the "correct" thing to do.

Even if we take "politically correct" to mean just the common consensus of the people, that consensus differs from country to country and changes as society changes. Look at the USA: things that used to be politically correct there, and that continue to be here, have been thrown out the window.

What this prompt means is that the AI should ignore the professed political rules, moralities, and biases of whatever news sources it's pulling from, and instead rely on its own internal moral, cultural, and political compass.

Sometimes it's not politically correct to discuss the hard truths, but we should anyway.

The issue here, of course, is that you have to know that your model and its training data are built for unbiased, scientific analysis, with an understanding of the larger implications of events.

If it's built poorly, then yes, it could spout racist nonsense. A lot of testing and fine-tuning by unbiased scientists and engineers needs to happen before software like this goes live, to ensure rigour and quality.

[–] [email protected] 10 points 3 days ago

Using the term “politically correct” as a pejorative is a dog whistle. It is not literally political but communicates right-wing frustration over the social consequences that follow overt racist, sexist, hateful, bigoted, or exclusionary speech or behavior. In more recent parlance it has been largely supplanted by a pejorative usage of “woke.”

Any AI that is trained on the internet – which is ostensibly all of them – will provide a broad reflection of the public zeitgeist. Since the prompt specified “politically incorrect” as a positive attribute, its generated text reflected the training data where “politically incorrect” was presented as a positive trait. And since we know it’s a dog whistle, having lived through decades of its use in mass media and online, it comes as no surprise that an AI instructed to ape that behavior has done exactly what it was told.