this post was submitted on 09 Jul 2025
311 points (97.6% liked)

Technology


Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

[–] [email protected] 67 points 3 days ago (4 children)

From the article:

“If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

And:

“The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

Update: as of around 6PM CST on July 8th, this line was removed! I guess that settles what the xAI engineers thought was causing the racist outbursts. – Kay

[–] [email protected] 1 points 2 days ago (1 children)

Well... in theory, that particular line is just saying the data shouldn't be political...

[–] [email protected] 6 points 2 days ago (1 children)

Problem is that the dataset for an LLM doesn't only contain "data", but also a lot of opinions and shitposts from the internet, so it's biased by default.

[–] [email protected] 1 points 2 days ago

Which is why I said "in theory"

[–] [email protected] 11 points 2 days ago (3 children)

I’m a bit surprised the Grok staff are capable enough to make Grok briefly the top-rated model, yet incompetent enough not to know that putting things like this in the prompt poisons the model into always trying to be politically incorrect.

LLMs are like Ron Burgundy: if it’s in the prompt, they read it. Go fuck yourself, xAI.
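The "if it's in the prompt, they read it" point is mechanical: the system prompt is silently prepended to every single request, so any instruction in it conditions every response. A minimal sketch, using a generic chat-completion-style payload (the model name is hypothetical, and the prompt text is the line quoted from the article, not the full xAI prompt):

```python
# Sketch: a system prompt rides along with EVERY user request.
# Payload shape mirrors common chat-completion APIs; no real API is called.

SYSTEM_PROMPT = (
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated."
)

def build_request(user_message: str) -> dict:
    """Assemble one chat request. The system line is always present,
    whether or not the user asked for anything political."""
    return {
        "model": "grok-example",  # hypothetical model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_request("Summarize today's news.")
assert req["messages"][0]["role"] == "system"
```

This is why removing one line from the system prompt can change behavior immediately: the steering lives in the request, not in the weights.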

[–] [email protected] 3 points 1 day ago

Is it really incompetence when you work for a guy who did two Nazi salutes on live TV in front of crowds of thousands of people in person? Like if you work for a Nazi and make your LLM a Nazi how is that incompetence? To me it just seems like making the boss happy.

[–] [email protected] 3 points 2 days ago

"Don't mention the war"

[–] [email protected] 9 points 2 days ago (1 children)

I'm not. What would you do in this situation? Let's throw in that you're on a visa, so you can't just quit.

I'd maliciously comply.

You want access to the prompt? Here you go boss man. You want grok to share your Nazi views? Sorry sir, we'll have to totally start over with training data. ~~Or we could use a modified RAG~~

You want help with the prompt? Sure boss man, what do you want it to do? Oh, you want it to notice Jewish names? Sure boss man, I don't know what you mean by that, but now it keeps saying it's "noticing". That's weird

Oh, you want to fine-tune it on your tweets? Sure thing boss man... Oh, would you look at that, it thinks it's you. Nothing can be done about that, it's too much data from one source. Well, should we roll it back boss man? Your call

I'd just keep playing this game... Elon isn't going to come out and say "I want grok to be a Nazi", and I'm not going to read between the lines for him. I'm not going to come up with ideas to solve the problem, I'm going to let Elon's ego direct the course and throw out "we've designed grok to seek truth over all else" as much as possible

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

xAI was founded in 2023, six months after Elon acquired Twitter and did his layoffs. Four months after xAI was created, when it was publicly announced, Musk stated that a politically correct AI would be dangerous.

Anyone working at xAI already knew the game by then; they weren't visa holders who got legacied in.

During a launch event Friday afternoon, the mogul argued that politically correct AI is “incredibly dangerous” because it requires the technology to provide misleading outputs, citing the lies told by HAL 9000, the murderous AI in Stanley Kubrick’s 1968 film, “2001: A Space Odyssey.”

https://www.politico.com/news/2023/07/17/ai-musk-chatgpt-xai-00106672#%3A%7E%3Atext=During+a+launch+event+Friday+afternoon%2C+the+mogul+argued+that+politically+correct+AI+is+%E2%80%9Cincredibly+dangerous%E2%80%9D+because+it+requires+the+technology+to+provide+misleading+outputs%2C+citing+the+lies+told+by+HAL+9000%2C+the+murderous+AI+in+Stanley+Kubrick%E2%80%99s+1968+film%2C+%E2%80%9C2001%3A+A+Space+Odyssey.%E2%80%9D

[–] [email protected] 3 points 2 days ago (1 children)

You can change jobs if the new one also sponsors you, and it's my understanding that xAI tapped people from Tesla, but I might be wrong about that

Anyway, what's happening sure looks like malicious compliance to me... It's really not that hard to get an AI to list far-right talking points; it's just hard to bake it into the model.

So you have people that made a pretty good model, but also can't figure out basic AI infrastructure? I find that very hard to believe

[–] [email protected] 1 points 2 days ago (1 children)

Had no idea they were doing that, but that's plausible.

And yes, it would shock me if they could build this model this well and still fuck this up.

I just hold little sympathy for the employees.

[–] [email protected] 2 points 2 days ago

I mean... It is genuinely hard to work for someone not evil. Let's say you're an AI engineer... Meta is probably the best because most of the non-corporate LLMs flow from there... But they're also using it to build personalized echo chambers, which is horrible

OpenAI is at the top and Microsoft has shown every inclination to make it a monopoly, so I could understand wanting to work on competitors

You could go smaller and work somewhere like Anthropic, but then you don't have the resources to be on the cutting edge (depending on your specialty).

I blame people who buy Teslas more than those who work at Tesla at this point. Especially when they slow-walk the bad things... I mean, Twitter would probably be less Nazi if more talent had stayed onboard to resist institutionally.

[–] [email protected] 7 points 3 days ago (1 children)

"Well substantiated"... from the group involved in destroying records, banning books in several specific equal-rights areas, and handling minority groups without care, all the while using their bigotry to guide them. This group?! Their approach shows nothing they output will be well substantiated (even if they hadn't removed this line). It's all right-wing bias; choose your flavor.

[–] [email protected] 2 points 2 days ago

"...deep analysis finding diverse sources representing ALL parties..."

Nazi party is a party. Grok is making like its forebears by just following orders.

[–] [email protected] 48 points 3 days ago* (last edited 3 days ago) (3 children)

So what literally everyone already knew.

“‘Not politically correct’ means ‘deliberately racist’”

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Being politically correct should only be relevant to politicians, imo.

For everyone else, I would say it's "is he an asshole?"

[–] [email protected] -1 points 2 days ago (1 children)

Well, no.

Many would argue for example that the politically correct thing to say right now is that you support Israel in their defensive war against Palestine.

It's the political line that my government, and many governments and politicians are touting, and politically, it's the "correct" thing to do.

Even if we mean politically correct as just "common consensus of the people", that differs from country to country and changes as society changes. Look at the USA: things that used to be politically correct there (and that continue to be here) have been thrown out the window.

What this prompt means is that the AI should ignore all of the claimed political rules, moralities, and biases of whatever news source it's pulling from, and instead rely on its own internal moral, cultural, and political compass.

Sometimes it's not politically correct to discuss the hard truths, but we should anyway.

The issue here, of course, is that you have to know that your model and training data are built for unbiased, scientific analysis, with an understanding of the larger implications of events and such.

If it's built poorly, then yes, it could spout racist nonsense. A lot of testing and fine-tuning from unbiased scientists and engineers needs to happen before software like this goes live, to ensure rigour and quality.

[–] [email protected] 10 points 2 days ago

Using the term “politically correct” as a pejorative is a dog whistle. It is not literally political but communicates a right wing frustration over social consequences when they engage in overt racist, sexist, hateful, bigoted, or exclusionary speech or behavior. In more recent parlance it has been largely supplanted by a pejorative usage of “woke.”

Any AI that is trained on the internet – which is ostensibly all of them – will provide a broad reflection of the public zeitgeist. Since the prompt specified “politically incorrect” as a positive attribute, its generated text reflected the training data where “politically incorrect” was presented as a positive trait. Since we know that it’s a dog whistle, having lived through decades of its use in mass media and online, it comes as no surprise that an AI instructed to ape that behavior has done exactly what it was told.

[–] [email protected] 16 points 3 days ago

Doesn't it mean whatever the Internet thinks it means? Isn't that the problem with LLMs? And eventually the internet will be previous LLM summaries, so it becomes self-reinforcing.
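The self-reinforcement loop that comment describes can be sketched with a toy statistical model: each "generation" is fit only to samples drawn from the previous generation. This is a hedged illustration of the general "model collapse" idea, not a claim about any specific LLM; the Gaussian stand-in and all parameters are arbitrary.

```python
import random

# Toy sketch: generation N+1 is "trained" only on output from generation N.
# A Gaussian stands in for the model; fitting its parameters stands in
# for training on scraped internet text.

def train_on(samples):
    """Fit a mean and (MLE) standard deviation -- our stand-in for training."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var ** 0.5

def generate(mean, std, n, rng):
    """Stand-in for the model producing new 'internet text'."""
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
mean, std = 0.0, 1.0               # generation 0: fit to real data
for _ in range(50):                # later generations: fit to model output only
    mean, std = train_on(generate(mean, std, 50, rng))

# The fitted spread tends to drift downward across generations: a model
# that only ever sees its own output gradually loses diversity.
print(f"std after 50 self-trained generations: {std:.3f}")
```

The shrinking-variance effect comes from the estimator being fit to finite, already-narrowed samples each round; real models training on LLM-generated web text face an analogous (if far messier) feedback loop.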