this post was submitted on 16 May 2025
662 points (97.0% liked)

Technology

70109 readers
2384 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

(page 2) 26 comments
sorted by: hot top controversial new old
[–] [email protected] 18 points 2 days ago* (last edited 2 days ago) (4 children)

I'm going to bring it up.

Isn't this the same asshole who posted the "Woke racist" meme as a response to Gemini generating images of Black SS officers? Of course we now know he was merely triggered by the suggestion because of his commitment to white supremacy and alignment with the SS ideals, which he could not stand to see, pun not intended, denigrated.

The Gemini ordeal was itself a result of a system prompt; a half-ass attempt to correct for white bias deeply learned by the algorithm, just a few short years after Google ousted their AI ethics researcher for bringing this type of stuff up.

Few were the outlets that did not lend credence to the "outrage" about "diversity bias" bullshit and actually covered that deep learning algorithms are indeed sexist and racist.

Now this nazi piece of shit goes ahead and does the exact same thing; he tweaks a system prompt causing the bot to bring up the self-serving and racially charged topic of apartheid racists being purportedly persecuted. He does the vary same thing he said was "uncivilizational", the same concept he brought up just before he performed the two back-to-back Sieg Heil salutes during Trump's inauguration.

He was clearly not concerned about historical accuracy, not the superficial attempt to brown-wash the horrible past of racism which translates to modern algorithms' bias. His concern was clearly the representation of people of color, and the very ideal of diversity, so he effectively went on and implemented his supremacist seething into a brutal, misanthropic policy with his interference in the election and involvement in the criminal, fascist operation also known as DOGE.

Is there anyone at this point that is still sitting on the fence about Musk's intellectual dishonesty and deeply held supremacist convictions? Quickest way to discover nazis nowadays really: (thinks that Musk is a misunderstood genius and the nazi shit is all fake).

load more comments (4 replies)
[–] [email protected] 12 points 2 days ago
[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

Did you read what Grok was saying? Grok was saying that white genocide is questionable at best, and unfounded.

It was just saying when prompted about unrelated stuff which is what made it bizarre. It never said it was a real thing, nor endorsing the idea that it is something real.

[–] [email protected] 2 points 2 days ago (1 children)

Why was it mentioning it at all in conversations not about it?

And why does the fact that it did that not seem to bother you?

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

I guess you didn’t read the article, or don’t understand how LLMs work so I’ll explain.

An employee changed the AI’s system prompt, telling it to avoid spreading white genocide misinformation in South Africa. The system prompt is a context that tells the AI how to work with the prompts it is given and it forces it “to think” about whatever is on there. So by making that change in the system prompt every time someone prompted Grok about anything, it would think about not spreading misinformation about white genocide in South Africa, so it inserted that into pretty much everything.

So it doesn’t bother me because it’s an LLM acting as it is supposed to when someone messes with the settings. Grok probably did not need these instructions in the first place, as it’s consistently been embarrassing Elon every time the man posts one of his shitbrained takes, and while I haven’t used that AI, I don’t think, or have yet to see proof that Elon is directing it’s training to be positive conservative ideologies or harebrained conspiracy theories. It could be for all I know, but from what I’ve seen Grok sticks to facts as they are.

A lot of people are reading the misleadingly titled articles about this thinking that Elon made the AI spread the notion that there’s such a thing as a white genocide in South Africa when that was not at all what happened. You need to read the actual article or else you’re falling for the same shit the MAGAtards do.

[–] [email protected] 5 points 2 days ago (1 children)

That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values," xAI wrote on social media.

Relevant quote because one of us didn't read the article for sure.

Edit: not to mention that believing a system prompt somehow binds or constrains rather than influences these systems would also indicate to me that one of us definitely doesn't understand how these work, either.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

That doesn’t say anything about the content of the modification itself. For all you know the internal policy could be that white genocide is a thing. But what they are in fact referring to that violates the internal policies is modifying the prompt in such a way that it takes a specific stance on a political issue. Cmon man use your brain, it’s not that fricking hard.

If the contents of the prompt were to say that white genocide is a thing, it would have likely have said something along the lines that it is a nuanced topic of debate and it depends on how you define the situation or some other non answer. But the AI was consistently taking a stance that it was misinformation, that tells you what the prompt was. Also it was reported in other outlets that that was in fact what the modification was, to not spread misinformation about that.

load more comments (1 replies)
[–] [email protected] 6 points 2 days ago

This actually shows that they're is work being done to use LLM on social media to pretend to be ordinary users and trying to sway opinion of the population.

This is currently the biggest danger of LLM, and the bill to prevent states from regulating it is to ensure they can continue using it

[–] [email protected] 3 points 2 days ago

That's the problem with modern AI and future technologies we are creating

We, as a human civilization, are not creating future technology for the betterment of mankind ... we are arrogantly and ignorantly manipulating all future technology for our own personal gain and preferences.

[–] [email protected] 7 points 2 days ago

...is entertaining a plan to grant refugee status to white Afrikaners

FYI, the Republicans have already done it.

https://www.npr.org/2025/05/12/nx-s1-5395067/first-group-afrikaner-refugees-arrive

[–] [email protected] 121 points 2 days ago* (last edited 2 days ago)

Elon looking for the unauthorized person:

[–] [email protected] 68 points 2 days ago (4 children)

Yeah, billionaires are just going to randomly change AI around whenever they feel like it.

That AI you've been using for 5 years? Wake up one day, and it's been lobotomized into a trump asshole. Now it gives you bad information constantly.

Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?

Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?

[–] [email protected] 5 points 2 days ago (1 children)

I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.

[–] [email protected] 8 points 2 days ago (2 children)

Pretty sad that the current state would be considered "good"

load more comments (2 replies)
[–] [email protected] 40 points 2 days ago* (last edited 2 days ago) (1 children)

Joke's on you, LLMs already give us bad information

[–] [email protected] 13 points 2 days ago (1 children)

Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT told them that he was having a sale that she couldn't find. The customer didn't believe him when he said that the promotion didn't exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.

Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.

[–] [email protected] 11 points 2 days ago (4 children)

"Unintentionally" is the wrong word, because it attributes the intent to the model rather than the people who designed it.

Hallucinations are not an accidental side effect, they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system that is designed to map out a human-sounding path from a given system prompt to a particular query is going to take those same shortcuts that people used in its training data.

load more comments (4 replies)
[–] [email protected] 12 points 2 days ago (2 children)

That's a good reason to use open source models. If your provider does something you don't like, you can always switch to another one, or even selfhost it.

[–] [email protected] 11 points 2 days ago

While true, it doesn't keep you safe from sleeper agent attacks.

These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)

https://arxiv.org/pdf/2401.05566

It's obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company's servers that can then be updated with any given additional payload) but I personally think we'll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.

[–] [email protected] 21 points 2 days ago (1 children)

Or better yet, use your own brain.

[–] [email protected] 4 points 2 days ago

Yep, not arguing for the use of generative AI in the slightest. I very rarely use it myself.

load more comments (1 replies)
[–] [email protected] 171 points 2 days ago (3 children)

Musk isn't authorized anymore?

[–] [email protected] 44 points 2 days ago (4 children)

Depends on the ketamine levels in his blood at any given moment. Sometimes, you edit your prompts from a k-hole, and everyone knows you can't authorize your own actions when you're fully dissociated.

load more comments (4 replies)
[–] [email protected] 10 points 2 days ago

Looks like Elon used his Alt account.

[–] [email protected] 19 points 2 days ago* (last edited 2 days ago)

Unilaterally Authorized. Or UnAuthorized for short.

[–] [email protected] 17 points 2 days ago
load more comments
view more: ‹ prev next ›