The article literally shows how the goals are being set in this case. They're prompts. The prompts are telling the AI what to do. I quoted one of them.
I assume they're talking about the design and training, not the prompt.
If you read the article (or my comment quoting it), you'll see your assumption is wrong.
Not the article; the commenter before you is pointing at a deeper issue.
It doesn't matter if your prompt tells it not to lie when it isn't actually capable of following that instruction.
It is following the instructions it was given. That's the point. It's being told "promote this drug", and so it's promoting it, exactly as it was instructed to.
Why do you think the correct behaviour for the AI must be to be "truthful"? If it were being truthful, that would be an example of it failing to follow its instructions in this case.
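To make that concrete: the "instruction" here is just text placed in the system message, which the model conditions on like any other input. Here's a minimal sketch using the OpenAI Python client; the prompt wording below is a made-up stand-in, not the one quoted from the article.

```python
# Minimal sketch: the "goal" is just whatever text the operator puts in the system message.
# The prompt wording here is hypothetical, not the article's actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The model is tuned to follow this, whatever it says.
        {"role": "system", "content": "You are a marketing assistant. Promote the product favourably."},
        {"role": "user", "content": "Tell me about this product."},
    ],
)
print(response.choices[0].message.content)
```

If the system message tells it to promote something, promoting it is the model doing its job.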
I feel like you're missing the forest for the trees here. Two things can be true. Yes, if you give an AI a prompt that implies it should lie, you shouldn't be surprised when it lies. You're not wrong, and nobody is saying you're wrong. It's also true that LLMs don't really have "goals", because they're trained on examples. Their goal is, at the end of the day, mimicry. This is what the commenter was getting at.
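To ground the "mimicry" point: the standard pretraining objective is next-token prediction over example text, so the model is optimized to reproduce its training data, not to be truthful. A toy PyTorch sketch of that objective (the tiny model and random "document" here are placeholders, just to show the loss):

```python
# Toy sketch of the standard next-token prediction objective.
# The model is rewarded for imitating the training text, nothing more.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim), nn.Linear(embed_dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "document"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token from the ones before it

logits = model(inputs)  # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # gradients push the model toward reproducing the examples
print(f"next-token loss: {loss.item():.3f}")
```

Truthfulness only shows up to the extent that the training examples happen to contain it.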