It’s not a lie if you believe it.
The word "lying" would imply intent. Is this pseudocode, `print "sky is green"`, lying, or doing what it's coded to do?
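To make the point concrete, the pseudocode above can be written as a runnable sketch (plain Python; the false statement is just the arbitrary example from the comment):

```python
# A program that outputs a false statement. It has no beliefs or
# intent; it simply executes the instruction its author wrote.
def make_claim() -> str:
    # The falsehood was chosen by the programmer, not by the program.
    return "sky is green"

print(make_claim())
```

The program prints the same falsehood every time, which is the commenter's point: any intent to deceive sits with whoever wrote (or trained) it.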
The one who is lying is the company running the AI.
It's lying whether you do it knowingly or not.
The difference is whether it's intentional lying.
Lying is saying a falsehood, that can be both accidental or intentional.
The difference is in how bad we perceive it to be, but in this case I don't really see the purpose of that distinction, because an AI lying makes it a bad AI no matter why it lies.
I just think "lying" is the wrong word to use here. "Outputting false information" would be better. It's kind of nitpicky, but not really, since the choice of words affects how people perceive things. In this case it shifts the blame from the company to their product, and it also makes the product seem more capable than it is: when you think about something lying, that implies it's intelligent enough to lie.
Outputting false information
I understand what you mean, but technically that is lying, and I sort of disagree, because I think it's easier for people to be wary of an AI "lying" than of it "outputting false information".
Well, I guess it's just a little thing and doesn't ultimately matter. But little things add up.
I think the disagreement here is semantics around the meaning of the word "lie". The word "lie" commonly has an element of intent behind it. An LLM can't be said to have intent. It isn't conscious and, therefore, cannot have intent. The developers may have intent, and may have adjusted the LLM to output false information on certain topics, but the LLM isn't making any decision and has no intent.
IMO, parroting the lies of others without critical thinking is also lying.
For instance, if you print lies in an article, the article is lying. And not only the article: if the article is in a paper, the paper is also lying.
Even if the AI is merely a medium, then the medium is lying. No matter who made the lie originally.
We can debate afterwards how serious it is and who made up the lie, but a lie remains a lie no matter what or who repeats it.
These kinds of bullshit humanizing headlines are part of the grift.
They paint this as if it were a step back, as if it doesn't already copy human behaviour perfectly and isn't in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Psych: get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).
Maybe the darknet will grow in its place.
Google and others used Reddit data to train their LLMs. That’s all you need to know about how accurate it will be.
That's not to say it's not useful, but you need to know how to use it: treat it only as a tool to help, and don't take its output as correct.