this post was submitted on 12 Jun 2024
393 points (95.4% liked)

[–] [email protected] 22 points 5 months ago (2 children)

The same thing actually passing a Turing test would require. You've obviously read the words "Turing test" somewhere and thought you understood what it meant, but no robot we've ever produced as a species has passed the Turing test. It EXPLICITLY requires that intelligence equal to (or indistinguishable from) HUMAN intelligence be shown. Without a liar relaying the responses, no AI we'll produce for decades will pass the Turing test.

No large language model has intelligence. They're just complicated call-and-response mechanisms that guess what answer we want based on a weighted response system (we tell it directly, or tell another machine how to help it, "weigh" words in a response). Obviously, with anything that requires massive amounts of input or nuance, like language, it'll only be right about what it was guided on, which limits it to the areas it was trained in.
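The "weighted response" idea can be sketched as turning raw scores into a probability distribution and drawing a word from it (a toy illustration with made-up numbers, not how any real model is actually implemented):

```python
import math
import random

def softmax(logits):
    """Turn raw scores ("weights") into a probability distribution."""
    m = max(logits.values())
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores a trained model might assign to candidate next
# words after some prompt -- the numbers here are invented.
logits = {"intelligence": 2.0, "deception": 1.5, "bananas": -3.0}
probs = softmax(logits)

# The "guess" is just a weighted draw -- no understanding involved.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
```

The point of the sketch is that the mechanism is pure arithmetic over learned weights; nothing in it models meaning.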

We don't have any novel interactions with AI. They are regurgitation engines, bringing forward sentences that aren't theirs piecemeal. Given ten messages, I'm confident no major LLM would pass a Turing test.

[–] [email protected] 3 points 5 months ago (1 children)

The Turing test is flawed, because while it is supposed to test for intelligence, it really just tests for a convincing fake. Depending on how you set it up, I wouldn't be surprised if a modern LLM could pass it, at least some of the time. That doesn't mean they are intelligent, they aren't, but I don't think the Turing test is a good justification.

For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn't plan a whole sentence out in advance, it works token by token... The input to each prediction is just everything so far, up to the last word. When it starts writing "As..." it has no concept of the fact that it's going to write "...an AI language model" until it gets through those words.
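That token-by-token loop can be illustrated with a toy "model" whose only input at each step is the prefix generated so far (purely illustrative; the lookup table below stands in for a neural network, and real LLMs predict subword tokens, not words):

```python
# Toy next-token table standing in for a trained network (made-up data).
NEXT = {
    (): "As",
    ("As",): "an",
    ("As", "an"): "AI",
    ("As", "an", "AI"): "language",
    ("As", "an", "AI", "language"): "model",
}

def generate(max_tokens=10):
    tokens = []
    for _ in range(max_tokens):
        # The *only* input is the prefix so far -- no plan for later words.
        nxt = NEXT.get(tuple(tokens))
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)
```

When `generate()` emits "As", nothing in the state encodes that "model" is coming five steps later; each word exists only once the loop reaches it.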

Frankly, given that fact it's amazing that LLMs can be as powerful as they are. They don't check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token... An incredible piece of technology, despite its obvious flaws.

[–] [email protected] 7 points 5 months ago (1 children)

> The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake.

This is just conjecture, but I assume this is because the question of consciousness is not really falsifiable, so you just kind of have to draw an arbitrary line somewhere.

Like, maybe tech gets so good that we really can't tell the difference, and only god knows it isn't really alive. But then, how would we know not to give the machine legal rights?

For the record, ChatGPT does not pass the Turing test.

[–] [email protected] 2 points 5 months ago

ChatGPT is not designed to fool us into thinking it's a human. It produces language with a specific tone and direct references to the fact that it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn't be intelligent, in my view.