this post was submitted on 03 Apr 2024
99 points (87.8% liked)

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (2 children)

The most infuriating thing for me is the constant barrage of "LLMs aren't AI" from people.

These people have no understanding of what they're talking about.

Edit: to everyone downvoting me, look at this image

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (1 children)

If you want a refreshing perspective that's the opposite of that comment, you might enjoy this piece:

https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level

[–] [email protected] 5 points 7 months ago (1 children)

Thanks for that read. I agree with the author for the most part. I don't really think current LLMs are a form of AGI, but they're definitely close.

But what isn't up for debate is that LLMs are 100% AI. I think the reason people argue otherwise is that they conflate "intelligence" with concepts like sapience, sentience, and consciousness.

These people don't understand that intelligence is a concept that can, and does, exist outside of consciousness.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

The problem with 'AGI' is that it's a nonsense term with no agreed-upon meaning. I remember describing one of Sam Altman's definitions in a Hacker News discussion and being told, "no one defines it that way." It's a term that means whatever the eye of the beholder finds it convenient to mean.

The article's point was more that when the term was originally coined it was to distinguish from narrow AI, and according to that original definition and distinction we're already there (which I definitely agree with).

It's not saying we're already at AGI as the term is loosely used today. The comments offer a handful of better options for that concept than "AGI," but in spite of them I'm sure we'll keep using "AGI" to the point of meaninglessness: a goalpost we'll never define as met, until one day in the far future we claim it was always agreed to have been met years ago and that no one ever doubted it.

And yes, I agree that 'sentience' is a red herring discussion point when it comes to LLMs. A cockroach is sentient by the dictionary definition. But a cockroach can't make similes to Escher drawings in a discussion, which is perhaps the more impressive quality.