this post was submitted on 14 Apr 2024
1 points (100.0% liked)

Futurology

1759 readers
5 users here now

founded 1 year ago
MODERATORS
(page 2) 13 comments
[–] [email protected] 0 points 6 months ago

Just like ETH before staking

[–] [email protected] 0 points 6 months ago

I'm glad, you know. We're talking about preparing for AGI now, but if it's not imminent we also have some time to actually do it.

[–] [email protected] 0 points 6 months ago (4 children)

You can't get GI through spicy autocorrect? 😱

[–] [email protected] 0 points 6 months ago (4 children)

If you're thinking about clicking the link to find out what AGI is, don't bother 😂

[–] [email protected] 0 points 6 months ago

Artificial General Intelligence. Basically what most people think of when they hear "AI", as opposed to how it's often used by computer scientists.

[–] [email protected] 0 points 6 months ago (1 children)

If you’re unsure, it stands for artificial general intelligence: an actual full AI like we’re used to from sci-fi.

[–] [email protected] 0 points 6 months ago (1 children)

What about LLM? Does it say what it means?

[–] [email protected] 0 points 6 months ago (1 children)

I know that's Large Language Model, because the phrase has been bandied about for a while now.

[–] [email protected] 0 points 6 months ago (7 children)

Added to this finding, there's perhaps a greater reason to think LLMs will never deliver AGI: they lack independent reasoning. Some LLM supporters said reasoning might arrive via "emergent behavior". It hasn't.

People are looking to get to AGI in other ways. A startup called Symbolica says a whole new approach to AI based on category theory might be what leads to AGI. Another is “objective-driven AI”, which is built to fulfill specific goals set by humans in 3D space. By the time it is four years old, a child has processed 50 times more training data than the largest LLM, simply by existing and learning in the 3D world.
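That "50 times" figure appears to come from Yann LeCun's back-of-envelope estimate, which is easy to reproduce. Every number below (waking hours, optic-nerve bandwidth, token counts) is a rough assumption for the sketch, not a measurement:

```python
# Back-of-envelope check of the "50x more data" claim.
# All figures are rough assumptions in the spirit of LeCun's public estimate.
SECONDS_AWAKE = 16_000 * 3600        # ~16,000 waking hours by age four
OPTIC_NERVE_BYTES_PER_S = 2e7        # ~20 MB/s of visual data through the optic nerve
child_bytes = SECONDS_AWAKE * OPTIC_NERVE_BYTES_PER_S

LLM_TOKENS = 1e13                    # ~10^13 training tokens for a large LLM
BYTES_PER_TOKEN = 2                  # ~2 bytes of text per token
llm_bytes = LLM_TOKENS * BYTES_PER_TOKEN

ratio = child_bytes / llm_bytes
print(round(ratio))                  # on the order of 50x
```

Tweak any of the assumed constants and the ratio moves, but it stays in the tens, which is the point of the comparison.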

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago)

I don't have a lot of confidence that category theory will be useful here. Category theorists spend more time re-proving existing mathematics than doing anything novel, as evidenced by the Wikipedia article on applied category theory being bereft of real-life examples.

Objective-driven AI (reinforcement learning) has shown much more promise, but it still needs more work on neural-net architectures and data efficiency, gains that usually come from traditional supervised learning.
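To make "objective-driven" concrete: in reinforcement learning, the agent learns behavior only from a reward signal tied to a goal, rather than from labeled text. A toy illustration (tabular Q-learning on a made-up 5-state chain; every constant here is invented for the sketch, nothing like a real system):

```python
import random

# Toy "objective-driven" agent: tabular Q-learning on a 1-D chain.
# The agent starts at state 0 and is rewarded only for reaching the goal.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # standard Q-learning update toward reward + discounted future value
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy should step right (toward the goal) everywhere.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent never sees a single labeled example; the behavior falls out of chasing the objective, which is the contrast being drawn with LLM-style training.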

[–] [email protected] 0 points 6 months ago (1 children)

They can quite possibly be a useful component. They're the language center of the brain.

People who ever thought they would actually resemble intelligence were woefully uninformed of how complex intelligence is.

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago) (10 children)

How complex is intelligence, though? People who were sure it isn't complex were drawing on information we don't actually have.
