Just like ETH before staking
I'm glad, you know. We're talking about preparing for AGI now, but if it's not imminent we also have some time to actually do it.
If you're thinking about clicking the link to find out what AGI is, don't bother 😂
Artificial General Intelligence. Basically what most people think of when they hear "AI", as opposed to how the term is often used by computer scientists.
If you're unsure, it stands for artificial general intelligence: an actual, full AI like we're used to from sci-fi.
What about LLM? Does it say what it means?
I know that's Large Language Model because the phrase has been bandied about for a while now
On top of this finding, there's perhaps an even greater reason to think LLMs will never deliver AGI: they lack independent reasoning. Some supporters of LLMs said reasoning might arrive via "emergent behavior". It hasn't.
People are looking to get to AGI in other ways. A startup called Symbolica says an approach to AI built on category theory, a branch of mathematics, might be what leads to AGI. Another is "objective-driven AI", which is built to fulfill specific goals set by humans in 3D space. By the time a child is 4 years old, they have processed roughly 50 times more training data than the largest LLM, just by existing and learning in the 3D world.
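To make that 50x figure concrete, here's a rough back-of-envelope version of the comparison. The specific numbers below are assumptions drawn from Yann LeCun's public estimate (waking hours, optic nerve bandwidth, LLM training set size), not measurements:

```python
# Back-of-envelope: sensory data seen by a 4-year-old vs. text data used to
# train a large LLM. All numbers are order-of-magnitude assumptions.

# Child: visual input through the optic nerve while awake
waking_seconds_by_age_4 = 16_000 * 3600   # ~16,000 waking hours in 4 years
optic_nerve_bytes_per_second = 2e7        # ~20 MB/s across ~2 million fibers
child_bytes = waking_seconds_by_age_4 * optic_nerve_bytes_per_second

# LLM: assume training on ~10^13 tokens at ~2 bytes of text per token
llm_bytes = 1e13 * 2

print(f"child ≈ {child_bytes:.1e} bytes")
print(f"LLM   ≈ {llm_bytes:.1e} bytes")
print(f"ratio ≈ {child_bytes / llm_bytes:.0f}x")
```

With those assumptions the ratio comes out around 50-60x, which is where the claim comes from; change any input by a factor of a few and the headline number moves accordingly.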
I don't have a lot of confidence that category theory will be useful here. Category theorists spend more time re-proving existing mathematics than doing anything novel, as evidenced by the Wikipedia article for Applied category theory being bereft of real-life examples.
Objective-driven AI (reinforcement learning) has shown much more promise, but it still needs more work on neural net architectures and data efficiency, and those gains have usually come out of traditional supervised learning research.
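For anyone who hasn't seen what "objective-driven" looks like in practice, here's a toy sketch: tabular Q-learning on a tiny 1-D corridor, where the agent is only given a reward for reaching a goal and has to work out the behavior itself. The environment, reward, and hyperparameters are all made up for illustration; real objective-driven architectures are far more elaborate than this.

```python
import random

# Toy "objective-driven" learner: tabular Q-learning on a 1-D corridor.
# The agent is never told how to behave, only rewarded for reaching the goal.
N_STATES = 6            # positions 0..5, goal at the right end
ACTIONS = [-1, +1]      # step left or step right
GOAL = N_STATES - 1

alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != GOAL:
        # epsilon-greedy action choice: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])

        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0

        # Q-learning update: nudge the estimate toward reward + discounted best future value
        best_next = max(Q[next_state])
        Q[state][a] += alpha * (reward + gamma * best_next - Q[state][a])
        state = next_state

# After training, the greedy policy should be "always step right"
policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(N_STATES - 1)]
print(policy)
```

The point of the toy is just that the objective (the reward) is the only supervision; everything else the agent figures out from interaction, which is the basic idea behind the objective-driven framing.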
LLMs can quite possibly be a useful component, though. They're like the language center of the brain.
Anyone who thought they would actually resemble intelligence was woefully uninformed about how complex intelligence is.
How complex is intelligence, though? People who were sure they wouldn't resemble it were drawing on information we don't actually have.