this post was submitted on 08 Dec 2024
460 points (94.9% liked)

The GPT Era Is Already Ending (www.theatlantic.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

top 50 comments
[–] [email protected] 7 points 2 weeks ago

The monkeys typing and generating Shakespeare is supposed to illustrate the absurdity of infinity. It does not mean it would happen in years, or millions of years, or billions, or trillions. So unless the "AI" can step outside the flow of time, take an infinite amount of time, and then have a human or other actual intelligence review every single result to verify when it comes up with the right one... yeah, not real. This is what happens when we give power to people with no understanding of the problem, much less how to solve it: they come up with random ideas from random slivers of information. Maybe in an infinite amount of time a million CEOs could make a long-term profitable company.
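The commenter's point about timescales can be made concrete with a back-of-envelope calculation. Assuming a typist sampling uniformly from a 27-character alphabet (a–z plus space), the expected number of attempts to produce a target string is the alphabet size raised to the string's length:

```python
# Back-of-envelope: expected attempts to randomly type a target string,
# sampling uniformly from an alphabet of the given size (a-z + space).
def expected_attempts(target: str, alphabet_size: int = 27) -> int:
    return alphabet_size ** len(target)

# Even an 18-character phrase is astronomically out of reach:
attempts = expected_attempts("to be or not to be")
print(f"{attempts:.2e}")  # ~5.81e25 attempts
```

At a million attempts per second that is on the order of a trillion years for one short phrase, which is the gap between "given infinite time" and any timescale that matters.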

[–] [email protected] 5 points 2 weeks ago

I had a bunch of roofers hammering nails in with hammers.

I bought a bunch of nail guns and then fired all the roofers. Now less roofing is being done! It is the end of the era of nail guns! Everyone should just go back to hammers.

[–] [email protected] 27 points 2 weeks ago (3 children)

We're hitting the end of free/cheap innovation. We can't just make a one-time adjustment to training and make a permanent and substantially better product.

What's coming now are conventionally developed applications using LLM tech. o1 is trying to fact-check itself and use better sources.

I'm pretty happy it's slowing down right at this point.

I'd like to see non-profit open systems for education. Let's feed these things textbooks and lectures. Model the teaching after some of our best minds. Give individuals 1:1 time with a system 24x7 that they can ask whatever they want, as often as they want, and have it keep track of what they know and teach them what they need to advance.

[–] [email protected] 1 points 1 week ago (1 children)

I mean, isn't that already included in the datasets? They're pretty much a mix of everything.

[–] [email protected] 1 points 1 week ago

Not everything in the dataset is retrievable. It's very lossy. It's also extremely noisy with a lot of training data that's not education-worthy.

I suspect they'd make a purpose-built model trained mainly on what they actually would want to teach especially from good educators.

[–] [email protected] 2 points 2 weeks ago

That's the job I need. I've spent my whole life trying to be Data from Star Trek. I'm ready to try to mentor and befriend a computer.

[–] [email protected] 4 points 2 weeks ago

Amazing idea, holy moly.

[–] [email protected] 8 points 2 weeks ago (2 children)

People are writing off AI because it isn't fully replacing humans. That sounds like writing off calculators because they can't work without human input.

Used correctly and in the right context, it can still significantly increase productivity.

[–] [email protected] 3 points 2 weeks ago (1 children)

No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they're doing far more harm than good.

[–] [email protected] 0 points 2 weeks ago (1 children)

Imagine if the engineers for computers were just as short sighted. If they had stopped prioritizing development when computers were massive, room sized machines with limited computing power and obscenely inefficient.

Not all AI development is focused on increasing complexity. Much is focused on refinement, and increasing efficiency. And there’s been a ton of progress in this area.

[–] [email protected] 0 points 2 weeks ago (1 children)

This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI's CEO literally proposing data centers which require multiple, dedicated grid-scale nuclear reactors.

I'm not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.

[–] [email protected] 0 points 1 week ago

Sounds like you’re only reading a certain narrative then. There’s plenty of articles about increasing efficiency, too.

[–] [email protected] 11 points 2 weeks ago (1 children)

Except it has gotten progressively worse as a product due to misuse, corporate censorship of the engine, and the dataset feeding on itself.

[–] [email protected] -3 points 2 weeks ago (1 children)

Yeah, the leash they put it on to keep it friendly towards capitalists is the biggest thing holding it back right now.

[–] [email protected] 4 points 2 weeks ago

'Jesse, what the fuck are you talking about'.jpg

[–] [email protected] 22 points 2 weeks ago

"In OpenAI’s early tests, scaling o1 showed diminishing returns: Linear improvements on a challenging math exam required exponentially growing computing power."

Sounds like most other drugs, too.
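The quoted scaling pattern can be sketched numerically. If every additional point of exam score requires a constant multiple more compute, the score improves linearly while cost grows exponentially. The base cost and doubling factor below are assumptions for illustration, not figures from OpenAI's tests:

```python
# Toy model of diminishing returns: each extra point of score costs a
# constant multiple (here, double) of the previous compute. Assumed
# illustrative numbers, not OpenAI's actual measurements.
def compute_for_gain(points_gained: int, base_cost: float = 1.0,
                     factor: float = 2.0) -> float:
    return base_cost * factor ** points_gained

for pts in range(6):
    print(f"+{pts} points -> {compute_for_gain(pts):g}x compute")
# +5 points -> 32x compute
```

Under these assumptions, ten more points would cost 1,024x the compute; the linear axis hides an exponential bill.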
