this post was submitted on 14 Sep 2024
55 points (80.2% liked)
The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of "competent graduate student" is reached, at which point I could see this tool being of significant use in research level tasks.

[–] [email protected] 48 points 4 days ago (3 children)

I genuinely hate this statement. A competent grad student can solve problems. GPT cannot solve anything, as all it does is put together the shit it stole from somewhere before

[–] [email protected] 18 points 4 days ago

Isn’t problem solving mostly putting together things you’ve learned before?

[–] [email protected] 17 points 4 days ago (1 children)

Aren't the grad students similarly trained on books that other people wrote?

[–] [email protected] 1 points 3 days ago

Didn't grad students also steal what they know from somewhere before?

[–] [email protected] 23 points 4 days ago (2 children)

o1 is (apparently) different, according to some videos I watched, as it pulls apart the question and does some reasoning steps.

[–] [email protected] 2 points 4 days ago (2 children)

does some reasoning steps.

The people who believe in "AI" say the wackiest things.

[–] [email protected] 1 points 3 days ago

It's what ChatGPT calls it.

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

LLMs are basically just good pattern matchers. But just like how A* search can find a better path than a human can by breaking the problem down into simple steps, so too can an LLM make progress on an unsolved problem if it's used properly and combined with a formal reasoning engine.
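For the A* comparison above, here's a minimal sketch of the idea that breaking a problem into simple, locally evaluated steps can find an optimal answer (the grid world and Manhattan heuristic are just illustrative choices, not anything from the thread):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* search: always expand the node with the lowest
    estimated total cost (cost so far + heuristic estimate)."""
    frontier = [(h(start), 0, start, [start])]
    best_cost = {}
    while frontier:
        est, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_cost and best_cost[node] <= cost:
            continue  # already reached this node more cheaply
        best_cost[node] = cost
        for nxt, step in neighbors(node):
            heapq.heappush(
                frontier,
                (cost + step + h(nxt), cost + step, nxt, path + [nxt]),
            )
    return None  # goal unreachable

# Toy 5x5 grid: 4-directional moves, unit cost per step.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            yield (x + dx, y + dy), 1

# Manhattan distance is an admissible heuristic on this grid,
# so A* returns a shortest path (8 moves from corner to corner).
path = a_star((0, 0), (4, 4), grid_neighbors,
              h=lambda p: abs(p[0] - 4) + abs(p[1] - 4))
print(len(path) - 1)  # prints 8
```

Each step is trivial on its own; the guarantee comes from the systematic search procedure, which is the point being made about pairing an LLM with a formal reasoning engine.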

I'm going to be real with you: almost every new mathematical idea builds on the math that came before. Nothing is truly original in the way AI detractors seem to believe.

By "does some reasoning steps," OpenAI presumably just means invoking the LLM iteratively so that it can review its own output before giving a final answer. It's not a new idea.
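That iterated draft-critique-revise loop might look something like the following sketch. This is purely a guess at the pattern, not OpenAI's actual method, and `call_model` is a hypothetical stand-in for any LLM API call:

```python
def call_model(prompt):
    # Hypothetical stand-in: a real implementation would send the
    # prompt to an LLM API and return its text completion.
    return f"response to: {prompt[:40]}"

def answer_with_review(question, rounds=2):
    """Draft an answer, then repeatedly critique and revise it
    before returning the final version."""
    draft = call_model(f"Answer this question: {question}")
    for _ in range(rounds):
        critique = call_model(f"Critique this answer: {draft}")
        draft = call_model(f"Revise the answer using this critique: {critique}")
    return draft
```

The model never gains new capabilities in this loop; it just gets extra passes over its own output, which is why some people find the "reasoning" label generous.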

[–] [email protected] 15 points 4 days ago (1 children)

I'd love to see one of those videos

[–] [email protected] 2 points 4 days ago (1 children)

like, a video of Tao giving a demonstration?

[–] [email protected] 2 points 3 days ago

@NegentropicBoy

o1 is (apparently) different according to some videos I watched, as it pulls apart the question ...

Yes