this post was submitted on 07 Apr 2025
1 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


"Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as "trivial", even when their validity was crucial."

you are viewing a single comment's thread
[–] [email protected] 0 points 2 weeks ago (34 children)

I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.

It’s possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of realizing its own thought path, so it just explains things in the way it’s been told to, missing some steps because it never actually did those steps.

[–] [email protected] 0 points 2 weeks ago (32 children)
[–] [email protected] 0 points 2 weeks ago (30 children)

LLMs are a lot more sophisticated than we initially thought; read the study yourself.

Essentially, they do not simply predict the next token: when researchers trace the activated neurons, they find that these models plan ahead during inference and then lie about those plans when asked how they came to a conclusion.

[–] [email protected] 0 points 2 weeks ago (3 children)

You didn't link to the study; you linked to the press release for the study. This is the study.

Note that the paper hasn't been published anywhere other than on Anthropic's own online journal. Also, what the paper is doing is essentially tea-leaf reading: they take a look at the swill of tokens, point at some clusters, and say, "there's a dog!" or "that's a bird!" or "bitcoin is going up this year!". It's all rubbish, dawg.

[–] [email protected] 0 points 2 weeks ago (1 children)

To be fair, the typesetting of the papers is quite pleasant and the pictures are nice.

[–] [email protected] 0 points 2 weeks ago

they gotta make up for all those scary cave-wall pictures somehow

[–] [email protected] 0 points 2 weeks ago (5 children)

Fair enough, you’re the only person with a reasonable argument; nobody else seems able to do anything other than name-calling.

Linking to the actual papers and pointing out that they haven’t been published in a third-party journal is far more productive than whatever anti-scientific bullshit the other commenters are doing.

We should be people of science, not reactionaries.

[–] [email protected] 0 points 2 weeks ago

you got banned before I got to you, but holy fuck are you intolerable

We should be people of science, not reactionaries.

which we should do by parroting press releases and cherry-picking which papers count as science, of course

but heaven forbid anyone is rude when they rightly tell you to go fuck yourself

[–] [email protected] 0 points 2 weeks ago (1 children)

reactionaries

So, how does any of this relate to wanting to go back to an imagined status quo ante? (Yes, I refuse to use "reactionary" in any way other than to describe political movements. Conservatives do not can fruits.)

[–] [email protected] 0 points 2 weeks ago (1 children)

nah I think it just sits weirdly with people (I can see what you mean but also why it would strike someone as frustrating)

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago)

Yeah, I know, it is a personal thing of mine. I have more of those; I think it isn't helpful to use overly general terms in specific cases, because then you cast too wide a net. I am fun at parties. (It is also me poking fun at how the Soviets called everybody who disagreed with them a reactionary.)

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago)

This isn't debate club or men-of-science hour; this is a forum for making fun of idiocy around technology. If you don't like that, you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked: we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising "deceptive" LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart its methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[–] [email protected] 0 points 2 weeks ago
[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago)

your argument would be immensely helped if you posted science instead of corporate marketing brochures

[–] [email protected] 0 points 2 weeks ago

It's an anti-fun version of listening to Dark Side of the Moon while watching The Wizard of Oz.
