this post was submitted on 07 Apr 2025
1 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

"Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as "trivial", even when their validity was crucial."

top 44 comments
[–] [email protected] 0 points 1 week ago

I heard new Gemini got the first question, so that's SOTA now*

*allegedly it came out the same day as the math olympiad so it twas fair, but who the fuck knows

[–] [email protected] 0 points 1 week ago (2 children)

I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.

It’s possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of realizing its own thought path, so it just explains things in the way it’s been told to, missing some steps because it never actually did those steps.

[–] [email protected] 0 points 1 week ago

@pennomi @slop_as_a_service "It’s possible that the AI has figured out how" can I just stop you there

[–] [email protected] 0 points 1 week ago (2 children)
[–] [email protected] 0 points 1 week ago

"Thought process"

"Intuitively"

"Figured out"

"Thought path"

I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.

[–] [email protected] 0 points 1 week ago (6 children)

LLMs are a lot more sophisticated than we initially thought, read the study yourself.

Essentially, they do not simply predict the next token: when scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked how they came to a conclusion.

[–] [email protected] 0 points 1 week ago

this is credulous bro did you even look at the papers

[–] [email protected] 0 points 1 week ago (3 children)

You didn't link to the study; you linked to the PR release for the study. This is the study.

Note that the paper hasn't been published anywhere other than on Anthropic's online journal. Also, what the paper is doing is essentially tea-leaf reading. They take a look at the swill of tokens, point at some clusters, and say, "there's a dog!" or "that's a bird!" or "bitcoin is going up this year!". It's all rubbish dawg

[–] [email protected] 0 points 1 week ago (1 children)

To be fair, the typesetting of the papers is quite pleasant and the pictures are nice.

[–] [email protected] 0 points 1 week ago

they gotta make up for all those scary cave-wall pictures somehow

[–] [email protected] 0 points 1 week ago (5 children)

Fair enough, you’re the only person with a reasonable argument, as nobody else can seem to do anything other than name calling.

Linking to the actual papers and pointing out they haven’t been published to a third party journal is far more productive than whatever anti-scientific bullshit the other commenters are doing.

We should be people of science, not reactionaries.

[–] [email protected] 0 points 1 week ago

you got banned before I got to you, but holy fuck are you intolerable

We should be people of science, not reactionaries.

which we should do by parroting press releases and cherry picking which papers count as science, of course

but heaven forbid anyone is rude when they rightly tell you to go fuck yourself

[–] [email protected] 0 points 1 week ago (1 children)

reactionaries

So, how does any of this relate to wanting to go back to an imagined status quo ante? (yes, I refuse to use reactionary in any other way than to describe political movements. Conservatives do not can fruits).

[–] [email protected] 0 points 1 week ago (1 children)

nah I think it just sits weirdly with people (I can see what you mean but also why it would strike someone as frustrating)

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

Yeah, I know, it is a personal thing of mine. I have more of those; I think it isn't helpful to use overly general terms in specific cases, since then you cast too wide a net. I'm fun at parties. (It is also me poking fun at how the Soviets called everybody who disagreed with them a reactionary)

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

This isn't debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don't like that you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs for example) many many times already, so most of us weren't going to waste time to track down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[–] [email protected] 0 points 1 week ago

lmao fuck off

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

your argument would be immensely helped if you posted science instead of corporate marketing brochures

[–] [email protected] 0 points 1 week ago

It's an anti-fun version of listening to Dark Side of the Moon while watching The Wizard of Oz.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

read the study yourself

  • > ask the commenter if it's a study or a self-interested blog post
  • > they don't understand
  • > pull out illustrated diagram explaining that something hosted exclusively on the website of the for-profit business all authors are affiliated with is not the same as a peer-reviewed study published in a real venue
  • > they laugh and say "it's a good study sir"
  • > click the link
  • > it's a blog post
[–] [email protected] 0 points 1 week ago

I wonder if they already made up terms like 'bloggophobic' or 'peer review elitist' in that 'rightwinger tries to use leftwing language' way.

[–] [email protected] 0 points 1 week ago (1 children)

This study is bullshit, because they only trace evaluations and don't trace the training process that aligns tokens with probabilities.

[–] [email protected] 0 points 1 week ago (1 children)

remember, if we look too closely at the magic box, ~~we might notice how we've been fooled~~ the box will stop magicing for us!

[–] [email protected] 0 points 1 week ago (1 children)

Well, every civilisation needs its prophets. Our civilisation built prophet machines that will kill us. We just haven't got to the killing step yet.

[–] [email protected] 0 points 1 week ago (1 children)

yeah but see, these grifters all heard it as "every civilisation needs its profits". just a shame they suck at that too

[–] [email protected] 0 points 1 week ago

No prophet worked for free, and they were always near the rulers and near big money. The story repeats itself; just the times are different and we can instant message each other.

[–] [email protected] 0 points 1 week ago (2 children)

Essentially they do not simply predict the next token

looks inside

it's predicting the next token

[–] [email protected] 0 points 1 week ago (1 children)

every time I read these posters it's in the voice of those Everyman characters in Discworld who say some utter lunatic shit and follow it up with "it's just [logical/natural/obvious/...]"

[–] [email protected] 0 points 1 week ago

Stands to reason

[–] [email protected] 0 points 1 week ago (2 children)

Read the paper, it’s not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans ahead on what the rhyme is, and then fills in the rest of the sentence.

The researchers were surprised by this too, they expected it to be the other way around.

[–] [email protected] 0 points 1 week ago

Oh, sorry, I got so absorbed into reading the riveting material about features predicting state name tokens to predict state capital tokens I missed that we were quibbling over the word "next". Alright they can predict tokens out of order, too. Very impressive I guess.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

first plans ahead

predict

to declare or tell in advance; prophesy; foretell;

ahead

Strongest matches: advanced; along; before; earlier; forward

stop prompting LLMs and go read some books, it'll do you a world of good

[–] [email protected] 0 points 1 week ago (1 children)

nothx, I can find better fiction on ao3

[–] [email protected] 0 points 1 week ago (2 children)

Aw, you can’t handle a little science so you decide to throw insults instead.

[–] [email protected] 0 points 1 week ago

the user who cannot read has been guided to go not read elsewhere

[–] [email protected] 0 points 1 week ago

pray forgive, fair poster, for the shame I have cast upon myself in the action of doubting the Most Serious Article so affine to yourself - clearly a person of taste and wit, and I deserve the ire and muck resultant

wait... wait, no, sorry! got those the wrong way around. happens all the time - guess I tried too hard to think like you.

[–] [email protected] 0 points 1 week ago (3 children)

“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”

LLMs achieve intelligence level of average rationalist

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

it's a very human and annoying way of bullshitting. I took every opportunity to crush this habit out of undergrads. "If you say trivial, obvious, or clearly, that usually means you're making a mistake and you're avoiding thinking about it"

[–] [email protected] 0 points 1 week ago

feels like the same manner as my "'just' is a weaselword" speech

[–] [email protected] 0 points 1 week ago (2 children)

This is actually an accurate representation of most "gifted olympiad laureate attempting to solve a freshman CS problem on the blackboard" students I went to uni with.

Jumps to the front after 5 seconds from the task being assigned, bluffs that the problem is trivial, tries to salvage their reasoning for 5 minutes when questioned by the tutor, turns out the theorem they said was trivial is actually false, sits down having wasted 10 minutes of everyone's time.

[–] [email protected] 0 points 1 week ago

I just remember a professor saying that after he'd filled the board with proofs and math: "the rest is trivial". Not sure if it was a joke, as I found none of it trivial (and neither did the rest of the people doing the course).

[–] [email protected] 0 points 1 week ago (1 children)

This needed a TW jfc (jk, uh, sorta)

[–] [email protected] 0 points 1 week ago

TW: contains real chuds

[–] [email protected] 0 points 1 week ago

"Trivially" fits nicely in a margin, too. Suck on that, Andrew and Pierre!