this post was submitted on 04 Apr 2025

SneerClub

1067 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago
Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'

"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.

What makes this forecast exceptionally credible:

  1. One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

  2. The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio

  3. It makes concrete, testable predictions rather than vague statements that cannot be evaluated

The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.

As the authors state: "It would be a grave mistake to dismiss this as mere hype."

For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...

....hmmmm....

O_O

The answer may surprise you!

top 24 comments
[–] [email protected] 0 points 4 hours ago

One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

Ah, this reminds me of an old book I came across years ago. Printed around 1920, it spent its first half on examples of how the future has been foretold correctly many, many times across history. The author had also made several correct foretellings himself, among them the Great War. Apparently he tried to warn the Kaiser.

The second half was his visions of the future including a great war...

Unfortunately it was France and Russia invading the Nordic countries in the 1930s. The Franco-Russian alliance almost got beat thanks to new electric weapons, but then God himself intervened and brought the defenders low because the people had been sinning and turning away from Christianity.

An early clue to the author being a bit particular was when he argued that he got his ability to predict the future because he was one quarter Sami, but could still be trusted because he was "3/4 solid Nordic stock". Best combo apparently and a totally normal way to describe yourself.

[–] [email protected] 0 points 7 hours ago (1 children)

The obvious effort is to mark each temporal milestone, then post snarkily as each is missed

[–] [email protected] 0 points 4 hours ago

We're already behind schedule, we're supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts you can see success for)!

[–] [email protected] 0 points 9 hours ago

The report received feedback from ~100 AI experts (myself included)

"It's Shake and Bake — and I helped!"

[–] [email protected] 0 points 10 hours ago* (last edited 10 hours ago) (1 children)

Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?

Committing to a hard timeline at least means that in two years it will be a lot easier to make fun of them and explain to laymen how stupid they are. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.

[–] [email protected] 0 points 9 hours ago (1 children)

Every competent apocalyptic cult leader knows that committing to hard dates is wrong because if the grift survives that long, you'll need to come up with a new story.

Luckily, these folks have spicy autocomplete to do their thinking!

I was going to make a comparison to Elron, but... oh, too late.

[–] [email protected] 0 points 8 hours ago (1 children)

I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0-2 paradigm shifts" so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).

[–] [email protected] 0 points 7 hours ago

Huh, 2 paradigm shifts is about what it takes to get my old Beetle up to freeway speed, maybe big Yud is onto something

[–] [email protected] 0 points 13 hours ago (1 children)

Oh lord one of my less online friends posted this in a group chat. Love that group, but I am NOT happy about having to read so much of Scott's writing again to explain the various ways it's loony.

[–] [email protected] 0 points 9 hours ago

"First, he started his blog with the deliberate goal of giving a veneer of respectability to racist pseudoscience. Second, everything else...."

[–] [email protected] 0 points 16 hours ago (2 children)

It is with great regret that I must inform you that all this comes with a three-hour podcast featuring Scoot in the flesh: 2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

[–] [email protected] 0 points 5 hours ago (1 children)

I'm fascinated by the way they're hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP's LinkedIn link. It's like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they're trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I've ever seen Scott on video, and he definitely gives off a weird vibe.)

[–] [email protected] 0 points 5 hours ago (2 children)

Kokotajlo is a new name to me. What's his background? Prolific LW poster?

[–] [email protected] 0 points 4 hours ago* (last edited 4 hours ago) (2 children)

He made some predictions about AI back in 2021 that, if you squint hard enough and fully believe the current hype about how useful LLMs are, you could claim are relatively accurate.

His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

My own scoring:

The first prompt programming libraries start to develop, along with the first bureaucracies.

I don't think any sane programmer or scientist would credit the current "prompt engineering" "skill set" with comparison to programming libraries, and AI agents still aren't what he was predicting for 2022.

Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

Revenue is high enough to recoup training costs within a year or so.

Hahahaha, no... they are still losing money per customer, much less recouping training costs.

Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice

The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.

They also try to contrive scenarios

Emphasis on the word "contrive".

The age of the AI assistant has finally dawned.

So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful if narrow use-case apps by 2022-2024, so we are already off target for this prediction.

I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the kool aid will buy it.

[–] [email protected] 0 points 3 hours ago

The first prompt programming libraries start to develop, along with the first bureaucracies.

I went three layers deep in his references and his references' references to find out what the hell prompt programming is supposed to be, ended up in a gwern footnote:

It's the ideologized version of You're Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they're doing well.

gwern wrote:

I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).

[–] [email protected] 0 points 4 hours ago

Bonus: a recent comment is skeptical:

well, how do I play democracy with AI? It’s already 2025

[–] [email protected] 0 points 5 hours ago

Scott talks a bit about it in the video, but Kokotajlo was recently in the news as the guy who refused to sign a non-disparagement agreement when he left OpenAI, which caused them to try to claw back his stock options.

[–] [email protected] 0 points 6 hours ago (1 children)
[–] [email protected] 0 points 4 hours ago

They look like the evil twins of the Penny Arcade writers.

[–] [email protected] 0 points 20 hours ago (1 children)

After minutes of meticulous research and quantitative analysis, I've come up with my own predictions about the future of AI.

[–] [email protected] 0 points 13 hours ago (1 children)

I'm happy to see you fully commit to acausal theory here.

[–] [email protected] 0 points 9 hours ago

(Ozymandias voice) "I fully commit to acausal theory twenty-five minutes from now."

[–] [email protected] 0 points 20 hours ago (1 children)

"USG gets captured by AGI".

Promise?

[–] [email protected] 0 points 17 hours ago

A Markov chain is smarter than the current POTUS.