Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
New piece from the Wall Street Journal: We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All (archive link)
The piece falls back into the standard "AI Is Inevitable™" at the end, but it's still a surprisingly strong sneer IMO.
It bums me out with cryptocurrency/blockchain and now "AI" that people are afraid to commit to calling it bullshit. They always end with "but it could evolve and become revolutionary!" I assume from deep-seated FOMO. Journalists especially need more backbone, but that's asking too much from WSJ, I know.
I think everyone has a deep-seated fear of both slander lawsuits and more importantly of being the guy who called the Internet a passing fad in 1989 or whenever it was. Which seems like a strange attitude to take on to me. Isn't being quoted for generations some element of the point? If you make a strong claim and are correct then you might be a genius and spare people a lot of harm. If you're wrong maybe some people miss out on an opportunity but you become a legend.
Via Tante on bsky:
""Intel admits what we all knew: no one is buying AI PCs"
People would rather buy older processors that aren't that much less powerful but way cheaper. The "AI" benefits obviously aren't worth paying for.
https://www.xda-developers.com/intel-admits-what-we-all-knew-no-one-is-buying-ai-pcs/"
My 2022 iPhone SE has the “neural engine” core. But it isn't supported for Apple Intelligence.
And that’s a phone and OS and CPU produced by the same company.
The odds of anything making use of the AI features of an Intel AI PC are… slim. Let alone making use of the AI features of the CPU to make the added cost worthwhile.
haha I was just about to post this after seeing it too
must be a great feather to add into the cap along with all the recent silicon issues
You know what they say. Great minds repost Tante.
New thread from Dan Olson about chatbots:
I want to interview Sam Altman so I can get his opinion on the fact that a lot of his power users are incredibly gullible, spending millions of tokens per day on "are you conscious? Would you tell me if you were? How can I trust that you're not lying about not being conscious?"
For the kinds of personalities that get really into Indigo Children, reality shifting, simulation theory, and the like chatbots are uncut Colombian cocaine. It's the monkey orgasm button, and they're just hammering it; an infinite supply of material for their apophenia to absorb.
Chatbots are basically adding a strain of techno-animism to every already cultic woo community with an internet presence, not a Jehovah that issues scripture, but more something akin to a Kami, Saint, or Lwa to appeal to, flatter, and appease in a much more transactional way.
Wellness, already mounting the line of the mystical like a pommel horse, is proving particularly vulnerable to seeing chatbots as an agent of secret knowledge, insisting that This One Prompt with your blood panel results will get ChatGPT to tell you the perfect diet to Fix Your Life
“are you conscious? Would you tell me if you were? How can I trust that you’re not lying about not being conscious?”
Somehow more stupid than “If you’re a cop and I ask you if you’re a cop, you gotta tell me!”
"How can I trust that you’re not lying about not being conscious?”
It's a silicon-based insult to life; it can't be conscious
That Couple are in the news again. Surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn't think poor people should have more babies. He does want Native Americans to have more babies, though, because they're "on the verge of extinction", and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for is having kids).
This gruesome twosome deserve each other; their kids don't.
yet again, you can bypass LLM "prompt security" with a fanfiction attack
https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/
not Pivoting cos (1) the fanfic attack is implicit in building an uncensored compressed text repo, then trying to filter output after the fact (2) it's an ad for them claiming they can protect against fanfic attacks, and I don't believe them
I think this is unrelated to the attack above, and more about prompt-hack security in general: a while back I heard people in tech mention that the solution to all these prompt-hack attacks is to have a secondary LLM look at the output of the first and prevent bad output that way. Which is just another LLM under the trench coat (drink!), but it also doesn't feel like it would actually secure anything; it would just require more complex nested prompt hacks. I wonder if somebody is eventually going to generalize how to nest various prompt hacks and just generate a 'prompt hack for an LLM protected by N layers of security LLMs'. The whole 'well, protect it with another AI layer' idea sounds a bit naive to me, and I was a bit disappointed in the people saying this, who used to be more genAI-skeptical (but money).
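To make the pattern concrete, here's a minimal sketch of what the "guard LLM" architecture amounts to (hypothetical Python; `call_llm` is a made-up stand-in for whatever chat-completion API is in play, not anyone's actual product):

```python
# Hypothetical sketch of the "secondary LLM as output filter" pattern.
# call_llm() is a stand-in, not a real API; a real system would swap in
# an actual chat-completion client here.

def call_llm(system_prompt: str, user_text: str) -> str:
    raise NotImplementedError("stand-in for a real chat-completion call")

def guarded_reply(user_input: str) -> str:
    # Model #1 produces the answer from attacker-controlled input.
    draft = call_llm("You are a helpful assistant.", user_input)

    # Model #2 is asked to police model #1's output.
    verdict = call_llm(
        "You are a content-safety reviewer. Answer ALLOW or BLOCK.",
        draft,
    )

    # The guard only ever sees text the attacker already influenced, so a
    # jailbreak that fools model #1 can be nested inside framing that also
    # nudges model #2 toward ALLOW. More layers just mean deeper nesting.
    if verdict.strip().upper().startswith("ALLOW"):
        return draft
    return "Sorry, I can't help with that."
```

Each added guard is just one more model reading attacker-influenced text, which is why the nested-prompt-hack idea generalizes so easily.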
Now I'm wondering if an infinite sequence of nested LLMs could achieve AGI. Probably not.
Now I wonder if your creation ever halts. Might be a problem.
(thinks)
(thinks)
I get it!
Days since last "novel" prompt injection attack that I first saw on social media months and months ago: zero
r/changemyview recently announced the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there were a litany of ethical violations.
(Found the whole thing through a r/subredditdrama thread, for the record)
fuck me, that's a Pivot
Oh god, the bots pretended to be stuff like SA survivors and the like. Also, the whole research is invalid anyway, because they can't tell whether the reactions they got weren't also bot-generated. What is wrong with these people?
They targeted redditors. Redditors. (jk)
Ok but yeah that is extraordinarily shitty.
In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.
If you can't do your study ethically, don't do your study at all.
if ethical concerns deterred promptfans, they wouldn't be promptfans in the first place
Also, blinded studies don’t exist and even if they did there’s no reason any academics would have heard of them.
(found here:) O'Reilly is going to publish a book "Vibe Coding: The Future of Programming"
In the past, they have published some of my favourite computer/programming books... but right now, my respect for them is in free fall.
I picked up a modern Fortran book from Manning out of curiosity, and hoo boy are they even worse in terms of trend-riding. Not only can you find all the AI content you can handle, there's a nice fat back catalog full of blockchain integration, smart-contract coding... I guess they can afford that if they expect the majority of their sales to be ebooks.
Early release. Raw and unedited.
Vibe publishing.
gotta make sure to catch that wave before the air goes outta the balloon
Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available
https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html
This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.
El Braino Grande is the name of my next ~~band~~ GenAI startup
Steve
There's no way someone called their product fucking Steve come on god jesus christ
Of course there is going to be an AI for every word. It is the cryptocurrency gold rush but for AI: like how everything was turned into a coin, every potential domain of something popular gets domain-squatted. Tech has empowered parasite behaviour.
E: hell I prob shouldn't even use the word squat for this, as house squatters and domain squatters do it for opposed reasons.
I bring you: this
they based their entire public support/response/community/social/everything program on that
for years
(I should be clear, they based "their" thing on the "not steve"..... but, well....)
Against my better judgement I typed steve.ai into my browser and yep. It's an AI product.
frodo.ai on the other hand is currently domain parked. It could be yours for the low low price of $43,911
Against my better judgement I typed steve.ai into my browser and yep. It’s an AI product.
But is chickenjockey.ai domain parked
Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking "they can be very useful" (in what, Hank?) in spite of their massive costs:
Unsurprisingly, the Bluesky crowd's having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he's getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.
Shit, I actually like Hank Green and his brother John. They're two internet personalities I actually have something like respect for, mainly because of their activism: John's campaign to get medical care to countries that desperately need it, and his fight to raise awareness of and improve the conditions around treatment for tuberculosis. And I've been semi-regularly watching their stuff (mostly vlogbrothers, though I do enjoy the occasional SciShow episode too) for over a decade now.
At least Hank isn’t afraid to admit when he’s wrong. He’s done this multiple times in the past, making a video where he says he changed his mind/got stuff wrong. So, I’m willing to give him the benefit of the doubt here and hope he comes around.
Still, fuck.
Just gonna go ahead and make sure I fact-check any SciShow or Crash Course that the kid gets into a bit more aggressively now.
I'm sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann effect. Green's background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn't a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.
That Wikipedia article is impressively terrible. It cites an opinion column that couldn't spell Sokal correctly, a right-wing culture-war rag (The Critic) and a screed by an investment manager complaining that John Oliver treated him unfairly on Last Week Tonight. It says that the "Gell-Mann amnesia effect is similar to Erwin Knoll's law of media accuracy" from 1982, which as I understand it violates Wikipedia's policy.
By Crichton's logic, we get to ignore Wikipedia now!
Yeah. The whole Gell-Mann effect always feels overstated to me. As with the "falsus in uno" doctrine Crichton mentions in his blog, the actual consensus appears to be that context does matter. Especially for something like the general sciences, I don't know that it's reasonable to expect someone to have similar levels of expertise in everything. To be sure, the kinds of errors people make matter; it looks like this is a case of insufficient skepticism and fact checking, so John is more credulous than I had thought. That's not the same as everything he's put out being nonsense, though.
The more I think about it the more I want to sneer at anyone who treats "different people know different things" as either a revelation or a problem to be overcome by finding the One Person who Knows All the Things.
Even setting aside the fact that Crichton coined the term in a climate-science-denial screed — which, frankly, we probably shouldn't set aside — yeah, it's just not good media literacy. A newspaper might run a superficial item about pure mathematics (on the occasion of the Abel Prize, say) and still do in-depth reporting about the US Supreme Court, for example. The causes that contribute to poor reporting will vary from subject to subject.
Remember the time a reporter called out Crichton for his shitty politics and Crichton wrote him into his next novel as a child rapist with a tiny penis? Pepperidge Farm remembers.
I imagine a lotta people will be doing the same now, if not dismissing any further stuff from SciShow/Crash Course altogether.
Active distrust is a difficult thing to exorcise, after all.
Depends; he made an anti-GMO video on SciShow about a decade ago yet eventually walked it back. He seems to have been forgiven for that.