this post was submitted on 25 May 2025
1 points (100.0% liked)

TechTakes

1882 readers
17 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 3) 13 comments
[–] [email protected] 0 points 5 days ago (8 children)

Curtis Yarvin:

Girls think the "eu" in "eugenics" means EW. Don't get the ick, girls! It literally means good.

So if you're not into eugenics, that means you must be into dysgenics. Dissing your own genes! OMG girl what

dr. caitlin m. green:

... how is this man still able to post from inside the locker he should be stuffed in 24/7

[–] [email protected] 0 points 5 days ago (3 children)

A new LLM Plays Pokemon run has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up/seeing things that aren't on the screen (items in Viridian Forest); confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner); repeating mistakes to itself (both the item and navigation issues I mentioned); confusing details from other generations of Pokemon (Nidoran learns Double Kick at level 12 in FireRed and LeafGreen, but not in the original Blue/Yellow); and it shows signs of being prone to completely batshit tangents (it briefly got derailed about sneaking through the trees in Viridian Forest... i.e. moving through completely impassable tiles).

I don't know how anyone can watch any of the attempts at LLMs playing Pokemon and think (viable) LLM agents are just around the corner... well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-plays-Pokemon thing is turning into less of a test of LLMs and more entertainment and advertising for the models, and the scaffolds are extensive enough, and different enough from each other, that they really aren't showing the models' raw capabilities (which are even worse than I complained about) or comparing them meaningfully.

[–] [email protected] 0 points 5 days ago (1 children)

Another critihype article from the BBC, with far too much credulity toward the idea of AI consciousness, at the cost of covering the harms of AI as things actually stand, e.g. the privacy, environmental, and dataset-bias problems:

https://www.bbc.com/news/articles/c0k3700zljjo

[–] [email protected] 0 points 5 days ago (2 children)

Tried to read it, ended up glazing over after the first or second paragraph, so I'll fire off a hot take and call it a day:

Artificial intelligence is a pseudoscience, and it should be treated as such.

[–] [email protected] 0 points 5 days ago

Every AI winter, the label AI becomes unwanted and people go with other terms (expert systems, machine learning, etc.)... and I've come around to thinking this is a good thing, as it forces people to specify what it is they actually mean, instead of using a nebulous label with many science fiction connotations that lumps together decent approaches and paradigms with complete garbage and everything in between.

[–] [email protected] 0 points 5 days ago (7 children)

I'm gonna be polite, but your position is deeply sneerworthy; I don't really respect folks who don't read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism." … "We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."

At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won't be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. "It will mean that we trust these things more, share more data with them and be more open to persuasion." But the greater risk from the illusion of consciousness is a "moral corrosion", he says. "It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives" – meaning that we might have compassion for robots, but care less for other humans.

A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it's a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I'll try to salvage your position:

Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it's definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.

[–] [email protected] 0 points 5 days ago (1 children)

No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both-sides sort of way that gives lay readers the (false) impression that the two sides are equal. This sort of journalism may appear balanced, but it has contributed to all kinds of controversies (from global warming to Intelligent Design to medical pseudoscience) in which the viewpoints of cranks, uninformed busybodies, autodidacts of questionable ability, and deliberate fraudsters get presented as equal to actually educated and researched viewpoints.

[–] [email protected] 0 points 5 days ago

Having now read the thing myself, I agree that the BBC is serving up criti-hype and false balance.

[–] [email protected] 0 points 6 days ago (3 children)

Some quality sneers in Extropic's latest presentation about their thermodynamics hardware. My favorite part was the Founder's mission slide "e/acc maximizes the watts per civilization while Extropic maximizes intelligence per watt".

[–] [email protected] 0 points 5 days ago

I'm not going to watch more than a few seconds but I enjoyed how awkward Beff Jezos is coming across.

[–] [email protected] 0 points 6 days ago (4 children)

Opening up the sack with your new favourite uwu news influencer giving a quick shout-out to our old pals, the NRx. Hoped that we wouldn’t get here, but here we are, regardless.

[–] [email protected] 0 points 5 days ago

I had so hoped that the rise of Trump (and his fall to Biden) on the back of the more numerous and popular-seeming Alt-Right had been the end of all this, showing that NRx was a weaker evolutionary dead end, so to speak. But sadly, no.
