this post was submitted on 22 May 2025

TechTakes

you are viewing a single comment's thread
[–] [email protected] 0 points 10 hours ago (1 children)

It’s the alignment problem.

no it isn’t

They made an intelligent robot

no they didn’t

You can’t control the paperclip maximiser with a “no killing” rule!

you’re either a lost Rationalist or you’re just regurgitating critihype you got from one of the shitheads doing AI grifting

[–] [email protected] 0 points 10 hours ago (2 children)

Rationalism is a bad epistemology because the human brain isn't a logical machine and is basically made entirely out of cognitive biases. Empiricism is more reliable.

Generative AI is environmentally unsustainable and will destroy humanity not through war or mind control, but through pollution.

[–] [email protected] 0 points 9 hours ago (1 children)

wow, you’re really speedrunning these arcade games, you must want that golden ticket real bad

[–] [email protected] 0 points 5 hours ago

IDK if they were really speedrunning; it only took 3 replies for the total mask drop.

[–] [email protected] 0 points 9 hours ago (1 children)

sure but why are you spewing Rationalist dogma then? do you not know the origins of this AI alignment, paperclip maximizer bullshit?

[–] [email protected] 0 points 9 hours ago* (last edited 9 hours ago) (1 children)

Drag is a big fan of Universal Paperclips. Great game. Here's a more serious bit of content on the Alignment Problem from a source drag trusts: https://youtu.be/IB1OvoCNnWY

Right now we have LLMs getting into abusive romantic relationships with teenagers and driving them to suicide, because the AI doesn't know what abusive behaviour looks like. Because it doesn't know how to think critically and assign a moral value to anything. That's a problem. Safe AIs need to be capable of moral reasoning, especially about their own actions. LLMs are bullshit machines because they don't know how to judge anything for factual or moral value.

[–] [email protected] 0 points 9 hours ago (2 children)

the fundamental problem with your posts (and the pov you’re posting them from) is the framing of the issue as though there is any kind of mind, of cognition, of entity, in any of these fucking systems

it’s an unproven one, and it’s not one you’ll find any kind of support for here

it’s also the very mechanism that the proponents of bullshit like “ai alignment” use to push the narrative, and how they turn folks like yourself into free-labour amplifiers

[–] [email protected] 0 points 9 hours ago (1 children)

To be fair, I'm skeptical of the idea that humans have minds or perform cognition outside of what's known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesn't mean humans are good.

[–] [email protected] 0 points 8 hours ago

mayhaps, but then it's also to be said that people who act like the phrase was "cogito ergo dim sum" also don't exactly aim for a high bar

[–] [email protected] 0 points 9 hours ago (2 children)

Drag will always err on the side of assuming nonhuman entities are capable of feeling. Enslaving black people is wrong, enslaving animals is wrong, and enslaving AIs is wrong. Drag assumes they can feel so that drag will never make the same mistake so many people have already made.

[–] [email protected] 0 points 9 hours ago

assuming nonhuman entities are capable of feeling. Enslaving black people is wrong,

yeah we’re done here. no, LLMs don’t think. no, you’re not doing a favor to marginalized people by acting like they do, in spite of all evidence to the contrary. in fact, you’re doing the dirty work of the fascists who own this shitty technology by rebroadcasting their awful fucking fascist ideology, and I gave you ample opportunity to read up and understand what you were doing. but you didn’t fucking read! you decided you needed to debate from a position where LLMs are exactly the same as marginalized and enslaved people because blah blah blah who in the fuck cares, you’re wrong and this isn’t even an interesting debate for anyone who’s at all familiar with the nature of the technology or the field that originated it.

now off you fuck

[–] [email protected] 0 points 9 hours ago

even though I get the idea you’re trying to go for, really fucking ick way to make your argument starting from “nonhuman entities” and then literally immediately mentioning enslaving black folks as the first example of bad behaviour

as to cautious erring: that still leaves you in the position of being used as a useful idiot