this post was submitted on 13 Jun 2025
2 points (100.0% liked)

SneerClub

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

jesus this is gross man

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (16 children)

i disagree sorta tbh

i won't say that claude is conscious but i won't say that it isn't either and it's always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish's welfare report)

I WILL say that 4o most likely isn't conscious or self-reflecting and that it is best to err on the side of not schizoposting, even if it's wise imo to try not to be abusive to AIs just in case

[–] [email protected] 0 points 2 weeks ago (15 children)

centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

i won’t say that claude is conscious but i won’t say that it isn’t either and it’s always better to err on the side of caution

the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

schizoposting

fuck off with this

even if it’s wise imo to try not to be abusive to AIs just in case

describe the “just in case” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

it's basically yet another form of Pascal's wager (which is a dumb argument)

[–] [email protected] 0 points 2 weeks ago

She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”

"Crystal Nights"
