BlueMonday1984

joined 1 year ago
[–] [email protected] 0 points 1 week ago (1 children)

Discovered an animation sneering at the tech oligarchs on Newgrounds - I recommend checking it out. Its sister animation is a solid sneer, too, even if it is pretty soul crushing.

[–] [email protected] 0 points 1 week ago

He's an AI bro; having even a basic understanding of art is beyond him.

[–] [email protected] 0 points 1 week ago

Looking at the history of AI: if it fails, there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted.

Adding insult to injury, they'd likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse^tm^

As for the upcoming AI winter, I'm predicting we're gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public's gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.

Taking a shot in the dark, I suspect we'll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random BlueSky post openly calling for LLMs to be banned.

[–] [email protected] 0 points 1 week ago

Quick update on the CoreWeave affair: turns out they're facing technical defaults on their Blackstone loans, which is gonna hurt their IPO a fair bit.

[–] [email protected] 0 points 1 week ago

I'd bet good money they vibe-coded the whole thing. It's AI; the whole point is to enable laziness, grifts, and laziness in grifts.

[–] [email protected] 0 points 1 week ago (2 children)

Here's the link, so you can read Stack's teardown without giving the orange site traffic:

https://ewanmorrison.substack.com/p/the-tranhumanist-cult-test

[–] [email protected] 0 points 1 week ago (3 children)

Stumbled across some AI criti-hype in the wild on BlueSky:

The piece itself is a textbook case of AI anthropomorphisation, presenting the model as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.

On an unrelated note, I also found someone in the replies openly calling gen-AI a tool of fascism - another sign of AI's impending death as a concept (a sign I've touched on before without realising it), if you want my take:

[–] [email protected] 0 points 1 week ago

AI bros are exceedingly lazy fucks by nature, so this kind of shit should be pretty rare. Combine that with their near-complete lack of taste, and the risk that such an attempt succeeds drops pretty low.

(Sidenote: Didn't know about Samizdat until now, thanks for the new rabbit hole to go down)

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (3 children)

Is there already a nice term for “this was published before the slop flood gates opened”? There should be.

"Pre-slopnami" works well enough, I feel.

EDIT: On an unrelated note, I suspect hand-writing your original manuscript (or using a typewriter) will also help increase the value, simply through strongly suggesting ChatGPT was not involved with making it.

 

None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I've said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can't blame Zitron for being pretty downbeat in this one - given the AI bubble's size and side-effects, it's easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

 

Damn nice sneer from Charlie Warzel in this one, taking a direct shot at Silicon Valley and its AGI rhetoric.

Archive link, to get past the paywall.

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear "AI" mentioned on Icelandic mainstream media or from people I know outside of tech, it's being used to describe something as a specific kind of bad. "It's very AI-like" ("mjög gervigreindarlegt" in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming

1
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.
