this post was submitted on 06 Apr 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this..)

50 comments
[–] [email protected] 0 points 1 week ago (2 children)

pedal to the metal on the content and information theft, folks:

a photo of a huge banner advert on a building titled Bayfront Park. the ad reads "STOP HIRING HUMANS", with a tagline of "The Era Of AI Employees Is Here". the advert is from a company named artisan

seems it's this lot. despite their name, there appears to be almost nothing artful or artistic about them - it's all b2b shit for Selling Better

[–] [email protected] 0 points 1 week ago (1 children)

Incorporating into your workflow a company that is a shell around other companies that are selling their products at a loss with no path to profitability seems like quite an unacceptable business risk to me. But I don't get paid the big bux

[–] [email protected] 0 points 1 week ago (2 children)

as long as you can mark it up and as long as the charade lasts, and as long as there's someone willing to pay, this will make money. when spicy autocomplete provider collapses just pack your bags and leave

[–] [email protected] 0 points 1 week ago

@fullsquare @Soyweiser "I've sold monorails to Brockway, Ogdenville and North Haverbrook, and by gum, it put them on the map!"

[–] [email protected] 0 points 1 week ago (3 children)

I think once you have integrated all this into your workflows, ripping it out and going back might be hard, esp on the enterprise level.

[–] [email protected] 0 points 1 week ago (1 children)

wait, what do you mean "integrating it into workflows"? this juicero of outsourcing won't work as advertised, and it's probably cheaper and less prone to fucking up to hire a couple of Southeast Asians or Eastern Europeans. as long as the business is selling these juiceros, they'll be fine as long as they can find suckers. these suckers, tho, might be in trouble even before openai goes under for unrelated reasons

[–] [email protected] 0 points 1 week ago

oh but that's not my problem, and those who got in that very stupid position deserve every last bit of it

[–] [email protected] 0 points 1 week ago (1 children)

For the enterprise using it, yes. For the enterprise selling it, probably not so much.

[–] [email protected] 0 points 1 week ago (3 children)

somebody had to do the design + layout for that banner. i wonder what was going through their head then.

[–] [email protected] 0 points 1 week ago

"I should start the Butlerian Jihad"

[–] [email protected] 0 points 1 week ago

"God I wish I was an AI so i didn't have to do this"

[–] [email protected] 0 points 1 week ago (7 children)

apparently a complete archive of scott siskind's old livejournal. found on the EA forum no less. https://archive.fo/fCFQx

[–] [email protected] 0 points 1 week ago

That is odd in a way. You would expect them to honour his wish that the data no longer be available, but nope.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

couldn't help myself, there are seldom more perfect opportunities to use this one

[–] [email protected] 0 points 1 week ago (1 children)

:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.

Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.

[–] [email protected] 0 points 1 week ago (2 children)

that's CFAR cult jargon right?

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (3 children)

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that don't actually have academic publishing behind them, so jargon.

Instrumental convergence.... I think Bostrom coined that one?

The AI alignment forum has a claimed origin here. Is anyone mentioned in the article from CFAR?

[–] [email protected] 0 points 1 week ago

I'm thinking they hired Jar-Jar Binks to the team.

[–] [email protected] 0 points 1 week ago

Mesa-optimization

Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask google translate to give you the clunkiest, most pedestrian and also wrong part of speech Greek term to use in place of 'in' instead?

Also natural selection is totally like gradient descent brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before the term just came to mean lying chatbot.

[–] [email protected] 0 points 1 week ago (1 children)

Mesa-optimization... that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects abrasions deviations contusions...

[–] [email protected] 0 points 1 week ago

and you wrap it off with some linux 3d graphics lib hacking

[–] [email protected] 0 points 1 week ago (1 children)
[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: still host, present tense. I had misremembered some news of some other rationality adjacent institution as them shutting down, nope, they are still going strong, offering regular 4 day ~~brainwashing sessions~~ workshops.

[–] [email protected] 0 points 1 week ago (2 children)

Shopify going all in on AI, apparently, and the CEO is having a proper born-again moment. Don’t have a source more concrete than this yet:

https://cyberplace.social/@GossiTheDog/114298302252798365

(and transcript: https://infosec.exchange/@barubary/114298367285112648)

It’s a lot like this:

Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance. Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding.

[–] [email protected] 0 points 1 week ago

Extreme sent at 4am energy.

[–] [email protected] 0 points 1 week ago (1 children)

That text is painful to read (I wonder how much of it is slop)... ugh, what is chatgpt doing to the brains of people? (And I've had the bad luck of reading some pretty unhinged pro-AI stuff from management at my employer too, although not as bad as this mail from shopify).

Is there a precedent for this hype? For the extent of damage that it will cause? Most tech industry hype is a waste of resources, but otherwise mostly harmless. Like that time when everyone believed that XML was the holy grail; that was silly, and although we still have to deal with some unfortunate data formats from those days, it passed. There were worse ones, most notably blockchain was almost catastrophic, but most companies hesitated to go all-in and pursued it more on the side, so when that hype faded, they simply buried their involvement and that was that.

But "AI"... it has such potential to create significant and long term damage to the companies adopting it. The slop code alone might haunt them forever, in ways that even the worst excesses of 90s enterprise java couldn't. There's nothing to learn from resulting failure, except "don't use AI".

In this case, given shopify's general behaviour, I won't be sad at all though if they crash and fail.

[–] [email protected] 0 points 1 week ago (1 children)

I also thought 'guess LLMs don't work as an editor'.

And blockchains did massive damage: all the ransomware crime would be impossible if the tech world had not jumped into blockchain as much as it did, creating and maintaining that ecosystem. (It also fueled the rise of the techbro crowd now pivoting to AI, so it's connected.) Note that the damage done by BEC is still greater than ransomware, so not cybersecurity advice.

But I get your point, I think a real example would be facebooks pivot to video. Which destroyed companies.

[–] [email protected] 0 points 1 week ago (2 children)

Yes, that's true. Indirectly it costs them all dearly with ransomware. Likewise, I think the overall damage that AI will do to society as a whole will be much, much greater than just rotting some tech companies from the inside (most of which I wouldn't be sad anyway if they went away...).

What I meant is that with blockchain the big tech companies at least didn't willingly destroy their products, their processes, their decision making etc. I.e. they didn't put blockchain into absolutely everything, all the way to MS Notepad. What I find staggering about this hype is the depth of the delusion, the willingness to not just experiment with it but really go all-in.

[–] [email protected] 0 points 1 week ago

blockchain targeted libertarian post-goldbug pro-cyberpunk-dystopia fuckheads, llms target management types (you will replace workers with machines!), maybe that's why

[–] [email protected] 0 points 1 week ago

yeah, no, I agree that blockchain is a bad example, I just think we shouldn't understate the massive damage it has done. Not just in actually damaged systems but also in the additional cost of everybody now having to worry about it. Same as how AI is not just causing climate change problems by running it; the scraping alone has increased the cost of running a webserver by 50% in load (which on a global scale is just horrid). And then there is the forcing of it into everything, the burning of the boats.

[–] [email protected] 0 points 1 week ago (1 children)

tesla: "your car is not your car and we have deep, varied firmware and systems access to it on a permanent basis. we can see you and control you at all times. piss us off and we'll turn off the car that we own."

also tesla: "sorry no you can't return it"

[–] [email protected] 0 points 1 week ago

I wonder how often Musk fires employees who explain to him that, no, using tesla cars for distributed computing is a bad idea and we should stop working on this.
