this post was submitted on 10 Feb 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Taking over for Gerard this time. Special thanks to him for starting this.)

(page 3) 50 comments
[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

What if we used AI to do De-Ba'athification, but for CIA agents?

I'm sure it will be fine.

[–] [email protected] 0 points 1 week ago

In lighter news, has anyone else noticed that it's necessary for any kind of Cybersecurity course to open with what is effectively a Tumblr-style DNI for unethical hackers? Like, I'm not criticizing exactly and I certainly don't have any better ideas to prevent people using these skills for evil, but the disclaimer up top fits into a certain kind of pattern that I, for one, find hilarious.

[–] [email protected] 0 points 1 week ago (2 children)

AI researchers continue to daub soot on the walls of Plato's cave, scaring themselves witless:

https://www.emergent-values.ai/

At least I've ID'd the transmission vector from LW to lobste.rs

[–] [email protected] 0 points 1 week ago

why do all these papers have their own microsites, dedicated domain names (in Anguilla no less!), and shitty graphics? I can’t think of another branch of research that consistently does this shit. it’s almost like it’s all marketing fluff or something!

[–] [email protected] 0 points 1 week ago (1 children)

other vein

it's amazing that the paper site has its own little pretty marketing pictures. easily digestible unseriousness!

[–] [email protected] 0 points 1 week ago (1 children)

D'oh! I missed that connection, although the little infographic amoebas should have tipped me off

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

seeing dan in the authors list made me realize

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (2 children)

Niko Matsakis wrote a post about how the fucking rust compiler toolchain should include an LLM to explain error messages because teaching the semantics of a language is too hard and that pissed me off so much that instead of linking that piece of shit directly I’m posting this excellent sneer from Anatol Ulrich that’s also much shorter than Niko’s extended attempt to beg for a promotion at Amazon

e: mastodon thread

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Oh no :((

Rust’s emphasis on “reliability” makes it a great target for AI

What?! Aaaaaaahhh! This makes me want to scream. After so many years of C++ misery and increasing frustration with it, and looking into various alternatives for years, I found that Rust is the language which brought me back to actually enjoying programming. If they really ruin it with LLMs now, I don't know where to go anymore in terms of programming languages.

this excellent sneer from Anatol Ulrich

That reply is indeed excellent.

[–] [email protected] 0 points 1 week ago

I feel the same way. I program in rust cause I like it, and the feeling of actually liking writing systems code was refreshing coming from C and especially C++. rust is a language I find beautiful — but I won’t for long if its excellent diagnostics and tooling all get deprecated in favor of an LLM. I can’t imagine what pivoting to AI would do to the language’s roadmap.

[–] [email protected] 0 points 1 week ago

If the rust community spearheads the downfall of LLMs somehow, I’ll stop using C

[–] [email protected] 0 points 1 week ago (2 children)

The singularity will be a dozen crappy models in a trench coat, and then finally we’ll have Magic Unified Intelligence™

[–] [email protected] 0 points 1 week ago

imagine if actual roadmaps just said "we want to tell you how to get there" "we hate giving you bad directions" "we will make sure you get there at some time in the future"

[–] [email protected] 0 points 1 week ago (1 children)

But the llm is like 7 bipartite graphs in a trench coat.

[–] [email protected] 0 points 1 week ago (1 children)

It’s things in trench coats all the way down!

[–] [email protected] 0 points 1 week ago

I'll tell you this for free, we gotta do something about all these trenches!

[–] [email protected] 0 points 1 week ago (1 children)

does anyone know why Venmo has a faux social media presentation? a feed, payments visible to other users (??) by default, etc

[–] [email protected] 0 points 1 week ago (1 children)
[–] [email protected] 0 points 1 week ago (1 children)

Though if I had to guess: data tagging for data mining.

[–] [email protected] 0 points 1 week ago

my related guess is primarily marketing surveillance and secondarily all other types of surveillance. notably Venmo no longer lets you put “hookers and blow” as a transaction note, because it was the default response from most people who didn’t really want to do a social transaction but had to use Venmo cause it’s all their recipient had. Venmo’s social features are all designed to make you leak as much data to Venmo as possible so it can be monetized or otherwise capitalized upon, and that’s about par for the course for how a thielverse paypal mafia offshoot operates. this is surveillance capitalism with a smiley face.

[–] [email protected] 0 points 1 week ago (1 children)

MoreWronger is concerned that the shitty fanfic the community excretes is limited to LW and Xhitter and wonders if The Atlantic is a better venue

Look I have nothing against fanfic myself but if there's one powerful corrective it lacks, it is commercial content editorial feedback.

[–] [email protected] 0 points 1 week ago

The way the responses talk about "polarization", ugh fucking ghouls.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (4 children)

US government tech hellscape roundup part the third (ugh):

  1. Elon Musk jokes(?) that the government doesn't use SQL ??? (source, note that his tweet has an ableist slur). I don't even know what to think about this. Is it supposed to be funny or something? Does he actually believe it?

  2. Article: Elon Musk’s A.I.-Fuelled War on Human Agency -- People here probably already knew all this; but one of the ways the admin thinks they can fire everyone is by replacing people with AI / automating everything. Some of the social media responses from federal workers are pretty great:

    Really excited to see AI put on some waders and unclog a beaver dam from a water structure for me.

    If I've learned anything from all this it's about how ~~unfathomably based~~ cool a lot of federal workers are.

  3. The less fascist / cowed parts of the infosec industry are currently raising the alarm about how insecure this all is. A representative social media post from Gossi The Dog

    I definitely recommend posting about what is happening in the US on LinkedIn as you will quickly learn many of the largest security vendors are staffed by people who have no interest in protecting people, while posting with their employers names.

  4. Some federal workers have been fired via emails calling them [EmployeeFirstName].

[–] [email protected] 0 points 1 week ago

Is it supposed to be funny or something? Does he actually believe it?

Or: does he even know what it is?

[–] [email protected] 0 points 1 week ago (1 children)

so apparently one of the weird creeps went to go shittalk europe (video, transcript (archive))

it's some full-on doublespeak and utterly wild shit. some of the best tho:

The US innovators of all sizes already know what it’s like to deal with onerous international rules.

"waaaaaaaah how dare you have your own rules we need to care about, so hard"

[–] [email protected] 0 points 1 week ago

Nothing to see here, just a strong implication that in the minds of these lizards “AI” is a weapon, and the only thing that can stop a bad guy with an AI is a good guy with an AI. Because this precise line of thinking has yielded such good results historically.

Also, think of the GDP growth! Imagine the shareholder value we could create boiling an ocean to predict how many letters “r” there are in “strawberry” and failing after billions in VC funding. This should not be regulated, lest progress is halted!

And don’t fret about safety kids, because once we build the genie we get three wishes for sure, and then Sam Altman’s dad will finally look him in the eye and say “I’m proud of you” for the first time, and all will be well in the world.
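(For the record, the strawberry benchmark sneered at above is a one-line program; a minimal Python sketch, no VC funding required:)

```python
# Count occurrences of the letter "r" in "strawberry" --
# the task the comment above mocks LLMs for failing at.
word = "strawberry"
count = word.count("r")
print(f'"{word}" contains {count} occurrences of "r"')  # prints 3
```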

[–] [email protected] 0 points 1 week ago

"AI Alignment" is chiropractic for computer touchers.
