this post was submitted on 16 Sep 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 0 points 3 hours ago (1 children)

A lemmy-specific coiner today: https://awful.systems/post/2417754

The dilema of charging the users and a solution by integrating blockchain to fediverse

First, there will be a blockchain. There will be these cryptocurrencies:

This guy is speaking like he is in Genesis 1

I guess it would be better that only the instances can own instance-specific coins.

You guess alright? You mean that you have no idea what you're saying.

if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

What will this solve? If 2 people respond to each other's comments, the instance with the most valuable coin will win. What does that have to do with who caused the interaction?
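To make the objection concrete, here's a toy model of the proposed scheme (the 10-coin fee and instance names come from the quote; the exchange rates are entirely made up for illustration). With an equal number of replies in each direction, value flows purely according to whose coin is priced higher, not according to who "caused" the interaction:

```python
# Hypothetical sketch of the proposal: each instance mints its own coin,
# and posting to a remote instance costs 10 of the *remote* instance's coins.
POST_FEE = 10

def simulate_thread(replies, rates):
    """replies: list of (from_instance, to_instance) posting events.
    rates: hypothetical market value of each instance's coin.
    Returns each instance's net gain/loss in a common unit."""
    net = {inst: 0.0 for inst in rates}
    for src, dst in replies:
        # src pays 10 dst-coins to dst, so src loses (and dst gains)
        # value proportional to dst's coin price.
        net[src] -= POST_FEE * rates[dst]
        net[dst] += POST_FEE * rates[dst]
    return net

# Two users trade an equal number of replies back and forth...
thread = [("lemmy.ee", "lemmy.world"), ("lemmy.world", "lemmy.ee")] * 3
# ...but lemmy.world's coin is (hypothetically) worth 5x more.
net = simulate_thread(thread, {"lemmy.ee": 1.0, "lemmy.world": 5.0})
print(net)  # lemmy.ee bleeds value despite a perfectly symmetric conversation
```

Symmetric participation, asymmetric outcome: the only variable that matters is the coin price, which is exactly the criticism above.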

[–] [email protected] 0 points 2 hours ago

Yes crypto instances, please all implement this and "disallow" everyone else from interacting with you! I promise we'll be sad and not secretly happy and that you'll make lots of money from people wanting to interact with you.

[–] [email protected] 0 points 6 hours ago* (last edited 6 hours ago) (1 children)

I signed up for the Urbit newsletter many moons ago when I was a little internet child. Now, it's a pretty decent source of sneers. This month's contains: "The First Wartime Address with Curtis Yarvin". In classic Moldbug fashion, it's Two Hours and Forty Fucking Five minutes long. I'm not going to watch the whole thing, but I'll try to mine the transcript for sneers.

26:23 --

Simplicity in them you know it runs on a virtual machine who specification Nock [which] fits on a T-shirt and uh you know the goal of the system is to basically take this kind of fundamental mathematical simplicity of Nock and maintain that simplicity all the way to user space so we create something that's simple and easy to use that's not a small amount of of work

Holy fucking shit, does this guy really think building your entire software stack on brainfuck makes even a little bit of sense at all?
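For anyone who hasn't met brainfuck: it's an eight-instruction Turing tarpit, and the comparison lands because, like Nock, its whole spec fits on a T-shirt while a complete interpreter fits on a screen. A sketch (standard eight-op semantics, 30,000-cell tape, wrapping byte cells):

```python
def brainfuck(code, inp=""):
    # Pre-match each '[' with its ']' so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(next(it, "\0"))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1  # any other character is a no-op comment
    return "".join(out)

# 8 loop passes adding 9 each = 72 = ASCII 'H'
print(brainfuck("++++++++[>+++++++++<-]>."))  # prints "H"
```

Which is exactly the problem: a spec this small is trivial to implement and miserable to target, because every ounce of complexity the VM omits lands on whoever has to write user space in it.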

30:17 -- a diatribe about how social media can only get worse and how Facebook was better than myspace because its original users were at the top of the social hierarchy. Obviously, this bodes well for urbit because all of you spending 3 hours of your valuable time listening to this wartime address? You're the cream of the crop.

~2:00:00 -- here he addresses concerns about his political leanings, caricaturing the concern as "oh Yarvin wants to make this a monarchy" and responding by saying "nuh uh, urbit is decentralized." Absent from all this is any meaningful analysis of how decentralized systems (such as the internet itself) eventually tend to centralized systems under certain incentive structures. Completely devoid of substance.

[–] [email protected] 0 points 4 hours ago* (last edited 4 hours ago) (1 children)

Is he inscrutable/obscurantist on purpose, or is it because he never had a proper humanities education nor an editor?

[–] [email protected] 0 points 51 minutes ago

It has been suggested, either on this site or by people who pop up here a lot, that the idiosyncratic (eg. Fucking Weird) design of hoon and nock was a deliberate attempt to build something akin to cult mysteries, where not just anyone could grasp it and the initiates had powers that the ignorant outsiders would not, etc etc.

Unfortunately, whilst he’s clearly not stupid, Yarvin isn’t nearly as clever as he thinks he is, and has ended up producing a load of unwieldy cryptic nonsense that no one can work with. I expect this applies to other things he does, too.

[–] [email protected] 0 points 7 hours ago* (last edited 7 hours ago) (4 children)

The robots clearly want us dead -- "Delivery Robot Knocked Over Pedestrian, Company Offered ‘Promo Codes’ to Apologize" (404 media) (archive)

And here the rationalists warned us that AI misalignment would stay hidden until the "diamondoid bacteria".

[–] [email protected] 0 points 1 hour ago

This reminded me of that prediction I made w.r.t the "AI Doom" criti-hype (and touched on after SB 1047 popped up) back when OpenAI was gunning (heh) for DoD dollars.

Personally, I suspect that this might provide another case of "AI doom" becoming a double-edged sword for the AI industry. What can be dismissed as a simple error on their products' parts gets potentially a lot more problematic to deal with when a vocal minority is primed to find malice where none exists.

[–] [email protected] 0 points 1 hour ago (1 children)

I literally just saw a xitter post about how the exploding pagers in Lebanon are actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to simultaneously kill all of humanity.

Which again, both entities are humans. They have the same intelligence, you twats. Same argument people make all the time w.r.t. the Spanish v. the Aztecs, where gunpowder somehow made Cortés and company gigabrains compared to the lowly indigenous people (totally ignoring the contributions of the real superintelligent entity: the smallpox virus).

[–] [email protected] 0 points 1 hour ago

OK, new rule: you're only allowed to call someone dumb for not finding the explosives in their pagers if, prior to hearing the news, you had regularly checked all the electronics you buy, with no specialized tools, for bombs hidden inside the battery compartment.

[–] [email protected] 0 points 6 hours ago

AI misalignment leads to spinal misalignment.

[–] [email protected] 0 points 7 hours ago

If only we had paid attention to the roomba hitting us in the leg. It wasn't adorable, it was a murder attempt!

[–] [email protected] 0 points 14 hours ago (3 children)

Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paras, which I will sub-comment on below.)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

Sure, "religion" is on a sliding scale, but Big Yud-flavored Rationality ticks more of the boxes on the "Religion or not" checklist than feminism or climate change. In fact, treating the latter as a religion is often a way to denigrate them, and never used in good faith.

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

Citation very much needed, bub.


[1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

[2] link and username withheld to protect the guilty. Suffice to say that They Are On My List.

[–] [email protected] 0 points 1 hour ago

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade.

Sure, but that doesn't change the fact that the head EA guy wrote an op-ed for Time magazine arguing that a nuclear holocaust is preferable to a world that has GPT-5 in it.

[–] [email protected] 0 points 6 hours ago (1 children)

nobody in the community is actually interested in the Basilisk

except the ones still getting upset over it, but if we deny their existence as hard as possible they won't be there

[–] [email protected] 0 points 6 hours ago* (last edited 6 hours ago)

The reference to the Basilisk was literally one sentence and not central to the post at all, but this big-R Rationalist couldn't resist singling it out and loudly proclaiming it's not relevant anymore. The m'lady doth protest too much.

[–] [email protected] 0 points 13 hours ago* (last edited 13 hours ago)

nobody in the community is actually interested in the Basilisk

But you should be, y'all created an idea which some people do take seriously, and it is causing them mental harm. In fact, Yud took it seriously in a way that shows either that he believes in potential acausal blackmail himself, or that enough people in the community believe in it that the idea would cause harm.

A community he created to help people think better now has a mental minefield somewhere in it, but because they want to look sane to outsiders, people don't talk about it (and also pretend the people who already mentally exploded don't exist). This is bad.

I get that we put them in a no-win situation. They could take their own ideas seriously enough to talk about acausal blackmail, and then either help people by disproving the idea, or help people by going 'this part of our totally Rational way of thinking is actually toxic and radioactive and you should keep away from it (a bit like Hegel, am I right?(*))'. That makes them look a bit silly for taking it seriously (though you could ask: who cares?), or a bit openly culty if they go with the secret-knowledge route. Or they could pretend it never happened, never was a big deal, and isn't a big deal now, in an attempt to not look silly. Of course, we know what happened, and that it still is causing harm to a small group of (proto-)Rationalists. This option makes them look insecure, potentially dangerous, and weak to social pressure.

That they go with the last one, while also having written a lot about acausal trading, just shows they don't take their own ideas that seriously. Or, if it is an open secret not to talk openly about acausal trade due to acausal blackmail, that is just more cult signs. You have to reach level 10 before they teach you about the lord Xenu type stuff.

Anyway, I assume this is a bit of a problem for all communal worldbuilding projects: eventually somebody introduces a few ideas which have far-reaching consequences for the roleplay but which people would rather not have included. It gets worse when the non-larping outside world notices you, and the first reaction is to pretend the larp isn't that important to your group because the incident was a bit embarrassing. Own the lightning-bolt tennis ball, it is fine. (**)

*: I actually don't know enough about philosophy to know if this joke is correct, so apologies if Hegel is not hated.

**: I admit, this joke was all a bit forced.

[–] [email protected] 0 points 15 hours ago (2 children)

Timnit Gebru on Twitter:

We received feedback from a grant application that included "While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,"

https://xcancel.com/timnitGebru/status/1836492467287507243

[–] [email protected] 0 points 10 hours ago

navigating this space without alignment with larger corporate players

stares into middle distance, hollow laugh

[–] [email protected] 0 points 11 hours ago

No need for xcancel, Gebru is on actually social media: https://dair-community.social/@timnitGebru/113160285088058319

[–] [email protected] 0 points 20 hours ago* (last edited 20 hours ago) (1 children)

Via Timnit Gebru's mastodon, I just learned that Emily Bender (both of On the Dangers of Stochastic Parrots fame) has a podcast: "Mystery AI Hype Theater 3000." Looking forward to checking it out tomorrow at the gym!

https://www.buzzsprout.com/2126417/episodes

Summary: Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction to political economy to art made by machines.

[–] [email protected] 0 points 14 hours ago

It's pretty great. The closing bits of improv from her cohost are a bit meh imo but the AI hype assessments are legitimately amazing.
