this post was submitted on 16 Sep 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 0 points 1 month ago (6 children)

Follow-up to this post from the other day.

Our DSO has now greenlit the stupid Copilot integration because "Microsoft said it's okay" (of course they did). He was also at some stupid AI convention yesterday, and whatever the fuck happened there, he's become a complete AI bro and is now preaching the Gospel of Altman: everyone who's not using AI will be obsolete in a few years and we need to ADAPT OR DIE. It's the exact same shit the CEO is spewing.

He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT even though just a week ago he was hating on people who did that. I sat with my fucking mouth open in that meeting and people asked me whether I'm okay (I'm not).

I need to get another job ASAP or I will go clinically insane.

[–] [email protected] 0 points 1 month ago

He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT

He is the data security breach.

[–] [email protected] 0 points 1 month ago (4 children)

This quote flashbanged me a little

When you describe your symptoms to a doctor, and that doctor needs to form a diagnosis on what disease or ailment that is, that's a next word prediction task. When choosing appropriate treatment options for said ailment, that's also a next word prediction task.

From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/

[–] [email protected] 0 points 1 month ago

Instead of improving LLMs, they are working backwards to prove that all other things are actually next-word prediction tasks. It is so annoying and also quite dumb. No, chemistry isn't like coding/Legos. And the law isn't invalidated by gold fringes and magical words.

[–] [email protected] 0 points 1 month ago

This is just standard promptfondler false equivalence: "when people (including me) speak, they just select the next most likely token, just like an LLM"

[–] [email protected] 0 points 1 month ago

The problem is that there could be any number of possible next words, and the available results suggest that the appropriate context isn't captured by the statistical relationships between prior words for anything but the most trivial of tasks, i.e. automating the writing and parsing of emails that nobody ever wanted to read in the first place.
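
For the curious, this is all "statistical relationships between prior words" means at its most stripped-down: a toy bigram sketch in Python. The corpus is invented, and a real LLM conditions on far longer contexts with learned weights rather than raw counts, but the training objective is the same next-token guess:

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on billions of tokens.
corpus = "the patient has a headache . the patient has a fever .".split()

# Bigram statistics: for each word, count which word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word):
    # Greedily return the most frequent follower seen in training.
    return following[prev_word].most_common(1)[0][0]

print(predict("patient"))   # -> "has": fine for trivial continuations
print(predict("headache"))  # -> ".": no notion of diagnosis, just word statistics
```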

[–] [email protected] 0 points 1 month ago

None of these fucking goblins have learned that analogies aren’t equivalences!!! They break down!!! Auuuuuuugggggaaaaaaarghhhh!!!!!!

[–] [email protected] 0 points 1 month ago (4 children)

A lemmy-specific coiner today: https://awful.systems/post/2417754

The dilema of charging the users and a solution by integrating blockchain to fediverse

First, there will be a blockchain. There will be these cryptocurrencies:

This guy is speaking like he is in Genesis 1

I guess it would be better that only the instances can own instance-specific coins.

You guess, alright? You mean you have no idea what you're saying.

if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

What will this solve? If 2 people respond to each other's comments, the instance with the most valuable coin will win. What does that have to do with who caused the interaction?
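
To make the objection concrete, here's a minimal sketch of the scheme exactly as quoted; the instance names come from the quote ("lemmy.ee", sic, see below) and the fee is the proposal's 10 coins, but the starting balances and the back-and-forth are invented:

```python
# Toy model of the quoted scheme: posting to another instance costs 10 of
# the *destination* instance's coin.
balances = {
    ("lemmy.ee", "lemmy.world coin"): 100,   # lemmy.ee's stash of lemmy.world coin
    ("lemmy.world", "lemmy.ee coin"): 100,   # and vice versa
}
FEE = 10  # "10 lemmy.world coin", per the proposal

def post(sender, destination):
    # The sender instance pays the fee in the destination's coin.
    balances[(sender, f"{destination} coin")] -= FEE

# Two users replying to each other: both instances bleed the other's coin
# at exactly the same rate. Nothing here tracks who caused the interaction;
# only the market price of each coin decides who comes out ahead.
for _ in range(3):
    post("lemmy.ee", "lemmy.world")
    post("lemmy.world", "lemmy.ee")
print(balances)  # both instances down 30 of the other's coin
```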

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

Note that you don't need cryptocurrencies for this. I think Jaron Lanier talked about an idea like this ages ago, before people tried to put cryptocurrencies into everything.

[–] [email protected] 0 points 1 month ago

1 post, 6 comments, joined 3 months ago; "i'm naive to crypto"; "I want to host an instance that serves as a competitive alternative to Facebook/Threads/X to the users in my country"

yeah, he doesn't even have to charge for interacting with him, i'll avoid him without it

[–] [email protected] 0 points 1 month ago

that's lemm.ee, last time i checked. he made that mistake 14x

[–] [email protected] 0 points 1 month ago (1 children)

Yes crypto instances, please all implement this and "disallow" everyone else from interacting with you! I promise we'll be sad and not secretly happy and that you'll make lots of money from people wanting to interact with you.

[–] [email protected] 0 points 1 month ago

I know I won't be secretly happy if they do this.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

I signed up for the Urbit newsletter many moons ago when I was a little internet child. Now, it's a pretty decent source of sneers. This month's includes "The First Wartime Address with Curtis Yarvin". In classic Moldbug fashion, it's Two Hours and Forty Fucking Five minutes long. I'm not going to watch the whole thing, but I'll try to mine the transcript for sneers.

26:23 --

Simplicity in them you know it runs on a virtual machine who specification Nock [which] fits on a T-shirt and uh you know the goal of the system is to basically take this kind of fundamental mathematical simplicity of Nock and maintain that simplicity all the way to user space so we create something that's simple and easy to use that's not a small amount of of work

Holy fucking shit, does this guy really think building your entire software stack on brainfuck makes even a little bit of sense at all?
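
(To be fair to the T-shirt, the spec really is tiny. Here's a partial sketch of Nock's published reduction rules in Python, covering only slot addressing and opcodes 0, 1 and 4; all other opcodes are omitted. A small spec says nothing about usability, which is the whole sneer.)

```python
# Partial sketch of Nock's reduction rules. Nouns are ints (atoms)
# or 2-tuples (cells); formulas are themselves nouns.
def slot(n, noun):
    # Tree addressing: 1 is the whole noun, 2/3 are head/tail, and so on.
    if n == 1:
        return noun
    head, tail = noun
    if n == 2:
        return head
    if n == 3:
        return tail
    return slot(2 + (n & 1), slot(n >> 1, noun))

def nock(subject, formula):
    op, arg = formula
    if isinstance(op, tuple):   # autocons: a pair of formulas yields a cell
        return (nock(subject, op), nock(subject, arg))
    if op == 0:                 # *[a 0 b] -> /[b a]  (fetch slot b of subject)
        return slot(arg, subject)
    if op == 1:                 # *[a 1 b] -> b       (constant)
        return arg
    if op == 4:                 # *[a 4 b] -> +*[a b] (increment)
        return nock(subject, arg) + 1
    raise NotImplementedError(f"opcode {op} left as a T-shirt exercise")

print(nock((42, 7), (0, 3)))       # -> 7: tail of the subject
print(nock((42, 7), (4, (0, 2))))  # -> 43: head of the subject, plus one
```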

30:17 -- a diatribe about how social media can only get worse and how Facebook was better than myspace because its original users were at the top of the social hierarchy. Obviously, this bodes well for urbit because all of you spending 3 hours of your valuable time listening to this wartime address? You're the cream of the crop.

~2:00:00 -- here he addresses concerns about his political leanings, caricaturing them as "oh, Yarvin wants to make this a monarchy" and responding with "nuh uh, urbit is decentralized." Absent from all of this is any meaningful analysis of how decentralized systems (such as the internet itself) tend toward centralization under certain incentive structures. Completely devoid of substance.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (2 children)

Is he inscrutable/obscurantist on purpose, or is it because he never had a proper humanities education or an editor?

[–] [email protected] 0 points 1 month ago (1 children)

He shares a lot of speaking patterns with obvious cranks. I've spent some time listening to people who think they've figured out quantum gravity and the way they make little digressions sounds exactly like Yarvin does in this video. It's not rigorous, but if I didn't know who Yarvin was before watching this video I'm pretty sure I would have thought "crank" and quickly clicked away.

[–] [email protected] 0 points 1 month ago

It has been suggested, either on this site or by people who pop up here a lot, that the idiosyncratic (e.g. Fucking Weird) design of Hoon and Nock was a deliberate attempt to build something akin to cult mysteries, where not just anyone could grasp it and the initiates had powers the ignorant outsiders would not, etc. etc.

Unfortunately, whilst he’s clearly not stupid, Yarvin isn’t nearly as clever as he thinks he is, and has ended up producing a load of unwieldy cryptic nonsense that no one can work with. I expect this applies to other things he does, too.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (4 children)

The robots clearly want us dead -- "Delivery Robot Knocked Over Pedestrian, Company Offered ‘Promo Codes’ to Apologize" (404 media) (archive)

And here the rationalists warned that AI misalignment would be hidden from us until the "diamondoid bacteria".

[–] [email protected] 0 points 1 month ago

This reminded me of the prediction I made w.r.t. the "AI Doom" criti-hype (and touched on again after SB 1047 popped up) back when OpenAI was gunning (heh) for DoD dollars.

Personally, I suspect this might become another case of "AI doom" turning into a double-edged sword for the AI industry. What could otherwise be dismissed as a simple error on their products' part gets a lot more problematic to deal with when a vocal minority is primed to find malice where none exists.

[–] [email protected] 0 points 1 month ago (1 children)

I literally just saw a xitter post about how the exploding pagers in Lebanon are actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to kill all of humanity simultaneously.

Which, again: both entities are humans. They have the same intelligence, you twats. It's the same argument people make all the time w.r.t. the Spanish v. the Aztecs, where gunpowder somehow made Cortés and company gigabrains compared to the lowly indigenous people (totally ignoring the contributions of the real superintelligent entity: the smallpox virus).

[–] [email protected] 0 points 1 month ago

OK, new rule: you're only allowed to call someone dumb for not finding the explosives in their pagers if, before hearing the news, you had regularly checked all the electronics you buy, with no specialized tools, for bombs hidden inside the battery compartment.

[–] [email protected] 0 points 1 month ago

AI misalignment leads to spinal misalignment.

[–] [email protected] 0 points 1 month ago

If only we had paid attention to the roomba hitting us in the leg. It wasn't adorable, it was a murder attempt!

[–] [email protected] 0 points 1 month ago (3 children)

Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paragraphs, which I subcomment on below.)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

Sure, "religion" is on a sliding scale, but Big Yud-flavored Rationality ticks more of the boxes on the "Religion or not" checklist than feminism or climate change. In fact, treating the latter as a religion is often a way to denigrate them, and never used in good faith.

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

Citation very much needed, bub.


[1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

[2] link and username withheld to protect the guilty. Suffice to say that They Are On My List.

[–] [email protected] 0 points 1 month ago (1 children)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade.

Sure, but that doesn't change the fact that the head EA guy wrote an op-ed for Time magazine arguing that a nuclear holocaust is preferable to a world that has GPT-5 in it.

[–] [email protected] 0 points 1 month ago (1 children)

Oh, that craziness is orthodoxy (check the last part of the quote).

[–] [email protected] 0 points 1 month ago

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

This one?

[–] [email protected] 0 points 1 month ago (1 children)

nobody in the community is actually interested in the Basilisk

except the ones still getting upset over it, but if we deny their existence as hard as possible they won't be there

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

The reference to the Basilisk was literally one sentence and not central to the post at all, but this big-R Rationalist couldn't resist singling it out and loudly proclaiming that it's not relevant anymore. The m'lady doth protest too much.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

nobody in the community is actually interested in the Basilisk

But you should be, y'all created an idea which some people do take seriously and which is causing them mental harm. In fact, Yud took it seriously in a way that shows either that he believes in potential acausal blackmail himself, or that enough people in the community believe in it that the idea would cause harm.

A community he created to help people think better, which now has a mental minefield somewhere in it. But because they want to look sane to outsiders, people don't talk about it (and also pretend the people who already mentally stepped on it don't exist). This is bad.

I get that we put them in a no-win situation. Either they take their own ideas seriously enough to talk about acausal blackmail, and then either help people by disproving the idea, or help people by going 'this part of our totally Rational way of thinking is actually toxic and radioactive and you should keep away from it (a bit like Hegel, am I right(*))'. That makes them look a bit silly for taking it seriously (to which you could say: who cares?), or a bit openly culty if they go the secret-knowledge route. Or they could pretend it never happened, never was a big deal, and isn't a big deal now, in an attempt not to look silly. Of course, we know what happened, and that it is still causing harm to a small group of (proto-)Rationalists. This option makes them look insecure, potentially dangerous, and weak to social pressure.

That they went with the last option, while having also written a lot about acausal trading, just shows they don't take their own ideas that seriously. Or, if it is an open secret not to talk openly about acausal trade because of acausal blackmail, that's just more cult signs: you have to reach level 10 before they teach you about the Lord Xenu type stuff.

Anyway, I assume this is a bit of a problem for all communal worldbuilding projects: eventually somebody introduces a few ideas which have far-reaching consequences for the roleplay but which people would rather not have included. It gets worse when the non-larping outside world notices you and the first reaction is to pretend larping isn't that important for your group, because the incident was a bit embarrassing. Own the lightning-bolt tennis ball, it is fine. (**)

*: I actually don't know enough about philosophy to know if this joke is correct, so apologies if Hegel is not hated.

**: I admit, this joke was all a bit forced.

[–] [email protected] 0 points 1 month ago (2 children)

Timnit Gebru on Twitter:

We received feedback from a grant application that included "While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,"

https://xcancel.com/timnitGebru/status/1836492467287507243

[–] [email protected] 0 points 1 month ago

navigating this space without alignment with larger corporate players

stares into middle distance, hollow laugh

[–] [email protected] 0 points 1 month ago

No need for xcancel, Gebru is on actually-social media: https://dair-community.social/@timnitGebru/113160285088058319
