this post was submitted on 29 Jul 2024
1 points (100.0% liked)

TechTakes

1432 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (6 children)

Presented without comment: this utter crankery about hacking the Matrix (HN)

Given the highly speculative subject of this paper, we will attempt to give our work more gravitas by concentrating only on escape paths which rely on attacks similar to those we see in cybersecurity [37-39] research (hardware/software hacks and social engineering) and will ignore escape attempts via more esoteric paths such as: meditation [40], psychedelics (DMT [41-43], ibogaine, psilocybin, LSD) [44, 45], dreams [46], magic, shamanism, mysticism, hypnosis, parapsychology, death (suicide [47], near-death experiences, induced clinical death), time travel, multiverse travel [48], or religion.

Among the things they've already tried are torture, touching grass, and declining all cookies:

Unethical behavior, such as torture, doesn’t cause suffering reducing interventions from the simulators.

Breaking out of your routine, such as by suddenly traveling to a new location [199], doesn’t result in unexpected observations.

Saying "I no longer consent to being in a simulation" [200].

[–] [email protected] 0 points 3 months ago (16 children)
[–] [email protected] 0 points 3 months ago

Matt Yglesias argued that “Different Places Have Different Safety Rules and That’s OK” following the deadly collapse of a garment factory in Bangladesh. And yet his arguments were perfectly correct, if maybe a bit “too soon.”

Jesus.

Roe v. Wade cannot be overturned twice

Where are your Bayesian priors now, asshole?

Siri, how can I unread a post?

[–] [email protected] 0 points 3 months ago (1 children)

Tangent: I had assumed nitter was dead and buried by now, glad to see there are still some functioning mirrors. I've found it impossible to share threads without it.

[–] [email protected] 0 points 3 months ago

There are a few Nitters post-rebirth (see https://status.d420.de/), all of them probably using a fleet of real accounts and anti-bot protection instead of the guest accounts that first-age Nitter used.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (2 children)

So many fully-cooked brains in that thread.

BTW, ever notice how easy it is to replace "robot overlords" in TESCREAL with literally anything else? Compare this gem from the linked thread, "The Authoritarian Peril":

https://nitter.poast.org/pic/orig/media%2FGUAQszqWkAAGFRb.jpg

with this version where Tolkien stuff is seamlessly swapped in for evil sentient gaming rigs with no loss of generality:

A dictator who wields the power of the One Ring would command concentrated power unlike any we’ve ever seen. In addition to being able to impose their will on other countries, they could enshrine their rule internally. Millions of orcs could police their populace; mass surveillance would be hypercharged; dictator-loyal Ring Wraiths could individually assess every citizen for dissent, with advanced near-perfect lie detection rooting out any disloyalty. Most importantly, the orcish military and police force could be wholly controlled by a single political leader, and programmed to be perfectly obedient—no more risk of coups or popular rebellions. Whereas past dictatorships were never permanent, Isildur's Bane could eliminate basically all historical threats to a dictator’s rule and lock in their power (cf value lock-in). If the CCP gets this power, they could enforce the Party’s conception of “truth” totally and completely.

[–] [email protected] 0 points 3 months ago (1 children)

isn't that the Palantir business plan

[–] [email protected] 0 points 3 months ago (2 children)

with advanced near-perfect lie detection

how the fuck would that work mate, just give me a glimpse of your notes on how a legendarily useless and unworkable forensic technique would become "near-perfect" with GPUs.

[–] [email protected] 0 points 3 months ago

The proud tradition of "no really, Trump isn't that bad if you pretend he's a normal libertarian who happens to be an asshole instead of listening to the words that come out of his mouth"

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (2 children)

Realistically, most EAs will vote for the white male over the non-white female anyway, but it's on-brand for them to post a giant substack with reasons why.

[–] [email protected] 0 points 3 months ago

Kinda funny that they write thousand page essays with detailed reasons on why to vote for a person who can't read more than two sentences without getting distracted by a ketchup bottle.

[–] [email protected] 0 points 3 months ago (1 children)

After hate-reading the replies I spied Roko plus an unironic Yarvinite what an absolute G E M

[–] [email protected] 0 points 3 months ago

Dude posted to the EA forum as suggested and it went over like a lead balloon:

https://forum.effectivealtruism.org/posts/A6W5qm9gWyr3mikmS/the-ea-case-for-trump-2024

[–] [email protected] 0 points 3 months ago (2 children)

Riffing on this fun subthread:

  • Will AI invent the Philosopher's Stone?
  • Can AI duck the Zuck?
  • Will AI divide by zero?
  • Can AI give you diarrhea?
  • Will AI make "fetch" happen?
  • Will AI have a second helping?
  • Does AI unlock the secret to making the Moon happy again?
  • Can AI stop eating after only one marshmallow?
[–] [email protected] 0 points 3 months ago (1 children)

Can AI give you diarrhea?

Already does make me nauseous, so...

[–] [email protected] 0 points 3 months ago (1 children)

soooooo, what you're saying is that AI can cure constipation

use cases everywhere!

[–] [email protected] 0 points 3 months ago

I'd much rather have a technology that allows some specific people to keep their shit inside, without spilling it for all of us to see. @TheBigYud

[–] [email protected] 0 points 3 months ago (1 children)

You know, I feel like it's only a matter of time before someone in the overlapping griftoverse tries their hand at immanentizing the eschaton by creating their own second coming - the MessAIah.

[–] [email protected] 0 points 3 months ago
  • Will AI die for our sins?
  • Will AI break the Wheel?
  • Will AI serve the Antichrist?
  • Will AI accept bribes from Freeman Dyson?
[–] [email protected] 0 points 3 months ago

google continuing on their crusade to force ads down everyone’s eyeballs

not that it’s surprising, just “good” to see this playing out exactly as predicted

[–] [email protected] 0 points 3 months ago (4 children)
[–] [email protected] 0 points 3 months ago (1 children)

It's the same story as it ever was. "Smart People"'s position on anything is often informed by their current economic relationship to the things they care about. And maybe even Yud isn't super happy about his profession being co-opted. What scraps will he have if his own delusions come true and GPT zombies replace "authentic voices"?

No one is immune to seeing a better take when it's their own shit on the line, and no one is immune to being in a bubble when nothing of theirs is at stake.

[–] [email protected] 0 points 3 months ago

I've read enough of the Yudster's work to recognize that he is particularly vulnerable to being replaced by a small shell script that outputs a massive volume of text that says very little of substance, and what little there is is weirdly racist.

[–] [email protected] 0 points 3 months ago

that's a linkedin-level take

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

Harry Potter and the Surprisingly Good Take

[–] [email protected] 0 points 3 months ago

Harry Potter and the Broken Clock

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

Somebody got all his books from libgen and it shows.

[–] [email protected] 0 points 3 months ago (1 children)

hey no shaming libgen, that shit exists for good reason

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

You are mistaken about my reasoning: I'm saying a person with a well-paid university position (which gives him access to money and to the university library, which I assume pays for access to their books/papers and doesn't libgen or equivs them) should understand that a lot of training material is indeed not free. This is in addition to the university paying him for his own research, and to him probably being pretty annoyed if he were replaced with an iSandberg bot and left homeless. (This is in addition to what Yud said.)

Turns out making papers about replacing the earth with fruit is something Anders-GPT can do perfectly well on its own.

[–] [email protected] 0 points 3 months ago (2 children)

which I assume pays for access to their books/papers and doesn’t libgen or equivs them

wrong assumption tbh. the open-access fight is happening because the publisher cartels are extortionate and access is extremely uneven. I (personally/directly) know more than a few people presently in academia who roll libgen on a daily basis because it is easier/quicker/the only option/the only actually working option for the things they need

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago)

it's often easier to pirate the published version of your own paper than to access it by official means

[–] [email protected] 0 points 3 months ago (1 children)

Fair enough, my bad. No idea it had gotten that bad. But still, I wasn't intending to rag on libgen, just his idea that these things (which also includes education) are free already.

[–] [email protected] 0 points 3 months ago

if you want to get really mad, shibboleth "elsevier". it'll be a speedrun of learning some of the worst of what's fucked atm

[–] [email protected] 0 points 3 months ago

This orange thread is about San Francisco banning certain types of landlord rent collusion. I cannot possibly sneer better than the following in-thread comment explaining why this is worthwhile:

While I agree that the giant metal spikes we put on all the cars aren't the exclusive reason that cars are lethal, I would hope we both agree that cars are less lethal when we don't cover them in giant metal spikes.

[–] [email protected] 0 points 3 months ago (2 children)

AI is dying

AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

(Tweet source)

[–] [email protected] 0 points 3 months ago (1 children)

This does suck for all the non-LLM ML stuff that has actual usage. (Even if a percentage of that was also snake oil, or had dubious success rates).

[–] [email protected] 0 points 3 months ago (1 children)

Ehh, if you're calling your ML stuff AI then that's on you (and you're probably not technically serious about what you're doing anyway). Other people are pointing out that AI isn't a term many compsci/software people would use, and neither the article (nor, afaict, the study) nor my experience suggests that ML has the same negative association as AI.

[–] [email protected] 0 points 3 months ago

AI has been and always will be the term for making the computer play a game against you. A search tree with some pruning is not ML, but if you use it to implement a chess bot then it is AI in the game sense.

If you're not doing video games then the term is essentially meaningless
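For the record, since this is the one genuinely technical claim in the thread: a minimal sketch (mine, not the commenter's) of what "AI in the game sense" looks like is plain minimax with alpha-beta pruning. The toy game tree below is a made-up example; there is no learning, no model, no data anywhere in it.

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaves are plain numbers: the position's value for the maximizing player.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: each nested list is a choice point, numbers are leaf evaluations.
tree = [[3, 5], [2, [9, 1]], [0, 4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 3
```

Swap the toy tree for a chess position generator and an evaluation function and you have a classic chess bot: "AI" by any pre-hype-cycle definition, ML by none.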

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

Well, history sure does fucking repeat itself again, doesn't it?

At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. (New York Times, 2005, at the end of the last AI winter.)

[–] [email protected] 0 points 3 months ago (1 children)

At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. (New York Times, 2005, at the end of the last AI winter.)

I expect history to repeat itself quite soon: where previously using the term "artificial intelligence" got you viewed as a wild-eyed dreamer, now using it likely gets you viewed as an asshole techbro, and your research deemed a willful attempt to hurt others.

[–] [email protected] 0 points 3 months ago

asshole or scammer or grifter. so many possibilities
