this post was submitted on 13 May 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post, there’s no quota for posting and the bar really isn’t that high

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

In case you didn’t know, Sam “the Man” Altman is deadass the coolest motherfucker around. With world leaders on speed dial and balls of steel, he’s here to kick ass and drink milkshakes.

Within a day of his ousting, Altman said he received 10 to 20 texts from presidents and prime ministers around the world. At the time, it felt "very normal," and he responded to the messages and thanked the leaders without feeling fazed.

Archive link.

[–] [email protected] 0 points 5 months ago

"10 to 20" so, 10. How many of the messages were just from Nayib Bukele and Javier Milei?

[–] [email protected] 0 points 5 months ago (1 children)

just want to share my article from this week. It's about products that account for their lack of usefulness with ease of use, using gpt-4oh as an example. https://fasterandworse.com/known-purpose-and-trusted-potential/

[–] [email protected] 0 points 5 months ago (1 children)

If the house doesn’t have a roof, don’t paint the walls.

i adore this line. because yeah, what i see the rest of the tech industry doing is either:

  • scrambling to erect their own, faster, better, cheaper roofless house
  • scrambling to sell furniture and utilities for the people who are definitely, inevitably going to move in
  • or making a ton of bank by selling resources to the first two groups

without even stopping to ask: why would anyone want to live here?

[–] [email protected] 0 points 5 months ago

thanks! I think I began saying that when I moved from digital marketing agencies to startups around 2011

[–] [email protected] 0 points 5 months ago (2 children)

apparently oai has lifetime NDAs, via fasterthanlime[0]:

[0] - technically via friend who sent me a screenshot, but y'know

[–] [email protected] 0 points 5 months ago (1 children)

I can't imagine signing that - unless I was being paid a lot of money

[–] [email protected] 0 points 5 months ago (1 children)

keep in mind that the company heavily pre-filters for believers. that means that you have a whole set of other decision-influence things going on too, including not thinking much about this

(and then probably also the general SFBA vibe of getting people before they have any outside experiences, and know what is/is not sane)

[–] [email protected] 0 points 5 months ago (1 children)

oh that reminds me of Anthropic's company values page where they call it "unusually high trust" to believe that their employees work there in "good faith"

Unusually high trust
Our company is an unusually high trust environment: we assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness. At its best, our trust enables us to make better decisions as an organization than any one of us could as individuals.

[–] [email protected] 0 points 5 months ago

.......that's going to be some ratsphere fucking bullshit, isn't it

[–] [email protected] 0 points 5 months ago

(the tweet is here, can't find a working nitter ritenao)

[–] [email protected] 0 points 5 months ago (3 children)

a crossover sneer from the Nix Zulip:

This is an easy place to add a hook that goes through each message and asks a language model if the message violates any rules. Computers are about as good as humans at interpreting rules about tone and so on, and the biases come from the training data, so for any specific instance the decision is relatively impartial.

please for the love of fuck kick the fascists out of your community because they’ll never stop with this shit

[–] [email protected] 0 points 5 months ago

This is using-a-sheep's-bladder-to-predict-earthquakes territory.

Do you suppose that an LLM can detect a witch?

[–] [email protected] 0 points 5 months ago

This is another one for the "throw an AI model at the problem with no concrete plans for how to evaluate its performance" category.

[–] [email protected] 0 points 5 months ago

jesus christ that post is so not even wrong it's hard to know where to start

[–] [email protected] 0 points 5 months ago (1 children)
[–] [email protected] 0 points 5 months ago (4 children)

Hmm, a xitter link, I guess I'll take a moment to open that in a private tab in case it's passingly amusing...

To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

OK, you have my attention now.

To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—

During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI employees, their friends, and adjacent entrepreneurs, which I have not previously spoken about publicly.

It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims. However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry.

I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women. They create a climate that can be very bad for female AI researchers, with broader implications relevant to X-risk and AGI safety. I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing playing out more broadly in the industry today. Move fast and break things, applied to people.

There is nothing wrong imo with sex parties and heavy LSD use in theory, but combined with the shadow of 100B+ interest groups, it leads to some of the most coercive and fucked up social dynamics that I have ever seen. The climate was like a fratty LSD version of 2008 Wall Street bankers, which bodes ill for AI safety.

Women are like canaries in the coal mine. They are often the first to realize that something has gone horribly wrong, and to smell the cultural carbon monoxide in the air. For many women, Silicon Valley can be like Westworld, where violence is pay-to-play.

I have seen people repeatedly get shut down for pointing out these problems. Once, when trying to point out these problems, I had three OpenAI and Anthropic researchers debate whether I was mentally ill on a Google document. I have no history of mental illness; and this incident stuck with me as an example of blindspots/groupthink.

I am not writing this on the behalf of any interest group. Historically, much of OpenAI-adjacent shenanigans has been blamed on groups with weaker PR teams, like Effective Altruism and rationalists. I actually feel bad for the latter two groups for taking so many undeserved hits. There are good and bad apples in every faction. There are so many brilliant, kind, amazing people at OpenAI, and there are so many brilliant, kind, and amazing people in Anthropic/EA/Google/[insert whatever group]. I’m agnostic. My one loyalty is to the respect and dignity of human life.

I'm not under an NDA. I never worked for OpenAI. I just observed the surrounding AI culture through the community house scene in SF, as a fly-on-the-wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many of them get burned. It’s likely these problems are not really on OpenAI but symptomatic of a much deeper rot in the Valley. I wish I could say more, but probably shouldn’t.

I will not pretend that my time among these circles didn’t do damage. I wish that 55% of my brain was not devoted to strategizing about the survival of me and of my friends. I would like to devote my brain completely and totally to AI research— finding the first principles of visual circuits, and collecting maximally activating images of CLIP SAEs to send to my collaborators for publication.

[–] [email protected] 0 points 5 months ago (1 children)

Useful context: this is a followup to this post:

The thing about being active in the hacker house scene is you are accidentally signing up for a career as a shadow politician in the Silicon Valley startup scene. This process is insidious because you’re initially just signing up for a place to live and a nice community. But given the financial and social entanglement of startup networks, you are effectively signing yourself up for a job that is way more than meets the eye, and can be horribly distracting if you are not prepared for it. If you play your cards well, you can have an absurd amount of influence in fundraising and being privy to insider industry information. If you play your cards poorly, you will be blacklisted from the Valley. There is no safety net here. If I had known what I was getting myself into in my early twenties, I wouldn’t have signed up for it. But at the time, I had no idea. I just wanted to meet other AI researchers.

I’ve mind-merged with many of the top and rising players in the Valley. I’ve met some of the most interesting and brilliant people in the world who were playing at levels leagues beyond me. I leveled up my conception of what is possible.

But the dark side is dark. The hacker house scene disproportionately benefits men compared to women. Think of frat houses without Title IX or HR departments. Your peer group is your HR department. I cannot say that everyone I have met has been good or kind.

Socially, you are in the wild west. When I joined a more structured accelerator later, I was shocked by the amount of order and structure there was in comparison.

[–] [email protected] 0 points 5 months ago (1 children)

it is just straight up fucked that there’s a hacker house scene where you’ll be so heavily indoctrinated (with sexual coercion and forced drug use to boot (please can the capitalists leave acid the fuck alone? also, please can the capitalists just leave?)) that a fucking Silicon Valley startup accelerator seems like a beacon of sanity

like, as someone who was indoctrinated into a bunch of this hacker culture bullshit as a kid (and a bunch of other cult shit from my upbringing before that), I get a fucking gross feeling inside imagining the type of grooming it takes to get someone to want to join up with a “just hacker culture and AI research 24/7, abandon your family and come here” house, and then stay in that fucking environment with all the monstrous shit going on because you’ve given up everything else. that shit brings me back in a bad way.

[–] [email protected] 0 points 5 months ago

I want to tell myself that it's probably a tiny scene of 10s to 100s, that it's just vestigial cult mindset to think what she went through is the real SV VC scene, and that most of it is just the more pedestrian techbro buzzword pptx deck tedium ...but even then, it's still incredibly tragic for everyone who went through and is going through that manipulation and abuse.

[–] [email protected] 0 points 5 months ago

Very grim that she feels the need to couch her damning report with "some, I assume, are good people" for a paragraph. I guess that's one of her survival strategies.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

Good thing that none of this mad-science bullshit is in danger of working, because I don't think that the spicy autocorrect leadership cadre would hesitate to hurt people if they could build something that's actually impressive.

[–] [email protected] 0 points 5 months ago (1 children)

Thanks! I thought the multiple journos sniffing around was very interesting.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

some of these guys get in touch with me from time to time, apparently i have a rep as a sneerer (I am delighted)

(i generally don't have a lot specific to add - I'm not that up on the rationalist gossip - except that it's just as stupid as it looks and frequently stupider, don't feel you have to mentally construct a more sensible version if they don't themselves)

[–] [email protected] 0 points 5 months ago

from the Ed Zitron discord. YOU HAD ONE JOB

(image)

[–] [email protected] 0 points 5 months ago

When randoms from /all wander into the vale of sneers:

https://www.buttersafe.com/2008/10/23/the-detour/

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (3 children)

Creativity is a lie. You heard it here first.

[–] [email protected] 0 points 5 months ago

I keep hearing this shit from creatively bankrupt folks and fash billionaires, because it’s very important to them that art and creativity isn’t for us — it’s an expensive relic of the past and a commodity that only they can afford. it’s fucking ghoulish, but that goes without saying with Sammy

what none of them seem to have an answer for is the obvious chicken and egg problem that their weird fucking conjecture leads to. if creativity isn’t possible, where in fuck did all the art come from? these assholes assert that modern artists just remix their inputs like a fucking generative AI, which is plainly false to anyone who knows artists or has even objectively evaluated generative AI outputs, but where in fuck did historical artists get their inputs from if we’re supposing that’s true? generative AI copies, but there’s nothing to copy from when there’s no original.

and that’s not meant to give a single inch to these shitheads and allow the fash idea that true art comes from some fantasy version of history either. modern artists do fucking fantastic creative work — when they’re not shackled by the capitalist systems that keep shitheads like Sammy boy buying expensive, unchallenging art pieces purely as a tax dodge

[–] [email protected] 0 points 5 months ago (1 children)

It's hard to understand what Samuel Alternativeman hopes to accomplish by making such statements. Does he want everyone to give up on being creative and just defer to AI? Does he think that, without a source of real creativity to train on, his products would have any value at all?

[–] [email protected] 0 points 5 months ago (1 children)

He’s either trying to generate new critihype by making Clippy intelligent again (“It learns just like those pesky hoomans do!”), or slither his way out of that lawsuit by claiming it couldn’t have stolen original ideas when there have never been any original ideas in the first place.

I’m still trying to figure out what’s stupider.

[–] [email protected] 0 points 5 months ago

“It learns just like those pesky hoomans do!”

It's Furbies all over again

[–] [email protected] 0 points 5 months ago (1 children)

I’m starting to think that this Sam Altman guy just might be a giant asshole.

[–] [email protected] 0 points 5 months ago (1 children)

It took you this long to suss that???

[–] [email protected] 0 points 5 months ago (1 children)

First they came for the eyeballs, and I said nothing.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

Where we're going, we don't need eyes.

--Sam Altman, probably

[–] [email protected] 0 points 5 months ago (1 children)

Here's today's episode of "The Left is So Mean and The Right is so Nice, What Gives?" HN edition.

[–] [email protected] 0 points 5 months ago (1 children)

And right leaning people have been so far more capable of cooperating with those who don't match their values. Also, on paper, left leaning people have expressed more humane values, especially when very general or abstract. But in the day to day, when it comes to the mundane or practical, I've seen the sexist or racist guys help my Muslim or girl friends more than the ones saying everybody is equal.

what in the absolute fuck. nobody calls this out?

The extreme left is a good bit smaller than the extreme right. The vast, vast majority of Democrats that MAGA folks like to paint as communists are in fact damn near as conservative as they are on most issues. You could probably put all the actual communists in America in a single stadium, with space left over.

this is barely masking some “all the leftists you see online are sockpuppets and everyone protesting is a paid actor” horseshit, isn’t it?

[–] [email protected] 0 points 5 months ago

The part about him seeing racists be nicer than leftists (?) is beyond baffling. Yeah, white US southerners in 1850 were pretty dang nice and willing to help a random marginalized person out once in a while, what the fuck is the point?

[–] [email protected] 0 points 5 months ago (2 children)
[–] [email protected] 0 points 5 months ago (1 children)

it's important to keep in mind that Kevin Roose is the most gullible motherfucker.

[–] [email protected] 0 points 5 months ago

“Latecomer’s guide to creating an AI girlfriend” in this Sunday’s NYT magazine.

[–] [email protected] 0 points 5 months ago

finally we’ve created the AI girlfriend from the famous movie “don’t create an AI girlfriend, it’s kind of a fucked up thing to do”

[–] [email protected] 0 points 5 months ago

MS carbon emissions up 30% due to spicy autocomplete

https://www.theregister.com/2024/05/16/microsoft_co2_emissions/
