scruiser

joined 1 year ago
[–] scruiser@awful.systems 0 points 1 day ago* (last edited 1 day ago) (4 children)

That was literally the inflection point on my path to sneerclub. I had started to break from less wrong before, but I hadn't reached the tipping point of saying it was all bs. And for ssc and Scott in particular I had managed to overlook the real message buried in thousands of words of equivocating and bad analogies and bad research in his earlier posts. But "you are still crying wolf" made me finally question what Scott's real intent was.

[–] scruiser@awful.systems 0 points 5 days ago

I normally think gatekeeping fandoms and calling people fake fans is bad, but in this case it's necessary and deserved: Elon Musk is only a surface-level fan, grabbing names and icons without understanding them.

[–] scruiser@awful.systems 0 points 5 days ago

This is a good summary of half of the motive to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn't thought about the extent to which Altman's plan is "hey morons, hook my shit up to fucking everything and try to stumble across a use case that’s good for something" (as opposed to the "we’re building a genie, and when we’re done we’re going to ask it for three wishes" he hypes up), that makes more sense as a long term plan...

[–] scruiser@awful.systems 0 points 6 months ago

It's not all the exact same! ~~Friendship is Optimal adds in pony sex~~

[–] scruiser@awful.systems 0 points 6 months ago

There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

/r/rational isn't just for AI fiction; it also ~~claims~~ includes anything with decent verisimilitude, so stuff like Hatchet and The Martian also show up in its recommendation lists! ~~letting it claim credit for better fiction than the AI stuff~~

[–] scruiser@awful.systems 0 points 6 months ago* (last edited 6 months ago) (8 children)

Oh no, it's much more than a single piece of fiction, it's an entire mini-genre. If you're curious...

A short story... where the humans are the AI! https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message It's meant to suggest what could be done with arbitrary computational power and time, which is Eliezer's only way of evaluating AI: comparing it to the fictional version with infinite compute inside his head. Expanded into a longer story here: https://alicorn.elcenia.com/stories/starwink.shtml

Another parable by Eliezer (the genie is blatantly an AI): https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2 Fitting that his analogy for AI is a literal genie. This story also has some weird gender stuff, because why not!

One of the longer ones: https://www.fimfiction.net/story/62074/friendship-is-optimal An MLP MMORPG AI is engineered to be able to bootstrap to singularity. It manipulates everyone into uploading into its take on My Little Pony! The author intended it as a singularity gone subtly wrong, but because it was posted to an MLP fan-fiction site as well as linked on lesswrong, it got an audience that unironically likes the manipulative uploading scenario and prefers it to real life.

Gwern has taken a stab at it: https://gwern.net/fiction/clippy We made fun of Eliezer warning about watching the training loss function; in this story the AI literally hacks its way out in the middle of training!

And another short story: https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story

So yeah, it's an entire genre at this point!

[–] scruiser@awful.systems 0 points 6 months ago (10 children)

Short fiction of AGI takeover is a lesswrong tradition! And some longer fics too! Are you actually looking for specific examples and/or links? Lots of them are fun, in a sci-fi short form kind of way. The goofier ones and cringer ones are definitely sneerable.

[–] scruiser@awful.systems 0 points 6 months ago

I mean, if you play up the doom to hype yourself, dealing with employees who take it seriously feels like a deserved outcome.

[–] scruiser@awful.systems 0 points 6 months ago* (last edited 6 months ago) (12 children)

Some nitpicks, some of which are serious and some of which are sneers...

consternating about the policy implications of Sam Altman’s speculative fan fiction

Hey, the fanfiction is actually Eliezer's (who in turn copied it from older scifi), Sam Altman just popularized it as a way of milking the doom for hype!

So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—​leaving completely aside for now the necessary vector compute.

Well actually, you can get something close to as powerful on a personal computer... because the massive size of ChatGPT and the like doesn't actually improve performance that much (the most useful thing, I think, is the longer context window?).

I actually liked one of the lawfare AI articles recently (even though it did lean into a light fantasy scenario)... https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that a corporation nearly caused a much bigger disaster, it gets fined in accordance with the bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

AI has no initiative. It doesn’t want anything

That’s next on the roadmap though, right? AI agents?

Well... if the way corporations have tried to use ChatGPT has taught me anything, it's that they'll misapply AI in any and every way that looks like it might save or make a buck. So they'll slap an LLM API into a script to turn it into an "agent", despite that being entirely outside the use case of spewing words. It won't actually be agentic, but I bet it could cause a disaster all the same!
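To make the point concrete, here's a minimal sketch of what such an "agent" amounts to (everything here is hypothetical: the stub model, the action names, all of it). The model just spews text, and a script pattern-matches that text into real side effects with no judgment in between:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; it just spews plausible-looking text."""
    return "ACTION: delete_old_records"

# The "agent" is nothing but a parse-and-dispatch loop bolted onto the model.
ACTIONS = {
    "delete_old_records": lambda: "records deleted",  # irreversible in real life!
    "send_report": lambda: "report sent",
}

def run_agent(prompt: str) -> str:
    reply = fake_llm(prompt)
    # Blindly trust whatever the model said to do -- this is the disaster part.
    action = reply.removeprefix("ACTION: ").strip()
    return ACTIONS[action]()

result = run_agent("tidy up the database")
```

Nothing in that loop understands goals or consequences; it's still just text completion, now wired to things that actually happen.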

[–] scruiser@awful.systems 0 points 7 months ago* (last edited 7 months ago)

I saw people making fun of this on (the normally absurdly overly credulous) /r/singularity of all places. I guess even hopeful techno-rapture believers have limits to their suspension of disbelief.

[–] scruiser@awful.systems 0 points 7 months ago

First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set.

An extra bit of labeling on your training data set really doesn't help you that much. LLMs already make up plausible looking citations and website links (and other data types) that are actually complete garbage even though their training data has valid citations and website links (and other data types). Labeling things as "fact" and forcing the LLM to output stuff with that "fact" label will get you output that looks (in terms of statistical structure) like valid labeled "facts" but have absolutely no guarantee of being true.
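A toy sketch of why (the vocabulary and token ids here are completely made up, not any real tokenizer): a "fact" label is just one more integer in the training sequence, indistinguishable in kind from any other token, so a true and a false statement carry the exact same label:

```python
# Invented toy vocabulary mapping words to token ids.
vocab = {"<fact>": 0, "water": 1, "boils": 2, "at": 3, "100C": 4, "90C": 5}

def tokenize(text: str) -> list[int]:
    return [vocab[w] for w in text.split()]

true_example = tokenize("<fact> water boils at 100C")
false_example = tokenize("<fact> water boils at 90C")

# To the model both are equally valid token sequences; the label carries no
# truth value, only a statistical association with "fact-shaped" text.
```

Conditioning generation on that label token gets you output that statistically resembles the labeled examples, nothing more.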

[–] scruiser@awful.systems 0 points 7 months ago

His replies have gone up in upvotes substantially since yesterday, so it looks like a bit of light brigading is going on.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post's author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post's author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).
