gerikson

joined 2 years ago
[–] [email protected] 0 points 1 day ago (2 children)

esprit d'escalier

this whole "superbabies will save us from AI" presupposes that the superbabies are immune to the pull of LW ideas. Just as LW discounts global warming, fascism, etc. to focus on runaway AI, who says superbabies won't have a similar problem? It's just one step up the metaphorical ladder:

LW: "ugh normies don't understand the x-risk of AI!"

Superbabies: "ugh our LW parents don't understand the x-risk of Evangelion being actually, like, real!"

[–] [email protected] 0 points 1 day ago (4 children)

I got caught on that quote too...

Superbabies is a backup plan; focus the energy of humanity’s collective genetic endowment into a single generation, and have THAT generation solve problems like “figure out how to control digital superintelligence”.

Science-fiction solutions for science-fiction problems!

Let's see what the comments say!

Considering current human distributions and a lack of 160+ IQ people having written off sub-100 IQ populations as morally useless [...]

Dude are you aware where you are posting.

Just hope it never happens, like nuke wars?

Yeah that's what ran the Cold War, hopes and dreams. JFC I keep forgetting these are kids born long after 1989.

Could you do all the research on a boat in the ocean? Excuse the naive question.

No, please keep asking the naive questions, it's what provides fodder for comments like this.

(regarding humans having "[F]ixed skull size" and therefore a priori being unable to compete with AI):

Artificial wombs may remove this bottleneck.

This points to another implied SF solution. It's already postulated by these people that humans are not having enough babies, or rather that the right kind of humans aren't (wink wink). If we assume that they don't adhere to the Platonic ideal that women are simply wombs and all traits are inherited from males, then to breed superbabies you need buy-in from the moms. Considering how hard it is for these people to have a normal conversation with the fairer sex, the prospect of them both convincing a partner to have a baby and letting some quack from El Salvador mess with its genes seems insurmountable. Artificial wombs will resolve this nicely. Just do a quick test at around puberty to determine the God-given IQ level of a female, then harvest her eggs and implant them into artificial wombs. The less intelligent ones can provide eggs for the "Beta" and "Gamma" models...

But you don't go from a 160 IQ person with a lot of disagreeability and ambition, who ends up being a big commercial player or whatnot, to 195 IQ and suddenly get someone who just sits in their room for a decade and then speaks gibberish into a youtube livestream and everyone dies, or whatever.

These people are insane.

[–] [email protected] 0 points 2 days ago

This isn't even skating towards where the puck is, it's skating in a fucking swimming pool.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah that’s “Lena” by the SCP guy. Great story.

https://qntm.org/mmacevedo

MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary.

[–] [email protected] 0 points 1 week ago (3 children)

About the only good thing LW does is remind me of other, much better SF. In this case, Ian McDonald's Necroville (Terminal Café in the US), about a future where nanotech enables the resurrection of the dead. Said neo-living are of course discriminated against, have no human rights, and are used as cheap disposable labor by the corporation behind the technology.

[–] [email protected] 0 points 1 week ago (1 children)

D'oh! I missed that connection, although the little infographic amoebas should have tipped me off.

[–] [email protected] 0 points 1 week ago (4 children)

AI researchers continue to daub soot on the walls of Plato's cave, scaring themselves witless:

https://www.emergent-values.ai/

At least I've ID'd the transmission vector from LW to lobste.rs

[–] [email protected] 0 points 1 week ago (1 children)

A MoreWronger is concerned that the shitty fanfic the community excretes is limited to LW and Xhitter, and wonders if The Atlantic is a better venue

Look, I have nothing against fanfic myself, but if there's one powerful corrective it lacks, it's the editorial feedback that commercial content gets.

[–] [email protected] 0 points 1 week ago (2 children)

Credit where credit is due, this is a decent comeback

https://news.ycombinator.com/item?id=43005246

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (2 children)

They didn't care much for marine life when they abandoned a tugboat, leaving it to leak fuel and crap into the sea:

https://archive.ph/1UcWZ

[–] [email protected] 0 points 2 weeks ago

they recently got a profile in fucking WaPo

[–] [email protected] 0 points 2 weeks ago (1 children)

Rats have reached the "put up stickers to proselytize" stage of their weird religion

https://www.lesswrong.com/posts/SvtKronRNrw9AxXwa/clement-l-s-shortform?commentId=ZifQXkhxo5dJvrL3Q

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

1 point · submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.

 

Yes, I know it's a Verge link, but I found the explanation of the legal failings quite funny, and I think it's "important" we keep track of which obscenely rich people are mad at each other so we can choose which of their kingdoms to be serfs in.

 

Apologies for the link to The Register...

Dean Phillips is your classic ratfucking candidate, attempting to siphon off support from the incumbent to help their opponent. After a brief flare of hype before the (unofficial) NH primary, he seems to have flamed out by revealing his master plan too early.

Anyway, apparently some outfit called "Delphi" tried to create an AI version of him via a SuperPAC and got their OpenAI API access banned for their pains.

Quoth ElReg:

Not even the presence of Matt Krisiloff, a founding member of OpenAI, at the head of the PAC made a difference.

The pair have reportedly raised millions for We Deserve Better, driven in part by a $1 million donation from hedge fund billionaire Bill Ackman, who described his funding of the super PAC as "the largest investment I have ever made in someone running for office."

So the same asshole who is combating "woke" and DEI is bankrolling Phillips, who's supposed to be the new Bernie. Got it.

 

In a since-deleted thread on another site, I wrote:

For the OG effective altruists, it’s imperative to rebrand the kooky ultra-utilitarianists as something else. TESCREAL is the term adopted by their opponents.

Looks like great minds think alike! The EAs need to up their Google juice so people searching for the term find malaria nets, not FTX. Good luck on that, Scott!

The HN comments are OK, with this hilarious sentence:

I go to LessWrong, ACX, and sometimes EA meetups. Why? Mainly because it's like the HackerNews comment section but in person.

What's the German term for a recommendation that's the exact opposite?

 

[this is probably off-topic for this forum, but I found it on HN so...]

Edit: "enjoy" the discussion: https://news.ycombinator.com/item?id=38233810

 

Title is ... editorialized.

1 point · submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Title quote stolen from JWZ: https://www.jwz.org/blog/2023/10/the-best-way-to-profit-from-ai/

Yet again, the best way to profit from a gold rush is to sell shovels.
