this post was submitted on 15 May 2024

TechTakes

Ilya tweet:

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

Jan tweet:

I resigned

this comes precisely 6mo after Sam Altman's job at OpenAI was rescued by the Paperclip Maximiser. NYT: "Dr. Sutskever remained an OpenAI employee, but he never returned to work." lol

orange site discussion: https://news.ycombinator.com/item?id=40361128

lesswrong discussion: https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai

you are viewing a single comment's thread
[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (2 children)

JFC, I mostly avoid AGI boosters, so I'm always aghast when I'm reminded of what they believe. HN commenter says (https://news.ycombinator.com/item?id=40365850) AGI will bring:

- Solve CO2 Levels
- End sickness/death
- Enhance cognition by integrating with willing minds.
- Safe and efficient interplanetary travel.
- End of violent conflicts
- Fair yet liberal resource allocation (if still needed), "from scarcity to abundance"

[–] [email protected] 0 points 4 months ago (1 children)

And we can do all of that just by scaling up autocomplete, which is basically already AGI (if you squint).

How come the goalposts for AGI are always set at the best of what people can do?

I can't diagnose anyone, yet I have GI.

But it shouldn't surprise me that their benchmark for intelligence is basically that something can put together somewhat coherent-sounding technobabble while being unable to do things my five-year-old kindergartner can.

Yup, basically AGI.

[–] [email protected] 0 points 4 months ago

This is my favorite thing to do.