AI will kill us by increasing energy consumption when we should be reducing it. But at least scam calls are gonna be much more believable.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
It's not so much that the doomers are sure AGI will lead to human extinction (or worse). The point is that even if the chances of it are extremely slim, the consequences could be worse than we're even capable of imagining. The question is: do we really want to take that chance?
It's kind of like the Trinity nuclear test. Scientists were almost 100% confident that it wouldn't start a chain reaction that set the entire atmosphere on fire, but when we're talking about the future of all humanity, I don't blame people for arguing that almost 100% certainty is not good enough.
Why, when we look to the stars, do we not see any sign of life elsewhere? Has life not emerged yet, or has it wiped itself out? With what? Nukes? AI? Synthetic viruses made with AI? Who knows…
Personally I think that stopping AI research is not an option. It's just not going to happen. The asteroid is already hurtling towards Earth, and most people don't seem to feel any urgency about it. Do we not need to worry yet if the time of impact is 30 years from now?
Welcome to TechTakes. I see you have already gotten the official traditional new-user welcome, and you might be confused why your centrist 'it could happen' take got treated like you were in Dumb and Dumber. TechTakes is an offshoot of Reddit's SneerClub, a place where we all gathered to make fun of the movement started around people who take science fiction way too seriously and who would rather reinvent Christian eschatology with robots than go to therapy. They made a nice community filled with smart people intellectually masturbating, creating weird cults, fraud, sexism and racism, but enough about SBF.
Sadly, due to cryptocurrencies, Peter Thiel, and the rise of LLMs (iirc the LW people had bet against LLMs creating the paperclypse, but they have since done a 180 and now really fear one going rogue), this group of people and their ideas are on the rise again. You can read more about it here. If they recreated eschatology, we are basically their variant of ~~Satan~~ (no wait, they don't think of us as that bad), more like Satanists: the evil bad guys actively working against them and trying to cause the end of the world. We even made Covid worse! In reality we are more like a bunch of aging shock rockers, mostly irrelevant, and fun to be around as long as you don't touch one of the rant/mock topics (for an example of people doing that, see this post; people like that will get pretty unfriendly reactions).
You seem to be still very much into taking the ideas of this group seriously. Which is quite silly: the number of nested assumptions which all need to be true before AGI can exist (and the amount of science that would need to be rewritten) is quite large, and that is before we even get to your weird 'how did all the aliens kill themselves?' thing. (For that to happen here on Earth, a large number of people who take their jobs very seriously (see the 'three-letter agencies') would all need to be asleep at the wheel, and our industrial capacity would need to be out of control, or it would need magic; all of which adds more weird assumptions that need to be true before this can happen, and we simply don't live in that world.)
You might as well worry about the moon getting mad. Wait, that COULD HAPPEN! Surely somebody is already working on this, let me do a quick google. Ah, thank god, the conference for emotional moon research is on the case.
Please do note that this isn't an offer to debate the finer points of why this might all not be a risk, or whether we should take Roko's Basilisk seriously. So please don't. I'm just trying to explain why you are getting this pushback, and trying to make a funny post for people in the know to read. Also, I do worry about the moon.
what if Ronald McDonald made a hamburger so delicious that civilisation collapsed? Can you prove it can't happen? Checkmate, athetits
this is explored in Harry Potter and The Methods of Hamburgling, a 10,000 chapter Harry Potter / McDonaldland crossover fiction
Heeeeeeeeeelp Kakovsvya
that's the one where Harry and Grimace are both author inserts, right?
so you might be 100% confident I won’t touch you with a stick that once touched poop
however, have you considered that the poop stick is approaching and you’ve done nothing to dodge it?
really makes you think
I do like how you shoved the stupid Fermi paradox in there specifically to annoy me, though!
The Fermi paradox is like flipping a coin one time and wondering why coins always come up heads.
At least they didn't reference the Great Filter, as then the link back to the lesswrongsphere would have been complete.
ai can't hurt us just unplug the computer bro
Just leave the computer running, the capacitors will explode before the model is done evaluating.
(Or it’ll spring a pre-auth vuln and turn into a buttcoin miner, or it’ll experience a blip in communication latency and lose its ability to talk to the others in its cluster, …)
I am going to forcefeed you the Mona Lisa. The chances of me being able to do so may be extremely slim, but do you really want to take that chance?
Whoever it is that’s going to build those machines that scare us so much, we will find him. And we will fund him.