This professor is arguing we need to regulate AI because we haven't found any space aliens yet, and the most conceivable explanation why is that they all wiped themselves out with killer AIs.
And it hits some of the greatest hits:
- AI will nuke us all because the nuclear powers are so incompetent they'd hook the bombs up to ChatGPT.
- AI will wipe us out with a killer virus, for reasons.
- We may not be adorable enough towards AI to prevent being vaporized even if we become cyborgs 🥺
- AI could wipe out an entire planet. Solution: we need people on a bunch of different planets and space stations so we can study it "safely."
- Um, actually, the space aliens would all be robots. Be free from your flesh prisons!
Zero mentions of global warming of course.
I kinda want to think the author has just been reading some weird ideas. At least he put himself out there and wrote a paper with human sentences! He's all aboard the AI hype train for sure, and the paper constantly makes huge logical leaps, but it somehow doesn't make me feel as skeezy as some of the other stuff on here.