this post was submitted on 25 May 2025
askchapo
Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.
Rules:
- Posts must ask a question.
- If the question asked is serious, answer seriously.
- Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.
- Try [email protected] if you have questions about moderation, site policy, the site itself, development, volunteering, or the mod team.
I don't think AI safety is such a big problem that we have to stop building AI or we'll destroy the world, but I do agree there should be regulations, oversight, and specialized people making sure AI is developed safely, just to help mitigate problems that could come up.

There's a mentality that AI will never be as smart as humans, so whenever people suggest policies for AI safety it gets dismissed as unreasonable and as overhyping how good AI is, since it supposedly won't become dangerous for a long time. But if we hold that mentality indefinitely, then by the time AI does become dangerous we'd have no roadblocks in place, and it might actually become a problem.

I do think completely unregulated AI, developed without any oversight or guardrails, could lead to bad consequences in the future, but I also think that's something oversight can mitigate. I don't believe, for example, that an AGI will somehow "break free" and take over the world if it is ever developed. If it were "freed" in a way that starts doing harm, it would be because someone allowed that.