[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

The safety concern is about a renegade superintelligent AI, not an AI that can recite bomb recipes scraped from the internet.

[–] [email protected] 1 points 3 months ago (1 children)

Damn, if only we had some way to, you know, turn off electricity to a device. A switch of some sort.

I already pointed this out in the thread; scroll down. The idea of a kill switch makes no sense. If the decision is made that some tech is dangerous, it will be made by the owner or the government. In either case it will be a political/legal decision, not a technical one. And you don't need a kill switch for something that someone has to actively pump resources into. All you need to do is turn it off.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

There's a whole lot of discussion around this already, and it's been going on for years. An AI that was generally smarter than humans would probably be able to do things undetected by its users.

It could also be operated by a malicious user, or escape its container by writing code.

[–] [email protected] 1 points 3 months ago

Well aware. Now how does having a James Bond evil-villain-style destruction switch prevent it?

We have decided to run the thought experiment where a malicious AI is stuck in a box and wants to break out and take over. OK, if you are going to assume this 1960s B-movie plot is likely, why are you solving the problem so badly?

As a side note, I find it amusing that nerds have decided that intelligence alone gets you what you want in life, with no other factors involved, given that we of all people should know that intelligence is overrated in our society.