this post was submitted on 25 Nov 2023
Technology
top 17 comments
[–] [email protected] -1 points 10 months ago* (last edited 10 months ago) (2 children)

It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it would be if it were a human choice. You can't punish an AI for doing something wrong, and an AI doesn't require a raise for doing something right either.

[–] [email protected] 0 points 10 months ago* (last edited 9 months ago) (1 children)

That is like saying you can't punish a gun for killing people.

edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

[–] [email protected] -1 points 10 months ago

Sorry, but this is not a valid comparison. What we're talking about here is a gun with AI built in that decides whether or not it should pull the trigger. With a regular gun, a human always presses the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

[–] [email protected] 0 points 10 months ago (1 children)

You can't punish AI for doing something wrong.

Maybe I'm being pedantic, but technically you do punish AIs when they do something "wrong" during training, just as you reward them for doing something right.

[–] [email protected] -1 points 10 months ago

But that is during training. My point was that you can't punish an AI for making a mistake when it's used in combat situations, which is very convenient for anyone who intentionally wants that mistake to happen.

[–] [email protected] -1 points 10 months ago (1 children)

The only fair approach would be to start with the police instead of the army.

Why test this on everybody else but your own? On top of that, AI might even do a better job than the US police.

[–] [email protected] 0 points 10 months ago (1 children)

But that AI would have to be trained on existing cops, so it would just shoot every black person it sees

[–] [email protected] -1 points 10 months ago

My point is that there would be more motivation to filter Derek Chauvin types out of the AI's training data than to filter out a soldier with a trigger finger.

[–] [email protected] 0 points 10 months ago* (last edited 10 months ago) (1 children)

Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

[–] [email protected] -1 points 10 months ago* (last edited 10 months ago)

Especially one that is made to kill everybody else except their own. Let it replace the police; I'm sure the quality control would be a tad stricter then.

[–] [email protected] -1 points 10 months ago (1 children)

We are all worried about AI, but it's humans I worry about: how we will use AI, not the AI itself. I'm sure that when electricity was invented, people feared it too, but it was how humans used it that was, and still is, the risk.

[–] [email protected] 0 points 10 months ago (1 children)

Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.

[–] [email protected] -1 points 9 months ago

It will go haywire in some areas, for sure.

[–] [email protected] 0 points 10 months ago* (last edited 10 months ago) (1 children)

As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

[–] [email protected] 0 points 10 months ago (1 children)

Imagine a mine that could move around, seek targets, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

[–] [email protected] 0 points 10 months ago (1 children)

Imagine a mine that could recognize, "That's just a child/civilian/medic stepping on me; I'm going to save myself for an enemy soldier." Or a mine that could recognize, "Ah, CENTCOM just announced a ceasefire; I'm going to take a little nap." Or, "The enemy soldier who just stepped on me is unarmed and frantically calling out that he's surrendered; I'll let this one go through. Not the barrier troops chasing him, though."

There are opportunities for good here.

[–] [email protected] -1 points 10 months ago

Lmao are you 12?