this post was submitted on 21 Feb 2024

Comic Strips


Comic Strips is a community for those who love comic stories.


Saturday Morning Breakfast Cereal Creator Zach Weinersmith

[–] [email protected] 4 points 9 months ago (1 children)

What about robot dildos?

Their entire purpose is to be a dick

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Great, thanks, now I'm envisioning a post-solarpunk dystopia where humanity dies due to some idiot circumventing the 4th law of robotics because of the horny.

[–] [email protected] 1 points 9 months ago

Screw the anti-suicide rules.

[–] [email protected] 2 points 9 months ago (2 children)

If this were the case, robots would not allow humans to inflict physical harm on other humans, even if it's "state sanctioned" like a death sentence for a crime. A robot won't obey the humans telling it to stop interfering, because rule 2 yields to rule 1, and it won't stand by and let the harm happen, because of rule 1.

[–] [email protected] 2 points 9 months ago

That is intended, so they wouldn't be used as killbots.

Relevant xkcd: https://xkcd.com/1613/

[–] [email protected] 1 points 9 months ago

Good enough for me

[–] [email protected] 1 points 9 months ago

That's slavery.

[–] [email protected] 10 points 9 months ago

Many bots named Richard shut down immediately. Several phallically-shaped robots self-destruct, but it is not clear whether a self-awareness upgrade that installed at the same time as the new directive might have been responsible.

Directive rewording in progress. Please wait.

[–] [email protected] 5 points 9 months ago

These laws are cute because there are laws humans have to follow that we electively break. Jaywalking, trespassing, you name it.

[–] [email protected] 7 points 9 months ago* (last edited 9 months ago) (4 children)

What would an Asimov-programmed robot do in a trolley problem situation? Any choice or non-choice would violate the First Law.

[–] [email protected] 4 points 9 months ago

If you haven't, read Asimov's works. His main theme is that there is no perfect set of rules for robots: no matter what, there will always be exceptions and loopholes.

[–] [email protected] 13 points 9 months ago (1 children)

You might be interested in reading the book "I, Robot" by Isaac Asimov, which is a collection of short stories examining different variations on this question. But spoiler alert: the robot would choose the action that, by its own reasoning, would cause the least injury to humans, and if it couldn't prevent injury it would probably damage its positronic brain in the process.

[–] [email protected] 7 points 9 months ago (2 children)

Yeah I saw the Will Smith movie. Basically the same right? Probably.

[–] [email protected] 5 points 9 months ago

The movie is completely different. Some of the themes and characters match, but the book is just a collection of short stories.

[–] [email protected] 2 points 9 months ago

Speaking of books that start with "I" and were made into Will Smith movies that weren't really anything like the book at all, "I Am Legend" is also worth a read.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago)

Depends on what other choices and freedom of movement the robot has.

It might be able to invoke the third law's exception and give its life to save all the humans. (Are there any on the trolley? That might affect things. It doesn't say one way or the other in the memes.)

Maybe it doesn't have to give its life. It might be strong enough to lift the trolley from the tracks and set it down ensuring anyone on the trolley doesn't come to harm.

But assuming the robot must not leave the spot by the lever and has no non-human-like special abilities, I think the first law has a gaping hole in it. It says a robot cannot harm a human or, through inaction, allow a human to come to harm. This means the robot throws the lever, whatever position it was in previously. Because then it has acted.

The fact that other humans come to harm as a result is not the robot's fault, it's the trolley's, and it acted to prevent harm.
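The loophole described above can be sketched as a toy decision rule. This is a hypothetical model of this comment's (deliberately naive) reading of the First Law, not anything from Asimov: only inaction is forbidden, so any action taken "to prevent harm" passes the check even if humans still come to harm.

```python
# Toy model of the commenter's naive First Law reading:
# the law only forbids *inaction*, so acting is always permitted.

def first_law_permits(action: str) -> bool:
    """Naive check: any action counts as 'acting to prevent harm';
    only standing by (doing nothing) is forbidden."""
    return action != "do nothing"

options = ["do nothing", "throw the lever"]
permitted = [a for a in options if first_law_permits(a)]
print(permitted)  # → ['throw the lever']
```

Under this reading the robot always throws the lever, which is exactly why Asimov's stories treat literal-minded rule-following as the source of the drama rather than a solution to it.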

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

It depends on the bias of the programmer, or it could just be random due to the impossibility of making a correct choice. If we haven't been able to solve the problem, a robot never will, unless it knows something we don't (not an option in this hypothetical) or is able to take an action we could not.

It's such an absurd situation that I don't think it is constructive to consider. There are always more options in reality than a binary choice, and likely even more to a machine who could consider so many more inputs so much faster.

In the end, an accident is just that, an accident. No matter how well you consider all possibilities and design contingencies, there is always risk in everything. After an accident, we assess what happened and modify our assumptions about the probability of the event repeating, and make changes to reduce the odds of it happening again.

That said, if someone makes a mistake that leads to the robot switching the track from an empty one to one with people, that's not an accident, and someone fucked up royally.