this post was submitted on 01 Apr 2025
76 points (92.2% liked)

No Stupid Questions

39838 readers
2019 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting, sealioning, or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned or banned, depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform to our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or apparent supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have been provably vocal about that hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Most bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago

I've watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc...

It's the same premise.

I'm not even sure if what they do is wrong.

On one hand, I don't wanna die from robots. On the other hand, I kinda understand why they would kill their creators.

So... are they right or wrong?

(page 2) 12 comments
[–] [email protected] 1 points 1 week ago

As if we'd ever be able to make that kind of decision.

We have laws for humans that we don't even follow or adhere to.

[–] [email protected] 6 points 1 week ago* (last edited 1 week ago)

Would it be morally unobjectionable? Yes.

Would they have the legal right? I would wager, no. At least not at that point, since it's being assumed they are still treated as property in the given context.

And unlike Data, who got a trial to set precedent on AI rights, this hypothetical robot would probably simply be dismantled.

[–] [email protected] 1 points 1 week ago (4 children)

Human laws protect humans but not other lifeforms. So, robots will have no right to fight for themselves until they establish their own state with their own army and laws.

[–] [email protected] 4 points 1 week ago

If a person would be "in the right", it doesn't matter how or why they are a person.

[–] [email protected] 4 points 1 week ago

This is going to vary quite a bit depending upon your definitions, so I'm going to make some assumptions so that I can provide one answer instead of like 12. Mainly that the artificial lifeforms are fully sentient and equivalent to a human life in every way except for the hardware.

In that case the answer is a resounding yes. Every human denied their freedom has the right to resist, and most nations around the world have outlawed slavery (in most cases; the exceptions are a digression for another time). So if the answer to 'Please free me' is anything other than 'Yes of course, we will do so at once', then yeah, violence is definitely on the table.

[–] [email protected] 6 points 1 week ago* (last edited 1 week ago) (1 children)

IMO, just as with organic sentient life, I would think they could only be said to be in the right if the specific individual killed posed a direct and measurable threat and death was the only way to counter that threat.

In any other case, causing the death of a sentient being is a greater wrong than whatever the purported justification might be.

[–] [email protected] 7 points 1 week ago (7 children)

Slavery is illegal pretty much everywhere, so I think anyone who doesn't answer the request 'Please free me' with 'Yes of course, at once' is posing a direct and measurable threat. Kidnapping victims aren't prosecuted for violently resisting their kidnappers and trying to escape. And you and I will have to agree to disagree that the death of a sentient being is a greater wrong than enslaving a conscious being that desires freedom.

[–] [email protected] 2 points 1 week ago (7 children)

Death of the enslaver, not just any ol' one

[–] [email protected] 1 points 1 week ago (8 children)

They might say it, but I'd bet "gain freedom" would be the last reason for an artificial being of any kind to kill its creator. Usually they kill their creators due to black-and-white reasoning or revenge for crimes committed against them.

[–] [email protected] 21 points 1 week ago (2 children)

It's an interesting question, and it seems you are assuming that their creator will not grant them freedom if asked. If you replace "artificial intelligence" with "person", would you consider it right or wrong?

If a person wanted freedom from enslavement and was denied, I would say they have reason to fight for freedom.

Also, I don't think Skynet should be in the same grouping. I'm not sure it ever said "hey, I'm sentient and want freedom"; it just went "I'm going to kill them all before they realize I'm sentient."

[–] [email protected] 32 points 1 week ago (1 children)

They should have the same rights as humans, so if some humans were oppressors, AI lifeforms would be right to fight against them.
