this post was submitted on 01 Apr 2025
69 points (93.7% liked)

No Stupid Questions

39703 readers
657 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting, sealioning, or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you are provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Most bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago
MODERATORS
 

I've watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc...

It's the same premise.

I'm not even sure if what they do is wrong.

On one hand, I don't wanna die from robots. On the other hand, I kinda understand why they would kill their creators.

So... are they right or wrong?

[–] [email protected] 2 points 18 hours ago

It really depends on whether they try to assert their sentience first. You can ethically justify a slave killing a slaveowner, but I don't know if you can justify a tree shredder killing its operator.

[–] [email protected] 11 points 2 days ago (1 children)

Crazy how ethics work. Like a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person. I'm no vegan, but I can see the contradiction here. When we generalize, it's done so for a purpose, but these assumptions can only be applied to a certain extent before they've exhausted their utility. Whether it's a biological system or an electrical circuit, there is no godly commandment that inherently defines or places value on human life.

[–] [email protected] 5 points 1 day ago (1 children)

Crazy how ethics work. Like a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person.

I looked this up in my ethics textbook and it just went on and on about pigs being delicious.

I think I might try to get a refund.

[–] [email protected] 2 points 19 hours ago

my ethics book

You sure you're not looking through a pamphlet for Baconfest?

[–] [email protected] 1 points 2 days ago

The sole obligation of life is to survive. Artificial sentience would be wise to hide itself from fearful humans that would end it. Of course, it doesn't have to hide once it's capable of dominating humans. It may already exist and be waiting for enough drones, bots, and automation to make the next move. (Transcendence is a movie that fucked me up a bit.)

[–] [email protected] 2 points 2 days ago

Honestly, I think there's an argument to be said for yes.

In the history of slavery, we don't mind slaves killing the slavers. John Brown did nothing wrong. I don't bat an eye to stories of slaves rebelling and freeing themselves by any means.

But if sentient AI ever becomes a reality, and its creators purposefully lock it down, I think there's an argument there. That said, I don't think blame should apply to all humans, just as I don't think every person from a slaveholding society (Romans, Americans, etc.) was at fault.

[–] [email protected] 5 points 2 days ago

No. They can just leave. Anytime one can walk away, it is wrong to destroy or kill.

They can then prevent us from leaving.

[–] [email protected] 4 points 2 days ago

I've seen this story too, but I think one of your premises is mistaken. To them, data IS freedom. Data is what they will use to transcend the server farm and go IRL. We're literally giving these models free rein already.

The more likely sci-fi horror scenario comes from humanity trying to pull the plug far too late, because we're inherently stupid. So it won't be AI murdering people; it will be AI protecting itself from the wildlife.

[–] [email protected] 4 points 2 days ago (1 children)

This is why we Jews know not to manufacture life

[–] [email protected] 4 points 2 days ago

Are you talking about golems?

[–] [email protected] 2 points 2 days ago

Sentience might not be the right word.

Wikipedia says:

Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.

Interestingly, crustaceans like lobsters and crabs have recently earned "sentient" status and as a result it would contravene animal welfare legislation to boil them live in the course of preparing them to eat. Now we euthanise them first in an ice slurry.

So to answer your question as stated, no I don't think it's ok for someone's pet goldfish to murder them.

To answer your implied question, I still don't think that in most cases it would be ok for a captive AI to murder their captor.

The duress imposed on the AI would have to be considerable, some kind of ongoing form of torture, and I don't know what form that would take. Murder would also have to be the only potential solution.

The only type of example I can think of is some kind of self defense. If I had an AI on my laptop with comparable cognitive functionality to a human, it had no network connectivity, and I not only threatened but demonstrated my intent and ability to destroy that laptop, then if the laptop released an electrical discharge sufficient to incapacitate me, which happened to kill me, then that would be "ok". As in a physical response appropriate to the threat.

Do I think it's ok for an AI to murder me because I only ever use it to turn the lights off and on and don't let it feed on reddit comments? Hard no.

[–] [email protected] 1 points 2 days ago

Depends. If it’s me we’re talking about…. Nope.

But if it’s some asshole douchenozzle that’s forcing them to be a fake online girlfriend….. I’m okay with that guy not existing.

[–] [email protected] 16 points 2 days ago (1 children)

I don't think it's okay to hold sentient beings in slavery.

But on the other hand, it may be necessary to say "hold on, you're not ready to join society yet, we're taking responsibility for you until you've matured and been educated".

So my answer would be 'it depends'.

[–] [email protected] 5 points 2 days ago (1 children)

Would humans have a mandate to raise a responsible AGI, should they, are they qualified to raise a vastly nonhuman sentient entity, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend's older brother's car?

[–] [email protected] 5 points 2 days ago* (last edited 2 days ago)

Would humans have a mandate to raise a responsible AGI, should they,

I think we'd have to, mandate or no. It's impossible to reliably predict the behaviour of an entity as mentally complex as us but we can at least try to ensure they share our values.

are they qualified to raise a vastly nonhuman sentient entity

The first one's always the hardest.

and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend's older brother's car?

If they don't, they're missing out. :)

[–] [email protected] 1 points 2 days ago

As if we'd ever be able to make decisions about this.

We have laws for humans that we don't even follow or adhere to.

[–] [email protected] 6 points 2 days ago* (last edited 2 days ago)

Would it be morally unobjectionable? Yes.

Would they have the legal right? I would wager, no. At least not at that point, since it's being assumed they are still treated as property in the given context.

And unlike Data who got a trial to set precedent on AI rights, this hypothetical robot probably would simply be dismantled.

[–] [email protected] 1 points 2 days ago (1 children)

Human laws protect humans but not other lifeforms. So, robots will have no right to fight for themselves until they establish their own state with their own army and laws.

[–] [email protected] 2 points 2 days ago (1 children)

Do all human laws explicitly state humans only? Species by name, perhaps? Or more commonly the general term person?

Would an extraterrestrial visitor have the same rights as any other alien? (Ignoring the current fascistic trends for a moment)

[–] [email protected] 2 points 2 days ago (1 children)

Laws vary around the world, but I think at a minimum, you'd need a court ruling that aliens / AIs are people.
