This post was submitted on 08 Mar 2025 to the Technology community (962 points, 98.3% liked).

[–] mechoman444@lemmy.world 14 points 2 weeks ago (1 children)

I still remember "death panels" from the Obama era.

Now it's AI.

Whatever.

[–] Grass@sh.itjust.works 14 points 2 weeks ago

Everything Republicans complained about can be done under Trump, twice as bad and twice as evil, and they'll be 'happy' and sing his praises.

[–] FaceDeer@fedia.io 47 points 2 weeks ago

Yeah, I'd much rather have random humans I don't know anything about making those "moral" decisions.

If you've already answered, "No," you may skip to the end.

So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.

[–] Imgonnatrythis@sh.itjust.works 30 points 2 weeks ago (4 children)

That's not what the article is about. I think putting more objectivity into the decisions you listed, for example, benefits the majority. Human factors lean toward favoring minority factions: people with wealth, power, or a similar race, or those who seem "nice" or have many vocal advocates. This paper just states that current AIs aren't very good at what we would call moral judgment.

It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. I.e., "Hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant; consider adjusting the weighting in your existing algorithm by 0.5%."
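To make that concrete, here is a minimal sketch of how an AI-flagged outcome trend could feed a small adjustment into an existing rule-based allocation score. All names, features, and numbers here are hypothetical illustrations, not anything from the article:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    base_score: float                # score from the existing allocation algorithm
    has_gene_mutation: bool          # hypothetical AI-flagged risk factor
    uncontrolled_hypertension: bool  # hypothetical AI-flagged risk factor

# Hypothetical adjustment suggested by an AI-detected outcome trend,
# e.g. "this combination predicts <5-year survival after cardiac transplant".
TREND_PENALTY = 0.005  # the 0.5% weighting change from the comment above

def adjusted_score(c: Candidate) -> float:
    """Apply the trend-based adjustment on top of the existing algorithm's score."""
    if c.has_gene_mutation and c.uncontrolled_hypertension:
        return c.base_score * (1.0 - TREND_PENALTY)
    return c.base_score

# Example: rank two hypothetical candidates by adjusted score.
candidates = [
    Candidate(base_score=0.82, has_gene_mutation=True, uncontrolled_hypertension=True),
    Candidate(base_score=0.81, has_gene_mutation=False, uncontrolled_hypertension=False),
]
ranked = sorted(candidates, key=adjusted_score, reverse=True)
print([round(adjusted_score(c), 4) for c in ranked])
```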

[–] phdepressed@sh.itjust.works 9 points 2 weeks ago (1 children)

Creatinine in urine was used as a measure of kidney function for literal decades, even though African Americans have lower levels despite worse kidney function by other measures. Creatinine level is/was a primary determinant of transplant eligibility. Only in the last few years have some hospitals started using inulin, which is a more race- and gender-neutral measurement of kidney function.

No algorithm matters if the input isn't comprehensive enough, and cost-effective biological testing is not.
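For a concrete picture of the kind of race adjustment this touches on, here is a sketch of the serum-creatinine CKD-EPI 2009 equation (related to, though not the same as, the urine measure mentioned above); its race coefficient was only dropped in the 2021 refit. The example values are illustrative only:

```python
def egfr_ckd_epi_2009(serum_creatinine_mg_dl: float, age: float,
                      female: bool, black: bool) -> float:
    """CKD-EPI 2009 creatinine equation; the 2021 refit removed the race term."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = serum_creatinine_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient: inflates the estimate by ~16%
    return egfr

# Same lab value, same age and sex: the race term alone shifts the estimate,
# which can move a patient across an eligibility threshold.
print(egfr_ckd_epi_2009(1.4, 55, female=False, black=False))  # ~56 mL/min/1.73 m^2
print(egfr_ckd_epi_2009(1.4, 55, female=False, black=True))   # ~65 mL/min/1.73 m^2
```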

[–] Imgonnatrythis@sh.itjust.works 2 points 2 weeks ago (2 children)

Well, yes. Garbage in, garbage out, of course.

[–] StructuredPair@lemmy.world 7 points 2 weeks ago

Everyone likes to think that AI is objective, but it is not. It is biased by its training, which includes a lot of human bias.

[–] MsPenguinette@lemmy.world 16 points 2 weeks ago (1 children)

Though those complicated outcome trends can have issues with things like minorities having worse health outcomes due to a history of oppression and poorer access to healthcare. We will definitely need humans overseeing it, because health data can be misleading when you look purely at the numbers.

[–] Imgonnatrythis@sh.itjust.works -1 points 2 weeks ago

I wouldn't say definitely. AI is subject to bias as well, of course, based on its training, but humans very much are too, and inconsistently so. If you are putting a liver into a patient who has poorer access to healthcare, they are less likely to get as many life-years out of it as someone with better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I'd argue that those efforts toward improved equality are better spent further upstream. It gets complicated quickly: if you want the process to be objective and scientifically successful, I think the less human bias the better.

[–] sunzu2@thebrainbin.org 8 points 2 weeks ago (3 children)

I agree with you, but also:

> It seems like algorithms would be the most objective way to do this

Algorithms are just another tool that corpos and owners use to abuse us. They are not independent; they represent the interests of their owners and oppress the peon class.
