this post was submitted on 10 Jul 2025
327 points (93.8% liked)

Technology

A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.

The robot performed unflappably across trials, matching the expertise of a skilled human surgeon even during unexpected scenarios typical of real-life medical emergencies.

[–] [email protected] 3 points 1 day ago (3 children)

That's such a fucking stupid idea.

Care to elaborate why?

From my point of view, I don't see a problem with that. Or rather: the potential risks depend heavily on the specific setup.

[–] [email protected] 1 points 1 day ago

Unless the videos have proper depth maps and identifiers for objects and actions, they're not going to be as effective as, say, robot-arm surgery data or VR-captured movement and tracking. You're basically adding a layer to the learning: first process the video into something usable, then learn from that. Not very efficient, and highly dependent on cameras and angles.

[–] [email protected] 0 points 1 day ago (1 children)

Imagine if the Tesla autopilot without lidar that crashed into things and drove on the sidewalk was actually a scalpel navigating your spleen.

[–] [email protected] 1 points 1 day ago (1 children)

Absolutely stupid example, because that kind of assumes medical professionals have the same standards as Elon Musk.

[–] [email protected] 2 points 1 day ago (1 children)

Elon Musk literally owns a medical equipment company that puts chips in people's brains; nothing is sacred unless we protect it.

[–] [email protected] 0 points 1 day ago

Into volunteers. It's not standard practice to randomly put a chip in someone's head.

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago) (1 children)

Being trained on videos means it has no ability to adapt, improvise, or use knowledge during the surgery.

Edit: However, in the context of this particular robot, it does seem that additional input was given and other training was added so it could expand beyond what it was taught through the videos. As the study noted, the surgeries were performed with 100% accuracy. So in this case, I personally don't have any problems with it.

[–] [email protected] -1 points 1 day ago* (last edited 1 day ago) (1 children)

I actually don't think that's the problem; the problem is that the AI only factors in visible, surface-level information.

AI doesn't have object permanence; once something is out of sight, it doesn't exist.

[–] [email protected] 2 points 1 day ago

If you read how they trained this robot, it seems it can anticipate things like that. Also keep in mind that it's only designed to do one type of surgery.

I'm cautiously optimistic.

I'd still expect human supervision, though.