this post was submitted on 27 Jun 2025
327 points (98.8% liked)

Funny

top 34 comments
[–] [email protected] 4 points 3 hours ago

Let’s be honest, that poor excuse for a robotic monkey is a better and more loving parent than what most of us got.

[–] [email protected] 18 points 20 hours ago (2 children)

This is an interesting comparison because the wire monkey study suggests that we need physical contact from a caregiver more than nourishment. In the case of AI, we’re getting some sort of mental nourishment from the AI, but no physical contact.

The solution? AI tools integrated into either hyper-realistic humanoid robots, or human robo-puppets.

Or we could leverage our advancing technology to support the working class by implementing UBI through a reduction in production costs and an evening out of wealth and resources.

But who wants that? I, a billionaire, sure don’t.

[–] [email protected] 2 points 18 hours ago (3 children)

How about just hugging a real human? Problem solved.

[–] [email protected] 2 points 15 hours ago

slavery was made illegal decades ago

[–] [email protected] 1 point 15 hours ago

They might not be able to feed your brain

[–] [email protected] 5 points 17 hours ago

How will they sell a human at the lowest cost? People have to eat and sleep.

[–] [email protected] 5 points 18 hours ago (2 children)

I mean, last week it was all over the news that Mattel and OpenAI made a deal to put ChatGPT in toys such as Barbie.

[–] [email protected] 3 points 15 hours ago

Oh freaky! That’s a huge liability though. I don’t see that happening with a model anywhere close to what we’re using in ChatGPT.

[–] [email protected] 5 points 17 hours ago

Put that shit in a Furby or a 1993 Toy Biz voice bot.

[–] [email protected] 5 points 20 hours ago

I feel more like I've got the wire monkey mother from that same experiment.

[–] [email protected] 22 points 21 hours ago (2 children)

ELIZA, the first chatbot, created in the '60s, just parroted your responses back to you:

I'm feeling depressed

Why do you think you're feeling depressed?

It was incredibly basic, and its inventor, Weizenbaum, didn't think it was particularly interesting, but he got his secretary to try it and she became addicted, so much so that she asked him to leave the room while she "talked" to it.

She knew it was just repeating what she said back to her in the form of a question, but she still formed a genuine emotional bond with it.

Now that chatbots are more sophisticated, it really highlights how our idiot brains just want something to talk to; whether we know it's real or not doesn't really matter.

[–] [email protected] 17 points 20 hours ago (1 children)

One of the last posts I read on Reddit was from a student in a CompSci class whose professor put a pair of googly eyes on a pencil and said, "I'm Petie the Pencil! I'm not sentient, but you think I am because I can say full sentences." The professor then snapped the pencil in half, making the students gasp.

The point was that humans anthropomorphize things that seem human, assigning them characteristics that make us bond with things that aren't real.

[–] [email protected] 10 points 19 hours ago

That or the professor was stronger than everyone thought

[–] [email protected] 6 points 20 hours ago (1 children)

Depends. I think I'm on the autistic spectrum; I just don't see them as equals, but as tools.

[–] [email protected] 6 points 20 hours ago (2 children)

I'm not on the autistic spectrum. They aren't equals, and they are barely tools.

[–] [email protected] 5 points 17 hours ago

They are good tools for communicating with the robots in management. ChatGPT, please output some corpobullshit to answer this form I was given and have no respect for.

[–] [email protected] 0 points 19 hours ago

I don’t know what I am but I don’t feel shit for no fucking robot. That arm that squeegees hydraulic fluid back into itself, fuck em.

[–] [email protected] 7 points 22 hours ago

Yes... very apt comparison.

Cloth AI will love and comfort us until the end of our days.

Which will be soon, because only Wire Computer provides us with actual sustenance.

[–] [email protected] 22 points 1 day ago (2 children)

Does anyone know the name of this monkey or experiment? It’s kind of harrowing seeing the expression on its face. It looks desperate for affection to the point of dissociation.

[–] [email protected] 16 points 23 hours ago (1 children)

The experiment was done by Harry Harlow, but I don’t think the name of the monkey was given; it could have just been a number :(

[–] [email protected] 10 points 21 hours ago

Thank you so much. I’ve found a Wikipedia page on him and his research, so I’ll give it a read. The poor monkey. https://en.wikipedia.org/wiki/Harry_Harlow

[–] [email protected] 20 points 23 hours ago (2 children)
[–] [email protected] 7 points 19 hours ago

Absolute horror.

[–] [email protected] 15 points 23 hours ago

The context makes it even more heartbreaking.

[–] [email protected] 23 points 1 day ago (2 children)

A colleague is all in on AI. She sends these elaborate notes generated by AI from our call transcripts, and she's so proud of them. I really hope she hasn't read any of them, because they're often quite disconnected from what actually occurred on the call. If she is reading them and sending them anyway... wow.

[–] [email protected] 13 points 22 hours ago (2 children)

Probably not reading them. A family member told me that at their work, someone had an LLM summarize an issue spread out over a long email chain and sent the summary to their boss, who had an LLM summarize the summary.

[–] [email protected] 3 points 14 hours ago* (last edited 14 hours ago)

From experience, people who tend to do this wouldn't understand the issue even if they spent all the time in the world reading the email chain or attending the meeting.

That's what gives them a false sense that AI is helping: the AI is just as bad as they are at comprehension, and it then says plausible things that aren't real. On the upside, it takes two seconds instead of an hour.

[–] [email protected] 7 points 17 hours ago

Most people don't read them. It reminds me of the days before AI, when you would have to spend time writing up those email summaries to send to the team, only for nobody to read them. I proved this to my boss: for 4 weeks straight, I embedded a line saying the first person to reply to the email would get $50. I never had to pay out, because I was right 4 weeks in a row before I stopped. So many emails and newsletters in companies are sent just because that's how it's done for "proper communication." It's just mindless busywork that wastes everyone's time.

[–] [email protected] 6 points 22 hours ago (1 children)

I didn't know those could be so far off. About a year ago we were playing with Zoom's AI meeting recorder, and it was astonishing how accurate the summary was. Hell, it could even tell when I was joking, which was a bit eerie.

[–] [email protected] 4 points 20 hours ago

I've not had much of an issue; my guess is her prompts aren't great, or she's combining it with really poorly taken notes?

[–] [email protected] 21 points 1 day ago* (last edited 1 day ago) (2 children)

We love cloth mother, way better than wire mother, gotta say

[–] [email protected] 6 points 21 hours ago

Where does scrub daddy factor into this?

[–] [email protected] 4 points 23 hours ago (1 children)

Damn, wire mother is going to dig into my brain.