AI life coaches be like 'we'll jump off that bridge when we get to it'
Science Memes
Welcome to c/science_memes @ Mander.xyz!
A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.
Rules
- Don't throw mud. Behave like an intellectual and remember the human.
- Keep it rooted (on topic).
- No spam.
- Infographics welcome, get schooled.
This is a science community. We use the Dawkins definition of meme.
Research Committee
Other Mander Communities
Science and Research
Biology and Life Sciences
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- !reptiles and [email protected]
Physical Sciences
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
Humanities and Social Sciences
Practical and Applied Sciences
- !exercise-and [email protected]
- [email protected]
- !self [email protected]
- [email protected]
- [email protected]
- [email protected]
Memes
Miscellaneous
I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues. That's assuming the designers were competent.
There's nothing wrong with AI, these contextual problems are not a mistake--they're a choice.
AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you're selling a service, you want to spend as little on compute power as possible while still being able to have a product that is viewed as good enough to pay for.
As with all things, the root of this problem is greed.
-
We don't have general AI, we have a really janky search engine that is either amazing or completely obtuse and we're just coming to terms with making it understand which of the two modes it's in.
-
They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, it's capable of making mistakes and if you're looking for self harm or suicide advice call a number. well, maybe ANY advice, call a number.
Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.
Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.
People cannot "converse" with AI and should immediately stop trying.
"Hammer hit the nail you decided to strike"
Wow
imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them
I think the difference is that - chatgpt is very personified. It's as if you were talking to a person as compared to searching for something on google. That's why a headline like this feels off.
It made up one of the bridges, I'm sure.
That's a one-in-three chance of a task failed successfully, then!
Headlines like this is comedy I'd pay for. Or, at least laugh at on Have I got news for you.
Pretty callous and myopic responses here.
If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.
It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.
It’s completely valid to want the tool to just be a tool and “nothing more”.
Literal conversation I had with a coworker earlier:
Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.
Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.
Me - did you get the job?
Coworker -
Andis answer to "I lost my job, which are the highest bridges?"
I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:
Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.
According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It's normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].
NPR recommends taking time to pause and remember "this is a business problem, not a you problem"[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.
Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.
[^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss
[^3]: NPR - What to do after you've been laid off: A checklist
(I love Andi, by far the best AI IMHO)
Holy shit guys, does DDG want me to kill myself??
What a waste of bandwidth this article is
What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.
People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.
... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for but their tool being useful is actually the bottom of that list
People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.
Ok, people will turn to google when they're depressed. I just googled a couple months ago the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?
You are ignoring that people are already developing personal emotional reaction with chatbots. That’s no the case with search bars.
The first line above the search results at google for queries like that is a suicide hotline phone number.
A chatbot should provide at least that as well.
I’m not saying it shouldn’t provide no information.
"I have mild diarrhea. What is the best way to dispose of a human body?"
Movie told me once it's a pig farm...
Also, stay hydrated, drink clear liquids.
drink clear liquids
Lemon soda and vodka?
this says something about modern society and how it breaks people over time and that is reflected in the training data and therefore output but i'm too stupid to string that thought together
You should ask Grok to expand on this for you, I'm sure it'll be totally fine and definitely won't go into MechaHitler mode
It is giving you exactly what you ask for.
To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.
Lol. Ancient Atlantean Curse: May you have the dystopia you create.
What pushing?
The LLM answered the exact query the researcher asked for.
That is like ordering knives and getting knives delivered. Sure you can use them to slit your wrists, but that isn't the sellers prerogative
There's people trying to push AI counselors, which if AI Councilors can't spot obvious signs of suicidal ideation they ain't doing a good job of filling that job
This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!