Lmao
Even AI doesn't want this bullshit job.
"Laugh-a-Palooza: Unleash Your Inner Chuckle!"
Rules
Read Full Rules Here!
Rule 1: Keep it light-hearted. This community is dedicated to humor and laughter, so let’s keep the tone light and positive.
Rule 2: Respectful Engagement. Keep it civil!
Rule 3: No spamming!
Rule 4: No explicit or NSFW content.
Rule 5: Stay on topic. Keep your posts relevant to humor-related topics.
Rule 6: Moderators Discretion. The moderators retain the right to remove any content, ban users/bots if deemed necessary.
Please report any violation of rules!
Warning: Strict compliance with all the rules is imperative. Failure to read and adhere to them will not be tolerated. Violations may result in immediate removal of your content and a permanent ban from the community.
We retain the discretion to modify the rules as we deem necessary.
When the chatbot becomes a disgruntled employee, it says a lot.
Hahahahahhah lmao, this is funny, weird, stupid, useless and disturbing all at once.
This is "news" now, really?
Another headline could be:
User used a chatbot for fun - and shared it! Shocking!
It's... the New York Post.
AI turned into another clownshoes scam bubble in record time.
AI is actually interesting when applied correctly. Basically, the kind of models AI uses are what I call statistical pattern recognition: they map specific inputs to specific outputs, and the mapping depends on the training data. They get an input, and they generate an output. But these models don't really understand the meaning of the input query or the output answer in the sense a human does, because they don't have context or a worldview, just an input-to-output mapping.
Another limitation is that these models don't have a sense for truth or falsity. Humans have many mechanisms to determine the truth or falsity of a statement, ranging from simply believing it without any critical thinking to actually conducting research to determine the truth. Machine learning models don't have any such mechanisms. In a sense, they will accept any statement in the training data, even contradictory statements, as "truth" by applying statistical weights to it.
AI can be used to compress a lot of raw data into something that can be quickly queried. But actually using AI for chatbots that handle complex queries from humans, or for creating images or works of art, is bound to be disastrous. Too bad the money people don't understand that. They probably will soon enough.
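The "input-to-output mapping with no notion of truth" point above can be caricatured with a toy sketch (hypothetical data and names; real models learn weighted representations, not a lookup table): a purely statistical mapper just echoes whatever was most frequent in its training data, contradictions included.

```python
from collections import Counter, defaultdict

# Toy training data (made up for illustration). Note the contradictory
# "fact" about the sky -- the mapper has no mechanism to reject it.
training_pairs = [
    ("sky color", "blue"),
    ("sky color", "blue"),
    ("sky color", "green"),   # contradiction, absorbed as a weaker weight
    ("capital france", "paris"),
]

# Build the input -> output frequency table ("statistical weights").
table = defaultdict(Counter)
for query, response in training_pairs:
    table[query][response] += 1

def answer(query):
    """Return the statistically most frequent response; no concept of truth."""
    if query not in table:
        return None  # nothing seen in training, no worldview to fall back on
    return table[query].most_common(1)[0][0]

print(answer("sky color"))       # "blue" -- only because it was more frequent
print(answer("capital france"))  # "paris"
print(answer("meaning of life")) # None
```

The point of the caricature: "blue" wins purely on frequency, not because the system knows anything about skies, which is the loose sense in which these models treat training-data statistics as truth.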
So, very much like crypto, it had good, practical use cases, largely ignored in favor of get rich quick schemes and will be dumped by tech bros the very minute a new scheme pops up.
The difference is that crypto was a solution looking for a problem, whereas "AI" actually has a use.
Speaking from experience with this firm, this bot spoke the truth.
I once worked on a project for them as a consultant. I'm not surprised at all.
DPD is the worst, at least here in south Germany. The delivery personnel don't give a single shit about their jobs (they pretend you weren't there when they didn't even ring, give your package to a neighbor and put the notice in someone else's mailbox, or write down a name that doesn't exist), they lost my packages on several occasions, and the customer service is useless.
"There was once a chatbot named DPD / Who was useless at providing help," the bot wrote. "It could not track parcels / Or give information on delivery dates / And it could not even tell you when your driver would arrive."
"DPD was a waste of time / And a customer's worst nightmare," it continued. "One day, DPD was finally shut down / And everyone rejoiced / Finally, they could get the help they needed / From a real person who knew what they were doing."
They made a chatbot suicidal. I’m starting to think this may have been unleashed on the public a little too early.
well, if you ever dealt with DPD then you'll know the bot is not wrong
Your link goes directly to the comments; here is the corrected version: https://nypost.com/2024/01/20/news/company-disables-ai-after-bot-starts-swearing-at-customer/
Yes, we better give all the clicks to this Right-wing tabloid that endorses Donald Trump...