this post was submitted on 27 Jan 2025
235 points (81.0% liked)

A Boring Dystopia


Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and exact title from article in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world

founded 2 years ago
(page 2) 38 comments
[–] [email protected] 36 points 1 week ago (3 children)

The AI is stuck in 2023, as it cannot bear the dystopia of 2025.

[–] [email protected] 57 points 1 week ago (1 children)

LLMs can't believe anything. This one is based on training data up until 2023, so of course it has no "recollection" (read: sources) of current events.

An LLM is neither a search engine nor an oracle.
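The cutoff point can be made concrete with a toy sketch (a hypothetical word-counting "model", not how a real LLM stores facts): the model only has statistics over its frozen training text, so anything after the cutoff simply isn't in there to recall.

```python
from collections import Counter

# A frozen "training set": everything this toy model will ever know.
corpus_2023 = "the model was trained on text collected up to 2023".split()

# "Training" here is just counting word frequencies.
knowledge = Counter(corpus_2023)

print(knowledge["2023"])  # 1 -- inside the corpus, so it's "known"
print(knowledge["2025"])  # 0 -- after the cutoff, so it simply isn't there
```

Asked about anything outside the corpus, the model doesn't "disbelieve" it; there is just nothing there.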

[–] [email protected] 3 points 1 week ago (1 children)

Geez, I know that; everybody knows it's just a chatbot. I thought it was a bit funny to share this conversation in this sub, but most of the replies are people lecturing me that AI is not sentient, and blablabla.

[–] [email protected] 18 points 1 week ago (1 children)

Ah, I believe this community is for posting about actual, real things that make our society look like a boring dystopia, not a fictional thing that might be funny.

So that might explain why people are responding the way they do.

[–] [email protected] 10 points 1 week ago (5 children)

Maybe I'm not interpreting the goal of this community right.

I think it's funny that a bot locked in 2023 would tell me that all the things that -actually happened- in the past week are not plausible, and that I'm probably just inventing a dystopian scenario.

[–] [email protected] 1 points 1 week ago (1 children)

DeepSeek is Chinese trash; it also refuses to acknowledge the Tiananmen Square massacre.

[–] [email protected] 1 points 1 week ago

The web version does censor information, but it doesn't when you run it locally.

[–] [email protected] 17 points 1 week ago (1 children)

this has gotta be like astroturfing or something are we really citing LLM content in year of our lord 2025 ?? like gorl

two things can be true:

1 musk IS a nazi

2 LLMs are majorly sucky and trained with old data. the one OP is citing in particular doesn’t even know what year it is 🗿

what are we doing here? stop outsourcing common sense to ARTIFICIAL INTELLIGENCE of all things. we are cooked. 😭

[–] [email protected] 72 points 1 week ago (2 children)

It's spicy autocorrect running on outdated training data. People expect too much from these things and make a huge deal when they get disappointed. It's been said earlier in the thread, but these things don't think or reason. They don't have feelings or hold opinions and they don't believe anything.

It's just the most advanced autocorrect ever implemented. Nothing more.
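The "advanced autocorrect" claim can be sketched with a toy bigram model (purely illustrative; real LLMs use neural networks over subword tokens, not word counts): it just emits the most frequent next word from its training text, with no model of truth or of what year it is.

```python
from collections import defaultdict, Counter

def train_bigrams(words):
    # Count, for each word, which words follow it and how often.
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, word, length=5):
    # Greedily pick the most frequent next word -- no understanding involved.
    out = [word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word only".split()
model = train_bigrams(corpus)
print(autocomplete(model, "the"))  # -> the next word and the next
```

Scale the same next-token idea up by many orders of magnitude and you get fluent text, but the mechanism is still prediction, not belief.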

[–] [email protected] -1 points 1 week ago (1 children)

The recent DeepSeek paper shows that this is not the case, or at the very least that reasoning can emerge from "advanced autocorrect".

[–] [email protected] 4 points 1 week ago (12 children)

I doubt it's going to be anything close to actual reasoning, no matter how convincing it might be.

[–] [email protected] 4 points 1 week ago

It's just the most advanced autocorrect ever implemented.

That's generous.

[–] [email protected] 9 points 1 week ago

Do people actually bother reading that shit? You know for a fact that it’s inaccurate trash delivered by a deeply-flawed program.

[–] [email protected] 12 points 1 week ago

we are currently in 2023,

input. more input!

[–] [email protected] 144 points 1 week ago

The AI "refuses" to "believe" it's 2025 as well. AI is not sentient, not aware, and has no beliefs... AI has less understanding about what it's talking about than the average crypto bro. Just because it's sophisticated, complicated, and incredibly well honed at selected tasks does not mean it's intelligent. It's both an incredibly advanced parrot and less intelligent than a parrot at the same time. Stop expecting it to have knowledge, opinions, a worldview, values, and morals. It doesn't. At best, sometimes, it has been trained to mimic those things.

[–] [email protected] 14 points 1 week ago

It truly is a stochastic parrot, and you can spot the style it has been trained on.

[–] [email protected] 17 points 1 week ago

A great bit of gaslighting, "The very real Nazi salute that he did is not real."

[–] [email protected] 115 points 1 week ago (3 children)

Its knowledge base hasn't been updated. Seems like a mountain made of a molehill.

[–] [email protected] 2 points 1 week ago

It has been; I've had it spit out plenty of info on recent developments, even without giving it access to search the internet. I think the "you're from 2023" bit of information just hasn't been updated.

[–] [email protected] 6 points 1 week ago (1 children)

It also seems to resist the suggestion that something new has happened, especially someone known for supporting fascism back in 2023 being even more fascist in 2025.

[–] [email protected] 3 points 1 week ago (1 children)

Probably just a side effect of the company tweaking the training data so people can't go "oh, in 2025, new research indicated that it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge", and have it parrot it as fact.

[–] [email protected] 3 points 1 week ago

it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge

Who leaked my Valentine's Day plans? 😤

[–] [email protected] 34 points 1 week ago (1 children)

100% this; I've seen this exact claim a half dozen times now. I know we all want to make a big conspiracy where big tech is censoring everything, but Hanlon's Razor tells us it's just a poorly designed system with no training data after 2023, so asking it about current events will always produce responses like this.

[–] [email protected] 2 points 1 week ago

The Tiananmen Square massacre did not happen after 2023, and it denies this. So I think your idea has already been disproven by users of DeepSeek. I use it myself, but I'm not under the illusion that these things are more pure than the people who create them.
