this post was submitted on 17 Dec 2024
574 points (92.6% liked)

memes

[–] [email protected] 7 points 3 days ago (2 children)

ChatGPT is a tool under development and it will definitely improve in the long term. There is no reason to shit on it like that.

Instead, focus on the real problems: AI not being open source, AI being controlled by a few monopolies, and there being little to no regulation to ensure it develops in a healthy direction.

[–] [email protected] 2 points 3 days ago

AI is pretty overrated, but the anti-AI crowd wildly overblows the problems associated with it.

[–] [email protected] 0 points 3 days ago

it will definitely improve in the long term.

Citation needed

There is no reason to shit on it like that.

Right now there is, because of how often it and other AIs are wrong, and because the average person treats the first answer as correct without double-checking.

[–] [email protected] 6 points 3 days ago

I wonder where people can go. Wikipedia, maybe. ChatGPT is better than Google for answering most questions where getting the answer wrong won't have catastrophic consequences. It is also a good place to get started in researching something.

Unfortunately, most people don't know how to assess the potential problems. Those people will also have trouble if they try googling the answer, as they will choose some biased information source if it's a controversial topic, usually picking one that matches their leaning.

There aren't too many great sources of information on the internet anymore; it's all tainted by partisans or locked behind paywalls. Even if you could get a free source for studies, many are weighted to favor whatever result the researcher wanted. It's a pretty bleak world out there for good information.

[–] [email protected] 6 points 4 days ago (1 children)

Reject proprietary LLMs, tell people to "just llama it"

[–] [email protected] 19 points 4 days ago (1 children)
[–] [email protected] 6 points 3 days ago (1 children)

The top is proprietary LLMs vs. the bottom, self-hosted LLMs. Both end with you getting smacked in the face, but one looks far cooler or smarter to do, while the other is a streamlined web app that gets you there in one step.

[–] [email protected] 0 points 3 days ago

But when it's open source, nobody gets regularly slain and the planet progressively destroyed by mega-conglomerate entities automating class violence.

[–] [email protected] 3 points 4 days ago

This is why so much research has been going into AI lately. The trend is already to not read articles or source material and to base opinions on clickbait headlines, so naturally relying on AI summaries and search results will come next. People will start to assume any generated response from a 'trusted search AI' is true, so there is a ton of value in getting an AI to give truthful and correct responses all of the time, and then being able to edit certain responses to inject whatever truth you want. Then you effectively control what truth is, and can selectively shape public opinion by manipulating what people are told is true.

Right now we're also being trained that AI may make things up and not be totally accurate, which gives those running the services a plausible excuse if they're caught manipulating responses.

I am not looking forward to arguing facts with people citing AI responses as their source for truth. I already know if I present source material contradicting them, they lack the ability to actually read and absorb the material.

[–] [email protected] 57 points 4 days ago* (last edited 4 days ago) (3 children)

Ugh. Don’t get me started.

Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.

Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.

I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the 'things to visit', it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily tells you to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I'd remember that, as I'm older than said stadium.

The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.

AI has its uses. I’ve used it to rewrite a text that I already had and it does fine with tasks like that. Because you give it the correct info to work with.

Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago) (1 children)

Wait, when did you do this? I just tried this for my town and researched each aspect to confirm myself. It was all correct. It talked about the natives that once lived here, how the land was taken by Mexico, then granted to some dude in the 1800s. The local attractions were spot on and things I've never heard of. I'm...I'm actually shocked and I just learned a bunch of actual history I had no idea of in my town 🤯

[–] [email protected] 2 points 3 days ago (1 children)

I did that test late last year, and repeated it with another town this summer to see if it had improved. Granted, it made fewer mistakes, but still very annoying ones, like placing the tourist information office at a completely incorrect, non-existent address.

I assume your result also depends a bit on which town you try. I doubt it has really been trained on information about a city of 160,000 inhabitants in the Netherlands. It should do better with the US, I'd imagine.

The problem is it doesn’t tell you it has knowledge gaps like that. Instead, it chooses to be confidently incorrect.

[–] [email protected] 1 points 3 days ago

Only 85k pop here, but yeah. I imagine it's half YMMV, half straight up luck that the model doesn't hallucinate shit.

[–] [email protected] 7 points 3 days ago* (last edited 3 days ago) (2 children)

I know this is off topic, but every time I see you comment on a thread, all I can see is the Pepsi logo (I use the Sync app, for reference).

[–] [email protected] 2 points 3 days ago

Voyager doesn't show user PFPs at all. :/

[–] [email protected] 9 points 3 days ago (1 children)

You know, just for you: I just changed it to the Coca Cola santa :D

[–] [email protected] 3 points 3 days ago (1 children)

Spreading the holly day spirit

[–] [email protected] 1 points 3 days ago (1 children)

We are all dutch on this blessed day

[–] [email protected] 2 points 3 days ago

We are all gekoloniseerd

[–] [email protected] 8 points 4 days ago (2 children)

I gave it a math problem to illustrate this and it got it wrong

If it can’t do that imagine adding nuance

[–] [email protected] -1 points 3 days ago* (last edited 3 days ago)

YMMV, I guess. I've given it many difficult calculus problems to help me through, and it went well.

[–] [email protected] 11 points 4 days ago (1 children)

Well, math is not really a language problem, so it's understandable LLMs struggle with it more.

[–] [email protected] 11 points 4 days ago (1 children)

But it means it's not "thinking" the way the public perceives AI.

[–] [email protected] 5 points 4 days ago (2 children)

Hmm, yeah, AI never really did think. I can't argue with that.

It's really strange, if I mentally zoom out a bit, that we have machines that are better at language-based reasoning than at logic-based tasks (like math or coding).

[–] [email protected] 1 points 3 days ago

Not really true though. Computers are still better at math. They're even pretty good at coding, if you count compiling high-level code into assembly as coding.

But in this case we built a language machine to respond to language with more language. Of course it's not going to do great at other stuff.

[–] [email protected] 20 points 4 days ago (1 children)

And then google to confirm the gpt answer isn't total nonsense

[–] [email protected] 18 points 4 days ago (2 children)

I've had people tell me "Of course, I'll verify the info if it's important", which implies that if the question isn't important, they'll just accept whatever ChatGPT gives them. They don't care whether the answer is correct or not; they just want an answer.

[–] [email protected] 0 points 3 days ago (1 children)

Well, yeah. I'm not gonna verify how many butts it takes to swarm Mount Everest, because that's not worth my time. The robot's answer is close enough to satisfy my curiosity.

[–] [email protected] 0 points 3 days ago

For the curious: I got two responses with different calculations and different answers as a result. So it could take anywhere from 1.5 to 7.5 billion butts to swarm Mount Everest. Again, I'm not checking the math because I got the answer I wanted.
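In the spirit of actually checking, here is what a back-of-envelope version of that estimate might look like. Every figure below is a made-up assumption for illustration (a crude cone model of the mountain, a guessed butt footprint), not the numbers the chatbot used:

```python
import math

# Assumption: model Everest as a cone rising ~3,600 m above its
# surrounding terrain, with a base radius of roughly 10 km.
height_m = 3_600
base_radius_m = 10_000

# Lateral surface area of a cone: pi * r * slant_height
slant_m = math.hypot(base_radius_m, height_m)
surface_m2 = math.pi * base_radius_m * slant_m

# Assumption: one butt occupies about 0.1 square metres.
butt_area_m2 = 0.1

butts = surface_m2 / butt_area_m2
print(f"{butts:.1e}")  # on the order of a few billion
```

With these (entirely debatable) inputs, the answer lands around 3 billion, which happens to sit inside the 1.5 to 7.5 billion range the chatbot gave; changing the base radius or the butt footprint swings the result by several times either way, which is exactly why the two chatbot answers disagreed.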

[–] [email protected] 4 points 4 days ago

That is a valid tactic for programming or how-to questions, provided you know not to unthinkingly drink bleach if it says to.

[–] [email protected] 9 points 4 days ago (1 children)

Have they? I don't think I've heard that once, and I work with people who use ChatGPT themselves.

[–] [email protected] 3 points 4 days ago

I'm with you. Never heard that. Never.

[–] [email protected] 3 points 4 days ago

"Let's ask MULTIVAC!"
