this post was submitted on 18 May 2025
245 points (93.9% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

[–] [email protected] -3 points 1 day ago* (last edited 1 day ago) (1 children)

You're showing your ignorance if you think the whole world has access to education that fits them. And I say fits because there's a huge difference between learning from books made for Americans and an AI experience tailored just for you. The difference is insane, and anyone who doesn't understand that should really get out more. I'll leave it at that.

Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I'm not even kidding: as an educator, I think LLMs are the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.

[–] [email protected] 10 points 1 day ago (2 children)

You shouldn't trust anything the LLM tells you though, because it's a guessing machine. It is not credible. Maybe if you're just using it for translation into your native language? I'm not sure if it's good at that.

If you have access to the internet, there are many resources available that are more credible. Many of them free.

[–] [email protected] 1 points 1 day ago (1 children)

You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine

You trust tons of other uncertain, probability-based systems, though. Take the weather forecast: we all trust that, even though it 'guesses' the future weather with some other math.

[–] [email protected] 1 points 1 day ago (1 children)

That's really not the same thing at all.

For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don't need a guessing machine to tell us that.

[–] [email protected] 1 points 1 day ago (1 children)

For things that require a definite, correct answer, an LLM just isn't the best tool. However, if the task has many correct answers, or no single correct answer, like writing computer code (if it's rigorously checked, it's actually not that bad) or analyzing vast amounts of text quickly, then you could argue it's the right tool for the job.
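To make "rigorously checked" concrete: a generated function is only trustworthy once it passes tests a human wrote. A minimal sketch (the `slugify` function and its body are hypothetical stand-ins for LLM output):

```python
def slugify(title: str) -> str:
    # Pretend this body came from an LLM; the assertions below,
    # written by a human, are what make it safe to accept.
    return "-".join(title.lower().split())

# Human-written checks, run before the generated code is merged.
assert slugify("Hello World") == "hello-world"
assert slugify("  Extra   Spaces ") == "extra-spaces"
print("all checks passed")
```

The point isn't this particular function; it's that the checks come from you, not from the same model that wrote the code.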

[–] [email protected] 1 points 1 day ago

Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don't understand. I'm not sure if there have been any rigorous studies about it yet, but it seems very plausible. LLMs are prone to hallucinating, so they'll tell you to import libraries that don't exist, or to use parts of the standard library that don't exist.

It also opens up a whole new security threat vector: package squatting. If LLMs routinely try to install a library from PyPI that doesn't exist, an attacker can create that library and have it do whatever they want. Vibe coders will then run it, and that's game over for them.
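One cheap defense against exactly this failure mode is to scan generated code for imports you haven't vetted before running it. A rough sketch using Python's standard `ast` module (the allowlist contents are a made-up example, not a recommendation):

```python
import ast

# Hypothetical allowlist: only dependencies your project has actually
# vetted and pinned belong here.
KNOWN_PACKAGES = {"json", "os", "sys", "numpy", "requests"}

def unvetted_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that are not
    on the allowlist. Any hit might be a hallucinated (squattable) package."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - KNOWN_PACKAGES

snippet = "import numpy\nfrom os import path\nimport totally_real_utils\n"
print(unvetted_imports(snippet))  # → {'totally_real_utils'}
```

This only catches static imports, but it turns "did the model invent a dependency?" into a check a lazy human will actually run.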

So yeah, you could "rigorously check" it, but (a) all of us are lazy and aren't going to do that routinely (like, have you used snapshot tests?), (b) it anchors you around whatever it produced, making it harder to think about other approaches, and (c) it's often slower overall than just doing a good job from the start.

I imagine there are similar problems with analyzing large amounts of text. It doesn't really understand anything, so to verify it's correct you'd have to read the whole thing yourself anyway.

There are probably specialized use cases where it's good (I'm told AI is useful for things like protein folding and cancer detection), but those still have experts (I hope) looking at the results.

To your point, I think people are trying to use these LLMs for things with definite answers, too. Like if I type "largest state in the US" into Google, it uses AI. This is not a good use case.

[–] [email protected] -5 points 1 day ago

Again, you're just showing your ignorance of how available this actually is to people outside your immediate circle. Maybe you should travel a bit and open up your mind.