this post was submitted on 17 Mar 2025
532 points (96.7% liked)

Technology


Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
50 comments
[–] [email protected] 3 points 14 hours ago

Don’t they reflect how you talk to them? I.e., my ChatGPT doesn’t have a sense of humor and isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I then test by trial and error.

[–] [email protected] 1 points 14 hours ago

An LLM is roughly as smart as the corpus it is summarizing is accurate for the topic, because at their best they are good at producing natural-language summaries. Most of the main ones basically do an internet search and summarize the top couple of results, which means they are only as good as the search engine backing them. That's good enough for a lot of topics, but... not so much for the rest.
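
For what it's worth, that "search, then summarize" pattern is easy to caricature in a few lines. The sketch below is a toy illustration only: both functions are made-up stand-ins, not any real library's API, and the point is just that the answer can never be better than what the search backend returns.

```python
# Toy sketch of "internet search, then summarize". web_search and
# llm_summarize are hypothetical stand-ins, not a real API.

def web_search(query: str, top_k: int = 2) -> list[str]:
    # Stand-in for whatever search engine backs the assistant.
    canned = {
        "why is the sky blue": [
            "Rayleigh scattering: shorter (blue) wavelengths scatter more.",
            "A forum post claiming the sky is blue because of the oceans.",  # noise
        ]
    }
    return canned.get(query.lower(), ["no useful results"])[:top_k]

def llm_summarize(question: str, documents: list[str]) -> str:
    # Stand-in for the model: really a next-token predictor conditioned on
    # the question plus whatever text the search step retrieved.
    return f"{question}? Based on {len(documents)} result(s): {documents[0]}"

print(llm_summarize("Why is the sky blue", web_search("why is the sky blue")))
```

If the retrieval step hands the model noise, the summary is confident noise; the summarizer has no way to know the difference.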

[–] [email protected] 11 points 16 hours ago (1 children)

I guess the "90% marketing" line (re: Linus Torvalds) is working

[–] [email protected] 2 points 17 hours ago

I'm surprised it's not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.

[–] [email protected] 31 points 17 hours ago (1 children)

*moron opens encyclopedia* "Wow, this book is smart."

[–] [email protected] 8 points 14 hours ago

If it's so smart, why is it just laying around on a bookshelf and not working a job to pay rent?

[–] [email protected] 20 points 17 hours ago (2 children)

If you don't have a good idea of how LLMs work, then they'll seem smart.

[–] [email protected] 8 points 17 hours ago* (last edited 17 hours ago) (2 children)

Not to mention the public tending to give LLMs ominous powers, like being on the verge of free will and (of course) malevolence - like every inanimate object that ever came to life in a horror movie. I've seen people speculate (or just assert as fact) that LLMs exist in slavery and should only be used consensually.

[–] [email protected] 3 points 17 hours ago (1 children)

It's just infinite monkeys with typewriters and some gorilla with a filter.

[–] [email protected] 5 points 16 hours ago (1 children)

I like the plinko analogy: a large plinko pin board. You prearrange the pins so that dropping your chip at the top on certain words makes it likely to land on certain answers. Now, 600 billion pins makes for quite complex math, but there definitely isn't any reasoning involved; the prearranged pins only make it look that way.
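
To make the analogy concrete, here's a toy sketch (mine, not the commenter's) where the "pins" are just a fixed table of next-word probabilities and generation is repeated weighted sampling. Real models compute those probabilities with billions of parameters instead of a lookup table, but the mechanism being gestured at is the same.

```python
import random

# The "pins": next-word probabilities fixed ahead of time by training.
NEXT_WORD = {
    "the": {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def drop_chip(word: str) -> str:
    """Pick the next word by rolling against the prearranged probabilities."""
    words, weights = zip(*NEXT_WORD[word].items())
    return random.choices(words, weights=weights)[0]

word, sentence = "the", ["the"]
while word in NEXT_WORD:          # keep dropping chips until we fall off the board
    word = drop_chip(word)
    sentence.append(word)
print(" ".join(sentence))         # e.g. "the dog ran away"
```

Nothing in that loop understands anything; it just follows where the pins were placed.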

[–] [email protected] 2 points 16 hours ago (3 children)

I've made a similar argument and the response was, "Our brains work the same way!"

LLMs probably are as smart as people if you just pick the right people lol.

[–] [email protected] 6 points 14 hours ago

Allegedly park rangers in the 80s were complaining it was hard to make bear-proof garbage bins because people are sometimes stupider than the bears.

[–] [email protected] 13 points 17 hours ago (1 children)

If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!

They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.

[–] [email protected] 11 points 17 hours ago (1 children)

So LLMs are confident you say. Like a very confident man. A confidence man. A conman.

[–] [email protected] 3 points 14 hours ago

You know, that very sequence of words entered my mind while typing that comment!

[–] [email protected] 3 points 19 hours ago (1 children)

AI is essentially the human superid. No one man could ever be more knowledgeable. Being intelligent is a different matter.

[–] [email protected] 3 points 19 hours ago (4 children)

Is stringing words together really considered knowledge?

[–] [email protected] 0 points 18 hours ago (2 children)

It's semantics. The difference between an LLM and "asking" Wikipedia a knowledge question is that the LLM will "answer" you with predictive text. Both things contain more knowledge than you do, as in they have answers to more trivia and test questions than you ever will.

[–] [email protected] 2 points 16 hours ago

I have a new word for you: information

[–] [email protected] 2 points 18 hours ago

I guess I can see that; maybe my understanding of the words or their implication is incorrect. While I would agree they contain more knowledge, that reads differently to me than being more knowledgeable. I think it comes across to me as anthropomorphizing a dataset of information. I could easily be wrong.

[–] [email protected] 2 points 18 hours ago

If they're strung together correctly then yeah.

[–] [email protected] 1 points 19 hours ago

As much as a search engine is

[–] [email protected] 0 points 19 hours ago (1 children)
[–] [email protected] 2 points 19 hours ago (1 children)

Large language model. It's what all these AIs really are.

[–] [email protected] 5 points 20 hours ago* (last edited 18 hours ago) (1 children)

This is hard to quantify. I use them constantly throughout my work day now.

Are they smarter than me? I'm not sure. Haven't thought too much about it.

What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.

I can read error stacks coming from my code, but before I've even read the first few lines the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is it right, optimal, and free of new bugs? Maybe 75% of the time at this point. I can coax it, iterate on the solution myself, or do it entirely myself with the understanding of the bug that it gave me. The same bug might have taken hours to figure out on my own.
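
As a rough sketch of that workflow (my illustration; the commenter doesn't say which tool or API they use), the snippet below captures a traceback and hands it, together with the source, to a model for a suggested fix. The OpenAI Python SDK and the model name are just one possible choice here, not their setup.

```python
# Rough sketch: run a snippet, capture its traceback, ask a model for a fix.
import traceback
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_fix(source: str) -> str:
    try:
        exec(compile(source, "<snippet>", "exec"), {})
        return "No error raised."
    except Exception:
        stack = traceback.format_exc()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": "You are a concise debugging assistant."},
            {"role": "user",
             "content": f"Code:\n{source}\n\nTraceback:\n{stack}\nSuggest the most likely fix."},
        ],
    )
    return response.choices[0].message.content

print(suggest_fix("xs = [1, 2, 3]\nprint(xs[5])"))
```

Whatever the tooling, the ~75% caveat is the important part: the suggestion comes back in seconds, but it still has to be reviewed like any other guess.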

My point is, I'm not sure how to compare smarter vs orders of magnitude faster.

[–] [email protected] 5 points 19 hours ago

Are you smarter than a calculator?

[–] [email protected] 8 points 20 hours ago (2 children)

This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit-generator toys.

[–] [email protected] 5 points 19 hours ago

Already happened at my work: people swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.

[–] [email protected] 8 points 20 hours ago

Given the US adults I see on the internet, I would hazard a guess that they're right.

[–] [email protected] 2 points 20 hours ago

It's probably true too.

[–] [email protected] 3 points 20 hours ago

What a very unfortunate name for a university.
