this post was submitted on 03 Jul 2025
819 points (99.6% liked)
[–] [email protected] -1 points 1 week ago* (last edited 1 week ago) (6 children)

I did check some of the references.

What I don't understand is why you would perceive this content as more trustworthy if I hadn't said it was AI.

Nobody should blindly trust some anonymous comment on a forum. I have to check what the AI blurts out, but you can just gobble up the comment of some stranger without exercising any critical thinking yourself?

As long as I'm transparent about the source, and especially since I did check some of it to be sure it's not some kind of hallucination...

There shouldn't be any difference in trust between some random comment on a social network and what some AI model says about a subject.

Also, it's not like this is some important topic with societal implications. It's just a technical question I had (and still have) that doesn't mandate real research. None of my work depends on that lib. Before my comment there was no information on compatibility. Now there is, but you have to look at it critically and decide whether you want to verify it or trust it.

That's why I regret this kind of stubborn downvoting, where people just assume the worst instead of checking the actual data.

Sometimes I really wonder: am I the only one who's supposed to check the data? Isn't everybody here capable of verifying the AI output if they think it's worth the time and effort?

Basically, downvoting here is choosing "no information" over "information I have to verify because it's AI-generated".

Edit: Also, I could have just summarized the AI output myself and not mentioned AI at all. What then? Would you have checked the accuracy of that data? Critical thinking is not something you use "sometimes" or only "on some comments".

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago) (3 children)

You realize that if we wanted to see an ~~AI~~ LLM response, we'd ask an ~~AI~~ LLM ourselves. What you're doing is akin to:

> Hey guys, I've asked Google whether the new PNG is backward compatible, and here are the first links it gave me, hope this helps: [list 200 links]

[–] [email protected] -2 points 1 week ago (2 children)

I understand that. What I'm asking about is the downvoting of a response clearly marked as coming from an LLM. Is it detrimental to the conversation here to have that? Is it better to share nothing rather than this LLM output?

Was this thread better without it?

Is complete ignorance of the PNG compatibility question preferable to reading this AI output and pondering how true it is?

> [list 200 links]

Now I think this conversation is just getting rude for no reason. The AI output was definitely not the "I'm Feeling Lucky" result of a Google search, and the fact that you chose that metaphor is in bad faith.

[–] [email protected] 4 points 1 week ago

> Was this thread better without it?

Yes.

I, and I assume most people, go into the comments on Lemmy to interact with other people. If I wanted to fucking chit-chat with an LLM (why you'd want to do that, I can't fathom), I'd go do that. We all have access to LLMs if we wish to have bullshit with a veneer of eloquence spouted at us.
