this post was submitted on 17 Jul 2024
371 points (99.2% liked)

[–] [email protected] 3 points 1 month ago (10 children)

Anyone else immediately get a migraine trying to read the first 2 paragraphs/sentences of that article?

[–] [email protected] 3 points 1 month ago (5 children)
[–] [email protected] 3 points 1 month ago (1 children)

It was struggling harder than I was ;-)

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (2 children)

I've noticed that those language models don't work well on articles with dense information and complex sentence structure. Sometimes they drop the most important point.

They're useful as a TLDR but shouldn't be taken as fact, at least not yet, and probably not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That's why it's so important to mark ~~AI~~ LLM-generated text (which the TLDR bots do).

[–] [email protected] 3 points 1 month ago (1 children)

I think the Internet would benefit a lot if people marked their information with sources!

  • source: my brain
[–] [email protected] 2 points 1 month ago

Yeah, that's right. Having to post sources rules out using LLMs for the most part, since most of them do a terrible job of providing them, even when the information happens to be correct.

[–] [email protected] 5 points 1 month ago

Not calling ML and LLMs "AI" would also help. (I went even more off topic.)
