Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either [email protected] or [email protected].
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email [email protected]. For other questions, check our partnered communities list or use the search function.
6) No US Politics.
Please don't post about current US politics. If you need to do this, try [email protected] or [email protected].
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
After two years it's quite clear that LLMs still don't have a killer feature. Industry marketing was already promising skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually just makes those tasks even more boring or useless.
Personally I have subscribed to Kagi Ultimate, which gives access to an assistant backed by various LLMs, and I use it to generate snippets of code for labs (training): AWS policies, commands built from CLI flags, small things like that. For code it goes wrong very quickly, and in any case I find it much harder to re-read and unpack verbose generated code than to simply write my own. I don't use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it's quite clear when the output comes from an LLM.
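To give a sense of scale, the sketch below is roughly the ceiling of what I trust it with: a throwaway read-only policy for a lab bucket. Everything in it is invented for illustration (bucket name, Sid); it just prints the policy as JSON so it can be pasted into the console or saved and passed to the AWS CLI via `--policy-document file://lab-policy.json`.

```python
# Hypothetical example of the kind of small, disposable snippet I mean:
# a read-only IAM policy for a made-up lab bucket, emitted as JSON.
import json

lab_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyLabBucket",
            "Effect": "Allow",
            # List the bucket and read its objects, nothing else.
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::example-lab-bucket",
                "arn:aws:s3:::example-lab-bucket/*",
            ],
        }
    ],
}

print(json.dumps(lab_policy, indent=2))
```

Anything much bigger than that and I end up spending more time auditing the output than I would have spent writing it myself.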
For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical that the value is worth the damage. I am categorically against the people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is to make useless communication even longer, with an LLM on each side producing or summarizing bullshit. I can see, though, that some people find it easier to envision shortcutting bullshit processes with LLMs than simply changing or removing them.