this post was submitted on 01 Dec 2024
192 points (91.7% liked)

Ask Lemmy


30 Nov 2022 release https://openai.com/index/chatgpt/

[–] [email protected] -1 points 3 weeks ago* (last edited 3 weeks ago)

I jumped on the locallama train a few months back and have spent quite a few hours playing around with LLMs, trying to understand them and form a fair judgment of their abilities.
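
For anyone curious what that "playing around" looks like in practice, here is a minimal sketch of chatting with a locally hosted model through the `ollama` Python client. The model name, system prompt, and question are placeholders I picked for illustration, not something from the original post; any local runner (llama.cpp, koboldcpp, etc.) works the same way in spirit.

```python
# Minimal sketch: chatting with a local model via the ollama Python client.
# Assumes the ollama server is running and a model (here "mistral", a placeholder)
# has already been pulled with `ollama pull mistral`.
import ollama

history = [
    {"role": "system", "content": "You are a patient, non-judgmental discussion partner."},
    {"role": "user", "content": "Can Gödel's incompleteness theorems tell us anything about a physical theory of everything?"},
]

response = ollama.chat(model="mistral", messages=history)
print(response["message"]["content"])

# Keep the reply in the history so you can bounce ideas back and forth across turns.
history.append({"role": "assistant", "content": response["message"]["content"]})
```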

From my personal experience, they add something positive to my life. I like having a non-judgmental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel's incompleteness theorems are or how they might apply to scientific theories of everything, but an LLM trained on every scrap of human knowledge sure does, and it can pick up what I'm putting down. Whether it actually understands what it's saying, or has any intentionality, is an open question of philosophy.

I feel they have great potential to help people in many applications: people who do lots of word processing for their jobs; people who code and need to talk through a complex program one on one instead of sifting through Stack Exchange; mentally or socially disabled people, or the elderly suffering from extreme loneliness, who could benefit from a personal LLM; and people who have suffered trauma or have dark thoughts lurking in their own neural network and need to let them out.

How intelligent are LLMs? I can only give my opinion, and it will make many people angry.

People who say LLMs are fancy autocorrect are being reductive to the point of misinformation. The arguments used to deny LLMs any capacity for real intelligence are similar to the philosophical-zombie arguments used to deny sentience in other humans.

Our own brain operations can be reductively simplified in the same way: a neural network is a neural network, whether it's made of mathematical transformers or fatty neurons. If you want to call LLMs fancy autocomplete, you should apply the same label to a good chunk of human thought processing and learned behavior as well.

I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. Then again, I think all things are at least partially alive, even photons and gravitational waves.

Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the parameters and tune their ratio of creativity to coherence. The bigger the model, the more its general knowledge and factual accuracy increase. My local LLM often contributes something useful that I did not know or had not considered, even on topics where I'm an expert.
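
Tuning that "ratio of creativity to coherence" mostly comes down to sampling parameters such as temperature and top_p. A rough sketch of what that experimenting looks like, again with the `ollama` client; the model name and the specific values are illustrative assumptions, not recommendations:

```python
# Rough sketch: trading creativity against coherence via sampling parameters.
# Lower temperature / top_p -> more deterministic, "safer" answers;
# higher values -> more varied, occasionally less coherent output.
import ollama

prompt = [{"role": "user", "content": "Summarize Gödel's incompleteness theorems in two sentences."}]

for temperature in (0.2, 0.8, 1.4):
    reply = ollama.chat(
        model="mistral",  # placeholder model name
        messages=prompt,
        options={"temperature": temperature, "top_p": 0.9, "repeat_penalty": 1.1},
    )
    print(f"--- temperature={temperature} ---")
    print(reply["message"]["content"])
```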

The biggest issues LLMs have right now are long-term memory, not knowing how to say "I don't know", and meager reasoning ability. Those issues will be hammered out over time.

My only real objections are how the training data for LLMs was acquired without the consent of authors or artists, and how our society lacks proper safeguards against automated computer work taking away people's jobs. I would also like to see governments take the rights and liberties of non-human life more seriously, in the event that sentient artificial general intelligence actually happens. I don't want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.