Spent this morning reading a thread where someone was following chatGPT instructions to install "Linux" and couldn't understand why it was failing.
tumblr
Welcome to /c/tumblr, a place for all your tumblr screenshots and news.
Hmm, I find ChatGPT is pretty decent at very basic tech support when asked with the correct jargon. Like "How do I add a custom string to cell formatting in Excel?"
It absolutely sucks for anything specific, or asked with the wrong jargon.
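For the record, here's the kind of thing that question is about — a custom number format (Format Cells → Custom) that appends literal text to a value. The specific suffix is just an illustrative guess at what an asker might want:

```text
0.00" kg"
```

Text inside double quotes is displayed literally after the number, while the cell's underlying value stays numeric, so it still works in formulas.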
Good for you buddy.
Edit: sorry that was harsh. I'm just dealing with "every comment is a contrarian comment" day.
Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?
There's a false sense of security coupled with the notion of "asking" an entity.
Why not engage in a community that can support answers? I've found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.
The forums of the older internet were great at this... creating community out of commonality. Plus, they were largely self-correcting in a way in which LLMs are not.
So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.
And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.
Used it once to ask it silly questions to see what the fuss is all about, never used it again and hopefully never will.
The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah," where the answer was complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.
This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.
I don't get how so many people carry their computer illiteracy as a badge of honor.
Chatgpt is useful.
Is it as useful as Tech Evangelists praise it to be? No. Not yet - and perhaps never will be.
But I sure do love to let it write my emails to people who I don't care for, but who I don't want to anger by sending my default 3-word replies.
It's a tool to save time. Use it or pay with your time if you willfully ignore it.
Tech illiteracy. Strong words.
I'm a sysadmin at the IT faculty of a university. I have a front row seat to witness the pervasive mental decline that is the result of chatbots. I have remote access to all lab computers. I see students copy-paste the exercise questions into a chatbot and paste the output back. Some are unwilling to write a single line of code by themselves. One of the network/cybersecurity teachers is a friend; he's seen attendance drop by half when he revealed he'd block access to chatbots during exams. Even the dean, who was elected because of his progressive views on machine learning, laments new students' unwillingness to learn. It's actual tech illiteracy.
I've sworn off all things AI because I strongly believe that its current state is a detriment to society at large. If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly. I will learn every skill that I need, without depending on AI. If you think that makes me an old man yelling at clouds, I have no kind words in response.
If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly.
I grew up, mostly, in the time of digital search, but far enough back that they still resembled the old card-catalog system. Looking for information was a process that you had to follow, and the mere act of doing that process was educational and helped order your thoughts and memory. When it's physically impossible to look for two keywords at the same time, you need to use your brain or you won't get an answer.
And while it's absolutely amazing that I can now just type in a random question and get an answer, or at least a link to some place that might have the answer, this is a real problem in how people learn to mentally process information.
A true expert can explain things in simple terms, not because they learned them in simple terms or think about them in simple terms, but because they have the ability to rephrase and reorder information on the fly to fit into a simplified model of the complex system they have in their mind. That's an extremely important skill, and it's getting more and more rare.
If you want to test this, ask people for an analogy. If you can't make an analogy, you don't truly understand the subject (or the subject involves subatomic particles, relativity or topology, and using words to talk about it is already basically an analogy).
x 1000. Between the time I started and finished grad school, ChatGPT had just come out. The difference in students I TA'd at the beginning and end of my career is mind-melting. Some of this has to do with COVID losses, though.
But we shouldn't just call out the students. There are professors who are writing fucking grants and papers with it. Can it be done well? Yes. But the number of papers talking about Vegetative Electron Microscopy, or introductions whose first sentence reads "As a language model, I do not have opinions about the history of particle models," or completely nonsensical graphics generated by spicy Photoshop, is baffling.
Some days it feels like LLMs are going to burn down the world. I have a hard time being optimistic about them, but even the ancient Greeks complained about writing. It just feels different this time, ya know?
ETA: Just as much of the onus is on grant reviewers and journal editors for uncritically accepting slop into their publications and awarding money to poorly written grants.
Sounds like it's a tool for wasting time.
I used the image generation of a jail broken model locally to drum up an AI mock-up of work I then paid a professional to do
This was 10000x smoother than the last time I tried this, where I irritated the artist with how much they failed to understand what I meant. The AI didn't care, I was able to get something decently close to what I had in my head, and a professional took that and made something great with it
Is that a better example?
Yes. AI is great at creating mediocre slop to pour onto a giant mountain of mediocre slop that already exists online. In fact, that's an LLM's greatest power: Producing stuff that looks like other stuff.
This is the perfect usecase for it. Mockups, sketches, filler. Low-quality, low-effort stuff used only as an input for more work.
That's the thing. It's a tool like any other. People who just give it a 5 word prompt and then use the raw output are doing it wrong.
It takes a lot of skill and knowledge to recognise a wrong answer that is phrased like a correct answer. Humans are absolutely terrible at this skill; it's why con artists are so successful.
And that skill and knowledge is not formed by using LLMs
Absolutely.
And you can't learn to build a fence by looking at a hammer.
My point all over really. Tools and skills develop together and need to be seen in context.
People, whether for or against, who describe AI or any other tool in isolation, who ignore detail and nuance, are not helpful or informative.
But you have the tech literacy to know that. Most non-tech people that use it do not, and just blindly trust it, because the world is not used to the concept that the computer is deceiving them.
As an older techy I'm with you on this, having seen this ridiculous fight so many times.
Whenever a new tech comes out that gets big attention, you have the Tech Companies overhyping it and saying everyone has to have it.
And you have the proud luddites who talk like everyone else is dumb and they're the only ones capable of seeing the downsides of tech.
"Buy an iPhone, it'll Change your life!"
"Why do I need to do anything except phone people and the battery only lasts one day! It'll never catch on"
"Buy a Satnav, it'll get you anywhere!"
"That satnav drove a woman into a lake!"
"Our AI is smart enough to run the world!"
"This is just a way to steal my words like that guy who invented cameras to steal people's souls!"
🫤
Tech was never meant to do your thinking for you. It's a tool. Learn how to use it or don't, but if you use tools right, 10,000 years of human history says that's helpful.
Not all tools are worthy of the way they are being used. Would you use a hammer that had a 15% chance of smashing you in the face when you swung it at a nail? That's the problem a lot of us see with LLMs.
No, but I do use hammers despite the risks.
Because I'm aware of the risks and so I use hammers safely, despite the occasional bruised thumb.
You missed my point. The hammers you're using aren't 'wrong', i.e. smacking you in the face 15% of the time.
Said another way, if other tools were as unreliable as ChatGPT, nobody would use them.
You've missed my point.
ChatGPT can be wrong, but it can't hurt you unless you assume it's always right.
I don’t know how to feel about this. I need to ask ChatGPT.