this post was submitted on 01 Dec 2024
192 points (91.7% liked)

Ask Lemmy


30 Nov 2022 release https://openai.com/index/chatgpt/

top 50 comments
[–] [email protected] 2 points 3 weeks ago

I have a book that I'm never going to write, but I'm still making notes and attempting to organize them into a wiki.

Using almost natural conversation, I can explain a topic to the GPT, have it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes, though, such as forgetting which faction a character belongs to.

I've noticed that GPT-4o is better for exploring new topics since it has more creative freedom, and o1 is better for combining multiple fragmented summaries since it usually doesn't make shit up.

[–] [email protected] 16 points 3 weeks ago

I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.

I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.

[–] [email protected] 11 points 3 weeks ago (1 children)

I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on the story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.

[–] [email protected] 1 points 3 weeks ago

just made it bland, and simplified

Not always, but for the most part, you need to tell it more about what you're looking for. Your prompts need to be deep and clear.

"Change it to a relaxed tone, but make me feel emotionally invested, 10th grade reading level, add descriptive words that fit the text, throw in an allegory and some metaphors." The more you tell it, the more it'll do. It's not creative; it's just making the text fit whatever you ask for. If you don't give enough direction, you'll just get whatever the random noise rolls, which isn't always what you're looking for. It's not uncommon to need to write a whole paragraph about what you want from it.

When I'm asking it for something creative, sometimes it takes half a dozen change requests. Once in a while, it'll be so far off base that I'll clear the conversation and just try again. The way the randomness works, it will likely give you something completely different on the next try.

My favorite thing to do is give it a proper outline of what I need it to write, then set the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing. When it's >80% how I like it, I take the text and do copy edits on it until I'm satisfied.

It's def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that's not nothing.

[–] [email protected] 6 points 3 weeks ago

Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g., if I ask "how do I change this setting?" it answers "by changing this setting") or gave me a wrong answer. If I can't already find it on a search engine, then it's pretty useless to me.

[–] [email protected] 6 points 3 weeks ago

Main effect is lots of whinging on Lemmy. Other than that, minimal impact.

[–] [email protected] 9 points 3 weeks ago

It's my rubber duck/judgment-free space for homelab solutions. Have a problem? ChatGPT it, then Google its suggestions. Find a random command line? Ask ChatGPT what it does.

I understand that I don't understand it, so I sanity-check everything going into and coming out of it, and every sensitive detail gets replaced with a placeholder for security. Mostly, it's just a space to find out why my solutions don't work, find out what solutions might work, and a final check before implementation.

[–] [email protected] 10 points 3 weeks ago* (last edited 3 weeks ago)

It's changed my job: I now have to develop stupid AI products.

It has changed my life: I now have to listen to stupid AI bros.

My outlook: it's for the worst; if the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't then this was all a huge waste of time and energy.

Alternative outlook: if this were a tool given to the people to help their lives, that'd be cool, and it'd even excuse some of the terrible parts of how the models were trained. But that's not how it's happening.

[–] [email protected] 5 points 3 weeks ago

my face hurts from all the extra facepalms

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago)

I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.

[–] [email protected] 4 points 3 weeks ago (1 children)

The only thing I have to worry about is not wasting my time responding to LLM trolls in Lemmy comments. People who admit to me in conversation that they use an LLM instantly lose my respect, and I consider them lazy dumbfucks :p

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

You can lose respect for me if you want; I generally hate LLMs, but as a D&D DM I use them to generate pictures I can hand out to my players, to set the scene. I'm not a good enough artist and I don't have the time to become good enough just for this purpose, nor rich enough to commission an artist for a work with a 24h turnaround time lol.

I'm generally ok with people using LLMs to make their lives easier, because why not?

I'm not ok with corporations using LLMs that have stolen the work of others to reduce their payroll or remove the fun/creative parts of jobs, just so some investors get bigger dividends or execs get bigger bonuses.

[–] [email protected] 2 points 3 weeks ago (1 children)

I’m generally ok with people using LLMs to make their lives easier, because why not?

Because 1) it adds to killing our climate and 2) it increases dependencies on western oligarchs / technocrats who are generally horrible people and enemies of the public.

[–] [email protected] 1 points 3 weeks ago (1 children)

I agree, but the crux of my post is that it doesn't have to be that way - it's not inherent to the training and use of LLMs.

I think your second point is what makes the first point worse - this is happening at an industrial scale, with the only concern being profit. We pay technocrats for the use of their services, and they use that money to train more models without a care for the damage it causes.

I think a lot of the harm caused by model training can be forgiven if the models were used for the betterment of quality of life of the masses, but they're not, they're mainly used to enrich technocrats and business owners at any expense.

[–] [email protected] 2 points 3 weeks ago

Well - there's nothing left to argue about. I do believe we have bigger climate killers than large computing centers, but it is a worrying trend to spend that much energy on an investment bubble built around what is essentially somewhat advanced word prediction. However, if we could somehow get the wish.com version of Tony Stark and other evil pisswads to die out, then yes, using LLMs for some creative ideas is a possibility. Or for references to other sources that you can then check.

However, the way those models are being trained is aimed at impressing naive people and that's very dangerous, because those people mistake impressively coherent sentences for understanding and are willing to talk about automating tasks upon which lives depend.

[–] [email protected] 12 points 3 weeks ago

After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes those tasks even more boring or useless.

Personally I have subscribed to Kagi Ultimate, which gives access to an assistant backed by various LLMs, and I use it to generate snippets of code for labs (training) - like AWS policies - or to build commands from CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don't use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it's quite clear when the output is from an LLM.

For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be, thinking that email and presentations are good use cases for LLMs. The last thing we need is useless communication made even longer, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people can more easily envision shortcutting bullshit processes via LLMs than simply changing or removing them.

[–] [email protected] 42 points 3 weeks ago (4 children)

I absolutely hate AI. I'm a teacher and it's been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don't bother to think critically about the answers the AI gives and just assume it's 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

People seem to think there's an "easy" way to learn with AI, that you don't have to put in the time and practice to learn stuff. News flash! You can't outsource creating neural pathways in your brain to some service. It's like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don't allow any electronic devices during exams.

[–] [email protected] 3 points 3 weeks ago (1 children)

Are you teaching at a university? Also, you said "99% of the class uses ChatGPT" - are there really so few people who don't use AI?

[–] [email protected] 2 points 3 weeks ago

In classes I taught at university recently, I noticed less than 5% extremely obvious AI-assisted papers. The majority are too bad to even be AI, and around 10% are good to great papers.

[–] [email protected] 1 points 3 weeks ago

Sounds like your curriculum needs updating to incorporate the existence of these tools. As I'm sure you know, kids - especially smart ones - are going to look for the lazy solution. An AI-detection arms race is wasting time and energy, plus mostly exercising the wrong skills.

AVID could be a resource for teaching ethics and responsible use of AI. https://avidopenaccess.org/resource/ai-and-the-4-cs-critical-thinking/

[–] [email protected] 4 points 3 weeks ago

I'm generally ok with the concept of externalizing memory. You don't need to memorize something if you memorize where to get the info.

But you still need to learn how to use the data you look up, and determine whether it's accurate and suitable for your needs. ChatGPT rarely is, and people's blind faith in it is frightening.

[–] [email protected] 11 points 3 weeks ago

As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it's way too complex. I've seen people waste hours like that. It's insane.

[–] [email protected] 5 points 3 weeks ago

I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a CSV of domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.

These were domains on a Windows server that was being retired. The Python script I had Copilot write cleaned up the output and made the new zone files ready for import into PowerDNS, making sure the SOA and all that junk was set. PDNS would then import the new zone files into a SQL backend.

Sure, I could've written it myself, but I'm not a Python developer. It took about 10 minutes of prompting, checking the code, re-prompting, then testing. Saved me a couple hours of work, easy.

I use it all the time to knock out simple automation tasks when something like Ansible isn't apropos.
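For the curious, the AXFR loop described above can be sketched in a few lines of POSIX shell. This is a hypothetical reconstruction, not the commenter's actual script; the server name, variable names, and file layout are made up for illustration:

```shell
#!/bin/sh
# Read domain names from the first column of a CSV and run an AXFR zone
# transfer for each, dumping the records into one text file per domain.
# Assumes the old server permits zone transfers from this host; OLD_DNS
# must be set to that server's name or address.
dump_zones() {
  csv="$1"     # CSV of domains, one per line, domain in the first column
  outdir="$2"  # directory that receives one <domain>.txt file per zone
  mkdir -p "$outdir"
  while IFS=, read -r domain _; do
    [ -n "$domain" ] || continue  # skip blank lines
    dig AXFR "$domain" "@$OLD_DNS" > "$outdir/$domain.txt"
  done < "$csv"
}
```

From there, a separate cleanup script (Python, in the commenter's case) would parse each dump into a zone file PowerDNS will accept.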

[–] [email protected] 5 points 3 weeks ago

It helps me tremendously with language studies. Outside of that I have no use for it, and I actively detest the unethical possibilities of it.

[–] [email protected] 31 points 3 weeks ago* (last edited 3 weeks ago)

Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.

I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.

My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.

[–] [email protected] 16 points 3 weeks ago* (last edited 3 weeks ago)

I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.

A friendly expert or mentor, or even just a regular established Linux user, probably could've done a better job - the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.
