I use it a lot to proofread my creative writing
Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
It's made our marketing department even lazier than they already were
I am going to say that so far it hasn't done that much for me. I did originally ask it some silly questions, but I think I will be asking it questions about coding soon.
ChatGPT itself didn't do anything for me; FastGPT from Kagi helps me every day though, for quickly summarizing sources to learn new things (e.g. I search for a topic and then essentially just click the cited sources).
And ollama + open-webui + stable-diffusion-webui with a customized llama3.1-8b-uncensored is a great chat partner for very horny stuff.
Super useful when I have a half-baked idea or concept that I want to learn more about, but don't know the lingo. I can explain the idea and it'll give me terms to search.
Also, it gives pretty good ideas for debugging or potential fixes.
Not sure I'd ever "trust it with my life", but it's a useful tool if you use it right.
It has completely changed my life. With its help I am preparing to submit several research papers for publication for the first time in my life. On top of that, I find it an excellent therapist. It has also changed the way I parent for the better.
On top of that, I find it an excellent therapist.
To be honest, I find this rather concerning. Please reach out to the actual people in your life. Especially since it's the holiday season.
It is extremely useful for suggesting translations and translating unclear foreign language sentences
It has helped tremendously with my D&D games. It remembers past conversations, so world building is a snap.
I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.
And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.
Generally, GitHub Copilot helps me type faster. Though sometimes it predicts something I don't expect, and I have to slow down and analyze it to see if it seems to know something I don't. A small percentage of these cases are actually useful; the rest is usually noise. It's generally useful as long as you don't blindly trust it.
For me, the amount of people and time spent in meetings that talk about AI grossly outweighs any benefit of AI.
For me, a huge impact.
I took an export of all our app's reviews and used it to summarise user pain points. Immediately we had a list of things we can prioritise.
When I'm writing repetitive code, it will (90% of the time) place the next puzzle piece in the repetition.
Using better systems like Cursor, I was able to create a twitch bot. I could then use it to make various text based games such as 20 questions or trivia. All (90% again, nothing is perfect) of which was done through prompts.
I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I'm AI in chat, which makes sense. A surprising number think I'm a bot when they call in, because I guess my voice sounds like a recording.
Most of the time it's just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can't believe the abuse people hurl out when they assume it's not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.
It's not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people's exhaustion with being forced to deal with robots from my own experiences when I've needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we're treating the people offering them decently so they want to continue offering that to us.
Why not just start with disclosing that you're human right off the bat?
It has replaced Google for me. Or rather, first I use the LLM (Mistral Large or Claude) and then I use Google or specific documentation as a complement. I use LLMs for scripting (it almost always gets it right) and programming assistance (it's awesome when working with a language you're not comfortable with, or when writing boilerplate).
It's just a really powerful tool that is getting more powerful every other week. Those who disagree simply haven't tried it enough, are superhuman, or (more likely) need to get out of their comfort zone.
Not much impact personally. I just read all the terrible implications of it online. Pressure in the professional world to use it, though fuck if I know what to use it for in this job. I don't like using it for my writing because I don't want to rely on something like that and because it's prone to errors.
Wish something that used a ton of resources would actually have a great impact to make it worth the waste.
I do a lot of coding and I'm in a similar boat. My co-worker and I can't really come up with a use case due to our particular workloads.
It's a neat tool for very specific language-related tasks.
For example, it can translate a poem so that the translation still rhymes.
Its main strength is not its ability to write, but to read. It's the first time in human history where you can pose any question to a computer in human language, and expect to get a meaningful reply.
As long as that question isn't asking for facts or knowledge.
It's also useful for "tip of my tongue" queries, where the right Google search term is exactly what you're missing.
All of its output is only usable and useful if you already know the facts about what you're asking, and can double-check for hallucinations yourself.
However, on a societal scale, it's a catastrophe on par with nuclear war.
It will consume arbitrary amounts of energy, right at the most crucial time when combatting climate change might still have been possible.
And it floods everyone's minds with disinfo, while we're at the edge of a global resurgence of fascism.
Bit sad reading these comments. My life has measurably improved ever since I jumped on using AI.
At first I just used Copilot for helping me with my code. I like using a pretty archaic language, and it kept trying to feed me C++ code. I had to link it the online reference, and it surprisingly was able to adapt each time. It still gave a few errors here and there, but it's a good time saver and "someone" to "discuss" with.
Over time it has become super good, especially with the VScode extension that autofills code. Instead of having to ask help from one of the couple hundred people experienced with the language, I can just ask Copilot if I can do X or Y, or for general advice when planning out how to implement something. Legitimately a great and powerful tool, so it shocks me that some people don't use it for programming (but I am pretty bad at coding too, so).
I've also bit the bullet and used it for college work. At first it was just asking Gemini for refreshers on what X philosophical concept was, but it devolved into just asking for answers because that class was such a snooze I could not tolerate continuing to pay attention (and I went into this thinking I'd love the class!). Then I used it for my Geology class because I could not be assed to devote my time to that gen ed requirement. I can't bring myself to read about rocks and tectonic plates when I could just paste the question into Google and I get the right answer in seconds. At first I would meticulously check for sources to prevent mistakes from the AI buuuut I don't really need 100%... 85% is good enough and saves so much more time.
A me 5 years younger would be disgusted at cheating but I'm paying thousands and thousands to pass these dumb roadblocks. I just want to learn about computers, man.
Now I'd never use AI for writing my essays because I do enjoy writing them (investigating and drawing your own conclusions is fun!), but this economics class is making it so tempting. The shit that I give about economics is so infinitesimally small.
So, to be clear, your use cases are "copilot's assistance with programming in an obscure language for fun" and "cheating on college classwork".
Lmao, it's funny how most of these use cases rarely stray from the stereotype of 'I can't spend an hour focusing on something and learn so I'll take a shortcut instead'.
Meanwhile, at work, all ChatGPT has caused is misery, as it makes people think they're expert programmers now while I have to debug their shitty code. Do they learn? Nope, just repeatedly serving up slop.
Just a few examples of my use cases but yes. It's an even quicker search engine.
It seemingly has little impact. I've attempted to use LLMs a couple of times to ask very specific technical questions (on this specific model, running this specific OS version, how do I do this very specific thing) to try and cut down on the amount of research I would have to do to find a solution. The answer every time has been wrong. Once it was close enough to the answer I was able to figure it out but "close enough" doesn't seem worth bothering with most of the time.
When I search for things I always skip the AI summary at the top of the page.
I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human rated (human lives depend on it), so we have to be careful. Also, it's a hard no on putting anything controlled or proprietary in a public LLM (the company now has one in-house).
You can't put trust into an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I've had employees use it to come up with an algorithm or find an error, but I think it's risky to have one generate large pieces of code.
Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: the people who pay money for skilled labor will think "the subscription machine can just do it." And that sucks.
I'm a digital artist as well, and while I think genAI is a neat toy to play with for shitposting, or just "seeing what this dumb thing might look like", or generating "people that don't exist", and it's impressive tech, I'm not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from, how it was trained, and the impact it has on our culture and planet.
We're already seeing the results of that slop pile generated from everyone who thought they could "achieve their creative dreams" by prompting a genie-product for it instead of learning an actual skill.
As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It's like a parrot combined with a "programmer's rubber ducky." Sometimes that gets my mind moving, in the same way "autocomplete over and over" might generate interesting thoughts.
I also will say it's pretty decent at summarizing things. I actually find it somewhat helpful when YouTube's little "ai summary" is like "This video is about using this approach taking these steps to achieve whatever."
When the video description itself is just like "Join my Patreon and here's my 50+ affiliate links for blinky lights and microphones" lol
I use it to explain concepts to me in a slightly different way, or to summarize something for which there's a wealth of existing information.
But I really wish people were more educated about how it actually works, and there's just no way I'm trusting the centralized "services" for doing so.
I jumped on the LocalLLaMA train a few months back and spent quite a few hours playing around with LLMs, trying to understand them and form a fair judgment of their abilities.
From my personal experience, they add something positive to my life. I like having a non-judgemental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel's incompleteness theorem is or how it may apply to scientific theories of everything, but an LLM trained on every scrap of human knowledge sure does and can pick up what I'm putting down. Whether or not it's actually understanding what it's saying, or has any intentionality, is an open-ended question of philosophy.
I feel that they have great potential to help people in many applications: people who do lots of word processing for their jobs, people who code and need to talk through a complex program one on one instead of filing through Stack Exchange, mentally or socially disabled people or the elderly who suffer from extreme loneliness and could benefit from having a personal LLM, and people who have suffered trauma or have some dark thoughts lurking in their neural network and need to let them out.
How intelligent are LLMs? I can only give my opinion and make many people angry.
The people who say LLMs are fancy autocorrect are being reductive to the point of misinformation. The arguments people use to deny any capacity for real intelligence in LLMs are similar to the philosophical-zombie arguments people use to deny sentience in other humans.
Our own brain operations can be reductively simplified in the same way: a neural network is a neural network, whether made out of mathematical transformers or fatty neurons. If you want to call LLMs fancy autocomplete, you should apply that same idea to a good chunk of human thought processing and learned behavior as well.
I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. I think all things are at least partially alive, even photons and gravitational waves.
Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the parameters and tune their ratio of creativity to coherence. The bigger the model, the more their general knowledge and factual accuracy increase. My local LLM often has something useful to add which I did not know or consider, even as an expert on the topic.
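For anyone curious, the "creativity to coherence" tuning above mostly comes down to sampling parameters like temperature and top_p. Here's a rough sketch, assuming a local Ollama server on its default port; the model name "mistral-nemo" and the specific parameter values are just placeholder examples, not recommendations:

```python
import json
import urllib.request

def sampling_options(creative: bool) -> dict:
    # Higher temperature/top_p push the model toward varied, "creative" output;
    # lower values keep it more deterministic and coherent.
    if creative:
        return {"temperature": 1.2, "top_p": 0.95}
    return {"temperature": 0.3, "top_p": 0.7}

def ask(prompt: str, model: str = "mistral-nemo", creative: bool = False) -> str:
    # Ollama's /api/generate endpoint accepts a per-request "options" dict
    # that overrides the model's default sampling parameters.
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": sampling_options(creative),
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Same prompt, different options, noticeably different output; that's basically the whole "parameter tuning" game.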
The biggest issues LLMs have right now are long-term memory, not knowing how to say 'I don't know', and meager reasoning ability. Those issues will be hammered out over time.
My only issue is how the training data for LLMs was acquired without the consent of authors or artists, and how our society doesn't have the proper safeguards against automated computer work taking away people's jobs. I would also like to see international governments consider the rights and liberties of non-human life more seriously in the event that sentient artificial general intelligence actually happens. I don't want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.
I worked for a company that did not govern AI use. It was used for a year before they were bought.
I stopped reading emails because they were absolute AI generated garbage.
Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.
They still did nothing to curb AI use.
They then reduced the workforce in the call center because they implemented an AI chatbot and began to funnel all incidents through it first before giving a phone number to call.
The company was then acquired a year ago. The new administration banned all AI usage under security and compliance guidelines.
Today, the new company has hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon, not some generated thesaurus.
Overall, I would say banning AI was the right choice.
IMO, AI is not being used in the most effective ways and causes too much chaos. Cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.
Searching the internet for information about... well anything has become infuriating. I'm glad that most search engines have a time range setting.
"It is plain to see why you might be curious about Error 4752X3G: Allocation_Buffer_Fault. First, let's start with the basics.
- What is an operating system?"
AGGHH!!!
Some of my coworkers show me their ChatGPT-generated drivel. They seem to be downright proud of it, as if they were gaming the system by using ChatGPT instead of their own heads. However, I think their daily work consists of unnecessary corpo crap, and they should really be fired and replaced with ChatGPT.
(I want to say first that I'm not trying to invalidate your feelings or perspective or anything!)
This feels like the logical result of a society that statistically punishes creativity in most cases, and rewards pointlessly running on a stationary hamster wheel of emails, spreadsheets, and slideshows, that nobody with a pulse is actually going to read.
We all like to think we're completely in control of ourselves, but most creatures of all kinds quickly get a sense for what produces a reward for less effort.
I think you're absolutely right, but in our company this will turn out to be shortsighted. Because we would actually need some creativity to do better in order to save our jobs.
I love it. For work I use it for those quick references in machining, hydraulics, electrical, etc. Even better for home: need a fast recipe for dinner? Fuck reading a goddamn autobiography to get to the recipe; ChatGPT gets straight to the point. Even better, I get to read my kid a new bedtime story every night, tailored to whatever we want: unicorns, pirates, dragons, whatever.
How do you get around it hallucinating false truths for engineering projects?
Do you double-check with trusted references?
I get around it by not relying on it 100%. I only ask about things I'm familiar with but don't quite remember all the fine details, like hydraulic tubing sizes for which series of fitting, and their thread pitches, when I also don't feel like finding that one book with the reference. Or worse yet, trying to find it on Google.