Remember the sacred texts:
I think we've covered enough ground in the near-90 comments here. People are getting butthurt. Thread locked.
For when you need a laugh!
The definition of a "meme" here is intentionally pretty loose. Images, screenshots, and the like are welcome!
But, keep it lighthearted and/or within our server's ideals.
Posts and comments that are hateful, trolling, inciting, and/or overly negative will be removed at the moderators' discretion.
Please follow all slrpnk.net rules and community guidelines
Have fun!
ITT: LLM helps me with mundane tasks so fuck the enormous energy requirements and its impact on environment!
LLMs have helped me with coding and debugging A LOT. I'd much rather use AI than have to try and parse Stack Exchange and a bunch of other web forums or developer documentation directly. AI is incredible when I get random errors and paste them in to say "fix this" and it does, and tells me HOW and WHY it did what it did.
I keep seeing programmers use this as an example of what LLMs are good for, and I've seen other programmers say that the people who do that are bad programmers. The latter makes sense because trusting an LLM to do this is to fundamentally misunderstand what your job is and how the LLM works.
The LLM can't tell you HOW or WHY because it doesn't know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.
It could be completely wrong, and you wouldn't know because you've admitted you're using the LLM instead of reading the documentation and understanding yourself.
That is so irresponsible. Just RTFM like good programmers have done forever. It's not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don't understand. I'm sure it feels very productive in the moment but you're probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.
And it only consumes the equivalent in electricity of what an American house uses for a few years.
There are plenty of applications for machine learning, logic engines, etc. They've been used in many industries since the 1970s.
I've used LLMs to save me hours of time reformatting text and old notes, and restructure explanations so I can better understand and share them, used AI speech to text models to transcribe my voice notes, and used diffusion models to generate better quality mockups for designs that were later commissioned in better quality, with no need for any changes.
I can understand not liking AI, or not needing it yourself, but acting as if it has no use is frankly ridiculous. You might not use it, but other people do.
I think this says more about corporations' attempts to integrate "AI" into everything, instead of it being a user choice, than it does about the technology itself.
This post isn't contributing to a healthy environment in this community.
Well thought out claim -> good source -> good discussion
I think you mean:
yes (the environmental angle is a complete distraction and a red herring from the broader move towards more sustainable energy production; the ethical one is just plain nonsense spread by people with absolutely no idea how these things work)
yes (people use and like them, people have fun with them and create great art with them. You might not but that's a you problem)
and ... well actually probably no tbh (but that's a problem with capitalism not technology).
Stop making shit up to dismiss new technology because you're a luddite
Chill, bruh.
I've lately tested whether AI can let me practice Russian in a natural-sounding dialogue. While it didn't sound 100% human (it was too formal and technical), it was good practice.
So I wouldn't say that it can't be used for good things.
Yeah... who doesn't love moral absolutism... The honest answer to all of these questions is: it depends.
Are these tools ethical or environmentally sustainable:
AI doesn't consist only of LLMs, which are indeed notoriously expensive to train and run. Running an image generator, for example, can be done on something as simple as a gaming-grade GPU, and other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though it's just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it. It's all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as implementing them in places they have no business being, or forgoing efficiency improvements.
The ethics of AI is also a deeply nuanced topic with no clear consensus, and not every company that works with AI uses it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it is one-sided is just incredibly dishonest.
but do they enable great things that people want?
This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They are seemingly pivotal in healing the sick people of tomorrow. And creative AI tools let creative people be even more creative. But they are ignored, shoved to the side because they don't fit the "AI bad" narrative, even though we should be acknowledging them and seeing them as allies against the big companies trying to hoard AI technology for themselves. It is those companies that produce problematic AI, not the small artists, creatives, researchers, or anyone else using AI ethically.
but are they being made by well meaning people for good reasons?
Who, exactly? You do realize there are far more parties creating AI than just Google, Meta, and Microsoft, right? Companies and groups you've most likely never heard of, creating open-source AI for everyone to benefit from, not just those hoarding it for themselves. It's just incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.
Maybe you're not being negative enough
Maybe you are not being open-minded enough, or have been blinded by hate, because this isn't healthy. It's echo-chamber behaviour. I have a lot more respect for people who don't like AI but base that on rational reasons. There are plenty of genuinely bad things about AI that have to be addressed, but instead you find yourself in a divide between people who cozy up to spreading borderline misinformation to get what they want, and genuine people who simply want their voices and concerns about AI to be heard.
AI? In medical research? But rulers!!!
Can't have nuanced sensible opinions on stuff in this community lol.
For real. It's what I hate about all of this, because infighting pretty much always leads to people getting shafted, even when there are plenty of things to come to agreement on. This kind of one-sided soapboxing does far more harm than good in convincing people.