this post was submitted on 18 May 2025
245 points (93.9% liked)

Ask Lemmy


A Fediverse community for open-ended, thought provoking questions



Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 4) 50 comments
[–] [email protected] 15 points 2 days ago

Make AIs OpenSource by law.

[–] [email protected] 3 points 2 days ago

Most importantly, I wish countries would start giving a damn about the extreme power consumption caused by AI and regulate the hell out of it. Why should we have to lower our monitors' refresh rates to save energy while a ton of it is wasted on useless AI agents we should be getting rid of?

[–] [email protected] 2 points 2 days ago

Ban it until the hard problem of consciousness is solved.

[–] [email protected] 11 points 2 days ago

Shutting these "AI"s down. The ones out for the public don't help anyone. They do more damage than they are worth.

[–] [email protected] 20 points 2 days ago (1 children)

Regulate its energy consumption and emissions, for the AI industry as a whole. Any energy or emissions expended to develop, train, or operate AI should be limited.

If AI is here to stay, we must regulate what slice of the planet we're willing to give it. I mean, AI is cool and all, and it's been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn't really worth the energy consumption that it currently requires.

Strict regulation now would be a leg up in preventing any rogue AI or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can't be trusted to self-regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI's datacenter, Amazon and Google's investments into nuclear). It's going to get worse. They'll just keep feeding it more and more energy, gutting the planet to feed the machine so people can generate sexy cat girlfriends and cheat on their essays.

We should be funding efforts to use AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We're thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.

[–] [email protected] 33 points 2 days ago (1 children)

The technology side of generative AI is fine. It's interesting and promising technology.

The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible out of the latest hyped tech, laws and social or environmental impact be damned.

We need legislation to catch up. We also need society to be able to catch up. We can't let the AI bros continue to foist more "helpful tools" on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

[–] [email protected] 5 points 2 days ago

I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.

[–] [email protected] 14 points 2 days ago* (last edited 2 days ago) (1 children)

I'm perfectly OK with AI; I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

It needs extensive regulation, but doing so requires tech-literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist so idk

[–] [email protected] 12 points 2 days ago

I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.

[–] [email protected] 21 points 2 days ago (2 children)

I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

[–] [email protected] 6 points 2 days ago

Have you tried filtering, translating, or summarizing your inbox through AI? /s

[–] [email protected] 8 points 2 days ago* (last edited 2 days ago)

More regulation, supervised development, laws limiting training data to be consensual.

[–] [email protected] 22 points 2 days ago (5 children)

Idrc about AI or whatever you want to call it. Make it all open source. Make everything an AI produces public domain. Instantly kill every billionaire who's said the phrase "AI" and redistribute their wealth.

[–] [email protected] 0 points 2 days ago

Shut it off until they figure out how to use a reasonable amount of energy and develop serious rules around it

[–] [email protected] 5 points 3 days ago

2 chicks at the same time.

[–] [email protected] 6 points 3 days ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users, and to require that an AI's actions can be analyzed for legal compliance in court cases.

This would hopefully lead to the development of better AIs that are more transparent and able to adhere to laws at all, because the current ones lack this ability.

[–] [email protected] 14 points 3 days ago

I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.

I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

[–] [email protected] 33 points 3 days ago

TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.

ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.

So yeah, uh... Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where AI is only viable for non-trivial use cases.

[–] [email protected] 22 points 3 days ago

Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved into everything for shits and giggles, and it can be used for the stuff it's actually useful for.

[–] [email protected] 2 points 3 days ago

License its usage.

[–] [email protected] 9 points 3 days ago (3 children)

I’d like for it to be forgotten, because it’s not AI.

[–] [email protected] 10 points 3 days ago

It is. Just not AGI.

[–] [email protected] 7 points 3 days ago

It's AI insofar as any ML is AI.

[–] [email protected] 5 points 3 days ago

Thank you.

It has to come from the C suite to be "AI". Otherwise it's just sparkling ML.
