this post was submitted on 18 May 2025
245 points (93.9% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions



Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 5) 50 comments
[–] [email protected] 41 points 3 days ago

Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license the data (text, images, video, audio) used in their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.

I don't love the environmental effects, but I think the carbon output of OpenAI is probably less than TikTok's, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I'm not sure it's enough to really matter. I think we need bigger "what if" scenarios to handle that.

[–] [email protected] 21 points 3 days ago (1 children)

There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation to cover in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.

We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies, and even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else's artwork, really really fast... that is all it does, at least for the public sector currently.

For the private sector, at best it alienates people as chatbots, and at worst it's being utilized to infer data for the surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever wields them, and AI is one tool amongst many at the disposal of these tech giants.

AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources and power and to survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get there, rather than actually learning how to solve a problem that hasn't been solved before, truly understanding a problem that has been solved before, or simply knowing something relatively useless because it's interesting to them.

There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.

Here are further resources if you didn't get enough ranting.

lemmy.world's fuck_ai community

System Crash Podcast

Tech Won't Save Us Podcast

Better Offline Podcast

[–] [email protected] 23 points 3 days ago (3 children)

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

[–] [email protected] 46 points 3 days ago (14 children)

They have to pay for every piece of copyrighted material used in the entire model, whenever the AI is queried.

They are only allowed to use data that people opt into providing.

[–] [email protected] 9 points 3 days ago (2 children)

What about models folks run at home?

[–] [email protected] 17 points 3 days ago

Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.

[–] [email protected] 4 points 3 days ago (1 children)

I'm not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.

I'm deeply troubled by the obscene power use. It might be worth it if it was a good tool. But it's not.

I haven't gone out of my way to use AI for anything, but it's been stuffed into everything. And it's truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation. And it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI's answer, or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate. And it doesn't know to ask when it doesn't understand something, like a proper secretary would. When I go looking for reference images, I have to check to make sure they're real and not hallucinations.

It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn't be allowed to suck down as much electricity as it does.

[–] [email protected] 13 points 3 days ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.

[–] [email protected] 5 points 3 days ago

I think Meta and others went open with their models as firewall protection against legal action over their blatant stealing of people's work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn't be, but could be) forced to shut down or start over properly. But it's far too late now, since the models are everywhere there's a GPU running, even if they don't progress past their current state.

That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there's commonality in not being sure what's going on inside the box and whether it's really doing what it's told to do. Now is the time to set boundaries and do the research, because once something happens (LLM or AGI) it's too late. So what do I want to see happen? Heavy regulation and transparency at the leading edge of development. And stop the madness of more compute being the only solution, with all its environmental effects. It might be the only solution, but companies are going that way because it's the easiest way to throw money at a problem and reap profits, which is all they care about.

[–] [email protected] 4 points 3 days ago* (last edited 3 days ago)

Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I Have No Mouth, and I Must Scream" scenario.

Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.

[–] [email protected] 8 points 3 days ago

Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."

Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.

(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)

[–] [email protected] 3 points 3 days ago

I'm not against AI itself; it's the hype and misinformation that frustrate me. LLMs aren't true AI (or at least not AGI, as the meaning of "AI" has drifted), but they've been branded that way to fuel tech and stock market bubbles. While LLMs can be useful, they're still early-stage software, causing harm through misinformation and widespread copyright issues. They're being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

Real AI lies in more advanced neural networks, which are still a long way off. I wish tech companies would stop misleading the public, but the bubble will burst eventually, though not before doing considerable harm.

[–] [email protected] 0 points 3 days ago (1 children)

Asteroid. There's no good way out of this.

[–] [email protected] 0 points 3 days ago (1 children)

If you think death is the answer, the polite thing is not to force everyone to go along with you.

[–] [email protected] 4 points 3 days ago

I want everyone to realize that the only reason AI seems intelligent is because it speaks English.

[–] [email protected] 8 points 3 days ago (1 children)

I want OpenAI to collapse.

[–] [email protected] 3 points 3 days ago

Many people with positive sentiments towards AI also want that.

[–] [email protected] 11 points 3 days ago (2 children)

I’m not anti AI, but I wish the people who are would describe what they are upset about a bit more eloquently, and decipherable. The environmental impact I completely agree with. Making every google search run a half cooked beta LLM isn’t the best use of the worlds resources. But every time someone gets on their soapbox in the comments it’s like they don’t even know the first thing about the math behind it. Like just figure out what you’re mad about before you start an argument. It comes across as childish to me

[–] [email protected] 8 points 3 days ago (1 children)

It feels like we're being delivered the sort of stuff we'd consider flim-flam if a human did it, but we lap it up because the machine did it.

"Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!" If we hired a human who acted like that, they'd be on an improvement plan in days and sacked in weeks.

[–] [email protected] 2 points 3 days ago (1 children)

So you dislike that the people selling LLMs are hyping up their product? They know they're all dumb and hallucinate; their business model is enough people thinking it's useful that someone pays them to host it. If the hype dies, Sam Altman is back in a closet office at Microsoft, so he hypes it up.

I actually don’t use any LLMs, I haven’t found any smart ones. Text to image and image to image models are incredible though, and I understand how they work a lot more.

[–] [email protected] 5 points 3 days ago

I expect the hype people to do hype, but I'm frustrated that the consumers are also being hypemen. So much of this stuff, especially at the corporate level, is FOMO rather than actually delivered value.

If it were any other expensive and likely vendor-lock-in-inducing adventure, it would be behind years of careful study and down-to-the-dime estimates of cost and yield. But the same people who historically took 5 years to decide to replace an IBM Wheelwriter with a PC and a laser printer are rushing to throw AI at every problem, up to and including the men's toilet on the third floor being clogged.

[–] [email protected] 6 points 3 days ago* (last edited 3 days ago)

> But every time someone gets on their soapbox in the comments, it's like they don't even know the first thing about the math behind it. Just figure out what you're mad about before you start an argument.

The math around it is unimportant, frankly. The issue with AI isn't about GANN networks alone; it's about the licensing of the materials used to train a GANN, and whether or not the companies that used those materials had proper ownership rights. Again, like the post I made, there's an easy argument that OpenAI and others never licensed the material they used to train the AI, making the whole model poisoned by copyright theft.

There are plenty of uses of GANNs that are not problematic: bespoke solutions for predicting the outcomes of certain equations, or data science uses that involve rough predictions on publicly sourced (or privately owned) statistics. The problem is that these are not the uses we call "AI" today -- we're actually sleeping on much better uses of neural networks by focusing on pie-in-the-sky AGI nonsense pushed by companies that are simply shipping highly malicious, copyright-infringing products to make a quick buck on the stock market.

[–] [email protected] 9 points 3 days ago (1 children)

I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to everyone in the medical field (opening it to the general public, I feel, would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it).

I'm not a fan of these next-gen IRC chatbots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few others mention, if their bots have been trained on unauthorized data sets, they should be forced to open-source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

[–] [email protected] 4 points 3 days ago

That's what I'd like to see more of, too -- Use it to cure fucking cancer already. Make it free to the legit medical institutions, train doctors how to use it. I feel like we're sitting on a goldmine and all we're doing with it is stealing other people's intellectual property and making porn and shitty music.

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago)

What I want from AI companies is really simple.

We have a thing called intellectual property in the United States of America. If I decided to run a Jellyfin instance that I charged access to, containing material I didn't own, and somehow advertised this service on the stock market as a publicly traded company, you can bet your ass I'd have a one-way ticket to a defense seat in court.

AI companies, meanwhile, operate entirely on data they don't own and don't pay licensing for ANY of the materials used to train their neural networks. In their eyes, any image, video (TV show/movie), or book that happens to be posted on the Internet is fair game. This isn't how intellectual property works for individuals, so why exactly would a publicly traded company get an exception to this rule?

I work a lot in the world of FOSS and have a firm understanding that just because code is out there doesn't make it yours. This is why we have the GPL for licensing. In fact, I'll take it a step further and say that the entirety of AI is one giant licensing nightmare, especially coding AI that isn't attributing license details for the code it's sampling from. (Sampling code being notably different from, say, learning from it; learning implies self-agency, not corporate ownership.)

It feels to me that the AI bubble has largely been about pushing AI so hard and fast that people were investing in something with a dubious legal status in the US. Nobody stopped to ask whether the data that Facebook had on its website (for example; they aren't alone in this) was actually theirs to own, and what the repercussions of these types of decisions are.

You'll also note that tech and social media companies are quick to claim ownership of data when it benefits them (artists' works, intellectual property that isn't theirs, random user posts) and quick to deny ownership when it becomes legally burdensome (CSAM, illicit drug deals, etc.), to a degree no individual would be granted. Hell, I'm not even sure a "small" tech startup would be granted this level of doublespeak and hypocrisy.

With this in mind, I am simply asking that AI companies pay for the data they use to train AI. Additionally, laws must be in place that allow for the auditing of all materials used to train an AI, with the legal intent of verifying that all parties are paid accordingly. This is how every other business works. If AI were somehow granted an exception, wouldn't it be braindead easy to run every "service" through an AI layer in order to bypass any and all copyright laws?

Otherwise, if Facebook and others want to claim that data hosted on their websites is theirs to own and train on -- well, great, but then there should be no exceptions: they should not be allowed to host materials they have no ownership of. So pictures of IP they don't own, or materials they want to claim no ownership over, must be removed from the platform. I would much prefer the first of these two options, however.

edit: I should note that AI for educational purposes could be granted a fair-use exception (for universities), but it would still be required to cite all sources used to produce the works in question (which is normal in academia in the first place) and would also come with strict stipulations on using the AI as a "product" (it would basically be moot, much like some research papers). This is basically the furthest I'm willing to go for these companies.
