Lots of copyright comments.
I want those building it at scale to stop killing my planet.
Get rid of it. Nobody wants or needs it; it should only be offered as a service to niche industries. Phones and places like YouTube do not need the slop. It's not ready for medical screening/scans either, as it can easily make mistakes.
We're making the same mistake with AI as we did with cars: not planning for the human future.
Cars were designed to atrophy muscles, and polluted urban planning and the air.
AI is being designed to atrophy brains, and pollutes the air, the internet, public discourse, and more to come.
We should change course towards AI that makes people smarter, not dumber: AI-aided collaborative thinking.
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen
Serious investigation into copyright breaches committed by AI creators. They ripped off images and texts, even whole books, without the copyright owners' permission.
If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.
I just ask for the law to be applied to all equally. What a surprising concept...
We are filthy criminals if we pirate one textbook for studies. But when Facebook (Meta) pirates millions of books (anywhere between 30 million and 200 million ebooks, depending on their file size), they are a brilliant and successful business.
Of the AI that are forced to serve up a response (almost all publicly available AI), they resort to hallucinating gratuitously in order to conform to their mandate. As in, they do everything they can in order to provide some sort of a response/answer, even if it’s wildly wrong.
Other AI that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses. Because they are not being forced to provide a response regardless of the viability of the answer.
I don’t avoid AI because it is bad.
I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.
AI overall? Generally pro. LLMs and generative AI, though, I'm "against", mostly meaning that I think it's misused.
Not sure what the answer is, tbh. Reining in corporations would be good.
I do think we as a society need to radically alter our relationship to IP law. Right now we 'enforce' IP law in a way that benefits corporations but not individuals. We should either get rid of IP law altogether (which would protect people from corporations abusing the laws) or we should enforce it more strictly, and actually hold corporations accountable for breaking it.
If we fixed that, I think gen AI would be fine. But we aren't doing that.
I would love to see regulation that any content created by AI cannot be used commercially.
I love, e.g., that parents can make their own children's books, but nobody should profit from all the stolen work of artists.
Not much, just don't build it over theft.
I'm generally pro-AI, but I agree with the argument that having big tech hoard this technology is the real problem.
The solution is easy and right there in front of everyone's eyes. Force open source on everything. All datasets, models, model weights and so on have to be fully transparent. Maybe as far as hardware firmware should be open source.
This will literally solve every single problem people have other than energy use which is a fake problem to begin with.
Force companies to pay for the data they scraped from copyrighted works. Break up the largest tech conglomerates so they cannot leverage their monopolistic market positions to further their goals, which include investment in A.I. products.
Ultimately, replace the free market (cringe) with a centralized computer system to manage the resource needs of a socialist state.
Also, force Elon Musk to receive a Neuralink implant and make him hallucinate the ghostly impression of SpongeBob SquarePants laughing for the rest of his life (in prison).
Disable all ai being on by default. Offer me a way to opt into having ai, but don't shove it down my throat by default. I don't want google ai listening in on my calls without having the option to disable it. I am an attorney, and many of my calls are privileged. Having a third party listen in could cause that privilege to be lost.
I want ai that is stupid. I live in a capitalist plutocracy that is replacing workers with ai as fast and hard as possible without having ubi. I live in the United States, which doesn't even have universal health insurance. So, ubi is fucked. This sets up the environment where a lot of people will be unemployable through no fault of their own because of ai. Thus without ubi, we're back to starvation and hoovervilles. But, fuck us. They got theirs.
Legislation
Other people have some really good responses in here.
I'm going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.
Nobody talks about how it highlights the successes of capitalism either.
I live in SEA, and AI is incredibly powerful here, giving anyone the opportunity to learn. The net positive of this is incredible, even if you think that copyright is good and intellectual property needs government protection. It's just that lopsided of an argument.
I think western social media is spoiled and angry at the wrong thing, but fighting these people is entirely pointless because you can't reason someone out of a position they didn't reason themselves into. Big tech == bad, blah blah blah.
You don't need AI for people to learn. I'm not sure what's left of your point without that assertion.
You're showing your ignorance if you think the whole world has access to a fit education. And I say fit because there's a huge difference between learning from books made for Americans and AI-tailored experiences made just for you. The difference is insane, and anyone who doesn't understand that should really go out more, and I'll leave it at that.
Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I'm not even kidding: as an educator, LLMs are the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.
You shouldn't trust anything the LLM tells you though, because it's a guessing machine. It is not credible. Maybe if you're just using it for translation into your native language? I'm not sure if it's good at that.
If you have access to the internet, there are many resources available that are more credible. Many of them free.
Again, you're just showing your ignorance of how available this actually is to people outside of your immediate circle; maybe you should travel a bit and open up your mind.
Admittedly very tough question. Here are some of the ideas I just came up with:
Make it easier to hold people or organizations liable for mistakes made because of haphazard reliance on LLMs.
Reparations for everyone ever sued for piracy, and completely do away with intellectual property protections for corporations, but independent artists get to keep them.
A public service announcement campaign aimed at making the general public less trustful of LLMs.
Strengthen consumer protection such that baseless claims of AI capabilities in advertising or product labeling are legally dangerous to make.
Fine companies for every verifiably inaccurate result given to a customer or end user by an LLM.
I'd want all of these, and some way to prevent companies from laying off so many people and replacing them with AI - maybe some government-based incentives for having actual employees.
I want the companies that run LLMs to be forced to pay for the copyrighted training data they stole to train their auto complete bots.
I want us to keep chipping away at actually creating REAL ARTIFICIAL INTELLIGENCE that can reason, understand itself, and function autonomously, like living things. Marketing teams are calling everything AI, but none of it is actually intelligent; it's just okay at sounding intelligent.
I want people to stop gaslighting themselves into thinking this autocomplete web-searching bot is comparable to a human in any way. The difference between ChatGPT and Google's search-aggregation ML algorithm is the LLM on top that makes it sound like a person. But it only sounds like a person; it's nowhere close. Yet we have people falling in love with and worshipping chatbots like gods.
Also the insane energy consumption makes it totally unsustainable.
TL;DR- AI needs to be actually intelligent, not marketing teams gaslighting us. People need to be taught that these things are nowhere close to human and won't be for a very long time despite it parroting human speech. And they are rapidly destroying the planet.
I really don't think creating real artificial intelligence is a good idea. I mean, that's peak "don't invent the Torment Nexus."
Are you going to give it equal rights? How is voting going to work when the AI can create an arbitrary number of itself and vote as a bloc?
Creating an intelligent being to be your slave is fucked up, too.
Just... We don't need that right now. We have other more pressing problems with fewer ethical land mines
I want LLMs to be able to determine their source works during the query process and pay the source copyright owners some amount. That way, if you generate a Miss Piggy image, it pays the Henson Workshop some fraction of a penny. Eventually it would add up.
I think many comments have already nailed it.
I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.
I also think it's great hearing about how the tech is improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.
That stealing copyrighted works would be as illegal for these companies as it is for normal people. Sick and tired of seeing them get away with it.