this post was submitted on 06 Sep 2024
1722 points (90.2% liked)

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
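
To make the "vector space" bit concrete, here's a rough sketch (my own illustration, not something from the episode linked below) using the sentence-transformers library and its small all-MiniLM-L6-v2 model. This isn't how a large generative model is trained, but it shows the basic move: text goes in, a fixed-length list of numbers comes out, and only the numbers are kept.

```python
# Rough sketch of the "vector space" idea using the sentence-transformers
# library; the model name is just one small, freely available choice.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A weathered folk singer rasps about hard times and open roads.",
    "A raspy voice drawls verses about dust, trains, and protest.",
    "Quarterly earnings beat analyst expectations this week.",
]

# encode() maps each sentence to a fixed-length vector; the result stores
# only numbers, not the original text.
vectors = model.encode(sentences)
print(vectors.shape)  # (3, 384) for this model

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stylistically similar sentences land closer together in that space.
print(cosine(vectors[0], vectors[1]))  # relatively high
print(cosine(vectors[0], vectors[2]))  # relatively low
```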

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

43 comments
[–] [email protected] 14 points 8 months ago* (last edited 8 months ago) (2 children)

Honestly, if this somehow results in regulators being like "fuck it, piracy is legal now," it won't negatively impact me in any way.

Corporations have abused copyright law for decades; they've ruined the internet, they've ruined media, they've ruined video games. I want them to lose more than anything else.

The shitty and likely outcome is they'll be like "fuck it, corporate piracy is legal, but individuals doing it is still a crime."

load more comments (2 replies)
[–] [email protected] 9 points 8 months ago* (last edited 8 months ago)

This take is correct, although I would make one addition. It's true that copyright violation doesn't happen when copyrighted material is ingested or when models are trained. And while the outputs of these models are not necessarily copyright violations, it is possible for them to violate copyright. The same standards for violation that apply to humans should apply to these models.

I entirely reject the claim that there should be one standard for humans and another for these models. Every time this debate pops up, people claim some special distinction based on "intelligence" or "consciousness" or "understanding" or "awareness". That's a meaningless argument, because we have no clear understanding of what those things are. I'm not claiming anything about the nature of these models. I'm just pointing out that people love to apply an undefined standard to them.

We should apply the same copyright standards to people, models, corporations, and old-school algorithms.

[–] [email protected] 95 points 8 months ago* (last edited 8 months ago) (20 children)

The whole point of copyright in the first place is to encourage creative expression, so we can have human culture and shit.

The idea of a "teensy" exception so that we can "advance" into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made "unnecessary" by the unstoppable progress of "thinking" machines, would be hilarious, if it weren't depressing as fuck.

load more comments (20 replies)
[–] [email protected] 19 points 8 months ago (12 children)

While I agree that using copyrighted material to train your model is not theft, the text that model produces can very much be plagiarism, and OpenAI should be on the hook when it occurs.

load more comments (12 replies)
[–] [email protected] 21 points 8 months ago (4 children)

I thought the larger point was that they're using plenty of sources that do not lie in the public domain. Like, if I download a textbook to read for a class instead of buying it, I could be prosecuted for stealing. And they've downloaded and read millions of books without paying for them.

load more comments (4 replies)
[–] [email protected] 166 points 8 months ago* (last edited 8 months ago) (9 children)

Here's an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what it gives back, and paste it into a search engine. The results may surprise you.

And stop comparing AI to humans while giving AI models more freedom. If I wrote a paper, I'd need to cite my sources. Where the fuck are your sources, ChatGPT? Oh right, we're not allowed to see that, but you can take whatever you want from us. Sounds fair.

[–] [email protected] 16 points 8 months ago (1 children)

Can you just give us the TL;DR?

load more comments (1 replies)
load more comments (7 replies)
[–] [email protected] 8 points 8 months ago (1 children)

The ingredient thing is a bit amusing, because that's basically how one of the major fast food chains got to be so big (I can't remember which one it was ATM though; just that it wasn't McDonald's). They cut out the middle-man and just bought their own farm to start growing the vegetables and later on expanded to raising the animals used for the meat as well.

load more comments (1 replies)
[–] [email protected] 2 points 8 months ago

I hear you about the cheese bro.

[–] [email protected] 52 points 8 months ago (1 children)

Bullshit. AIs are not human. We shouldn't treat them as such. AIs are not creative. They just regurgitate what they're trained on. We call what they do "learning," but that doesn't mean we should elevate it to be legally equal to human learning.

It's this same kind of twisted logic that makes people think Corporations are People.

[–] [email protected] 68 points 8 months ago

You drank the kool-aid.

[–] [email protected] 60 points 8 months ago* (last edited 8 months ago)

"but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!"

[–] [email protected] 224 points 8 months ago (2 children)

If they can base their business on stealing, then we can steal their AI services, right?

[–] [email protected] 7 points 8 months ago (8 children)

How do you feel about Meta and Microsoft who do the same thing but publish their models open source for anyone to use?

[–] [email protected] 25 points 8 months ago (1 children)

Well, how long do you think that's going to last? They are for-profit companies, after all.

load more comments (1 replies)
load more comments (7 replies)
[–] [email protected] 172 points 8 months ago (3 children)

Pirating isn’t stealing but yes the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.

[–] [email protected] 10 points 8 months ago (2 children)

Unlike regular piracy, accessing "their" product hosted on their servers using their power and compute is pretty clearly theft. Morally correct theft that I wholeheartedly support, but theft nonetheless.

load more comments (2 replies)
[–] [email protected] 4 points 8 months ago

Yes, that's exactly the point. It should belong to humanity, which means anyone can use it to improve themselves, or to create something nice for themselves or others. That's exactly what AI companies are doing. And because it isn't stealing, it's all still there for anyone else. Unless, of course, the copyrightists get their way.

[–] [email protected] 34 points 8 months ago (1 children)

Also, ingredients to a recipe aren't covered under copyright law.

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago) (4 children)

Ingredients to a recipe may well be subject to copyright, which is why food writers make sure their recipes are "unique" in some small way, enough to avoid accusations of direct plagiarism.

E: removed unnecessary snark

load more comments (4 replies)
[–] [email protected] 25 points 8 months ago (3 children)

Are the models that OpenAI creates open source? I don't know enough about LLMs, but if ChatGPT wants exemptions from the law, it should result in a public good (emphasis on public).

[–] [email protected] 6 points 8 months ago

OpenAI does not publish their models openly. Other companies like Microsoft and Meta do.

[–] [email protected] 49 points 8 months ago (1 children)

Nothing about OpenAI is open-source. The name is a misdirection.

If you use my IP without my permission and profit it from it, then that is IP theft, whether or not you republish a plagiarized version.

[–] [email protected] 8 points 8 months ago (8 children)

The STT (speech-to-text) model they created, Whisper, is open source, as are a few others:

https://github.com/openai/whisper

https://github.com/orgs/openai/repositories?type=all
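
If anyone wants to try it, a minimal local transcription looks something like this (assuming `pip install -U openai-whisper` plus ffmpeg on your PATH; "audio.mp3" is just a placeholder file name):

```python
# Minimal local transcription with the open-source Whisper model
# from the repo linked above; "audio.mp3" is a placeholder path.
import whisper

model = whisper.load_model("base")      # downloads the small "base" checkpoint
result = model.transcribe("audio.mp3")  # speech-to-text, runs entirely locally
print(result["text"])
```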

load more comments (8 replies)