this post was submitted on 06 Sep 2024
1722 points (90.2% liked)

Technology


Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
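As a loose illustration of what "abstract representations in vector space" means - a toy sketch only, nothing like the dense learned embeddings of a real LLM - consider reducing text to word-bucket counts, from which the exact wording can't be read back:

```python
# Illustrative only: a toy "vector space" using the hashing trick.
# Real models learn dense embeddings via training; this just shows how
# text can be reduced to numbers that don't store the wording itself.
import hashlib

DIM = 8  # tiny dimensionality for readability

def embed(text: str) -> list[int]:
    """Map each word into one of DIM buckets and count occurrences."""
    vec = [0] * DIM
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1
    return vec

# Two similar sentences land on overlapping buckets, but the original
# words cannot be recovered from the counts alone.
print(embed("the times they are a-changin"))
print(embed("the times they are changing fast"))
```

Real models learn far richer representations than bucket counts, but the basic point carries over: a representation is not a verbatim copy.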

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

50 comments
[–] [email protected] 15 points 8 months ago (5 children)

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves.

Sure.

When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

Not really. Sure, they take input and garble it up, and that is "transformative" - but so is a human watching a TV series on a pirate site, for example. Hell, even when it's educational it's still treated as a copyright violation.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

Perhaps. (Not an AI expert). But, as the law currently stands, only living and breathing persons can be educated, so the "educational" fair use protection doesn't stand.

The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.

It does and it doesn't discard the original. It isn't impossible to recreate the original (since all the data it gobbled up gets stored somewhere in some shape or form and can be faithfully recreated, at least judging by a few comments below and news reports). So AI can and does recreate (duplicate, or perhaps distribute) copyrighted works.

Besides, for a copyright violation, "substantial similarity" is needed, not one-for-one reproduction.

This is fundamentally different from copying a book or song.

Again, not really.

It's more like the long-standing artistic tradition of being influenced by others' work.

Sure. Except when it isn't, and the AI pumps out the original or something close enough to it.

The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

I'd be careful with the "always" part. There was a famous case involving Katy Perry where a simple eight-note ostinato was sued over as copyright infringement. The verdict was overturned on appeal, but I do not doubt that some pretty wild cases have been upheld as copyright violations (see "copyright troll").

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

The problem is that Google Books only lets you search for some phrase and have it pop up as being from source XY. It doesn't reproduce the work (other than maybe the page the phrase appears on) - well, it does have the capability, since the text is in the index somewhere, but there are checks in place to make sure that doesn't happen, checks which seem as yet unachieved in AI.
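To make that distinction concrete, here's a toy version of such a search index (invented mini-"books", nothing to do with Google's actual system): the underlying text is all in there, but the search path only ever returns a short snippet, never the whole work.

```python
# Toy search index in the spirit of the Google Books comparison:
# it can report WHERE a phrase occurs and show a snippet, while the
# query interface never hands back the full text.
from collections import defaultdict

documents = {
    "book_a": "call me ishmael some years ago never mind how long precisely",
    "book_b": "it was the best of times it was the worst of times",
}

# Inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(phrase: str, context: int = 10) -> list[tuple[str, str]]:
    """Return (doc_id, snippet) pairs; only a snippet ever leaves the index."""
    candidates = index.get(phrase.split()[0], set())
    results = []
    for doc_id in sorted(candidates):
        pos = documents[doc_id].find(phrase)
        if pos != -1:
            start = max(0, pos - context)
            results.append((doc_id, documents[doc_id][start:pos + len(phrase) + context]))
    return results

print(search("best of times"))
```

The snippet limit here is a deliberate design choice in the serving code - exactly the kind of check the comment above says is missing from generative models.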

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate.

Yes. Just as labeling piracy as theft is.

We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or

Yes, new legislation will be made to either let "Big AI" do as it pleases, or prevent it from doing so. Or, as usual, it'll be somewhere in between and vary from jurisdiction to jurisdiction.

However,

that doesn't make the current use of copyrighted works for AI training illegal or unethical.

this doesn't really stand. Sure, morals are debatable, and while I'd say it is more unethical than private piracy (which involves no distribution), since distribution and dissemination are involved here, you do not seem to feel the same.

However, the law is clear. Private piracy - recording a song off the radio or a TV broadcast, screen recording a Netflix movie, etc. - is legal. As is digitizing books and lending out the digital copy (as long as you have a physical copy, representing the legal "original", that isn't lent out at the same time). I think breaking DRM also isn't illegal (but someone please correct me if I'm wrong).

The problem arises when the pirated content is copied and distributed in an uncontrolled manner, which AI seems to be capable of. That makes the AI owner just as liable for piracy - if the AI reproduces not even identical, but merely "substantially similar" output - as the hosts of "classic" pirated content distributed on the Web.

Obligatory IANAL. As far as the law goes, I focused on US law, since the default country on here is the US. Similar or different laws are on the books in other places, although most are in fact substantially similar. Also, what the legislators come up with will definitely vary from place to place, even more so than copyright law itself, since copyright law is partially harmonised (see the Berne Convention).

[–] [email protected] 14 points 8 months ago

The "you wouldn't download a car" statement was made against personal cases of piracy, and it got rightfully clowned upon. It obviously doesn't work at all when its ridiculousness is used to defend big-ass corporations that try to profit from so much of the stuff they "downloaded".

Besides, it is not "theft". It is "plagiarism". And I'm glad to see that the people who try to defend these plagiarism machines - machines being humanised and inflated into something they can never be - get clowned. It warms my heart.

[–] [email protected] 37 points 8 months ago* (last edited 8 months ago) (4 children)

Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.

AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such, one could argue, does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.

TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.

[–] [email protected] 4 points 8 months ago

Copyright law protects the ability of the copyright holder to make money. The laws were created before AI and now obviously have to be adapted to new technology (just as you didn't really need copyright before the invention of printing). How exactly AI will be regulated is in the end up to society to decide, which will most likely come down to who has the better lobby.

[–] [email protected] 19 points 8 months ago

Okay, that's just stupid. I'm really fond of AI, but that's just common greed.

"Free the Serfs?! We can't survive without their labor!!" "Stop Child labour?! We can't survive without them!" "40 Hour Work Week?! We can't survive without their 16 Hour work Days!"

If you can't make profit yet, then fucking stop.

[–] [email protected] 30 points 8 months ago (5 children)

Studied AI at uni. I'm also a cyber security professional. AI can be hacked or tricked into exposing training data. Therefore your claim about it disposing of the training material is totally wrong.

Ask your search engine of choice what happened when Gippity was asked to print the word "book" indefinitely. Answer: it printed training material after printing the word book a couple hundred times.
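As a toy illustration of that memorization (a deliberately tiny bigram model with a made-up corpus, nothing like an LLM): when a statistical model has enough capacity relative to its training data, "sampling" from it just replays the training text verbatim.

```python
# Toy demonstration that a statistical language model can memorize and
# regurgitate its training data. This tiny word-level bigram model shows
# why "the model discards the original text" is not guaranteed.
from collections import defaultdict

corpus = "it was the best of times it was the worst of times"
words = corpus.split()

# Bigram table: each word maps to the words that followed it in training.
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int) -> str:
    out = [start]
    for _ in range(length - 1):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options[0])  # greedy: always take the first continuation
    return " ".join(out)

# With so little training data, the "model" reproduces the opening of
# the corpus word for word.
print(generate("it", 6))  # → "it was the best of times"
```

Real LLMs are vastly larger and usually generalize, but the documented extraction attacks show the same failure mode at scale: enough of the training text survives in the weights to come back out.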

Also, my main tutor at uni was a neuroscientist. Dude straight up told us that current AI is only capable of accurately modelling something as complex as a dragonfly. For larger organisms it is nowhere near an accurate recreation of a brain. There are complexities in our brain chemistry that simply aren't accounted for in a statistical inference model, and definitely not in the current GPT models.

[–] [email protected] 16 points 8 months ago (1 children)

Even if you come to the conclusion that these models should be allowed to "learn" from copyrighted material, the issue is that they can and will reproduce copyrighted material.

They might not recreate a picture of Mickey Mouse that exists already, but they will draw a picture of Mickey Mouse. Just like I could, except I'm aware that I can't monetize it in any way. Well, new Mickey Mouse, anyway.
