this post was submitted on 08 Aug 2024
219 points (83.7% liked)

Unpopular Opinion

6288 readers
173 users here now

Welcome to the Unpopular Opinion community!


How voting works:

Vote the opposite of the norm.


If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.



Guidelines:

Tag your post, if possible (not required)


  • If your post is a "General" unpopular opinion, start the subject with [GENERAL].
  • If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].


Rules:

1. NO POLITICS


Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.


2. Be civil.


Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Shitposts and memes are allowed but...


Only until they prove to be a problem. They can and will be removed at moderator discretion.


5. No trolling.


This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community for one or more days to touch grass. Repeat offenses will result in a permanent ban.



Instance-wide rules always apply. https://legal.lemmy.world/tos/

founded 1 year ago

I've recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other peoples' works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built-in). The training data is generally much, much larger than the model sizes, so it is generally not possible for the models to reconstruct random specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on probabilities.
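A quick back-of-envelope calculation makes the size argument concrete. All figures below are assumptions for illustration only, not numbers from any specific model:

```python
# Capacity argument sketch: compare model size to training-set size.
# Every figure here is an assumed, illustrative value.
params = 70e9           # assumed parameter count
bytes_per_param = 2     # fp16 weights
model_bytes = params * bytes_per_param        # 140 GB of weights

train_tokens = 15e12    # assumed number of training tokens
bytes_per_token = 4     # rough average for English text
data_bytes = train_tokens * bytes_per_token   # 60 TB of text

print(f"model/data size ratio: {model_bytes / data_bytes:.4f}")
print(f"capacity per training token: {model_bytes / train_tokens:.4f} bytes")
```

Under these assumptions the model has well under one byte of capacity per training token, so it cannot in general store its training set verbatim; heavily repeated passages are the usual exception.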

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on data to reproduce people's "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all this hate for using public data to build a "statistical" model that "learns" general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that's really just a problem with capitalism, and productivity increases are generally considered good.

(page 2) 16 comments
[–] [email protected] 0 points 2 months ago* (last edited 2 months ago) (1 children)

I agree with some other comments that this is a question of public domain vs. copyright. However, even copyright has exceptions, notably fair use in the US.

TL;DR: If I can create art imitating [insert webcomic artist here] based on fair use, or use their work for artistic inspiration, it's legal, but when a machine does it, it's illegal?

One of the chief AI critics, Sarah Andersen, claimed 9 months ago that when an AI generated output for the prompt "Sarah Andersen comic", it clearly imitated her style, and if any AI company is to be believed, later models will only get more accurate, possibly producing a believable comic, text included.

Regardless of how accurately the AI can draw the comics (as long as they aren't effectively identical to a single specific comic of hers), shouldn't this just qualify as fair use? I can imitate SA's style too and make a parody comic, or even just go the lazy way and change some text like alt-right "memers" did (politics and unfunniness aside, I believe the comic should be legal if they replaced "© Sarah Andersen" with "Parody of comic by Sarah Andersen"). As long as the content is distributed as "homage", "parody", "criticism", etc., doesn't directly harm Sarah Andersen's financial interests, and makes it clear that the author is not her, I think there should be no issue even if it features the likeness of trademarked characters, phrases and concepts.

Makes me ashamed there is a book by her in my house (my sister received it as a gift).

[–] [email protected] 17 points 2 months ago (2 children)

It would be nice if the AI industry had one big positive effect by finally reining in overbearing copyright laws.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago) (9 children)

Here’s an analogy that can be used to test this idea.

Let’s say I want to write a book, but I totally suck as an author and have no idea how to write a good one. To get some guidelines and inspiration, I go to the library and read a bunch of books. Then I take those ideas and smash them together to produce a mediocre book that any publisher would refuse. I could also buy those books instead, but the end result would be the same, except that it would cost me a lot more. Either way, this sort of learning and writing procedure is entirely legal, and people have been doing it for ages. Even if my book looks and feels a lot like LOTR, it probably won’t be that easy to sue me unless I copy large parts of it word for word. Blatant plagiarism might result in a lawsuit, but I guess this isn’t what the AI training data debate is all about, now is it?

If I pirated those books, though, that could result in some trouble. But someone would need to read my miserable book, find a suspicious passage, check my personal bookshelf and everything I have ever borrowed, etc. That way, it might be possible to prove that I could not have come up with a specific line of text except by pirating some book. If an AI is trained on pirated data, that’s obviously something worth debating.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

To expand on what you wrote, I’d compare what an LLM does with its training data to me reading a book. From here on out, until I become senile, the book is part of my memory. I may reference it; I may parrot some of the details I remember to a friend. My own conversational style and future works may even be affected by it, perhaps subconsciously.

In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

So I suppose, then, that the question is more one of volume. How many consumed works are too many? At what point do we shift from the realm of research to that of profiteering?

There is a certain subset of people in the AI field who believe that our brains are biological LLMs, and that, if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse for civilization, but I’m not one to get in the way of wonder building.

[–] [email protected] 30 points 2 months ago (2 children)

This is not an opinion. You have made a statement of fact. And you are wrong.

At law, something being publicly available does not mean it is allowed to be used for any purpose. Copyright law still applies. In most countries, making something publicly available does not cause all copyrights to be disclaimed on it. You are still not permitted to, for example, repost it elsewhere without the copyright holder's permission, or, as some courts have ruled, use it to train an AI that then creates derivative works. Derivative works are not permitted without the copyright holder's permission. Courts have ruled that this could mean everything an AI generates is a derivative work of everything in its training data and, therefore, copyright infringement.

[–] [email protected] 17 points 2 months ago (1 children)

They have indeed made a statement of fact. But to the best of my knowledge it's not one that's got any definite controlling precedent in law.

You are still not permitted to, for example, repost it elsewhere without the copyright holder's permission

That's the thing. It's not clear that an LLM does "repost it elsewhere". As the OP said, the model itself is basically just a mathematical construct that can't really be turned back into the original work, which is possibly a sign that it's not a derivative work, but a transformative one, which is much more likely to be given Fair Use protection. Though Fair Use is always a question mark and you never really know if a use is Fair without going to court.

You could be right here. Or OP could. As far as I'm concerned anyone claiming to know either way is talking out of their arse.

[–] [email protected] 24 points 2 months ago (3 children)

Saying that statistical analysis is derivative work is a massive stretch. Generative AI is just a way of representing statistical data. It’s not particularly informative or useful (it may be subject to random noise to create something new, for example), but calling it a derivative work in the same way that fan-fiction is derivative is disingenuous at best.

[–] [email protected] 4 points 2 months ago (1 children)

"Statistiac" of course. And yes I would

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (4 children)

As long as it's licensed under some sort of Creative Commons licence. Copyrighted materials are copyrighted and shouldn't be used without consent; this protects individuals, not only corporations. (Excuse my English)

Edit: Your argument about probability and parameter size is inapplicable, to my mind. The same could be said about JPEG lossy compression.
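The JPEG point can be made concrete with a toy sketch (purely illustrative, not how JPEG actually works): a lossy encoding that cannot reproduce the original bit-for-bit can still yield something recognizably the same work.

```python
# Toy lossy "compression": quantize values into 16-wide buckets.
# Not real JPEG; it just illustrates that discarding information does
# not stop the reconstruction from closely tracking the original.
original = [12, 47, 203, 118, 96, 250, 33, 77]    # pretend pixel values

compressed = [v // 16 for v in original]          # information is discarded here
reconstructed = [q * 16 + 8 for q in compressed]  # decode to bucket midpoints

assert reconstructed != original                  # not a bit-for-bit copy...
assert max(abs(a - b) for a, b in zip(original, reconstructed)) <= 8
# ...yet every value lands within 8 of the original
```

Whether that analogy carries over to model weights is exactly the point being debated above.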

[–] [email protected] 7 points 2 months ago

Creative Commons would not actually help here. Even the most permissive licence, CC BY, requires attribution. If using material as training data requires a copyright licence (which is certainly not a settled question of law), CC would likely work out just the same as all rights reserved.

(There's also CC0, but that's basically public domain, or as near to it as an artist is legally allowed to get in their locale, so it's hardly a Creative Commons licence in the usual sense.)

[–] [email protected] -1 points 2 months ago (2 children)

Could the copywrited material consumed potentially fall under fair use? There are provisions for research purposes.

[–] [email protected] 4 points 2 months ago

Just fyi the term is "copyrighted", not "copywrited". Copyright is about the right to copy, not anything about writing.

[–] [email protected] 58 points 2 months ago (1 children)

Generally the argument isn't public vs. private, it's public domain vs. copyright.

You want to train an LLM using the contents of Project Gutenberg? Great, go for it!

You want to train an LLM using bootlegged epubs stolen from Amazon? Now that's a different deal.

[–] [email protected] 5 points 2 months ago

Sure - they'd at least need to borrow the epubs, just like a human would if they wanted to read them.
