Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technologies, and tech policy.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
[email protected]
[email protected]
No, that's not necessarily true at all. You can take a stock Stable Diffusion model and output these images right now, without additional training. The whole point of diffusion models is that you don't need 100% training-data coverage to create new images outside the original dataset. Having learned the concept of a child from any normal image set of children, and the concept of nudity/porn from legal adult images, is more than enough for the model to create a blended concept of the two.
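To make the compositional point concrete, here's a minimal sketch using the Hugging Face `diffusers` library with a deliberately benign prompt. The checkpoint ID and prompt are illustrative assumptions, not anything from this thread; any stock SD 1.x checkpoint behaves the same way.

```python
# Minimal sketch: prompt-level concept composition with a stock
# Stable Diffusion checkpoint (no fine-tuning, no extra training data).
# The model ID and prompt below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained checkpoint exactly as published.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A composed prompt: the training set almost certainly contains few or no
# photos matching this exact scene, yet the model renders one by blending
# two concepts ("astronaut", "riding a horse") it learned separately.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("composed_concepts.png")
```

The point isn't this particular prompt; it's that the model generalizes across concepts it never saw combined, which is exactly the blending behavior described above.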
Those children are the abuse victims.
The children don't exist. The concept of a child is learned from a series of 1s and 0s.
Then it's abuse because the law says so, and how the images are generated is irrelevant.
Believe it or not, US law isn't the moral center of the universe, not to mention its disproportionate use against groups it doesn't like.
This case is in the US.
I've never used an AI generator, but generic generated images wouldn't be an actual match to the dataset images, right? The model would just be generating features it understands to be associated with the concept of a child, which makes the claim that the dataset children are the abuse targets a stretch, unless there's some other direct or indirect harm to them. An immediate exception would be someone writing a prompt to create a facsimile of a specific individual.
That's nice, still illegal.
While true, it also wasn't really what my post was responding to. Thanks though.
The thread is discussing why it's considered abuse if you can't point to a victim. The answer turned out to be "because the law says so."
If you read the law you posted, you'll see it doesn't actually address the question of victimhood. Also, I don't really get why you're still trying to force an unrelated point into this part of the discussion. Maybe find another place in the thread where someone thinks it's legal and go talk to them.
If you don't think you sound like you're saying it's legal or should be legal, I have very bad news for you.
Seeing that it's been explicitly stated that that's not the case, you strike me as a dude who cares less about the meaning of words than about the opportunity to argue over them.