this post was submitted on 22 May 2024
297 points (97.1% liked)

News

23259 readers
3455 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only; accusing another user of being a bot or paid actor violates this. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.


Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.


Posts whose titles don't match the source won't be removed, but the AutoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you. Just ignore it; we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the AutoMod will leave a message. Please remove your post if the AutoMod is correct. If the post that matches yours is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post was removed in error, provide credible sources.


9. No link shorteners.


The AutoMod will contact you if a link shortener is detected; please delete your post if it is right.


10. Don't copy the entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

founded 1 year ago
[–] [email protected] 6 points 5 months ago* (last edited 5 months ago) (1 children)

Shouldn't the companies who have the CSAM face consequences for possessing it? Seems like a double standard.

The government should be shutting down the source material.

[–] [email protected] 4 points 5 months ago (1 children)

In the eyes of the law, intent matters, as does how it's responded to.
For CSAM, you have to knowingly possess it or have sought to possess it.

The AI companies use a project that indexes everything on the Internet, like Google does, but with its output publicly and freely available.

https://commoncrawl.org/

They use this data via another project, https://laion.ai/ , which uses the data to find images with descriptions attached, do some tricks to validate that the descriptions make sense, and then publish a list of "location of the image, description of the image" pairs.

The AI companies use that list to grab the images and train an AI on them in conjunction with the descriptions.
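To make that pipeline concrete, here's a toy sketch of the last step in Python. The field names and records are made up for illustration; real LAION metadata uses different column names, and the real validation step compared images against their text with a CLIP model rather than this trivial check:

```python
# Toy sketch of building "(image location, description)" training pairs.
# Records and field names are hypothetical, not real LAION data.

records = [
    {"url": "https://example.com/cat.jpg", "caption": "a tabby cat on a sofa"},
    {"url": "https://example.com/logo.png", "caption": ""},  # no usable description
    {"url": "https://example.com/dog.jpg", "caption": "a dog in a park"},
]

def plausible(record):
    """Stand-in for the real validation step (LAION scored image/text
    similarity with CLIP); here we only require a non-empty caption."""
    return bool(record["caption"].strip())

# Keep only pairs where the description passed validation.
training_pairs = [(r["url"], r["caption"]) for r in records if plausible(r)]
print(training_pairs)
```

An AI company would then download the images at those URLs and train on image/caption pairs; the list itself contains no images, which is why problems in the underlying links can go unnoticed.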

So, people at Stanford were doing research on the LAION dataset when they found the instances of CSAM. The LAION project pulled its datasets from availability while things were checked and new safeguards were put in place.
The AI companies also pulled their models (if public) while the images were removed from the dataset and new safeguards were implemented.
Most of the CSAM images in the dataset were already gone by the time the AI companies would have attempted to access them, but some were not.

A very obvious lack of intent to acquire the material (in fact, a lack of awareness that the material was possessed at all), transparency in the response, steps to prevent further distribution, and action to prevent it from happening again both provide a defense against accusations and make anyone interested less likely to want to make those accusations.

On the other hand, the people who generated the images were knowingly doing so, which is a no-no.

[–] [email protected] 1 points 5 months ago (1 children)

They wouldn't be able to generate it had there been none in the training data, so I assume the labelling and verification systems you talk about aren't very good.

[–] [email protected] 1 points 5 months ago (1 children)

That's not accurate. The systems are designed to generate previously unseen concepts or images by combining known concepts.

It's why it can give you an image of a pony using a hangglider, despite never having seen that. It knows what ponies look like, and it knows what hanggliding looks like, so it can find a way to put both into the image. Where it doesn't know, it will make things up from what it does know, often requiring a potentially very detailed user prompt to describe how a horse would fit in a hangglider, or to specify that it shouldn't have a little person sticking out of its back.

[–] [email protected] 0 points 5 months ago (1 children)

I think it would just create adults naked with children's faces unless it actually had CSAM... Which it probably does have.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (1 children)

Again, that's not how it works.

Could you hypothetically describe CSAM without describing an adult with a child's head, or specifying that it's a naked child?
That's what a person trying to generate CSAM would need to do, because the model doesn't have those concepts.
If you just asked it directly then, like the "horse flying a hangglider" from before, you would get what you describe, because it's using the only "naked" it knows.
You would need to specifically ask it to de-emphasize adult characteristics and emphasize child characteristics.

That doesn't mean that it was trained on that content.

For context from the article:

The DOJ alleged that evidence from his laptop showed that Anderegg "used extremely specific and explicit prompts to create these images," including "specific 'negative' prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults."
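For reference, a negative prompt works mechanically by swapping into the "unconditional" half of classifier-free guidance, so each denoising step moves the output toward the positive prompt's prediction and away from the negative prompt's. A stylized sketch, with plain numbers standing in for the model's outputs (the helper name and the values are made up; real systems do this on large tensors at every step):

```python
# Stylized classifier-free guidance step. The "predictions" are just
# numbers standing in for a model's noise estimates; real models
# produce tensors, and these values are invented for illustration.

def guided_prediction(cond, neg, scale=7.5):
    """Move the output toward the positive prompt's prediction and
    away from the negative prompt's prediction."""
    return [n + scale * (c - n) for c, n in zip(cond, neg)]

cond_pred = [0.2, 0.8]   # model's estimate given the positive prompt
neg_pred  = [0.5, 0.1]   # model's estimate given the negative prompt

print(guided_prediction(cond_pred, neg_pred))
```

The larger the guidance scale, the harder the output is pushed away from whatever the negative prompt describes, which is why "specific negative prompts" can steer a model away from adult characteristics without any matching training example.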