My guy, you should stop feeding the troll. I can keep coming up with bullshit indefinitely. The intent of my original facetious reply was to point out how ridiculous it is to react to a clearly ridiculous and unrealistic suggestion as if it were the most seriously considered expression of an actual policy proposal ever. But it turns out some people just can't not take every single thing that is said with the utmost seriousness.
Of course a nuked country will be a nuked country. That's beside the point, moving the goalposts.
No, they can return after the country has been glassed.
Yes, but in reality nobody is going to nuke anybody, and certainly not because a random internet user vents their frustration at the situation with a clearly metaphorical and exaggerated request. Your reply was an overly literal reading of the comment, like replying to "go fuck yourself" with "...you realize that's not possible, right?"
I simply replied to your literal interpretation with a literal interpretation of my own.
Sure you can, move the civilians out first.
I'm not as against these "sad narratives" as you are, but I still think that this one just doesn't make much sense. Photons hit random planets and stuff all of the time, so arguably hitting a living sentient being is one of the coolest things that could happen to a photon.
I thought you were talking about "makaronivelli" before you specified the milk was for drinking.
So at best we don't know whether or not AI CSAM without CSAM training data is possible. "This AI used CSAM training data" is not an answer to that question. It is even less of an answer to the question "Should AI generated CSAM be illegal?" Just like "elephants get killed for their ivory" is not an answer to "should pianos be illegal?"
If your argument is that yes, all AI CSAM should be illegal whether or not the training used real CSAM, then argue that point. Whether or not any specific AI used CSAM to train is an irrelevant non sequitur. A lot of what you're doing now is replying to "pencils should not be illegal just because some people write bad stuff" with the equivalent of "this one guy did some bad stuff before writing it down". That is completely unrelated to the argument being made.
So why are you posting all over this thread about how CSAM was included in the training set if that is in your opinion ultimately irrelevant with regards to the topic of the post and discussion, the morality of using AI to generate CSAM?
I first thought this was a bad idea by PayPal, but you opened my eyes.