As I said in my other comment, the model does not have to be trained on CSAM to create images like this.
Comment on US man used AI to generate 13,000 child sexual abuse pictures, FBI alleges
sxt@lemmy.world 5 months ago
If the model was trained on CSAM then it is dependent on abuse
Daxtron2@startrek.website 5 months ago
Jimmyeatsausage@lemmy.world 5 months ago
That's irrelevant; any realistic depiction of children engaged in sexual activity meets the legal definition of CSAM. Even applying filters to images of consenting adults could qualify as CSAM if the intent was to make the actors appear underage.
ASeriesOfPoorChoices@lemmy.world 5 months ago
doesn’t even have to be that realistic.
Darrell_Winfield@lemmy.world 5 months ago
That’s a heck of a slippery slope I just fell down.
If AI-generated responses can be held criminally liable for crimes in their training data, then we could all be held liable for any text GPT produces, since it's trained on Reddit data and has likely ingested multiple instances of brigading, swatting, manhunts, etc.
laughterlaughter@lemmy.world 5 months ago
You just summarized the ongoing ethical concerns that experts and common folk alike have been debating over the past few years.