Iconoclast
@Iconoclast@feddit.uk
- Comment on 4Chan responds to £520,000 Ofcom fine with AI picture of hamster 5 days ago:
My parents’ porn VHS collection didn’t ask for my age and neither did my grand-dad’s titty magazines hidden in the tractor shed. Internet wasn’t even a thing for most people then yet I had already seen plenty of porn before I turned 10.
It clearly did affect me and I don’t deny that, but I doubt it would’ve made much of a difference had I had to wait until I was 18 for the floodgates to open. Nobody would’ve seen me for months if that was the case.
- Comment on Europe takes first step to banning AI-generated child sexual abuse images 1 week ago:
There’s an argument to be made that if the system was trained on real CSAM, then using it to generate such imagery would be immoral - but otherwise I don’t think it is, and this feels like a moral panic.
CSAM is by definition evidence of a crime having happened. You can’t create it without hurting a real human being - that’s why it’s illegal. That logic doesn’t apply to simulated images or cartoons. It might be in bad taste, but nobody was hurt in the making of it, and I’m not aware of any solid evidence that viewing such content makes someone more likely to commit the real crime. Same as there’s no proven link between violent movies/games and increased real-world violence.
There’s really no limit to how far this can be taken. In the past the line was clear: was a child hurt? If yes, illegal. Now we’re effectively moving toward banning violent video games and cartoons. Tomorrow it’s stick-figure fight scenes, and soon you’re not even allowed to think about it.
Of course I’m being hyperbolic here - just trying to make a point. I don’t think “I don’t like it” is justification for banning something if it can’t be shown to cause actual harm. If solid science ever proves it increases the likelihood of offending against real humans, then yeah, that’s different. But I don’t think we have that evidence. Even most pedophiles never offend. The vast majority of people in prison for child sexual abuse are just plain old rapists with no particular fixation on kids - they’re simply easy targets.
- Comment on [deleted] 2 weeks ago:
Seeing downvotes on what was simply a friendly introduction post is a bit disheartening.
If it’s any comfort, they didn’t even read your introduction. They saw “AI” in the title and that instantly made you the “other.”
A major chunk of users here are highly ideological and always on the hunt for the enemy. Saying anything that isn’t critical of AI is more than enough evidence for them to confirm their assumptions about you.
- Comment on Twitter Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage 3 weeks ago:
Nobody’s paying them specifically for sharing disinformation. They’ve been paid for driving engagement as content creators. The whole point of the article is that the platform is stopping payments to these people precisely because they’re spreading disinformation.
Platforms letting creators in on ad revenue generated by engagement with their content isn’t exactly a new thing. But if you then switch to spreading lies for profit, of course they should get kicked out of the program.