Yep. They are allowed to use your photos to “improve the service,” and AI training would totally qualify under that, legally speaking. No notice to you required if they rip your entire album of family photos so an AI model can get 0.00000000001% better at generating fake family photos.
Prove_your_argument@piefed.social 1 day ago
Amazon Photos syncing, if I had to guess. It was marketed as free unlimited backup for Amazon Prime users.
ImgurRefugee114@reddthat.com 1 day ago
Unlikely IMO. Maybe some… But if they scraped blogs and social media sites like Facebook or Twitter, they would end up with dumptrucks full. Ask anyone who has to deal with UGC: it pollutes every corner of the net and it’s damn near everywhere. The proliferation of local models capable of generating photorealistic material has only made the situation worse. It was rare to uncover actionable cases before, but the signal-to-noise ratio is garbage now.
ZoteTheMighty@lemmy.zip 1 day ago
But if they’re uniquely good at producing CSAM, odds are it’s due to a proprietary dataset.
ImgurRefugee114@reddthat.com 22 hours ago
This is why I use the word ‘proliferation,’ in the nuclear sense. Since the days of SD1, these illegal capabilities have become more and more prevalent in the local image model space. The advent of model merging, mixing, and retraining/finetuning has caused a significant increase in the proportion of model releases that are contaminated.
What you’re saying is ultimately true, but it was more true in the early days. Animated, drawn, and CGI content has always been a problem, but photorealistic capability was very limited and rare, often coming from homebrewed proprietary finetunes published on shady forums. Since then, such capabilities have become much more widespread. It’s estimated that roughly a quarter to a third of photorealistic SDXL-based models released on civit.ai during 2025 have some degree of that capability.
Just as LLM benchmark test answers have contaminated open-source models, illegal capabilities gained from illegal datasets have also contaminated image models, to the point where plenty of well-intentioned authors are unknowingly contributing to the problem. Some authors go out of their way to poison their models against this (usually with false-association training on specific keywords), but few bother, or even know, to do so.
ColeSloth@discuss.tchncs.de 23 hours ago
They wouldn’t bother trying to hide that the images were pulled from those public services.
They 100% know that if they revealed they used everyone’s private photos backed up to the Amazon cloud as fodder for their AI, it would piss people off and they’d lose some business out of the deal.
ImgurRefugee114@reddthat.com 22 hours ago
Well, another factor is provenance: they don’t keep track of exactly where they got their data from. Sometimes at the dataset level, but almost never for an individual sample. “We found CSAM somewhere on maybe Reddit or Imgur or Pinterest” is practically worthless.
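For illustration, sample-level provenance would only need to be something like the record below. This is a hypothetical sketch; the field names are mine, not any actual scraping pipeline’s schema.

```python
# Hypothetical sketch of what per-sample provenance could look like if it were
# actually recorded at scrape time; field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SampleProvenance:
    sample_id: str        # content hash of the image
    source_url: str       # exact URL the crawler fetched it from
    dataset_name: str     # which crawl/batch it landed in
    fetched_at: datetime  # when it was scraped
    license_note: str     # whatever licence/ToS applied at fetch time


# Example record (all values are placeholders)
record = SampleProvenance(
    sample_id="sha256:ab12...",
    source_url="https://example.com/some/post",
    dataset_name="crawl-2023-06",
    fetched_at=datetime(2023, 6, 1, tzinfo=timezone.utc),
    license_note="unknown",
)
print(record)
```

The point is that keeping even this much per image is cheap, but if it was never recorded when the data was scraped, it can’t be reconstructed afterwards.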
captainlezbian@lemmy.world 21 hours ago
Yeah, my bet is Facebook and maybe some less reputable sites. Surely they didn’t scrape 8chan, right?
phx@lemmy.world 1 hour ago
Yeah, a lot of people seem to think that these companies built these AIs by buying or building some sort of special training set/data, when in reality no such thing really existed.
They’ve basically just scraped every bit of data they can. When it comes to big corps, at least some of that data likely comes from scraping their own customers’ data. There’s also scraping of the Internet in general, including sites such as Reddit (which is a big reason they locked down their API: they wanted to sell that data), but many have also been caught with a ton of ‘pirated’ data from torrents etc.
I’m sure there was a certain amount of sludge in customers’ synced files and on sites like Reddit, but I’d also hazard a guess that the stuff grabbed from torrents etc. likely included some truly heinous material that simply got added to what was being force-fed to the AI, especially the early models.