Comment on AI-Generated Fake War Images Passed Off as Real

kuberoot@discuss.tchncs.de ⁨1⁩ ⁨week⁩ ago

I don’t think this is a realistic proposal - this is a technological advancement. You might be able to force companies to embed invisible steganographic signatures in their services’ output, or maybe provide some method for hashing the output so an image can be checked against what they generated…
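As a toy illustration of what such an invisible signature might look like, here is a minimal least-significant-bit watermark over raw pixel bytes. This is purely a sketch with made-up function names, not how any real service does it; production schemes are far more robust, and this one demonstrates the weakness discussed below, since simply clearing the low bits erases the mark.

```python
def embed_signature(pixels: bytes, signature: bytes) -> bytes:
    """Hide `signature` in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in signature for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the signature")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        # Overwriting only the LSB changes each channel value by at most 1,
        # which is visually imperceptible.
        out[idx] = (out[idx] & 0xFE) | bit
    return bytes(out)

def extract_signature(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

A trivially naive scheme like this survives nothing - re-encoding, resizing, or just zeroing the low bits destroys it - which is part of why watermark mandates are hard to make stick.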

But what’s stopping them from using the underlying model on the side, off the books? They could sell or leak the model to external entities. If those copies generate outputs without any watermarks, detection systems won’t flag them - potentially lending even more legitimacy to those fakes.

And, ultimately, nothing’s stopping independent organizations from developing their own models capable of generating such fakes. What good is limiting the big companies if the technology needed to generate these images is already known, and may be easily reproducible by anybody sooner rather than later?

That said, individual instances of such illegal/immoral services should be dealt with - it’s horrible, but I believe they are inevitable. Pandora’s box was opened when the technology was created; this was going to happen sooner or later, and we have to deal with the results.
