Comment on AI-Generated Fake War Images Passed Off as Real
SubArcticTundra@lemmy.ml 1 week ago
The most terrifying thing is that humanity still hasn’t invented a way to stop the development of harmful technology. It’s a race to the bottom. Some might say that we’ve managed it with nukes, but I can’t imagine the non-proliferation approach that worked for nukes being applied to AI.
hendrik@palaver.p3x.de 1 week ago
I think there is a fundamental issue with stopping technology: a lot of it is dual-use. You can stab someone with a kitchen knife. Kill someone with an axe. There are legitimate uses for guns... You can use the internet to do evil things. Yet no one wants to cut their steak with a spoon... I think the same applies to AI. It's massively useful to have machine translation and voice recognition at hand, smartphone cameras, and even smart assistants and chatbots. And I certainly hope they'll help with some of the big issues of the 21st century. I don't think you want to outlaw things like that, unless you're the Amish.
superkret@feddit.org 1 week ago
But what you could do is hold the companies that make AI accountable for its output, same as with any other software.
“We don’t know what it does” shouldn’t be an excuse when your AI distributes misinfo, libel and slander, and you profit from it.
hendrik@palaver.p3x.de 1 week ago
Yes, that'd be my approach, too. They need to be forced to embed digital watermarks, so everyone can check whether an article came from ChatGPT or whether an image is fake. We could do this with regulation and hefty fines. Watermarks of varying robustness already exist, and anything would be better than nothing. OpenAI even developed a text watermarking solution; they just don't activate it. (https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool)
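For text, the basic idea is statistical: bias generation toward a keyed "green list" of words, then test for that bias later. Here's a minimal toy sketch, loosely in the spirit of the published green-list scheme (Kirchenbauer et al., 2023). It's not OpenAI's actual method (that one isn't public), and all names here are made up for illustration:

```python
import hashlib
import random

def is_green(prev_word: str, word: str, key: str = "demo-key") -> bool:
    """Deterministically put ~half of all words on a 'green list' that
    depends on the previous word and a secret key."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermarked_choice(prev_word: str, candidates: list[str]) -> str:
    """Stand-in for a model's sampling step: prefer green-listed words."""
    green = [w for w in candidates if is_green(prev_word, w)]
    return random.choice(green or candidates)

def green_fraction(text: str) -> float:
    """Detector: watermarked text shows far more than ~50% green words."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(1, len(pairs))

if __name__ == "__main__":
    # Ordinary text hovers near 0.5; text produced with watermarked_choice
    # at every sampling step would score close to 1.0.
    print(f"{green_fraction('the quick brown fox jumps over the lazy dog'):.2f}")
```

The detector only needs the key and the text, which is what would let third parties check an article after the fact.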
Another pet peeve of mine is these "nude" apps that swap faces or generate nude pictures from someone's photos. There are services out there that will happily generate nudes from pictures of children. I filed a report with a European CSAM program after that outcry in Spain, where some school kids generated unethical images of their classmates. (Just in case the police don't read the news...) And half a year later, that app was still online. I suppose it still is... I really don't know why we allow things like that.
kuberoot@discuss.tchncs.de 1 week ago
I don’t think this is a realistic proposal; this is a technological advancement. You might be able to force companies to embed invisible steganographic signatures in their services’ output, or maybe publish hashes of generated images so an image can be checked against a database of known outputs…
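To make that concrete, the simplest form of such a signature is a least-significant-bit mark like the toy sketch below (illustrative only; real provenance schemes such as C2PA metadata or spread-spectrum watermarks are more elaborate). It also shows why these marks are weak evidence: anyone who skips the embedding step, or regenerates the image, produces an unmarked file.

```python
# Toy LSB watermark over raw pixel values (e.g. one grayscale byte per pixel).
# Hypothetical tag and data; real systems would sign and error-correct this.

def embed(pixels: list[int], tag: bytes) -> list[int]:
    """Hide `tag` in the least significant bits of the first len(tag)*8 pixels."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    marked = pixels.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite lowest bit only
    return marked

def extract(pixels: list[int], n_bytes: int) -> bytes:
    """Read the tag back out of the LSBs."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << i for i, bit in enumerate(bits[b * 8 : b * 8 + 8]))
        for b in range(n_bytes)
    )

if __name__ == "__main__":
    image = [128] * 1024               # stand-in for grayscale pixel data
    marked = embed(image, b"GEN-AI")
    print(extract(marked, 6))          # b'GEN-AI'
```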
But what’s stopping them from using the underlying model on the side, off the books? They could sell or leak the model to external entities. If those copies generate outputs without any watermarks, detection systems won’t flag them, which could lend even more legitimacy to the fakes.
And, ultimately, nothing’s stopping independent organizations from developing their own models capable of generating such fakes. What good does limiting the big companies do if the underlying technique is already public knowledge and may be easily reproducible by anybody sooner rather than later?
That said, individual instances of such illegal/immoral services should be dealt with. It’s horrible, but I believe they are inevitable: Pandora’s box was opened when the technology was created, it was going to happen sooner or later, and now we have to deal with the results.
dragonfucker@lemmy.nz 1 week ago
AI problems could easily be solved with a carbon tax.