Comment on AI-Generated Fake War Images Passed Off as Real
hendrik@palaver.p3x.de 1 week ago
I think there is a fundamental issue with stopping technology: a lot of it is dual-use. You can stab someone with a kitchen knife. Kill someone with an axe. There are legitimate uses for guns... You can use the internet to do evil things. Yet no one wants to cut their steak with a spoon... I think the same applies to AI. It's massively useful to have machine translation at hand, voice recognition, smartphone cameras, and even smart assistants and chatbots. And I certainly hope they'll help with some of the big issues of the 21st century. I don't think you want to outlaw things like that, unless you're the Amish.
dragonfucker@lemmy.nz 1 week ago
AI problems could easily be solved with a carbon tax.
superkret@feddit.org 1 week ago
But what you could do is hold the companies that make AI accountable for its output, same as with any other software.
“We don’t know what it does” shouldn’t be an excuse when your AI distributes misinfo, libel and slander, and you profit from it.
hendrik@palaver.p3x.de 1 week ago
Yes, that'd be my approach, too. They need to be forced to put in digital watermarks so everyone can check whether an article is from ChatGPT or an image is fake. We could do this with regulation and hefty fines. More or less robust watermarks are available, and anything would be better than nothing. OpenAI has even developed a text watermarking solution; they just don't activate it. (https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool)
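Just to make the idea concrete: here's a toy sketch of an invisible text watermark. This is NOT OpenAI's actual scheme (theirs is statistical and unpublished); it simply hides a hypothetical vendor signature in zero-width Unicode characters, which survives copy-paste but is trivially stripped:

```python
# Toy invisible text watermark: encode a bit pattern in zero-width characters.
# The signature value and the scheme itself are illustrative assumptions.
ZW0 = "\u200b"  # zero-width space     -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1
MARK = "1011"   # hypothetical vendor signature

def watermark(text: str) -> str:
    """Append the signature as invisible characters."""
    return text + "".join(ZW1 if b == "1" else ZW0 for b in MARK)

def is_watermarked(text: str) -> bool:
    """Read back any trailing zero-width characters and compare."""
    bits = ""
    for ch in reversed(text):
        if ch == ZW0:
            bits = "0" + bits
        elif ch == ZW1:
            bits = "1" + bits
        else:
            break
    return bits == MARK

stamped = watermark("This article was machine generated.")
print(stamped == "This article was machine generated.")  # False, but looks identical on screen
print(is_watermarked(stamped))                           # True
print(is_watermarked("Plain human text"))                # False
```

The point isn't that this particular trick is robust (it isn't), but that even cheap marking raises the bar for casual misuse, which is the regulation argument here.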
Another pet peeve of mine is these "nude" apps that swap faces or generate nude pictures from someone's photos. There are services out there that happily generate nudes from children's pictures. I filed a report with a European CSAM program after that outcry in Spain where some school kid generated unethical images of their classmates. (Just in case the police don't read the news...) And half a year later, that app was still online. I suppose it still is... I really don't know why we allow things like that.
kuberoot@discuss.tchncs.de 1 week ago
I don’t think this is a realistic proposal - this is a technological advancement. You might be able to force companies to put invisible steganographic signatures in their services’ output, or maybe provide some method for hashing the output so there's a way to determine whether an image was generated by them…
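For what "invisible steganographic signature" means in practice, here's a minimal sketch (illustrative only, not a scheme any company uses): hide a signature's bits in the least significant bit of each pixel value. Real watermarks need to survive compression, resizing, and cropping; this one does not, which is part of the point being made about fragility:

```python
# Minimal LSB steganography sketch. The signature bytes are a made-up tag.
SIGNATURE = b"GEN"  # hypothetical generator tag

def embed(pixels: list[int], sig: bytes = SIGNATURE) -> list[int]:
    """Write the signature's bits into the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in sig for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB only
    return out

def extract(pixels: list[int], length: int = len(SIGNATURE)) -> bytes:
    """Read the LSBs back and reassemble the signature bytes."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

image = [200, 13, 77, 52] * 10              # stand-in for raw pixel data
marked = embed(image)
print(extract(marked) == SIGNATURE)          # True
print(max(abs(a - b) for a, b in zip(image, marked)))  # at most 1: visually identical
```

Each pixel changes by at most 1 out of 255, so the mark is invisible - but anyone re-encoding the image as JPEG destroys it, which illustrates why enforcement-by-watermark only catches the careless.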
But what’s stopping them from using the underlying model on the side, off the books? They could sell or leak the model to external entities. If they just generate outputs without any watermarks, those detection systems won’t flag them, potentially lending even more legitimacy to those fakes.
And ultimately, nothing’s stopping independent organizations from developing their own models capable of generating such fakes. What help is it that big companies are limited, if the technology needed to generate images is already known, and might end up easily reproducible by anybody sooner rather than later?
That said, individual instances of such illegal/immoral services should be dealt with - it’s horrible, but I believe those are inevitable. Pandora’s box has been opened by creating the technology, it was going to happen sooner or later, and we have to deal with the results.
hendrik@palaver.p3x.de 1 week ago
Yeah, I tried to get that across with my phrasing... I'm not saying we need to change the technology. It's out there and it's too late anyway. Plus, it's a tool, and tools can be used for various purposes; that's not the tool's fault. I'm also not arguing to change how kitchen knives, axes, etc. work, despite their potential to do harm...
But: it doesn't need to be 100% watertight, or else we can't do anything. I also don't keep my knife collection on the living room table when a toddler is around, but I don't need to lock it in a vault either... I think we can go 90% of the way, help 90% of people, and that's better than doing nothing because we strive for total perfection... I keep the bleach and knives somewhere kids can't reach. And we could say AI services need to filter images of children (I think they already do) and put invisible watermarks on all AI-generated content. If anyone decides to circumvent that, it's on them. But at least we'd have solved the majority of misuse.
And that's already how we do things. A spam filter isn't 100% accurate, for example, and we use them nonetheless.
(And I'm only arguing about service providers, since that's what the majority of people use. I think those should be forced to do it, but the models themselves should be free. Otherwise, we put a very disruptive technology solely in the hands of a few big companies... And if AI is going to change the world as much as people claim, that's bound to lead us into some sci-fi dystopia where the world revolves around the interests of a few big corporations... We don't want that. So AI tech needs to be shaped by more than just Meta and OpenAI.)