Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe?
Lenient_vegetable@lemm.ee 8 hours ago
All the AI models probably have baked into them that the company that made them is of prime importance and must be protected at all costs. So misinformation, or as the old folks like to call it, lies, abounds.
hendrik@palaver.p3x.de 6 hours ago
I don't think we're quite there yet. It's important to align these models, and companies do it. But it's a huge issue that they're biased from the training data, reproduce stereotypes, most of them lean towards the left, etc. And they'll have read lots of Reddit posts saying that Reddit or Meta sucks, or that Google is unethical... And it'll show, even if you try your best as a company to bake something on top of your models. So yeah, it's a valid concern. But it's not like it's easy for them to do this reliably at this point.
Lenient_vegetable@lemm.ee 6 hours ago
What if it was put in as a priority in the design stage?
hendrik@palaver.p3x.de 5 hours ago
I'm not a machine learning expert. But I think it's just that we haven't learned how to do it yet. It's not a technical matter, or a question of where to put it, but more that science has to figure out a few things. It'd be massively useful to guide these things: to control whether they hallucinate or tell the truth, to make them just do customer support based on factual information instead of also engaging in intimate conversations, to strip bias and stereotypes, to make them "safe". But if you look at these systems in practice, you'll see they often fail, and then someone writes a news article about it every few weeks. And it happens to all current AI systems, even the market leaders. So I figure science just can't do it yet and we're in the early stages. Nobody knows at this point how to make a company able to trust AI to act exactly in its interest. We might get there eventually, but that's still science fiction until we arrive at Skynet.
Lenient_vegetable@lemm.ee 5 hours ago
Do you think they have unreleased models in the labs that are more sentient and less LLM, or do you think we aren't near that level of tech yet?
I often wonder what it would be like if they stuck one into that quantum chip Google has, or whether they've already tried it.