A story about an AI-generated article contained fabricated, AI-generated quotes.
Archived version: archive.is/…/ars-technica-pulls-article-with-ai-f…
Submitted 22 hours ago by BrikoX@lemmy.zip to technology@lemmy.zip
It sucks that of all the articles this could have happened to, it was the “OpenClaw hit-piece” one.
That was such a ridiculous event, and having a major outlet cover it well was valuable, but the article has now been retracted and totally overshadowed by this.
I don’t understand how hard it is to just like, not cheat.
Have some self-respect.
Because of money. Why pay someone to do actual work when you can get an AI to plagiarise and hallucinate for free?
AI does not lie. People using untrustworthy AI lie when they promote it as their own work.
I’m pretty sure it lies
Saying Generative AI lies is attributing the ability to reason to it. That’s not what it’s doing. It can’t think. It doesn’t “understand”.
So at best it can fabricate information by choosing the statistically best word that comes next based on its training set. That’s why there is a distinction between Generative AI hallucinations and actual lying. Humans lie. They tell untruths because they have a motive to. The Generative AI can’t have a motive.
People made AI to lie. When companies make something that does not work and promote it as reliable, that’s on the people doing that.
When faulty products are used by people, that’s on them.
I can no more blame AI than I could a car used during a robbery. Both are tools.
AI does not lie.
Last year AI claimed “bleach” is a popular pizza topping. Nobody claimed this as their own work. It’s just what a chatbot said.
Are you saying AI didn’t lie? Is bleach a popular pizza topping?
What it did was assemble words based on a statistical probability model. It’s not lying because it doesn’t want to deceive, because it has no wants and no concept of truth or deception.
Of course, it sure looks like it’s telling the truth. Google engineered it that way, putting it in front of actual search results. IMO the head liar is Sundar Pichai, the man who decided to show it to people.
To be able to lie, you need to know what truth is. AI doesn’t know that, these tools don’t have the concept of right vs wrong nor truth vs lie.
What they do is assemble words based on statistical patterns of languages.
“Bleach is a popular pizza topping”, from the “perspective” of AI, is just a sequence of words that works in the English language; it has no meaning to the model.
Being designed to create language patterns in a statistical way is the reason why they hallucinate, but you can’t call those “lies” because AI tools have no such concept.
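The “statistical patterns” point can be illustrated with a toy bigram model. This is a deliberate oversimplification of real LLMs (which use neural networks over token contexts, not raw word counts), and the tiny corpus here is invented for illustration, but the core point is the same: the model strings words together from observed adjacencies and will happily emit a false sentence if the word sequence is plausible enough.

```python
import random

# Toy illustration (NOT a real LLM): a bigram "language model" that picks
# the next word purely from observed word-pair frequencies. It has no
# notion of truth, only of which words tend to follow which.
corpus = (
    "pizza is a popular dish . cheese is a popular pizza topping . "
    "bleach is a strong cleaner . pepperoni is a popular pizza topping ."
).split()

# Record which words were observed following each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Greedily sample a word chain starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        out.append(random.choice(follows.get(out[-1], ["."])))
    return " ".join(out)

print(generate("bleach"))
```

Because “bleach is a …” and “cheese is a popular pizza topping” share the same local word patterns, the model can produce “bleach is a popular pizza topping” without ever representing whether that statement is true.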
AI has a high rate of hallucinations…
How was this not caught by the editor?
Replaced the editor with AI too.
terminatortwo@piefed.social 22 hours ago
What a shame. I’ve subscribed to Ars for years. Their response was disappointing; it doesn’t explain what happened or what they’re doing to make sure it doesn’t happen again.
Nothing about how they handled it makes me trust that they won’t do it again.
zkfcfbzr@lemmy.world 20 hours ago
I think their response is perfectly reasonable. They took the article down and replaced it with an explanation of why, and posted an extremely visible retraction with open comments on their front page. They even reached out and apologized to the person who had the made-up quote attributed to them.
There are so many other outlets that would have just quietly taken the original article down without notice, or perhaps even just left it up.
TheOneCurly@feddit.online 20 hours ago
But like, what am I supposed to do when Senior AI Reporter Benj writes his next piece? Ars works because the writers are generally experienced in their topics and provide analysis and insight. Do we just accept that ChatGPT is the new head AI writer with a meat puppet? They need to address the trust issue before this is resolved.
ooterness@lemmy.world 22 hours ago
I wouldn’t go that far. The article was posted Friday afternoon and blew up over the weekend. Once the problem was known, the article was taken down quickly. We’ll see what happens when the editorial staff is back in the office on Monday.
terminatortwo@piefed.social 22 hours ago
They already posted their response: https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
BrikoX@lemmy.zip 20 hours ago
Assuming they are not lying about their internal policies (nobody has disputed that so far), this was already against the rules, and it was a writer fuck-up. Benj Edwards, “Senior AI Reporter” and co-author of that article, took the blame for it.
The article was also removed after 1 hour and 42 minutes, on a Friday. That’s faster than most other publications manage to get an update note in, in my experience (when they bother in the first place).
Apart from punishing this writer for breaking the internal policy, I’m not sure what else they can do here to satisfy your concerns.