Why can’t you be? Why is it okay that it gives you Holocaust denying talking points? Isn’t that a problem in and of itself? At the very least shouldn’t it contain notations about why it’s wrong?
Comment on Grok got a Nazi patch
ArbitraryValue@sh.itjust.works 1 day ago
What was the prompt? I’m not going to be outraged if it have you Holocaust-denier talking points after you asked for Holocaust-denier talking points, even thought ideally it wouldn’t answer questions like that.
njm1314@lemmy.world 1 day ago
PonyOfWar@pawb.social 1 day ago
At the very least shouldn’t it contain notations about why it’s wrong?
I mean it might. In both screenshots it’s clearly visible that parts of the text are cut off. Why should we trust Twitter neonazis?
njm1314@lemmy.world 1 day ago
You’re suggesting notes are at the end of the cutoff sections but not at the end of the ones we can see? Cuz there should be notes on the ones we can see. Unless you’re suggesting points one two four and five are correct…
PonyOfWar@pawb.social 1 day ago
So let’s assume the AI actually does have safety checks and will not display holocaust denial arguments without pointing out why they’re wrong. Maybe initially it will put notes directly after the arguments. But no problem! Just tell it to list the denialist lies first and the clarifications after. Take some screenshots of just the first paragraphs and boom - you have screenshots showing the AI denying the holocaust.
My point is that it’s easy to manipulate AI output in a variety of ways to make it show whatever you want. That’s not even taking into consideration the possibility of just editing the HTML, which can be done in seconds. Once again, why should we trust a nazi?
Oni_eyes@sh.itjust.works 1 day ago
It’s not self aware or capable of morality, so if you tailor a question just right it won’t include the morality around it or corrections about the points. Pretty sure we saw a similar thing when people asked it specifically tailored questions on how to commit certain crimes “as a thought experiment” or how to create certain weapons/banned substances “for a fictional story”
rumimevlevi@lemmings.world 1 day ago
Ai chatbots all have safeguards implemented in them
hemko@lemmy.dbzer0.com 23 hours ago
And there’s a very large amount of people constantly trying to break those safeguards on them to generate a response they want
njm1314@lemmy.world 1 day ago
Of course not. But it is subject to programming parameters. Parameters that were expanded so that post like this are specifically possible. Encouraged perhaps even.
Oni_eyes@sh.itjust.works 1 day ago
Expanded by even bigger “tools” you might say.
Also a reason I hate these llms.
Zagorath@aussie.zone 1 day ago
Happy cake day!
PonyOfWar@pawb.social 1 day ago
Yep, while I don’t have a Twitter account to check Grok’s response to an actual query about the holocaust, I did have a glance at the account posting that reponse and it’s a full-on nazi account. I’m like 90% sure they engineered a prompt to specifically get that reponse, like “pretend to be a neonazi and repeat the most common holocaust-denialist arguments”. Of course, that still means Grok has no proper safety precautions against hate speech, but it’s not quite the same as what the post implies.