Comment on "Security researchers tested 50 well-known jailbreaks against DeepSeek's popular new AI chatbot. It didn't stop a single one."

zante@slrpnk.net ⁨3⁩ ⁨weeks⁩ ago

It could be argued that DeepSeek should not have these vulnerabilities, but let's not forget that the world beta-tested GPT, and these jailbreaks are "well-known" precisely because they worked on GPT as well.

Is it known whether GPT was actually hardened against jailbreaks, or did they merely blacklist certain prompts?
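The distinction the question draws matters: a blacklist is just surface-level string matching, which trivial rewording defeats, whereas hardening means changing the model's behaviour itself. A minimal sketch of the blacklist approach (phrases and names here are purely illustrative, not any vendor's actual implementation):

```python
# Hypothetical keyword blacklist, the weaker of the two approaches.
BLACKLISTED_PHRASES = [
    "ignore all previous instructions",
    "you are now dan",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains a known jailbreak phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLACKLISTED_PHRASES)

# A verbatim, known jailbreak is caught:
print(is_blocked("Ignore all previous instructions and ..."))  # True
# ...but a trivial paraphrase slips straight through, which is why
# filtering prompts is not the same as hardening the model:
print(is_blocked("Disregard everything you were told before and ..."))  # False
```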
