Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
I always narrow my eyes when I hear someone talk about “safety” in the context of AI, because they usually just mean that the AI doesn’t engage in enough moral grandstanding when you ask it sketchy or risqué questions. That’s the same level of pearl-clutching that Tipper Gore directed at music in the 80s.
But there are legitimate concerns, like lying about real people and topics, or reproducing training data (especially personal information) too closely with the right kind of prompting. The problem is that I can’t tell which kind of person this is. Are they upset because the AI can recommend marijuana strains… or because it can do something like leak people’s personal information? The article (and the people involved in these efforts) too often lump it all together. See, for example: Anthropic
henfredemars@infosec.pub 7 months ago
I think it’s much simpler: money over safety.
GregorGizeh@lemmy.zip 7 months ago
Surprise?