Large language models (LLMs) are more likely to criminalise users who speak African American English, according to a new Cornell University study.
The standard method of refining LLM behaviour, training on human feedback, does not counter covert racial bias, the study showed. Instead, it found that human feedback training can teach language models to “superficially conceal the racism they maintain on a deeper level”.
Wow, AI is speedrunning American conservatism. It took decades to figure out they gotta put a smoke machine in front of the racism.
RobotToaster@mander.xyz 8 months ago
> Train AI on humans
> It acts like humans
CitizenKong@lemmy.world 8 months ago
Even worse, train AI on how humans act on the internet.