Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
As we speak, Elon Musk has his best engineers working on developing artificial racism
PhilipTheBucket@ponder.cat 1 day ago
I am doubtful that this framework is going to accurately detect anything at all about the usefulness of chatbots in this context, whether about race or anything else.
I don’t think using chatbots for psychology is a good idea, but this study isn’t the right way to investigate it or make that determination.
rumba@lemmy.zip 1 day ago
The problem with using GPT as it currently is: you can ask it the same question 27 times and get 18 different answers, one of them a hallucination.