Comment on ChatGPT's 'hallucination' problem hit with another privacy complaint in EU

Technus@lemmy.zip 6 months ago

This is an inherent, likely unfixable issue with LLMs because they simply don’t know right from wrong, or truth from fiction. All they do is output words that are likely to go together.
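
A toy sketch of that idea in Python (the prompts, continuations, and probabilities are all made up for illustration; a real model learns them from training data, but the loop never asks "is this true?"):

```python
import random

# Toy sketch of the core loop: pick the next word by probability.
# The continuations and probabilities below are invented for illustration;
# a real LLM computes them from billions of learned parameters, but nothing
# here (or in the real thing) checks the output against reality.
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Mars": 0.03},
    "The senator was arrested for": {"fraud": 0.4, "bribery": 0.35, "arson": 0.25},
}

def complete(prompt: str) -> str:
    """Sample one continuation, weighted by how 'likely' the words are."""
    options = next_word_probs[prompt]
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print(complete("The capital of France is"))      # usually "Paris"
print(complete("The senator was arrested for"))  # fluent, confident, possibly pure fiction
```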

It’s literally just the Predictive Text game, or the “type <some prompt> and let your keyboard finish the sentence” meme. They’re not the same algorithm (autocorrect is much less sophisticated), but they’re surprisingly similar in how they actually function.
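
For a sense of how simple the underlying job is, here’s a throwaway bigram “predictive text” toy in Python. It has none of an LLM’s sophistication (no long context, no attention, no billions of parameters), but the job description is the same: predict the next word, nothing more.

```python
from collections import Counter, defaultdict

# Toy "keyboard predictive text": count which word tends to follow which,
# then always suggest the most common follower.
corpus = "the cat sat on the mat and the cat ate the fish on the mat".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the most frequent follower, like tapping the middle suggestion."""
    return followers[word].most_common(1)[0][0]

word = "the"
sentence = [word]
for _ in range(6):          # keep tapping the suggestion
    word = suggest(word)
    sentence.append(word)
print(" ".join(sentence))   # grammatical-ish, but it isn't "saying" anything
```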

You can try to control what an LLM outputs by changing the prompt, or by adjusting the model with negative feedback for certain combinations of words or phrases, but you can’t just tell it “don’t make up lies about people” and expect that to work.
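
As a rough illustration of why that kind of steering is so crude, here’s a hand-rolled version of nudging specific words before sampling (in the spirit of a logit bias; the words and scores are invented). You can push down a particular word you already know is bad, but there’s no score anywhere that means “false claims about this person.”

```python
import math
import random

# Sketch of per-word steering: adjust the scores of *specific* words before
# sampling. All numbers here are invented for illustration.
logits = {"fraud": 4.0, "bribery": 2.5, "jaywalking": 1.0}
bias = {"fraud": -100.0}  # suppress one specific word we happen to know is bad

def sample(logits: dict[str, float], bias: dict[str, float]) -> str:
    """Apply the bias, softmax the scores, and sample one word."""
    adjusted = {w: s + bias.get(w, 0.0) for w, s in logits.items()}
    total = sum(math.exp(s) for s in adjusted.values())
    words = list(adjusted)
    probs = [math.exp(adjusted[w]) / total for w in words]
    return random.choices(words, weights=probs, k=1)[0]

print(sample(logits, bias))  # "fraud" is now effectively impossible,
                             # but every other wrong answer is untouched
```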
