Comment on How Transformers Think: The Information Flow That Makes Language Models Work

A_A@lemmy.world 2 weeks ago

[feed-forward sublayers] … these layers are the mechanism used to gradually learn a general, increasingly abstract understanding of the entire text being processed.

In my opinion, this is the part that people who hate LLMs (large language models) choose to ignore.
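For context, the feed-forward sublayer the article describes is applied independently at each token position: it expands the representation, applies a nonlinearity, and projects back, with a residual connection. A minimal NumPy sketch (toy dimensions and random weights are illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 8, 32, 5   # toy sizes chosen for illustration

# illustrative random weights; real models learn these during training
W1 = rng.standard_normal((d_model, d_ff)) * 0.1
b1 = np.zeros(d_ff)
W2 = rng.standard_normal((d_ff, d_model)) * 0.1
b2 = np.zeros(d_model)

def ffn_sublayer(x):
    """Position-wise feed-forward: expand, ReLU, project back, add residual."""
    h = np.maximum(0.0, x @ W1 + b1)   # each token is transformed independently
    return x + (h @ W2 + b2)           # residual connection preserves the input

x = rng.standard_normal((seq_len, d_model))
out = ffn_sublayer(x)
print(out.shape)  # same shape as the input
```

Because the same weights are applied at every position, the sublayer refines each token's representation in place; the mixing between positions happens in the attention sublayers, not here.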

source