spankmonkey@lemmy.world 4 days ago
By design, LLMs can get faster but cannot get more accurate without a massive, intentional effort to verify their datasets, which isn’t feasible: filtering out anything that isn’t fact-based requires understanding context, and LLMs don’t understand context. Basically, the training approach means they get filled with whatever the builders can get their hands on, and then they fall back on web searches, which return all kinds of unreliable stuff because LLMs have no way of verifying reliability.
Even if they were perfect, they would not be able to keep up with the flood of new information that comes out every minute when used as general-purpose answer-anything tools.
What AI actually excels at is pattern matching in controlled settings.
slate@sh.itjust.works 4 days ago
And now, lots of web searches return results of AI SEO slop chock full of incorrect information, which then fuels subsequent training sets and LLM web searches, creating a self-reinforcing feedback loop that could destroy the internet.
spankmonkey@lemmy.world 4 days ago
The AI SEO slop is already destroying the internet, although that feedback loop is certainly accelerating it.
Ramblingman@lemmy.world 3 days ago
Apparently GPT-5 is much worse, or so the subreddit dedicated to it says. I wonder if that loop has already started?