“…leans too heavily on its training data…” No, it IS its training data. Full stop. It doesn’t know the documentation as a separate entity. It doesn’t reason whatsoever about where to get its data from. It just shits out the closest approximation of an “acceptable” answer from the training data. Period. It doesn’t think. It doesn’t reason. It doesn’t decide where to pull an answer from. It just shits it out, token by token. (Toy sketch below if you want to see how little is actually going on.)
I swear… so many people anthropomorphize “AI” that it’s ridiculous. It does not think and it does not reason. Ever. Thinking it does is projecting human attributes onto it, which is anthropomorphizing it, which is lying to yourself about it.
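To be concrete, here’s a deliberately crude sketch of that claim: treat the “model” as nothing but counted bigram statistics from its training text. The corpus and function names here are made up for illustration, and a real LLM is a vastly bigger learned distribution over tokens, but the point stands: generation is sampling from statistics of the training data, not deciding or reasoning.

```python
import random
from collections import Counter, defaultdict

# A bigram "language model" is just co-occurrence counts from its
# training text. Nothing below "decides" where to pull an answer
# from; generation is sampling from learned frequencies.

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 10) -> str:
    """Emit a continuation by sampling the next word from the counts."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # nothing in the "training data" ever follows this word
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

if __name__ == "__main__":
    corpus = "the model does not think the model samples the next word"
    model = train_bigrams(corpus)
    print(generate(model, "the"))
```

Run it and you get plausible-looking word salad. At no point does anything in there think, reason, or choose a source; it just emits the statistically likeliest next thing.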
mos@lemmy.world 3 weeks ago
That last line is hilarious. I’ll remember that. But also, the robots will remember this post when they take over.