Oh fuck. Then it gets even worse (and funnier). Because even if that had been a human contributor, Shambaugh acted 100% correctly, which defeats the core lie output by the bot.
If you’ve got a serious collaborative project, you don’t want to enable the participation of people who act on assumptions. Those people ruin everything they touch with their “but I thought that…”, unless you actively fix their mistakes, i.e. more work for you.
And yet once you construe that bloody bot’s outputs as if they were human actions, that’s exactly what you get: a human who assumes. A dead weight and a burden.
It remains an open question whether it was set up to do that or, more probably, did it on its own because the Markov chain came up with the wrong token.
A lot of people would disagree with me here, but IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those actions were their own, regardless of their “intentions”.
ulu_mulu@lemmy.zip 5 days ago
This sounds like all those people in online videogames crying that they’ve been banned for nothing lmao.
leftzero@lemmy.dbzer0.com 4 days ago
Probably a lot of that in the data the model was trained on.
Garbage in, garbage out, as they say, especially when the machine is a rather inefficient garbage compactor.