Comment on Folk are getting dangerously attached to AI that always tells them they're right
DarrinBrunner@lemmy.world 16 hours ago
Damn, we’re so easy to manipulate.
Do you and yours a big favor and stay away from that shit like it’s heroin.
heroin
Not harmful and psychosis-inducing enough.
It’s more like PCP.
Why not a mix of both?
Flattery gets you everywhere… handsome ;)
saltesc@lemmy.world 15 hours ago
I use it, but have established a realistic mindset that it’s always confidently incorrect, and in many cases I’m better off walking away and just doing the thing myself.
In saying that, I’ve also established a mindset that people who actively rely on genAI must be low on intelligence. Not only lacking in knowledge, or the pursuit of knowledge of whatever they’re using it for, but genuinely of a mental calibre unable to discern or recognise its low performance.
nightshade@piefed.social 7 hours ago
Someone here pointed out the flaw in the old “even a broken clock is right twice a day” cliché: if you have to independently check whether it’s correct, then it isn’t giving you any useful information.
saltesc@lemmy.world 3 hours ago
Yes, but only 22 times out of 24 🤣
MirrorGiraffe@piefed.social 10 hours ago
I gave mine rules to always question me and provide critical feedback. It’s quite annoying sometimes, but much better than when it told me I was a genius at just about anything.
teft@piefed.social 5 hours ago
I watched an interview with Hannah Fry a few weeks ago, and she said that’s how she prompts the LLMs she uses.