Comment on Why it’s a mistake to ask chatbots about their mistakes

givesomefucks@lemmy.world 2 days ago

Why would an AI system provide such confidently incorrect information about its own capabilities or mistakes? The answer lies in understanding what AI models actually are—and what they aren’t.

What’s ironic is this is one of the most human things about AI…

when an object is presented in the right visual field, the patient responds correctly verbally and with his/her right hand. However, when an object is presented in the left visual field the patient verbally states that he/she saw nothing, and identifies the object accurately with the left hand only (Gazzaniga et al., 1962; Gazzaniga, 1967; Sperry, 1968, 1984; Wolman, 2012). This is concordant with the human anatomy; the right hemisphere receives visual input from the left visual field and controls the left hand, and vice versa (Penfield and Boldrey, 1937; Cowey, 1979; Sakata and Taira, 1994). Moreover, the left hemisphere is generally the site of language processing (Ojemann et al., 1989; Cantalupo and Hopkins, 2001; Vigneau et al., 2006). Thus, severing the corpus callosum seems to cause each hemisphere to gain its own consciousness (Sperry, 1984). The left hemisphere is only aware of the right visual half-field and expresses this through its control of the right hand and verbal capacities, while the right hemisphere is only aware of the left visual field, which it expresses through its control of the left hand.

academic.oup.com/brain/article/140/5/…/2951052?lo…

TL;DR:

They severed the connection between the brain's hemispheres, and only the left hemisphere, the one that controls the right side of the body, could speak.

So if you showed the left visual field a text that said "draw a circle," the left hand would draw a circle.

Ask the patient why, and the speaking hemisphere, which never saw the instruction, would invent a reason and 100% believe it's true.

It’s why it seems like people are just doing shit and rationalizing it later…

That’s kind of how we’re wired to work, and why humans can rationalize almost anything.
