Sunday, September 7, 2014

Overreaching Metaphors

There is an excellent and fascinating article on metaphors and the brain.  I linked it uncritically on Facebook because I was on mobile and my problems with it take some explanation, but I do have problems with it, specifically the claims about consciousness and AI, which seem to me unsupported.


First, I love the insight that metaphors connect words and concepts that are already connected through our memories of experiences.  That's genius, and my intuition is that it's absolutely true.

It seems to me that how well and reliably this phenomenon shows up in the scans is going to depend on how evocative the phrases are for an individual, and on what exactly they evoke.  The metaphorical "kick the habit" conjured a specific imagined scene for me: I was kicking an unwanted thing, and hard, right leg in full swing like punting a school-yard soccer ball for distance, toes curled up to strike with the ball of my foot.  The idiomatic "kick the bucket" conjured nothing for me; I simply translated "kicking the bucket" = "dying" and moved on.  I have no idea where the saying comes from, so I cannot imagine it in a way that relates to the context.

But it isn't just my ignorance that can make a phrase fall flat.  "Bought the farm" also means to die, and I have an explanation for why: GIs who died were covered by SGLI, and by dying paid off their families' farm-related debts.  But this isn't evocative for me; it's a euphemism that references some bureaucratic after-effects of death.

And some of the difficulty in their data may simply lie in how differently people perceive and experience these phrases.  Consider this part:

Textural metaphors, too, appear to be simulated. That is, the brain processes "She’s had a rough time" by simulating the sensation of touching something rough.

I did not imagine a texture at all, but a car ride down a pocked and rutted dirt road, jostling me to the point of needing to use my muscles to protect my spine from the jolts.  If they had imaged my brain, I might have been firing circuits for contracting my core or something, but I would not have been thinking about my fingers on sandpaper.  I might have been a false negative in their study, if it did not account for this variance in how people connect these phrases to their experiences.

I also think there may be an unmentioned element: our ability to correct malformed information.  Like Google's "Did you mean...", we can read a phrase with typos or other errors and usually recover the intended meaning.  I have more than once found someone's clever meaning only because a literal reading failed and I was forced to go back for a second look to see what the phrase could be rounded to (metaphor!).  This ability to locate (metaphor!) meaning despite "errors" or other unexpected features seems relevant not just to solving CAPTCHAs, but to using and comprehending metaphor.
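To make that "rounding" concrete, here is a minimal Python sketch of the idea, in the spirit of Google's "Did you mean..." (the phrase list, the cutoff, and the function name are all invented for illustration):

    # A minimal sketch of "rounding" a malformed phrase to the nearest
    # known one.  The idiom list and cutoff are made up.
    import difflib

    KNOWN_PHRASES = [
        "kick the bucket",
        "kick the habit",
        "bought the farm",
        "under the weather",
    ]

    def round_to_meaning(phrase, candidates=KNOWN_PHRASES, cutoff=0.6):
        """Return the closest known phrase, or None if nothing is close enough."""
        matches = difflib.get_close_matches(phrase.lower(), candidates,
                                            n=1, cutoff=cutoff)
        return matches[0] if matches else None

    print(round_to_meaning("kiked the buckit"))  # -> "kick the bucket"

Human comprehension obviously does something far richer than string similarity, but the shape of the operation, snapping noisy input to the nearest stored pattern, feels the same.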

Ok, on to the criticism.  This really bothers me:

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: "It kills it." Of Ray Kurzweil’s singularity thesis, he says, "I don’t believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

The article has convinced me that metaphor in language is connected to the interconnectedness of our experiences and memories.  But:

1) It has not shown that use or comprehension of metaphor is necessary for consciousness.
2) It has not shown that cognition is embodied.
3) It seems to assign importance to the body itself, rather than to the richly interconnected memories of information that the body collected and stored during the events these words map to.
4) It seems to be missing that our bodies function as very rich interfaces between the world and our brains, and that there is no reason why computers might not have analogously rich interfaces.

To #1, I will simply point out that there are human beings with very limited comprehension of metaphor; difficulty with figurative language is a well-known feature of autism.  I don't think we have any basis to suggest that autistic people are not conscious.

#2 and #3 are related.  Having a body is certainly relevant to making the connection from the word "affection" to the memories of giving or receiving affection, to the memories of warmth, and back to the word "warmth".  But if we're abstracting to non-human minds, what matters is that a symbol connects to both a literal meaning and a web of related experiences, which then correlate strongly with another category of experiences, represented by yet another symbol.  You don't need a body for this; you need your memories to exist in an interconnected, relational web.
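To sketch what I mean by a relational web without a body, here is a toy Python model; the symbols, the experiences, and the overlap measure are all invented for illustration:

    # A toy model: a symbol maps to a web of experiences, and two symbols
    # become metaphorically linked when their webs overlap.
    from collections import defaultdict

    experiences_for = defaultdict(set)
    experiences_for["affection"] = {"hug_from_mom", "holding_hands", "dog_greeting"}
    experiences_for["warmth"]    = {"hug_from_mom", "fireplace", "sunny_porch"}
    experiences_for["cold"]      = {"january_wind", "icy_pool"}

    def metaphor_strength(symbol_a, symbol_b):
        """Overlap between two symbols' experience webs (Jaccard index)."""
        a, b = experiences_for[symbol_a], experiences_for[symbol_b]
        return len(a & b) / len(a | b) if a | b else 0.0

    print(metaphor_strength("affection", "warmth"))  # > 0, so "a warm person" lands
    print(metaphor_strength("affection", "cold"))    # 0.0, no shared experience

A mind built on something like this could land on "a warm person" without ever having had skin, so long as its memories of affection and warmth genuinely overlap.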

If you have a context of someone responding to a poor-quality comment on a blog with "Your little YouTube comment has contributed much to the conversation.", then there are multiple non-literal leaps to be made.  First, this isn't literally a YouTube comment, so it must be referring to the comment in some way.  What patterns does that bring up?  YouTube comments are often terrible, ranty, poorly punctuated.  Ok, so that's a possibility.  "Contributed much to the conversation" doesn't seem to make sense either: the comment doesn't seem to be appreciated by anyone else, or to have many redeeming qualities, so this must be sarcasm as well as metaphor.  Finally, the diminutive "little" gives us an additional clue that the remark is disparaging the commenter.  We conclude with high probability that we have correctly understood the phrase as sarcasm, with metaphor used to call out a shitty (metaphor!) comment.  You do not need a body to comprehend or experience any of that; you need an internet connection and human language comprehension.  Memories have been invoked, for me of cringing while reading the comments on some YouTube video.  That part is human, but a computer could have some other, computer-like experience stored in association with those comments, and still understand or create this metaphor, or even create metaphors that humans could not comprehend because we lack the relevant experiences.
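Just to show that this chain of checks doesn't require a body, here is a toy Python sketch of it; every rule, word list, and flag is invented, and real language understanding is of course vastly harder:

    # A toy of the inference chain above: a literal reading fails, a
    # pattern lookup suggests a metaphorical target, and incongruent
    # praise plus a diminutive flag sarcasm.  All rules are made up.
    PRAISE_WORDS = {"contributed", "much", "great", "insightful"}
    DIMINUTIVES = {"little", "cute", "precious"}

    def interpret(remark, context_is_youtube=False, comment_was_poor=True):
        words = set(remark.lower().replace(".", "").split())
        cues = []
        if "youtube" in words and not context_is_youtube:
            cues.append("metaphor: maps the comment onto the YouTube-comment pattern")
        if words & PRAISE_WORDS and comment_was_poor:
            cues.append("sarcasm: praise is incongruent with the comment's quality")
        if words & DIMINUTIVES:
            cues.append("disparagement: diminutive signals condescension")
        return cues

    for cue in interpret("Your little YouTube comment has contributed "
                         "much to the conversation."):
        print(cue)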

I would argue that it is the ability to recognize patterns in experiences and to make associations between them that matters.  We do this with a body, but you could do it with a fiber connection to the internet.  You wouldn't experience kicking a soccer ball that way, but you would experience other things that could be turned to metaphor.

I'd tell you a UDP joke, but you might not get it.

For #4, essentially, I wish to argue that computers could have bodies for the relevant purposes.  It seems to me that the human experience depends on a high bandwidth of information coming in all the time.  It comes to us through our bodies, through our five senses: all of the visual information, hearing in stereo with meaningful information derived from varying arrival times, multiple senses of touch all over the body, complex smells and tastes.  We collect huge amounts of information about each moment, and that allows us to notice the patterns and associations, to turn events into richly experienced memories, and to connect them also to our feelings and values.  Our values, I think, are purely in the mind, but our feelings are in a way a feedback loop to the brain, which both causes and detects those chemical signals.  They are an input to the brain that senses the brain itself.  You want to be not-sad.  Bob usually correlates with sadness.  Avoiding Bob probably also avoids feeling sad.  Sure, it's complex, it's sophisticated, but is it fundamentally different from what computer systems could do?  I don't think so.
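To illustrate the Bob example as the kind of loop a machine could run, here is a minimal Python sketch; the names, the sadness scale, and the threshold are all invented:

    # A minimal sketch of the feedback loop: an internal "sadness" signal
    # is just another input, correlations with it accumulate, and the
    # policy avoids strong correlates.
    from collections import defaultdict

    sadness_given = defaultdict(list)  # person -> observed sadness levels

    def record_encounter(person, sadness_level):
        sadness_given[person].append(sadness_level)

    def should_avoid(person, threshold=0.5):
        """Avoid anyone whose presence correlates with above-threshold sadness."""
        levels = sadness_given[person]
        return bool(levels) and sum(levels) / len(levels) > threshold

    record_encounter("Bob", 0.9)
    record_encounter("Bob", 0.7)
    record_encounter("Alice", 0.1)
    print(should_avoid("Bob"))    # True  -> avoiding Bob probably avoids sadness
    print(should_avoid("Alice"))  # False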

Our bodies also allow us to interact with the world.  I work in automation, so simplistic computer systems that detect, respond to, and control events in the world are commonplace to me.  Many people probably read "computer" and think of a PC, but I do not; I imagine a machine with the power to detect and affect the world.  Its interface is comparatively poor, and inflexible.  A complex system might have tens of thousands of bits of I/O, but this doesn't compare to the richness of information that is flooding our minds all the time.  Computer vision systems tend to look for features determined by some purpose, so even though they might transmit a lot of video information, what is really being parsed and used is comparatively poor.  Outputs are similarly limited, usually throwing some switch, varying the power level of some motor, or perhaps moving a 6-axis robot arm.  Our modern systems are probably less complicated than an insect in many ways, and my intuition is that we need to reach the complexity of mammals before consciousness starts to arise.  But perhaps not.  Consciousness could be some multiplication of the power of the mind by the richness of its input.  No one knows how consciousness works, so it is hard to speculate well.

What would the boundary of a computer's body be?  If it controls a relay that turns on a pump that runs a domestic water system, where does its body end?  The relay?  The building?  The pump?  The pin on its processor that commands the relay?  One critical defining feature of our bodies is that we have a wealth of information about them.  A healthy human will have a lot of information about what is happening to a limb without needing to look at it - our bodies are constantly communicating information not just about the world, but about the body itself.  In this sense, nothing in our pump control seems like a body at all.  What if instead, basic systems were covered with sensors for detecting state and damage?

What if the power cable jacket to the pump contained legions of temperature sensors, inductive current detectors, and continuity detectors for locating nicks and cuts?  Now a short would not just be inferred from the data; it could be said to be felt.  "I experienced a cut in my cable a short time ago.  I can no longer feel the current coursing through the parts of my cable past that point.  The temperature of the not-flowing parts of my cable is falling, and the temperature of the flowing parts of my cable is rising.  The current flow is higher, but erratic.  The temperature at the cut is rising particularly quickly.  This connects to other memories where my cable was damaged and shorted to a nearby steel column, and created that awful green arc, the color of which I can now see reflecting off of the walls.  I know what I will see when I turn my vision system to look upon my cable: it is cut at the middle, and I am shorting out.  I must act, and soon."  Such a machine might appreciate humor about similar machines making tragic mistakes, or horror about the unimagined going wrong.  Fundamentally, its experience seems similar in the important ways to a person having their arm sliced open, or at least it could be designed so that this would be true.
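As a rough Python sketch of how that cable-sense might work (the segment model, the readings, and the diagnostic rule are all invented):

    # The cable as a row of sensed segments: a cut shows up as the point
    # where current stops flowing, upstream heats, and downstream cools.
    def diagnose_cut(segments):
        """segments: list of (current_amps, temp_delta_per_min), source first.

        Returns the index of the suspected cut, or None if current flows
        end to end."""
        for i, (current, temp_delta) in enumerate(segments):
            if current == 0.0:
                upstream_heating = i > 0 and segments[i - 1][1] > 0
                if upstream_heating:
                    return i  # no flow here, heating just before: likely a cut
        return None

    cable = [
        (14.0,  0.2),   # near the source: high, erratic current, warming
        (14.0,  1.5),   # just before the cut: arcing, temperature spiking
        (0.0,  -0.4),   # past the cut: no current, cooling
        (0.0,  -0.3),
    ]
    print(diagnose_cut(cable))  # -> 2, the first dead segment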

In this way the cable becomes part of the computer's "body" in a meaningful way.  The pump it feeds does not, unless it too is given a richness of sensory feedback.  So I think computers could have bodies, without necessarily mimicking human bodies as androids do.  But it seems to me that the richness of input is what really matters, and with the existence of the internet, that really does not require a body.  Or, alternatively, the whole internet could be considered its body, but I have some trouble comprehending that framing.