Tuesday, October 11, 2011

It's All in the Rhythm

When I first started blogging here, one of my motivations was to dig into the question of how poetry affects (that is, creates affect in) a reader. More specifically, how is that feeling created when we read or hear a poem and the hair stands up on the backs of our necks, when we feel a tingling along our spines, when we feel, as Emily Dickinson said, as though the tops of our heads have been removed, or, as James Wright wrote, as though we are breaking into blossom?


My reading over the last couple of years, most importantly in Joseph LeDoux’s Synaptic Self, but also in work by Damasio, Hofstadter, Dehaene, and others, has led me to believe that what we are feeling in those moments is the brain learning in a dramatic and intense way. In those moments we are experiencing the brain doing what it always does, what it exists to do—gathering information and storing it away, using data to build schemas of experience that it can refer back to when it needs them—but it is doing so with a particularly rich mixture of inputs (which would make sense if you assume that the most affective poems are those that have the richest language).

LeDoux and others have discovered over the last 50 years or so that we learn (and that “we” come into being, LeDoux argues) via a process called “synaptic plasticity”: the frequency or intensity of a synapse’s firing strengthens that synapse and thus creates memories, whether explicit memories [consciously accessible, such as what I had for breakfast today] or implicit ones [memories that don’t require conscious access, such as my knowledge of how to cut and eat a grapefruit]. This is a highly simplified version of what actually happens, but it sums up the process.
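To make that mechanism concrete for myself, here is a toy sketch in Python. It isn’t LeDoux’s model or anyone’s real data; the learning rate, the firing rate, and the strengthen function are all invented for illustration. The only point is the shape of the idea: repeated or intense firing leaves the synapse stronger than it was, and that increased strength is the memory trace.

# A toy sketch of activity-dependent synaptic strengthening (a Hebbian-style
# cartoon, not a model from the neuroscience literature): the more often or
# more strongly a synapse is driven, the larger its "weight" becomes, and the
# enlarged weight stands in for a stored memory trace.

def strengthen(weight, firing_rate_hz, learning_rate=0.001):
    """Return an updated synaptic weight after a bout of firing."""
    return weight + learning_rate * firing_rate_hz

weight = 0.1                        # initial synaptic strength (arbitrary units)
for repetition in range(100):       # repeated co-activation, e.g. rereading a line of verse
    weight = strengthen(weight, firing_rate_hz=20)   # 20 spikes per second, an invented figure
print(round(weight, 2))             # 2.1: the synapse ends up far stronger than it began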

Usually this synaptic firing doesn’t involve just a single axon and a single dendrite, as one might guess when thinking about the process that creates all of our memory banks and our ability to communicate via language. Almost all firings are multivalent, drawing input from several different locations among groups of neurons. The resulting complexity should be coming into focus here. On top of that, there are several different “systems” of input; each of the senses has its own input system, for example. All of these inputs are pulled together in the frontal lobes and “working memory” before being either imprinted into long-term memory (if that is the fate of the data) or forgotten.
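Here is how I picture that convergence, reduced to a cartoon. Everything in it (the channels, the salience scores, the threshold) is my own invention, not an anatomical claim; the sketch only shows the shape of the process: many input systems feed one short-lived buffer, and each item is either consolidated or forgotten.

# A schematic toy of multiple input "systems" feeding a single working-memory
# buffer, whose contents are either consolidated into long-term storage or
# simply forgotten. The channels, salience scores, and threshold are invented.

working_memory = []     # short-lived buffer where inputs are pulled together
long_term_memory = []   # items that survive consolidation

def attend(channel, content, salience):
    """Register an input from one sensory channel in working memory."""
    working_memory.append({"channel": channel, "content": content, "salience": salience})

def consolidate(threshold=0.5):
    """Keep the sufficiently salient items; everything else is forgotten."""
    global working_memory
    for item in working_memory:
        if item["salience"] >= threshold:
            long_term_memory.append(item)
    working_memory = []  # the buffer empties either way

attend("vision", "the look of the line on the page", 0.8)
attend("audition", "the sound of the line read aloud", 0.7)
attend("touch", "the chair I'm sitting in", 0.2)
consolidate()
print([item["content"] for item in long_term_memory])   # the chair doesn't make the cut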

It seems to me that those moments of poetic intensity are moments when we feel the brain going through this rich, complex process in a very short period of time, which surely involves the release of a great deal of serotonin and/or other neurotransmitters. That chemical burst is probably responsible for the sense of blossoming or head removal, the tingling, which is quite similar to the “aha” moment one feels when a difficult problem has been solved, as though a veil has been lifted. That seems like a plausible theory to me, at least. A particularly intense poem (which is why I like to use very short poems as a test) or an intense moment in a poem (such as an ending, where both a sense of closure and an opening up can evoke such feeling in the reader) is likely to bring energy from multiple linguistic directions in a rhythmic manner: high frequency and intensity of firing. Maybe someone out there is already doing hard research on this idea as it relates to language processing?

This new research suggests that my wondering/wandering here may not be far off the right track. Mehta and Kumar have discovered that there are optimal frequencies of synaptic firing that encourage learning. It’s not that learning is optimized by the highest intensity of firing (cramming for a final an hour or two before the exam), but that there are lower-frequency points that are ideal. You’ll get a clearer sense of this by reading the article. My guess is that some poems (or songs or prose fiction or cinema) that are novel to the reader or listener are able to hit just the right frequency of incoming data, one resonant with the ideal rhythm of the involved neurons, so that this fast learning (or perhaps an emulation of fast learning) blossoms in the brain. I’m not suggesting that the rhythm of the poem is the same as the frequency of the data being processed by the neurons, of course, because neurons work on the scale of many spikes (or firings) per second. Nor am I suggesting that there is a single rhythm that all poems should target: synapses have different ideal frequencies according to distance down the dendrite, apparently, and synapses become desensitized with use (learning), so that the ideal frequency for further learning decreases. This variability suggests that the experience would vary among readers, which is borne out by experience. Not everyone will get the same blossoming or tingling sensation I got when I first read this poem (and still do, to a lesser degree):



Praise be to Vishnu.
His hands fondle in secret
The large breasts of Lakshmi
As though looking there
For his own lost heart.


Tr. W. S. Merwin



But it is entirely possible that the rhythm of the poem is one factor in determining or affecting the rhythm of the synaptic firings, or the synergy among multiple firings.
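As a way of thinking through what an “optimal frequency” might mean, here is a little Python sketch. It assumes, purely for illustration, a bell-shaped relationship between input frequency and plasticity, with an optimum that drifts downward as the synapse is used (desensitization). The shape of the curve and every number in it are mine, not Mehta and Kumar’s.

# A hedged sketch of frequency-dependent plasticity: learning is strongest near
# an optimal input frequency (modeled here as a Gaussian, my own simplification),
# and the optimum drops as the synapse desensitizes with use.

import math

def learning_gain(input_freq_hz, optimal_freq_hz, width_hz=10.0):
    """How strongly a burst at input_freq_hz drives plasticity (0 to 1)."""
    return math.exp(-((input_freq_hz - optimal_freq_hz) ** 2) / (2 * width_hz ** 2))

optimal = 40.0                      # hypothetical ideal frequency for a naive synapse
for burst in range(5):
    gain = learning_gain(input_freq_hz=35.0, optimal_freq_hz=optimal)
    print(f"burst {burst}: optimum {optimal:.1f} Hz, gain {gain:.2f}")
    optimal *= 0.9                  # desensitization: the ideal frequency decreases with learning

The same 35 Hz input drives less and less learning as the optimum slips away from it, which is one crude way to picture why a reread poem tingles less than a first encounter.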

Now, if I just had a functional MRI to test out some short, intense poems on some folks… (and a friend with some experience running such experiments!)

It also occurs to me that the idea of a relation between an embodied rhythm and the effects of poems will ring a bell for those familiar with the arguments of Fred Turner and other neoformalists, who wanted to find a model for the iambic line in the rhythm of the heartbeat. It seems they may have had a good idea in principle but were looking at the wrong organ, or perhaps weren’t looking at enough organs.

Thursday, September 15, 2011

The Self is a Community

Very interesting post here from Peter Freed's blog Neuroself. His argument (the argument of his blog as well as of his article) is one I've forwarded here from a couple of other sources, including Hofstadter: there is no such thing as the unitary self (which many Buddhists have understood for quite some time). We appear to be composed of many selves, and our selves work in a kind of network (Borg, anyone?) to produce both the experience of individuality and the experience of community.

A question for my students: what would this theory suggest about the notion of artistic "voice"?

Saturday, July 30, 2011

How Neural Nets Work

This witty experiment with artificially engineered neural cells provides a great glimpse into how neurons can respond to external stimuli by sending out signals. It's not quite the "intelligence" the article makes it out to be, but it absolutely demonstrates an essential component of human intelligence.
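For anyone who wants the gist without the petri dish, here is the standard textbook cartoon of the same behavior, a leaky integrate-and-fire toy in Python. It has nothing to do with the engineered cells in the article, and the threshold and leak values are arbitrary; it just shows a unit accumulating stimulus, leaking charge, and emitting a signal when it crosses threshold.

# A minimal leaky integrate-and-fire toy (a textbook idealization, not the
# engineered cells from the article): the unit accumulates incoming stimulus,
# leaks charge between time steps, and "fires" when it crosses a threshold.

def simulate(stimulus, threshold=1.0, leak=0.9):
    potential, spike_times = 0.0, []
    for t, inp in enumerate(stimulus):
        potential = potential * leak + inp   # leak a little, then add the new input
        if potential >= threshold:
            spike_times.append(t)            # send out a signal
            potential = 0.0                  # reset after firing
    return spike_times

print(simulate([0.3, 0.3, 0.3, 0.0, 0.6, 0.6, 0.1]))   # fires only once the input has piled up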

Thursday, July 14, 2011

Can Computers Learn Natural Languages?

Apparently so: http://www.sciencedaily.com/releases/2011/07/110712133330.htm

And if the ability to use information for creative ends, as I suggested in an earlier post, defines "knowledge", then this story suggests that computers can acquire knowledge.

What remains to be seen, though, is whether they can be aware that they have knowledge... and I think that probably won't happen until our technology is sophisticated enough to run android systems (see previous post on embodiment: http://thinkodynamics.blogspot.com/2011/06/i-human.html).

Tuesday, June 14, 2011

Absence of Empathy = ?

A new one definitely worth checking out. The reviewer is absolutely correct, though, that some people will be infuriated by the equation of the root causes of "evil" with those of autism (Zero-Negative vs. Zero-Positive personalities). [Click title for link]

Tuesday, June 7, 2011

I, Human

Poet, philosopher, and science writer Brian Christian has a fun piece in the March 2011 Atlantic (which is adapted from his book The Most Human Human [Doubleday 2011]). In the article Christian describes participating in the 2009 Loebner Prize competition, an annual exercise that puts the Turing Test to AI entrants and humans alike to determine the Most Human Computer (which goes to the program that fools the most human interlocutors into believing it is human) as well as the Most Human Human (which goes to the human who leaves the least doubt in the minds of human interlocutors that s/he is human).


The Turing Test is intended to test whether an artificial intelligence can “think”— the test assumes that we would know that the artificial intelligence is capable of thinking if it can convince us in a blind conversation that it is human.

Never mind the problem of assuming that intelligence must be human-like (or that it must be expressed in ways similar to the ways humans express it). The real problem is that the Turing Test doesn’t test intelligence at all. It tests the appearance of intelligence. It tests the ability of human programmers to program that simulacrum. The classic rebuttal is that appearing sufficiently intelligent is being intelligent. I’ve gone back and forth over this one for some time, and Christian’s article—along with my recent reading in Antonio Damasio’s Self Comes to Mind—has convinced me that appearing intelligent, however convincing the appearance, cannot be equated with being intelligent.

Eventually, in asking these sorts of questions, one comes down to the nature of intelligence, of knowledge, of mind, of awareness, of self. What does it mean to say one is intelligent? What does it mean to say one possesses knowledge? That one has a mind? What’s the difference between mind and self? And how can one test for any of those things? What does it mean that a computer is able to “spin half-discernible essays on postmodern theory” [Christian] yet is unable to compose a joke more complex than the most basic pun?

There are, of course, degrees of intelligence. Watson is more intelligent than my microwave oven, and you, dear reader, are more intelligent than Watson. But what does that statement mean? Watson clearly can store more information than my microwave; your brain might store more information than Watson, but surely a version of Watson could be built that would store far more information than your brain. But as the recent Jeopardy match proved, more information storage (that is, more memory) is not the answer to achieving intelligence. In the questions that Watson missed, the errors clearly came from a lack of context. Watson took wrong turns in his decision trees (associations) when he did not have adequate depth of knowledge. Okay, what does that mean—depth of knowledge? What is knowledge?

Knowledge is not mere information, not merely filled memory banks. Knowledge is stored information that the holder is able to use in a creative way—to write a sentence, to paint a representational picture, to plant a crop, to build a house, to make up a joke. In order to use this information, the holder must know that it knows (that is, it must have a sense of truth; which is not to say that the holder can only use true information). One of the greatest accomplishments of Watson is that his creators gave him a sense of how well he knows what he knows (a percentage likelihood that he had sufficient information to get the right answer), which gave him the ability to choose whether or not to ring in. So Watson has some knowledge: he holds information and is able to determine how reliable that information is and when it is reliable enough to venture an answer. It’s even conceivable that Watson could acquire the knowledge necessary to instruct someone (or a crew of robots) on how to build a house. This level of knowledge, though, would clearly be less complex than the knowledge a human contractor would have, because the contractor will have worked on actual houses, will have experienced what it is to build a house, and that is a level of complexity Watson cannot have.
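Here is a toy version of that ring-in decision, just to make the idea concrete. The threshold, the expected-value arithmetic, and the numbers are all my own stand-ins, not IBM’s actual scoring; the point is only that the system weighs how confident it is before venturing an answer.

# A toy sketch of confidence-gated answering (my stand-in, not IBM's scoring):
# buzz in only when the estimated probability of being right clears a threshold
# and the expected value of answering is positive.

def should_ring_in(confidence, clue_value, wrong_penalty=None, threshold=0.5):
    """Decide whether to buzz, given an estimated probability of being correct."""
    penalty = wrong_penalty if wrong_penalty is not None else clue_value
    expected_value = confidence * clue_value - (1 - confidence) * penalty
    return confidence >= threshold and expected_value > 0

print(should_ring_in(confidence=0.87, clue_value=800))   # True: confident enough to answer
print(should_ring_in(confidence=0.32, clue_value=800))   # False: better to stay quiet

None of which, of course, amounts to the contractor’s kind of experience.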

Why can Watson not have that experience? Because he does not have a body. Damasio and others have demonstrated at this point that human intelligence is embodied, that it rises from a triple function of the brain—its regulation and mapping of the rest of the body, its mapping of relationships to objects outside the body, and an additional layer of mapping of its own activity in the process of mapping the body and objects outside the body. That is, our intelligence rises from a combination of self-reference and context. Watson cannot have that context because he not only cannot perceive his environment, he also cannot perceive his “body” and has no regulatory function that monitors his physical existence and attempts to repair it when necessary (infection, hunger, thirst, fatigue, nutrition, hygiene); in the absence of either of those, it goes without saying that Watson cannot map the relationship between the two. Even if Watson were to be equipped with a more complex version of today’s facial recognition software, he would not have a function that maps what is perceived relative to his “self”—it would only be a function to add more information (recognition of objects) with no deep context.
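To keep Damasio’s three layers straight in my own head, I find it helps to caricature them as a data structure. This is only an analogy, nothing like an implementation of his theory; the fields, the numbers, and the “urgent/neutral” rule are invented.

# A crude data-structure caricature (an analogy, not Damasio's theory): an
# "embodied" agent keeps (1) a map of its own body state, (2) a map of objects
# outside the body, and (3) a meta-map relating its own mapping activity to
# the body's needs.

class EmbodiedAgent:
    def __init__(self):
        self.body_map = {"hunger": 0.2, "fatigue": 0.1}   # internal regulation
        self.world_map = {}                               # objects outside the body
        self.meta_map = []                                # a record of what its own mapping means for it

    def perceive(self, obj, distance):
        self.world_map[obj] = distance
        # third layer: note how this perception matters, given the body's current state
        relevance = "urgent" if obj == "food" and self.body_map["hunger"] > 0.5 else "neutral"
        self.meta_map.append((obj, relevance))

agent = EmbodiedAgent()
agent.perceive("food", distance=3.0)
print(agent.meta_map)   # [('food', 'neutral')]: hunger is low, so the food isn't urgent

Watson, on this cartoon, has only the second map, with nothing for the first or third layer to work on.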

But what does this necessary organicism, this relationship between self and non-self tell us about our intelligence? Christian says:

Perhaps the fetishization of analytical thinking and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps, at least, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts”, is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo…
…It’s my belief that only experiencing and understanding truly disembodied cognition—only seeing the coldness and deadness and disconnectedness of something that really does deal in pure abstraction, divorced from sensory reality—can snap us out of it. Only this can bring us, quite literally, back to our senses.


That is, seeing what intelligence is not should help us to better see what our intelligence is, help us to see and appreciate ourselves and each other better. Should.

Could Watson or another computer be equipped with a complex sensory apparatus and self-referential monitoring processes that could achieve the complexity of human intelligence? Yes, it’s possible. Mind, after all, is simply a function of a physical system—our bodies. But our technology has a long way to go before it catches up with millions of years of evolution, which compacted immense computing power into an amazingly small space—and did so organically, with intricate backup systems, redundancies, and shortcuts. We have a long, long, long way to go before Lt. Data or the androids of Blade Runner, A.I., or I, Robot.

All of which is to say that we are far inferior engineers to nature.

Monday, February 14, 2011

Who is...?

In the struggle to create an artificial intelligence that is truly “intelligent”, rather than a mere imitation of intelligence, the greatest obstacle may be ambiguity. AI software tends to have trouble teasing out the meanings of puns, for example; it’s one of the ways of getting a machine to fail the Turing Test. Computers have gotten pretty good at playing chess (IBM’s Deep Blue defeated Grandmaster Garry Kasparov in a well-publicized match), but until now they haven’t graduated far beyond that level of complexity.

Well, IBM is at it again, and their new system, Watson, is going on Jeopardy up against the game show’s two biggest winners. It’s a very big deal, because Jeopardy’s clues frequently are built around puns and other kinds of word play—ambiguities that often throw machines off their game.

This story from NPR summarizes it well, but there’s an interesting and commonly held assumption expressed in this story that I think warrants re-examination. The piece reports that systems like Watson have trouble with ambiguity because, though they can understand relations among words (that is, they can identify syntactic patterns and figure out how a sentence is put together, using that information to home in on what kind of question is being asked), they can’t understand the meanings of words, because they don’t have experience to relate to those words.

Embedded in this claim is the assumption that humans do have experience (of course!) and that our experience comes from direct, unmediated access to the world around us. When we read the word “island” we have a deep understanding of what an island is, because we’ve seen islands. Computers can’t do this, the argument goes, because they haven’t experienced islands, or even water or land. This is a version of the Chinese Room argument—the argument that semantics cannot be derived from syntax.

This argument just doesn’t hold up in my way of thinking. First of all, some humans (desert dwellers, for example) may never have seen islands, but that would not inhibit their ability to understand the concept of island-ness. Helen Keller became able to understand such concepts even though she could neither see nor hear the physical world.

Second, our perception of the external world is not unmediated. Our perceptions are generated by a hardware system that converts light into electrical signals, which are then processed and stored in the brain, not unlike the way a digital camera converts light into the images on your laptop. Research shows that there is a clear delay between physical processing and awareness of perceptions. And the complexity of human perception is starting to seem less and less unique; computers are becoming better and better at processing physical information like images and sounds—face recognition and voice recognition software, for example, or the apps that can identify your location from a photo or identify a song from a snippet that you record. The main difference is that in humans there is an intelligence behind the process; in the computer there is not.
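A tiny pipeline sketch makes the point about mediation; the stages and numbers are mine and have no physiological standing. What “awareness” receives at the end is several conversions removed from the light that actually hit the sensor.

# A toy perception pipeline (stages and numbers invented): raw light levels are
# transduced into discrete signals and then stored as a coded summary. On this
# cartoon, awareness only ever sees the final stage, never the raw input.

raw_light = [0.82, 0.80, 0.15, 0.13]                # what actually reaches the sensor
transduced = [round(x * 255) for x in raw_light]    # converted into discrete signals
stored = {"bright_region": transduced[:2], "dark_region": transduced[2:]}
print(stored)   # the mediated representation that "awareness" gets to work with

The pipeline itself is nothing special; the difference, again, is whatever intelligence sits behind it.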

Which leads us ultimately to the question that Watson’s creators are trying to answer. What is intelligence?

The answer must lie in the way the information is processed. Intelligence is likely ultimately to come from the complexity of association (including self-reflexiveness, a kind of association) that is embedded in the system. The human brain is in a constant state of association and context building. My brain has been working on context and association building for 45 years (and that is a kind of experience). Why shouldn’t computers be capable of this kind of processing—that is, of learning? (Computers are, of course, capable of learning—in adaptive computer games, for example—just not yet at the level of complexity of humans.) What we don’t completely understand about human cognition are the processes used to find information—is it a vertical searching system, a horizontal searching system, some hybrid of vertical and horizontal with some trial and error thrown in? Something else? We just don’t know yet.
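To show what I mean by vertical versus horizontal searching, here is a toy association network in Python, searched depth-first (vertical: follow one chain of associations all the way down) and breadth-first (horizontal: sweep across the nearby associations first). The network’s contents are made up, and I’m not claiming the brain does either one; the contrast is the point.

# A toy association network (contents invented) searched two ways:
# depth-first ("vertical": chase one chain of associations to its end) and
# breadth-first ("horizontal": fan out across nearby associations first).

from collections import deque

associations = {
    "island": ["water", "land", "isolation"],
    "water": ["ocean", "rain"],
    "land": ["soil"],
    "isolation": ["desert"],
    "ocean": [], "rain": [], "soil": [], "desert": [],
}

def depth_first(start):
    order, stack, seen = [], [start], set()
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            order.append(node)
            stack.extend(reversed(associations[node]))  # dive down one branch before the next
    return order

def breadth_first(start):
    order, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in associations[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)                       # finish each level before going deeper
    return order

print(depth_first("island"))    # ['island', 'water', 'ocean', 'rain', 'land', 'soil', 'isolation', 'desert']
print(breadth_first("island"))  # ['island', 'water', 'land', 'isolation', 'ocean', 'rain', 'soil', 'desert']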

But cracking the code for pun recognition and “understanding” word play and jokes may get us a big step closer to an answer. Watson may lose, and he may win (he probably will win, as he did in trial runs for the show). If Watson loses, I hope he won’t feel too bad about it; many humans have trouble with ambiguity, too.