Very interesting post here from Peter Freed's blog Neuroself. His argument (the argument of his blog as well as of his article) is one I've forwarded here from a couple of other sources, including Hofstadter--there is no such thing as the unitary self (which many Buddhists have understood for quite some time). We appear to be composed of many selves, and our selves work in a kind of network (Borg, anyone?) both to produce the experience of individuality and the experience of community.
A question for my students: what would this theory suggest about the notion of artistic "voice"?
Thursday, September 15, 2011
Saturday, July 30, 2011
How Neural Nets Work
This witty experiment with artificially engineered neural cells provides a great glimpse into how neurons can respond to external stimuli to send out signals. It's not quite the "intelligence" the article makes it out to be, but it absolutely demonstrates an essential component of human intelligence.
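To make the mechanism concrete, here is a toy sketch (in Python) of the kind of behavior the article describes: a single model neuron that integrates incoming stimulus, leaks a little each step, and fires once a threshold is crossed. The model, the numbers, and the names are mine for illustration; this is not the researchers' setup.

```python
# A minimal leaky integrate-and-fire neuron: stimulus accumulates as
# "membrane potential," leaks over time, and produces a spike at threshold.
# All names and constants here are illustrative, not taken from the study.

def simulate_neuron(stimulus, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron fires."""
    potential = 0.0
    spikes = []
    for t, inp in enumerate(stimulus):
        potential = potential * leak + inp   # leak a little, then integrate input
        if potential >= threshold:
            spikes.append(t)                 # fire
            potential = 0.0                  # reset after the spike
    return spikes

if __name__ == "__main__":
    quiet = [0.05] * 20                      # weak background input: never fires
    burst = [0.4] * 5                        # strong external stimulus: rapid firing
    print(simulate_neuron(quiet + burst + quiet))
```

Nothing in that little loop "knows" anything, which is exactly the point: responding to stimuli is a necessary ingredient of intelligence, not the thing itself.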
Thursday, July 14, 2011
Can Computers Learn Natural Languages?
Apparently so: http://www.sciencedaily.com/releases/2011/07/110712133330.htm
And if the ability to use information for creative ends, as I suggested in an earlier post, defines "knowledge", then this story suggests that computers can acquire knowledge.
What remains to be seen, though, is whether they can be aware that they have knowledge... and I think that probably won't happen until our technology is sophisticated enough to run android systems (see previous post on embodiment: http://thinkodynamics.blogspot.com/2011/06/i-human.html).
Tuesday, June 14, 2011
Absence of Empathy = ?
A new one definitely to check out. The reviewer is absolutely correct, though, that some people will be infuriated by the equation of the root causes of "evil" and autism (Zero-Negative vs. Zero-Positive personalities). [Click title for link]
Tuesday, June 7, 2011
I, Human
Poet, philosopher, and science writer Brian Christian has a fun piece in the March 2011 Atlantic (adapted from his book The Most Human Human [Doubleday 2011]). In the article Christian describes participating in the 2009 Loebner Prize competition, an annual exercise that puts the Turing Test to AI entrants and humans alike to determine the Most Human Computer (awarded to the program that fools the most human interlocutors into believing it is human) as well as the Most Human Human (awarded to the human who leaves the least doubt in the minds of human interlocutors that s/he is human).
The Turing Test is intended to test whether an artificial intelligence can “think”— the test assumes that we would know that the artificial intelligence is capable of thinking if it can convince us in a blind conversation that it is human.
Never mind the problem of assuming that intelligence must be human-like (or that it must be expressed in ways similar to the ways humans express it). The real problem is that the Turing Test doesn’t test intelligence at all. It tests the appearance of intelligence. It tests the ability of human programmers to program that simulacrum. The classic rebuttal is that appearing sufficiently intelligent is being intelligent. I’ve gone back and forth over this one for some time, and Christian’s article—along with my recent reading in Antonio Damasio’s Self Comes to Mind—has convinced me that appearing intelligent—however convincing the appearance—cannot be equated with being intelligent.
Eventually, in asking these sorts of questions, one comes down to the nature of intelligence, of knowledge, of mind, of awareness, of self. What does it mean to say one is intelligent? What does it mean to say one possesses knowledge? That one has a mind? What’s the difference between mind and self? And how can one test for any of those things? What does it mean that a computer is able to “spin half-discernible essays on postmodern theory” [Christian] yet is unable to compose a joke more complex than the most basic pun?
There are, of course, degrees of intelligence. Watson is more intelligent than my microwave oven, and you, dear reader, are more intelligent than Watson. But what does that statement mean? Watson clearly can store more information than my microwave; your brain might store more information than Watson, but surely a version of Watson could be built that would store far more information than your brain. But as the recent Jeopardy match proved, more information storage (that is, more memory) is not the answer to achieving intelligence. In the questions that Watson missed, the errors clearly came from a lack of context. Watson took wrong turns in his decision trees (associations) when he did not have adequate depth of knowledge. Okay, what does that mean—depth of knowledge? What is knowledge?
Knowledge is not mere information, not merely filled memory banks. Knowledge is stored information that the holder is able to use in a creative way—to write a sentence, to paint a representational picture, to plant a crop, to build a house, to make up a joke. In order to use this information, the holder must know that it knows (that is, it must have a sense of truth; which is not to say that the holder can only use true information). One of the greatest accomplishments of Watson is that his creators gave him a sense of what he knows he knows (a percentage likelihood that he had sufficient information to get the right answer)—which gave him the ability to choose whether or not to ring in. So Watson has some knowledge—he holds information and is able to determine how reliable that information is and when it is reliable enough to venture an answer. It’s conceivable even that Watson could acquire the knowledge necessary to instruct someone (or a crew of robots) on how to build a house. This level of knowledge clearly, though, would be less complex than the knowledge that a human contractor would have, because the contractor will have worked on actual houses, will have experienced what it is to build a house, which is a level of complexity that Watson cannot have.
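IBM hasn't shown us the ring-in logic, so take this only as a toy sketch of the general mechanism described above: every candidate answer carries an estimated confidence, and the system buzzes only when the best candidate clears a threshold. The function name, the threshold, and the numbers are all hypothetical.

```python
# Toy sketch of confidence-gated answering (not Watson's actual code):
# ring in only if the best-scoring candidate clears a confidence threshold.

def decide_to_buzz(candidates, threshold=0.7):
    """candidates: dict mapping answer text -> estimated confidence in [0, 1].
    Returns (should_buzz, best_answer, confidence)."""
    if not candidates:
        return (False, None, 0.0)
    best_answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return (confidence >= threshold, best_answer, confidence)

# Hypothetical clues: confident enough to ring in on the first, silent on the second.
print(decide_to_buzz({"Chicago": 0.83, "Toronto": 0.14}))
print(decide_to_buzz({"Chicago": 0.41, "Toronto": 0.32}))
```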
Why can Watson not have that experience? Because he does not have a body. Damasio and others have demonstrated at this point that human intelligence is embodied, that it rises from a triple function of the brain—its regulation and mapping of the rest of the body, its mapping of relationships to objects outside the body, and an additional layer of mapping of its own activity in the process of mapping the body and objects outside the body. That is, our intelligence rises from a combination of self-reference and context. Watson cannot have that context because he not only cannot perceive his environment, he also cannot perceive his “body” and has no regulatory function that monitors his physical existence and attempts to repair it when necessary (infection, hunger, thirst, fatigue, nutrition, hygiene); in the absence of either of those, it goes without saying that Watson cannot map the relationship between the two. Even if Watson were to be equipped with a more complex version of today’s facial recognition software, he would not have a function that maps what is perceived relative to his “self”—it would only be a function to add more information (recognition of objects) with no deep context.
But what does this necessary organicism, this relationship between self and non-self tell us about our intelligence? Christian says:
Perhaps the fetishization of analytical thinking and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps, at least, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts”, is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo…
…It’s my belief that only experiencing and understanding truly disembodied cognition—only seeing the coldness and deadness and disconnectedness of something that really does deal in pure abstraction, divorced from sensory reality—can snap us out of it. Only this can bring us, quite literally, back to our senses.
That is, seeing what intelligence is not should help us to better see what our intelligence is, help us to see and appreciate ourselves and each other better. Should.
Could Watson or another computer be equipped with a complex sensory apparatus and self-referential monitoring processes that could achieve the complexity of human intelligence? Yes, it’s possible. Mind, after all, is simply a function of a physical system—our bodies. But our technology has a long way to go before it catches up with millions of years of evolution, which compacted immense computing power into an amazingly small space—and did so organically, with intricate backup systems, redundancies, and shortcuts. We have a long, long, long way to go before Lt. Data or the androids of Blade Runner, A.I., or I, Robot.
All of which is to say that we are far inferior engineers to nature.
Tuesday, February 15, 2011
Monday, February 14, 2011
Who is...?
In the struggle to create an artificial intelligence that is truly “intelligent”, rather than a mere imitation of intelligence, the greatest obstacle may be ambiguity. AI software tends to have trouble with teasing out the meanings of puns, for example; it’s one of the ways of getting a machine to fail the Turing Test. Computers have gotten pretty good at playing chess (IBM’s Deep Blue defeated Grandmaster Garry Kasparov in a well-publicized match), but until now they haven't graduated far beyond that level of complexity.
Well, IBM is at it again, and their new system, Watson, is going on Jeopardy up against the game show’s two biggest winners. It’s a very big deal, because Jeopardy’s clues frequently are built around puns and other kinds of word play—ambiguities that often throw machines off their game.
This story from NPR summarizes it well, but there’s an interesting and commonly held assumption expressed in this story that I think warrants re-examination. The piece reports that systems like Watson have trouble with ambiguity because, though they can understand relations among words (that is, they can identify syntactic patterns and figure out how a sentence is put together, using that information to home in on what kind of question is being asked), they can’t understand the meanings of words because they don’t have experience to relate to those words.
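As a rough picture of what "identifying syntactic patterns to figure out what kind of question is being asked" can look like, here is a toy answer-type guesser that works from surface cues alone. It is my sketch of the general technique, not the NPR piece's description of Watson and certainly not IBM's code.

```python
import re

# Toy "answer type" detector: guess what kind of thing a clue wants
# from surface patterns alone. Illustrative only.
ANSWER_TYPE_PATTERNS = [
    (r"\bthis (author|poet|novelist|playwright)\b", "PERSON"),
    (r"\bthis (city|country|island|river)\b",       "PLACE"),
    (r"\bthis (year|decade|century)\b",             "DATE"),
]

def guess_answer_type(clue):
    for pattern, answer_type in ANSWER_TYPE_PATTERNS:
        if re.search(pattern, clue, flags=re.IGNORECASE):
            return answer_type
    return "UNKNOWN"   # syntax alone often isn't enough; hence the trouble with ambiguity

print(guess_answer_type("Try this island, the largest in the Mediterranean."))
# -> PLACE, though the program has never experienced an island
```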
Embedded in that claim is the assumption that humans do have experience (of course!) and that our experience comes from direct, unmediated access to the world around us. When we read the word “island” we have a deep understanding of what an island is, because we’ve seen islands. Computers can’t do this, the argument goes, because they haven’t experienced islands or even water or land. This is a version of the Chinese Room argument—the argument that semantics cannot be derived from syntax.
This argument just doesn't hold up, to my way of thinking. First of all, some humans (desert dwellers, for example) may never have seen an island, but that would not inhibit their ability to understand the concept of island-ness. Helen Keller became able to understand such concepts though she had neither seen nor heard anything from the physical world.
Second, our perception of the external world is not unmediated. Our perceptions are generated by a hardware system that converts light into electrochemical signals, which are then processed and stored in the brain, not unlike the way a digital camera converts light signals into the images on your laptop. Research shows that there is a clear delay between physical processing and awareness of perceptions. And the complexity of human perception is starting to seem less and less unique; computers are becoming better and better at processing physical information like images and sounds—face recognition and voice recognition software, for example, or the apps that can identify your location from a photo or identify a song from a snippet that you record. The main difference is that in humans, there is an intelligence behind the process; in the computer there is not.
Which leads us ultimately to the question that Watson’s creators are trying to answer. What is intelligence?
The answer must be in the way the information is processed. Intelligence is likely ultimately to come from the complexity of association (including self-reflexiveness, a kind of association) that is embedded in the system. The human brain is in a constant state of association and context building. My brain has been working on context and association building for 45 years (and that is a kind of experience). Why shouldn’t computers be capable of this kind of processing—that is, of learning? (Computers are, of course, capable of learning—in adaptive computer games, for example—just not yet at the level of complexity of humans.) What we don’t completely understand about human cognition are the processes used to find information—is it a vertical searching system, a horizontal searching system, some hybrid of vertical and horizontal with some trial and error thrown in? Something else? We just don’t know yet.
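If we read "vertical" and "horizontal" as something like depth-first and breadth-first traversal of an association network, the two strategies look like this. The toy graph and the labels are mine; nobody knows whether brains search this way, which is the point of the paragraph above.

```python
from collections import deque

# A toy association network (purely illustrative).
associations = {
    "island": ["water", "beach", "isolation"],
    "water": ["rain", "ocean"],
    "beach": ["sand", "vacation"],
    "isolation": ["solitude"],
    "rain": [], "ocean": [], "sand": [], "vacation": [], "solitude": [],
}

def vertical_search(start):
    """Depth-first: follow one chain of associations as deep as it goes."""
    stack, visited = [start], []
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            stack.extend(reversed(associations[node]))
    return visited

def horizontal_search(start):
    """Breadth-first: fan out across the nearest associations first."""
    queue, visited = deque([start]), []
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.append(node)
            queue.extend(associations[node])
    return visited

print(vertical_search("island"))    # island, water, rain, ocean, beach, ...
print(horizontal_search("island"))  # island, water, beach, isolation, rain, ...
```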
But cracking the code for pun recognition and “understanding” word play and jokes may get us a big step closer to an answer. Watson may lose, and he may win (he probably will win, as he did in trial runs for the show). If Watson loses, I hope he won't feel bad about it; many humans have trouble with ambiguity, too.
Wednesday, November 24, 2010
Cocktail Party in the Membrane
NPR ran this piece yesterday about some research on bats and how they distinguish communicative noises made by their colleagues from the background noise of their echolocation blasts. The piece mentions that this research will be useful in helping to figure out how human brains focus in on certain particularly useful noises and ignore presumably unimportant noises from the background (such as focusing on a conversation at a loud cocktail party). Turns out that in the bats, at least, it's as if many different neurons are constantly reporting "Here's what I hear. Here's what I hear...", and certain neurons are good at telling the neurons around them to stop listening and reporting so much on the background noise so they can hear the communicative audio. And those neurons that are quieting down the others become particularly noticeable in their own reporting, because all else is quieter around them.
This also happens to be a good description of a range of theories on how the brain is able to make ongoing, constant decisions about what to pay attention to and what to tell the rest of the body to actually do (both "voluntary" and "involuntary" actions). As I've mentioned in previous posts, the theory goes something like this--neurons or groups of neurons in the brain are constantly arguing with each other, trying to make their case about what is important or not important. When one set of neurons wins that argument (when they are successful at getting more attention than other neurons), then a decision has been made to direct attention and energy in that direction.
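A crude sketch of that "quieting the neighbors" idea: give each competing signal an activity level, let the strongest one inhibit the rest on every round, and whatever survives is what gets attended to. The update rule and the numbers are invented for illustration; they are not the bat study's model.

```python
# Crude winner-take-all sketch of neurons quieting their neighbors:
# each round, the strongest signal suppresses the others a little,
# until one clearly dominates. Values and update rule are invented.

def compete(activities, inhibition=0.3, rounds=5):
    activities = dict(activities)
    for _ in range(rounds):
        winner = max(activities, key=activities.get)
        for name in activities:
            if name != winner:
                activities[name] *= (1.0 - inhibition)   # neighbors get quieter
    return activities

signals = {"colleague's call": 0.55, "echolocation blast": 0.50, "wind noise": 0.20}
print(compete(signals))
# After a few rounds the communicative call stands out against the background.
```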
So, at least in terms of how some portion of our brains work, we may not be that different from bats. Or maybe even bat terriers.
Friday, November 5, 2010
The Picture Gradually Comes into Focus
More and more research confirms memory distribution in the brain, providing answers to many questions such as why it is that local brain injury doesn't cause more global memory loss, the way other functions are frequently affected by local injuries.
Wednesday, September 15, 2010
The Elegance of Empathy
I don’t read a lot of fiction any more. I find that most contemporary fiction is fixated on realistic representation and rarely rises above the muck and mechanics of the imagined world, and, thus, ends up wallowing in sentimentality, whether that sentimentality is for the imagined world, for the characters, for the author’s lost past, for the idea of representation itself, or some combination of the above. It’s not enough for a fiction to take me to another place—it needs to take me there and give me the sense that the landscape moves and that the architecture of the landscape is alive with thought, moves my thought, changes me. What’s the point of reading a book that doesn’t change me? I want to read:
Something [that] moves house inside me—yes, how else to describe it? I have the preposterous feeling that one existing inner living space has been replaced by another. Does that never happen to you? You feel things shifting around inside you, and you are quite incapable of describing just what has changed, but it is both mental and spatial, the way moving house is.
A reapportioning of space, yes (of memory/storage space?), but also a derangement of time:
When we had been good pupils we were allowed to turn [the snowglobe] upside down and hold it in the palm of our hand until the very last snowflake had fallen at the foot of the chromium-plated Eiffel Tower. I was not yet seven years old, but I already knew that the measured drift of the little cottony particles foreshadowed what the heart would feel in moments of great joy. Time slowing, expanding, a lingering graceful ballet, and when the last snowflake has come to rest, we know we have experienced a suspension of time that is the sign of great illumination. As a child I often wondered whether I would be allowed to live such moments—to inhabit the slow, majestic ballet of the snowflakes, to be released at last from the dreary frenzy of time.
Renée Michel, one of the two narrators of Muriel Barbery’s The Elegance of the Hedgehog (Europa Editions, 2009), speaks these lines, part of her ongoing meditation throughout the book on art, intelligence, identity, language. She is an Everywoman philosopher, and she’s read not only her Marx and her Husserl but also, apparently, her Dennett and her Dawkins. If she hasn’t, in any case, Barbery certainly has (in fact, she has taught philosophy at Bourgogne and Saint-Lô).
The novel hovers around some of the central questions in current cognitive studies—What good is intelligence (or consciousness)? What is the function of art? What is the connection between language and identity? What is the role of empathy in the making of the person? Where does the individual begin and end? The book approaches all of these questions with a narrative efficiency that astounds me and that is far too complex for me to briefly describe here. Suffice it to say that it is a masterpiece of the novel of ideas genre. In many ways—in its use of innovative structure, its reliance on a subtle argumentative strategy, its combination of humor and philosophy—it is reminiscent of The Unbearable Lightness of Being, without the authorial intrusions, the fixation on the erotic, or the unlikeable characters.
Aside from the pleasure of reading the book, which includes the pleasure of seeing Barbery get away with using phrases like “dreary frenzy” (credit to translator Alison Anderson), the book repeatedly reminds me of one of the questions that I began with on this blog—what is going on in the brain when we experience that shift of mind that I find most epitomized in the short poem, but which happens in much great art—that moment where both space and time seem to alter (see the two quotes above from Hedgehog)?
The second of the two narrators, a twelve-year-old named Paloma, shares my interest in the short poem. She is a devotee of haiku and of Japanese culture in general, which she finds an elegant vessel of both Profound Thoughts and The Movement of the World. Paloma’s central goal in the book is to learn something; that is, to exercise the main function of the brain.
But there’s more than one way to learn. In addition to the cerebral approach of cognitive theory to explaining visceral and intellectual responses, there’s another response that neuroesthetics must take into account if it is to be taken seriously as a way of approaching art—the emotional response (and it does—see the past meetings of the annual Neuroesthetics conference). Paloma, here, describes a school choir concert:
Every time, it’s the same thing, I feel like crying, my throat goes all tight and I do the best I can to control myself but sometimes it gets close: I can hardly keep myself from sobbing. So when they sing a canon I look down at the ground because it’s just too much emotion at once: it’s too beautiful, and everyone singing together, this marvelous sharing. I’m no longer myself, I am just one part of a sublime whole, to which the others also belong, and I always wonder at such moments why this cannot be the rule of everyday life, instead of being an exceptional moment, during a choir.
When the music stops, everyone applauds, their faces all lit up, the choir radiant. It is so beautiful.
In the end, I wonder if the true movement of the world might not be a voice raised in song.
This, by the way, is also the way I feel, despite myself, every time I attend my children’s choir and orchestra concerts. When your own kids are involved, it’s easy to feel empathy; but it turns out that in the face of good art—well, okay, art—we’re wired to feel empathy (mirror neurons in action).
Paloma’s “true movement” refers to her attempt to discover before she dies (by suicide--she doesn't do it) whether there is anything worth living for—she conducts her search in two journals: a profound thoughts journal, where she “play[s] at being who I am”, and a Movement of the World journal, where she meditates on bodies and objects. Renée, meanwhile, has spent the last fifteen years convinced that there is no point to living and has spent most of her life in a kind of hiding, disguise, incognito. Both of these characters learn a great deal about each other, about their mutual friend Kakuro Ozu, and about themselves, primarily via empathy and via art. In this, both of them would agree with Semir Zeki when he argues that art has the same function as the brain—to acquire knowledge. If they are right, then this novel is without a doubt a work of art. One would be hard pressed not to learn anything from it; and I have no doubt that Barbery learned a great deal in the process of writing it.
Art is an empathetic experience that promotes the spread of knowledge (information).
So, another question: what’s the difference between empathy and sentiment(ality)? Is there a difference? Is the difference that empathy contains The Movement of the World? It has the potential to change the receiver, whereas sentiment(ality) merely confirms the receiver’s preconceptions? Empathy appeals to actuality (whatever that is) or truth (whatever that is) or at least the search for actuality/truth, whereas sentiment(ality) appeals to a fantasy, a (necessary) fiction? Empathy promotes the spread of information, whereas sentimentality inhibits it? Now, that’s interesting…
___________
My friend Joe Ahearn, who is studying information systems at UT, sent me an article earlier today on information system theory and definitions of information that involve changing image systems and/or changing a receiver’s cognitive structure. That is, I would think, “moving house”, "feel[ing] as though the top of my head has been removed." More soon…
Tuesday, August 24, 2010
Is Google Books a Cog in our Evolution?
http://opinionator.blogs.nytimes.com/2010/08/22/the-third-replicator/?hp
Wednesday, August 11, 2010
Restrepo (or "Bad Brain")
All this talk about the human brain making all of our decisions before we even are aware of making decisions, by the way, is no indication that the human brain is particularly good at making decisions. For the most part, it's stuck in a battle of memes, which tends to be won by bad memes; and also stuck in a battle between memes and animal instinct, which tends to be won by the beast.
Tuesday, August 10, 2010
Secret Agents
New York Times ethics columnist Randy Cohen appeared on NPR’s “On Point” on August 10. Among the many interesting things he had to say, two caught my attention in particular. If the two points interest you, you should go listen to the show, because I’m really only using what Cohen had to say as leaping off points into some half-baked (or tenth-baked) ideas here. But Cohen is worth listening to.
Cohen’s first point that caught my attention was that humans have an ethical/moral faculty only because that faculty has evolved in them. Faculties that evolve have a reason for evolving (a function), and in humans that faculty functions to make it easier for groups of humans to live together.
In that way, morality and ethics are like politics, religion, family, marriage, etc.—the grand structures of community where we find “meaning” in living. These grand structures, along with other large structures of different kinds (art, poetry, gender, baseball, fun, etc.) and smaller structures (dessert, hiking, blogs, veganism) are called by some thinkers (Richard Dawkins was first) "memes". Memes are the structures around which our thought and our culture are constructed, or even the very vessels of thought itself—the vessels that carry meaning from one human being to another via the vast ocean of consciousness. This idea (or meme) of The Meme somewhat ironically fits in very well with materialist approaches* to consciousness, because it provides a theory for the movement of ideas within and among individuals without the necessity of free will or souls.
Even while putting aside the question of whether humans are truly moral/ethical beings and whether it’s healthier to maintain incredulity toward some of those metanarratives, it’s clear that the concepts of ethics and morality are strong drivers in most human culture. The question becomes how those concepts, those memes, become active in individual lives, how they find root in the individual brain…
…which leads me to the second of Cohen's comments to catch my attention: that it’s not always a good idea to act upon our moral/ethical impulses immediately. This statement caught my attention not because of its content, but because of its assumption that it is possible for one to act upon a moral/ethical impulse. Recent research shows that “we” don’t really choose our actions at all (see posts from last couple of weeks—in the language that I’m using here, I’m playing with the very trap that I warned against before: the idea that “we” are something other than our brain functions). Research clearly shows that there is a delay between the moment when decision making actually occurs in the brain and the moment when we become aware of having made a decision.
What could it mean, then, for someone to give us advice about acting on moral impulses, if we aren’t really in control of our decision making? Shouldn't we all just face the fact that all decision making is illusory and therefore stop trying to make decisions and let the world happen, willy-nilly? Does it mean we should stop trying to write poems and just let them come out whenever they happen to come out? Going down that path, we eventually arrive at an individual who does nothing, says nothing, never moves, ceases to exist. But isn’t every action “choosing to act”?
I think a possible answer to the problem of moral/ethical advice and action in the face of the absence of free will lies in the fact that, as I argued in a previous post, when we make forward-looking statements, we are really describing ourselves as we have arrived at that point in time, describing our being up to that point, describing, in short, our evolution. When Breton writes in the Surrealist Manifesto about seeking the integration of waking life and dreaming life, he is describing his intellectual life up to that point, describing the influence of Freud and the Symbolists and Dada and World War I and everyone he’s ever known and everything he’s ever read upon him at that moment. When the Flarfists reject originality and the personal in poetry, they are describing the effects of postmodernism, contemporary culture, the history of art and poetry, everyone they’ve ever known, and everything they’ve ever read upon them at that moment. When Kent Johnson or Catullus writes about other poets, or when any poet writes as another poet (persona), he is revealing parts of his own personal composition. In other words, a large set of memes has been worked into a set of relations in the mind of a Flarfist like Kasey Mohammad—and, in fact, those memes are “Kasey Mohammad”. The work of Kasey Mohammad is a revelation of the memes that compose Kasey Mohammad, or some portion of them; so in one sense Eliot was wrong. All of our work is a revelation of personality—or some part of it—if we think of personality as our meme components. Eliot was right to seek empathy as the proper ground of poetry, though; and he clearly was trying to avoid the trap of "personality" as something individual and hermetic, as opposed to a sense of self linked to other selves (via memes).
People are meme machines.
How is it possible, then, to be an agent in the world when it’s not even possible to intend?
Remember that the decision-making process in the brain seems to involve the input of many “experts” within the brain all making separate arguments; if one argument doesn’t immediately take precedence, the brain pauses. We call this pause “doubt”, “indecision”, “hesitation”, “confusion”, etc.; in the time it takes for the feedback loop that is our consciousness to become aware of the decision under deliberation, it also becomes aware of the counter-arguments, the doubt. Sooner or later the brain settles on the counsel of its experts and “makes up its mind”, and the self-awareness loop indulges itself in the illusion that it was involved in that process all along.
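One way to picture that pause is a race-to-threshold model: the competing "experts" accumulate noisy support, the first to cross a threshold wins, and a long race is what we experience as doubt. This is a toy sketch under that assumption, not a claim about the actual neuroscience; every name and number in it is invented.

```python
import random

# Toy race-to-threshold model of deliberation: competing options accumulate
# noisy evidence; the step at which one crosses threshold is the "decision,"
# and a long race reads as hesitation. Purely illustrative.

def deliberate(evidence_rates, threshold=5.0, max_steps=1000, seed=42):
    rng = random.Random(seed)
    totals = {option: 0.0 for option in evidence_rates}
    for step in range(1, max_steps + 1):
        for option, rate in evidence_rates.items():
            totals[option] += rate + rng.gauss(0.0, 0.5)   # evidence plus noise
        leader = max(totals, key=totals.get)
        if totals[leader] >= threshold:
            return leader, step          # the "decision," and how long it took
    return None, max_steps               # no resolution: prolonged indecision

print(deliberate({"act now": 0.6, "wait": 0.2}))     # a clear favorite tends to resolve quickly
print(deliberate({"act now": 0.31, "wait": 0.30}))   # a near tie usually takes longer: "doubt"
```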
We seem to have gotten along pretty well these past several thousand years working under that delusion. And we certainly would not be able to function well if we took the lack of free will to its absurd conclusion and became completely and utterly passive (actually, that would be impossible without serious mental illness, because, for one thing, we would starve to death). So what is the middle ground? In what way does this knowledge become useful to us? In other words, why should we care enough to expose, discuss, and work against (or work mindfully within) the delusion of free will?
Because we are people, and people are meme machines, and the function of a meme machine is to contribute to the evolution of memes.
It is the individual poet’s (and the visual artist’s and the musician’s and the teacher’s and the politician’s) job to contribute to the evolution of memes. This is why the most exciting art is always the “new”, or the art that puts a new spin on the old. If memes are not evolving in the art, then the art is doing next to nothing. Does that mean the individual work of art must be progressive, must be formally innovative, must experiment? Not necessarily. The individual work of art might put pressure on certain memes through its content rather than its form or structure.
You might ask again: if there is no free will and no decision making, then how does it make sense to say a poet has a job? It makes sense because what a poet thinks of as her forward-looking plan, her manifesto, is a description of her evolution. Alice Fulton has already evolved into a poet. Kasey Mohammad is already a poet. W. S. Merwin and Anne Carson and Brenda Hillman and Peter Streckfus and the thousands of other people who write poems are all products of poetry evolution. Poeting is what they do. They’re just doing what they were memed to do, and in so doing they influence the readers and poets and friends with whom they come into contact. So the importance of being a poet, the importance of being a politician, of being oneself is not revealed in the success of being oneself or in meeting one’s goals; it is revealed in one’s success at building memes in other minds, whether those memes have to do with poetry or plumbing or peaches.
It might seem, then, that we can say that the goal of poets and artists and politicians and teachers should be to change the world. In fact, a real artist or politician or teacher, etc. can’t help but change the world in one way or another, because they are traffickers in memes. They change the world not by intending to, but by being loci of meme traffic, and their brains have already decided for them whether or not they will be an active locus. Their job, then, is to go where their brains lead them. In doing so, they reveal their own evolution to others and influence others by their actions and speech. So, even though Cohen’s belief that he can choose to act or choose not to act on moral/ethical impulse is a delusion (as is that deep-seated belief in all of us), he is an agent in the world and does have influence because he speaks on NPR and in the Times and says things like “don’t act on moral impulse” which became a small part of my moral/ethical architecture alongside “be the change you want to see” and “all people are created equal”, and now it is a small part of your moral/ethical architecture. What are you going to do with it? I don’t know, and neither do you. But if that bit of advice fits in well enough within your moral/ethical architecture, who knows… it might become a poem, or a book, or a blog entry, and in some small way, Randy Cohen would be its co-author.
* Materialist/Determinist approaches to consciousness are not to be mistaken for the kind of Cosmological Determinism that holds a preset and unavoidable future, which would seem to require a more stable, certain, and down-pinnable situation than our quantum-fluctual universe would provide.
Tuesday, August 3, 2010
Monday, July 26, 2010
Are We Not Borg?
This Op/Ed piece by Galen Strawson appeared in the NY Times a few days ago. Strawson is the author of Selves: An Essay in Revisionary Metaphysics, which I have not read yet. The Basic Argument summarized by Strawson here lies at the heart of my question a few weeks back about where poetic content comes from. How much of poetic composition is active (in the sense that the poet is the agent of the content, the Director, if you will) and how much of poetic composition is a play among language, culture, and consciousness?
The argument over free will seems to me very similar to the argument about consciousness (does either exist?). Consciousness clearly seems to be a “real” phenomenon until we take a granular enough look at the brain's processes, where the reality of consciousness seems to fizzle into the firings of neurons. Free will clearly seems to be a real phenomenon until we take a granular enough look at the brain's processes (or even thermodynamics), where, at some point, choice seems to fizzle away into a deterministic barrage of particles too complex for us to track.
Does the poet choose to put words in a certain order or does it only seem to her that she does so? Is she making the decision to construct or to revise in a certain manner, or is she only enjoying the show and taking the credit while her brain does the work?
Part of the problem here is in the question and in the understanding of what a self (or consciousness) is. When we say "I decided to do X", what we often mean, consciously or unconsciously, is that there is an "I" inside my skull who directed my brain to make the decision to do X, or who informed the brain that we are going to do X. This thinking is another manifestation of Cartesian duality, the idea that the self is something other than or in addition to the way the human brain works. We assume, because we feel ourselves thinking and because we seem to be pushing the neurons around, that the self is something other than—and perhaps in control of—the flashing of neurons. Furthermore, we tend to believe, consciously or unconsciously, that if we can’t think of ourselves as being in there pushing around the neurons (because the science indicates that we shouldn’t), then we’ve been robbed of something—namely, our selves. We think that if we are aware of our own thoughts, then there must be someone in there who is the one aware (above and beyond the thoughts).
The experiment with monkeys that William Egginton cites in Sunday’s response to Strawson has been reproduced in various versions with human subjects with the same results—namely, confirmation that there is a delay between the moment when the brain, having processed input, determines to take action and the moment when the subject moves to take that action (how could there not be such a delay?). But even more telling is that there is a delay between the moment the brain determines to take action and the moment the human subject is aware of having made such a decision.
The argument of cognitive determinists is that the “self” is essentially the feedback loop (or set of many feedback loops resulting from the many connections of our nervous system’s multiple inputs to our episodic memory) that associates the present state and actions of the body to that body’s past and to certain patterns constructed in memory that correspond to other people, ideas, places, etc. That feedback loop inhabits a kind of continuous present of the space/time between the brain’s computations and the moment the brain’s command to act is carried out, almost instantaneously—so nearly instantaneously that it seems simultaneous, and thus “caused” by our consciousness.
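(For readers who like to see that ordering laid out, here is a toy sketch in Python. The millisecond values and the little ToyAgent class are entirely my own inventions for illustration; this is not a model of any actual experiment, just a picture of the sequence described above: commitment first, awareness second, action third, and then the feedback loop's retrospective story.)

```python
# A toy timeline for one "decision". All numbers are invented; the point
# is only the ordering: the brain commits, awareness lags, the act follows,
# and the feedback loop files a story in which awareness did the deciding.

from dataclasses import dataclass, field

@dataclass
class Event:
    t_ms: int    # time in milliseconds (made up for illustration)
    label: str

@dataclass
class ToyAgent:
    episodic_memory: list = field(default_factory=list)

    def respond(self, stimulus: str) -> list:
        events = [
            Event(0,   f"stimulus arrives: {stimulus}"),
            Event(150, "brain commits to an action (no awareness yet)"),
            Event(500, "subject becomes aware of 'having decided'"),
            Event(550, "action is carried out"),
        ]
        # The loop's retrospective narrative: awareness and action land so
        # close together that the self records itself as the cause.
        self.episodic_memory.append(f"I decided to respond to {stimulus!r}")
        return events

if __name__ == "__main__":
    agent = ToyAgent()
    for e in agent.respond("a word that wants into the poem"):
        print(f"{e.t_ms:4d} ms  {e.label}")
    print("memory:", agent.episodic_memory)
```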
Seen this way, there really is little difference between saying “I decided to do X” and “my brain determined to do X”; it simply requires a slight shift in the way “I” is conceived.
But back to the poets: The Surrealists were fond of automatic writing as a way of letting the subconscious speak naturally and directly (though “speak” may be the wrong word here). Like many other poets of their time and since, they were interested in getting rid of the influence of the ego in poetry, an interest they shared with widely disparate poets despite the obvious differences between the “raw” automatic writing and games of Surrealism and Dada and the more “cooked” versions of this attempt to avoid the “lyrical interference of the ego” by Olson and to “escape personality” by Eliot, among others.
But who is doing the attempting to escape ego? Again, these poets are trapped in the Cartesian perspective, which circumscribes their efforts to describe their intentions, their poetics; the poets seem to be able only to express their aesthetic/ethical drives as a battle to circumnavigate the self and get to the mind that seems to lie below or alongside the self. Perhaps Breton’s submersion in psychoanalysis, with its architectural conception of selfhood, prevented him from seeing that his real struggle was not to achieve selflessness in order to get to the genuine workings of the mind (he was already there, as are we all), but a struggle for selflessness as a way to achieve empathy, a struggle of which he was quite conscious in his (Communist) politics. Breton believed that he needed to avoid the mythology of selfhood in order to write genuinely, and he even thought he was able to see around the mythology of intention (thus the role of automatic writing and chance operations). But he was not able to see around the mythology of free will (as opposed to “free union”?).
To put it another way: What is an intention (indeed, an ethics, an aesthetic, a politics, or a manifesto) without free will? And what succor is there without free will? The only succor, of course, is The Collective, which is sometimes known as The Universal and whose only access road is empathy and which frequently apotheosizes in the guise of one deity or another.
In Eliot, modern alienation overbears all, and the natural enemy of alienation is empathy. If one escapes from personality, what does one escape into? Tradition (a.k.a. The Collective).
This all, of course, is terrifying to the typical citizen, who can only see political and ethical empathy as a version of militant Fascism or Socialism, as a completely assimilated hive of nameless workers, or as a particular layer of hell where everyone is deprived of facial features and is required to wear a uniform (which they also believe to be required of Socialism).
But, again, back to the poets: If all of the choices are made before we are aware of them, including the choice to continue making choices (editing/revising), then where are “we” in the process of poetic composition? Is there any real difference between automatic writing and extensive revision (as in “The Waste Land”, revised both by Eliot and by Pound)?
Again, the answer requires just a slight shift in the way we look at the “we”—we are those brains making the choices. Poetic composition requires both our active participation and the play among language, culture, and consciousness simultaneously, because a self is the play among language, culture, etc. as filtered through a single brain.
So, yes, there is a difference—and it can be derived from the Basic Argument. A) We cannot be held responsible for being who we are (since “who we are” is determined by forces beyond our control—genetics, geography, diet, culture, etc.). B) Our brains select our words and actions before we become consciously aware of that selection. C) Therefore, our brains are selecting words and actions that are based on “who we are”, and D) when we believe we are making statements of intent we actually are making statements of identity.
When we write our poetics, our aesthetics, our politics, our ethics, we are merely describing our selves. Some of us are raw, some of us are cooked; some spontaneous, some deliberative.
It only requires a slight shift in how we think of “intention”. But do you want to call that "free will"?
Shantih shantih shantih
Monday, July 19, 2010
The Dream in the Mirror
If my recent posts regarding Hofstadter’s theories on selfhood interest you, then you should go see Inception. I won’t go too far into it for fear of spoiling the film for you, but I will say that the film seems to take a lot of recent cognitive theory as a kind of foundation for most everything that happens in the film. Memes, strange loops, the construction of consciousness on the foundation of episodic memory, it’s all there. One might even say that the architecture of mind provides the very setting of the story.
My favorite image from the film is an infinite regression (the film is full of doubling), which calls to mind Hofstadter’s writing on strange loops. When Ellen Page’s character (not so subtly named Ariadne) first begins experimenting with “building” environments inside dreams, she constructs facing mirror walls with herself and DiCaprio’s character standing between the mirrors, creating a series of infinite images of the two of them. Hofstadter uses an image of this kind (a camera pointing at the monitor showing the camera’s feed) as the model for what he calls “strange loops” and a metaphor for consciousness. It’s a stunning image in the film; I wish I could find a still of it to share with you here. There's a very poor-quality pirated clip (shot in a theatre) online, but I won't link to pirated material here.
The image also immediately reminded me of an image I saw in another film last week—Escape from the Planet of the Apes, of which I caught a few lucky minutes while flipping channels. In one scene, there is an interview where an expert (Dr. Otto Hasslein) is asked to explain how time-travelling apes could be possible. [Okay, I’ve stopped laughing now and can return to typing.]
I wasn't able to find a clip of this scene, but here is a link to the screenplay. Read shots 45-51. The explanation even uses the term “infinite regression.”
The explanation provided is ludicrous as a response to the question of time travel, but it is actually a very good way of imagining Hofstadter’s idea of the strange loop as the source of consciousness. The landscape in Hasslein’s metaphor gets represented by an input/output mechanism (could be a person, a robot, but here, let’s call it an “artist”) and the very attempt of the mechanism to “put itself in the picture” (that is, to see itself) creates the infinite regression, or strange loop, that is consciousness. What would it take to make a robot capable of making that leap, of asking that question?
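(Just to make the regression concrete, and strictly as my own illustration rather than anything from Hofstadter or the screenplay, here are a few lines of Python in which a depicter is asked to include itself in its own depiction. The function name and the depth cutoff are inventions of mine; a real infinite regression would never return.)

```python
# A depicter renders a landscape. Asked to put itself in the picture, it
# must depict itself depicting itself, and so on. True regression never
# bottoms out, so we stop at an arbitrary depth just to print something.

def depict(landscape: str, include_self: bool, depth: int = 0, max_depth: int = 4) -> str:
    if not include_self:
        return landscape
    if depth >= max_depth:
        return f"[{landscape} + an artist too small to paint]"
    inner = depict(landscape, include_self, depth + 1, max_depth)
    return f"[{landscape} + an artist depicting {inner}]"

print(depict("a valley", include_self=False))   # just the landscape
print(depict("a valley", include_self=True))    # the strange-loop version
```

The second print statement is the one that matters here: the "self" only shows up once the depicter turns around and tries to include its own act of depicting.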
This is all related to Gian Lombardo’s comment here a few weeks ago regarding Uncertainty and the inability of an observer both to be part of the system and to measure the whole system. But the argument made by Hofstadter and others is that we don’t need to measure down to the fine grain; we only need to see the effects of the fine-grain mechanics (in fact, that’s what consciousness is, what thought is). The thought (the memes, the software) pushes the fine-grain mechanics around. Thinkodynamics.
But back to that image of the facing mirrors. This image, like many images in Inception (the Penrose stairs, the sudden intrusions of memory into the present, etc.) is designed to establish and reinforce a basic assumption about selfhood in the film, and, of course, to refer us back to the film itself as a mirror of experience and a metaphor for our episodic consciousness. Where among all of the episodes is the self? Is it the conglomeration of the episodes? Or is consciousness the act of gazing back through the episodes and seeing the gaze reflected back? An infinitely regressing image of a wrench is just an image; an infinitely regressing image of an infinitely regressing image that wonders which image is the "real" one is a consciousness.
Naturally, I want to extend these questions, then, in regard to poets. Some poets seem to be interested only in the landscape. Many poets seem to be interested only in the collecting consciousness (or just the self without a bit of interest in the collecting, even). The poets that interest me most tend to be those who are trying to figure out how the collecting consciousness fits into, is a part of, and relates to the "landscape". Stevens, Williams, Ammons, Charles Wright, Olson, later Brenda Hillman, early Jorie Graham, Gary Young, Cendrars, Charles Simic in Dimestore Alchemy, Jack Spicer, John Yau leap to mind.
Monday, July 12, 2010
Creepy Simulacrum...
... or approaching AI? Or there already?
“Even if I appear clueless, perhaps I’m not. You can see through the strange shadow self, my future self. The self in the future where I’m truly awakened. And so in a sense this robot, me, I am just a portal.”
I would like to know whether this was a very well-written canned response or whether Bina48 came up with this "herself". If she came up with it herself, then is understanding the possibility of having a self the same thing as actually having one? It seems likely, though, that it's language Bina48 has simply borrowed from the original Bina--text from the original Bina's recorded conversations that Bina48's software selected, via some algorithm, as a likely appropriate response to the interviewer's question.
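(If I had to guess at the shape of "some algorithm", it might be nothing fancier than retrieval by overlap: compare the interviewer's question to every recorded utterance and return the closest match. Here is a rough, entirely hypothetical Python sketch; the "transcript" lines are invented, and the real Bina48 software surely does something more elaborate.)

```python
# Hypothetical retrieval sketch: pick the recorded utterance that shares
# the most words with the interviewer's question. The "transcript" below
# is invented; this is a guess at the general technique, not Bina48's code.

import string

def tokenize(text: str) -> set:
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def most_likely_response(question: str, transcript: list) -> str:
    q_words = tokenize(question)
    return max(transcript, key=lambda utterance: len(q_words & tokenize(utterance)))

recorded_utterances = [
    "I think of my future self as truly awakened.",
    "The garden needs water in the morning.",
    "Sometimes I feel like a portal to another version of me.",
]

print(most_likely_response("Do you think of yourself as having a self?", recorded_utterances))
```

A program like this can produce eerily apt sentences without anything we would want to call understanding, which is exactly why it is so hard to settle, from the outside, whether Bina48 "came up with it herself".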
Thursday, July 1, 2010
Environmentalist Laureate
W. S. Merwin is the new U. S. Poet Laureate: http://www.loc.gov/today/pr/2010/10-157.html
Tuesday, June 29, 2010
Self a(nd)(s) Genre
It occurs to me that the statement in the last post "there would be no self without other selves to constitute it" is an echo of Todorov on genre (all genre comes from other genre). Self and genre are both complex collections, complex patterns. "Self" and "genre" also are both signifiers used to refer to things that don't really exist as unitary, self-enclosed entities.
In the Wind
A few days ago, I attended an ash-scattering ceremony for a recently departed friend, Jack Myers. Several of us gathered there took turns saying a few words, then sprinkled some of his ashes into the wind at the end of a stone quay in Winthrop Harbor, almost within shouting distance of Deer Island, where many American Indians were interned (and interred) during and after King Philip’s War and where many refugees from the Potato Famine were processed (many died).
During dinner before the ceremony, Jack’s wife, Thea, spoke of not being able to think of being in certain situations, with certain people, in certain places without him. I was reminded not only of Hofstadter’s chapter about his wife, but also of a recent incident at home when I used a phrase that my departed friend used frequently—not only did I use his phrase, I said it the way he would have said it, and in my mind I heard him say it and saw him saying it. I mentioned this episode and Hofstadter’s theory of selves as patterns repeatable in multiple brains, and another old friend of Jack said, “My mother is definitely still in my brain, and she won’t get the hell out!”
It’s easy to dismiss that kind of relationship as simple influence, but it’s more complex than mere influence. These patterns are not just memories—they are active. They are agents in our personalities and partially constitutive of our behavior.
All of this adds a new level of depth (at least for me) in thinking about persona poems (taking on the voice of someone else in the poem) and gives me a new appreciation for a poet like Ai, whose persona poems are among the most vivid of any I know. Does this mean that Ai (also recently departed) was particularly adept at taking on the vision of others—that is, that she was a particularly gifted empath? Or might it mean that she simply was able to let the multiple aspects of her own “self” speak? Is there a difference? Is there a difference between writing a persona poem in the voice of another person and attempting to write from an impersonal (or a-personal) position? Is that possible?
I’ve long had problems with the idea of “voice” in poetry, largely because one of the most useless axioms of creative writing instruction is “you must find your own voice”, as though we all have only one true voice and the job of the poet is to find it and cling to it like a hidden treasure. Wouldn’t it be more useful to say that the poet’s job is to become attuned to multiple voices, to allow one’s attention to voices to change, to modulate? Jack was a poet, and he did just that. By the mid-80s, he had a distinctive, ironic but sincere, tragicomic voice that Seamus Heaney called “wise in its pretense of just fooling around.” He was a post-confessional poet and a link in the line of conversational poets between Richard Hugo and Marvin Bell at one end and Billy Collins, Bob Hicok, and Tony Hoagland at the other. He could very easily have clung to that very successful voice and written the same kind of poem for the next 30 years. But he didn’t—he knew that he had too much to learn about himself and about the world, too much to miss by not exploring new ways in which poems can get said.
I’m sure Jack thought of the voice in all of his poems as being identifiably and distinctively Jack, himself. But that kernel of self is merely an illusion—for Jack, for example, it was made up of his childhood in Winthrop, his children, his love for the ocean, lobstering, his jobs as a house painter and mailman, Jungian psychology, Zen, his teaching, his teachers, all of the poets he read and loved, etc. All of these agents had a direct bearing on his “voice” in his poems; in what way is it useful to think of that multitude of things as a single thing, a unitary voice? Isn't voice, as Hofstadter might say, just a pattern? Or, to look at it another way, isn't all voice persona? Either the poet speaking as someone (or something) else (or as nothing), or the world speaking as the poet?
This line of thought also brings to mind Bloom’s Anxiety of Influence argument, in which the poet is in agonistic competition with his/her significant predecessors (the father). Bloom’s argument is attractive in that it acknowledges the inherent difficulty in the attempt to establish voice in a poem and acknowledges that voice comes out of other voices, but one must realize that one can never get there. There is never a point where one can say “I have established an original and unitary voice free of the influence of my predecessors.” One can never slay the father, or Jung, or the lobsters (whether of the arsenic or organic variety), or Neruda. It’s impossible; there would be no self without other selves to constitute it.
I think Jack would essentially agree with the argument that the self is made up of many components, but I think he would in the end take issue with the idea that there is no central self. He was a student, after all, of Zen. But I also think he would point to the fact that one can readily recognize a Myers poem, a Sexton poem, a Ginsberg poem as evidence that there is something essentially “I” in there. He would also argue that one would be unable to write meaningful poems if it weren’t for some gathering force and if it weren’t for a central set of wants, hopes, regrets. Many times I heard him rail against “postmodernist relativism” and “deconstructionist mumbojumbo” in favor of the lyrical self, regardless of whether or not it is an illusion. And here he is doing it again.