The Internet and How It Affects Our Ability to Develop, Maintain, and Manipulate Mental Containers (part II)
Welcome to part II of this two-part blog series. In part I, I spent considerable time talking about what mental containers are and how they are used. In this post I’d like to look at how our mental container skills get set up in the first place. I’m assuming you have read part I and have a sense of what a mental container is.
Before I get to Fonagy et al.’s Mentalization theory (as promised in part I), I’d like to offer up a musing. In thinking about how best to start part II, it occurred to me that Freud was ahead of his time and, probably without knowing it, used mental containers. I would suggest that the id, ego, and superego are all mental containers. Now, I know what you’re thinking: “The id is really about the body and its various motivations and drives. How can there be a ‘body’ mental container?” Darn good question. We can find the answer in the work of neurobiologist Antonio Damasio. In his 2010 book entitled Self Comes to Mind: Constructing the Conscious Brain, Damasio argues that the body creates images or maps. He further argues that consciousness is centrally about bringing these body images to mind. So, yes, many neurobiologists (Christopher D. Frith would be an example here) now believe that the mind harbors a map of the body. Damasio calls these “bodies in the mind.” [1]
So, Freud may not have focused on images coming from the body or on mental containers, but when I think of these concepts, I think of Carl Jung and his focus on archetypal images. Archetypes could be looked at as mental containers that are shared by society. And isn’t Freud’s superego a mental container for society? Ego strength, then, could be looked at as having the cognitive strength to create, maintain, and manipulate mental containers. And, yes, the ego mental container has a special job because it has to bridge or map between the map of the body (id) and the map of society (superego). Without ego strength there simply would be no way to keep these various mental containers straight. Pretty cool, eh? Freud introduces us to the ego and its strength (or lack thereof). Damasio introduces us to the body in the mind. And Jung introduces us to the society in the mind. Hopefully you can see how important mental container skills are. So, let’s see how they are set up. For this we need to look at Mentalization theory as put forward by Peter Fonagy and his colleagues.
We Interrupt This Post to Bring You an Important Public Service Announcement
I feel compelled to interrupt this blog post to bring you an important message. There are times when I’m reading a book and I get frustrated. Why? Well, because authors insist on making arguments at the level of intervention. Simply, such arguments will not work. As I have mentioned many times before, to make an effective argument one must keep the Midgley continuum in mind. Here’s the Midgley continuum (from the work of Gerald Midgley, noted systems thinker):
worldview <==> ideology <==> methodology <==> intervention
Allow me to give you an example. In his 2014 book entitled The Zero Marginal Cost Society—The Internet of Things, The Collaborative Commons, and The Eclipse of Capitalism, economist Jeremy Rifkin fully embraces MOOCs, or massive open online courses. MOOCs are courses that are delivered online to students all over the world for little to no cost. In contrast, MIT psychology researcher Sherry Turkle, writing in her 2015 book entitled Reclaiming Conversation: The Power of Talk in a Digital Age, takes a decidedly reserved position concerning MOOCs. “The most powerful learning takes place in relationship,” writes Turkle. She continues, “What kind of relationship can you form with a professor who is lecturing in the little square on the screen that is the MOOC delivery system?” Turkle suggests that promoting MOOCs is misguided. Well, it’s not. Promoting MOOCs is perfectly guided: it’s guided by a particular worldview.
Here’s the problem: both Rifkin and Turkle argue at the level of intervention, in this case MOOCs. But let’s flesh out the Midgley continuum for each position on MOOCs:
Rifkin’s Midgley Continuum:
Transhumanism <==> People as Machine Entities <==> Machine Relating <==> MOOC Learning
Turkle’s Midgley Continuum:
Humanism <==> People as Biological Entities <==> Human Relating <==> Classroom Learning
So, Rifkin and Turkle disagree over the interventions of MOOCs versus classroom learning. But what they really disagree over is a question of differing worldviews: transhumanism versus humanism. If you believe in transhumanism, or people as machine entities, then you will advocate for MOOCs, as you should. If you believe in humanism, or people as biological entities, then you will advocate for classroom learning, again, as you should. Neither is wrong. Simply, we have two different worldviews. What is misguided is to not argue from the level of worldviews; arguing at the level of interventions is not persuasive. In defense of Rifkin and Turkle, authors in general tend to stay away from discussions of worldviews because, I suspect, it is simply too painful to admit how far along the path toward being posthuman we have traveled. It’s hard to admit how much humanity we have given up through our embrace of digital technologies. To make this loss concrete is to own it and to be accountable.
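For readers who like to see this laid out explicitly, here is a minimal sketch in Python. It is entirely my own illustration, not Midgley’s notation or anything from Rifkin or Turkle: it simply lines the two continua up level by level to show that the split appears at the worldview link and propagates down to the intervention.

```python
from dataclasses import dataclass

# A toy model of the Midgley continuum (my own illustrative framing):
# each author's position is a chain running from worldview down to intervention.

@dataclass
class Continuum:
    worldview: str
    ideology: str
    methodology: str
    intervention: str

rifkin = Continuum("transhumanism", "people as machine entities",
                   "machine relating", "MOOC learning")
turkle = Continuum("humanism", "people as biological entities",
                   "human relating", "classroom learning")

# Compare the two positions level by level.
for level in ("worldview", "ideology", "methodology", "intervention"):
    a, b = getattr(rifkin, level), getattr(turkle, level)
    status = "agree" if a == b else "diverge"
    print(f"{level:>12}: {status}  ({a} vs. {b})")

# Every level diverges, but the divergence originates at the worldview link;
# arguing only about the intervention (MOOCs vs. classrooms) debates a
# downstream symptom.
```

Nothing deep is happening in the code; it just makes the level-by-level comparison explicit.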
Cognitive scientist George Lakoff regularly tells us that we think using frames, not facts. I get frustrated because authors argue ad nauseam using facts about interventions like MOOCs. In a moment I’ll take up the question “Is the Internet minded?” and argue, from a humanist worldview, that it is not, that it does not have a mind. If you do not believe in humanism, then no amount of arguing will persuade you. Ergo, authors should begin by simply stating what worldview they are using. Once you know what worldview is being espoused, then such things as ideology, methodology, and interventions necessarily follow from that worldview. So, Rifkin’s support of MOOCs is not misguided; it is perfectly guided by the transhuman worldview. In a transhuman world where biological entities have given way to machine entities, there will be no need for classroom learning and the human relationships it encourages. Comments by both Rifkin and Turkle make perfect sense and are perfectly guided once you know their respective worldviews. In my opinion, an author’s worldview should be stated on page one.
As a side note to this discussion, digital architects did not set out with the goal of turning people into machines through Internet use. Transhumanism was an unintended consequence, a side effect. Likewise, feminists did not set out with the goal of turning children into machines by systematically separating mother from child. This too was an unintended consequence, a side effect. However, these two side effects have now joined up and are, in my opinion, the main effect. Digital technology has taken up the position of surrogate parent (as discussed in Turkle’s work). And, yes, transhumanists would say that this is a good thing, that this is a desirable outcome. Humanists argue against this position. If a parent were to come up to me and say, “I’m a transhumanist and I wish for my children to become machines by all possible interventions, like MOOCs,” then I’d say, “Great. I don’t agree with your worldview but you’re entitled to it. But don’t expect me to support transhumanist interventions like MOOCs.”
We Return You to Our Regularly Scheduled Blog Post
So, how do mental containers—key to such things as empathy, humor, Theory of Mind, mental time travel, and other cognitive skills—get set up? Psychology researcher Peter Fonagy and his colleagues present one possible answer in their 2002 book entitled Affect Regulation, Mentalization, and the Development of the Self (mentioned in part I). According to Fonagy et al., to understand the genesis of mental containers we have to understand a bit about what they call contingency. Let me define contingency with an example. If you raise your arm, the action of raising your arm is perfectly contingent. In other words, the action is perfectly contingent on your desire to raise your arm. Fonagy et al. talk about how many stimuli coming from the world arrive at our sensory apparatus in nearly perfectly contingent forms. Using the work of John S. Watson as a backdrop, Fonagy et al. reveal that infants are born with an innate system that detects contingency. “Watson … proposed that one of the primary functions of the contingency-detection mechanism [in infants] is self-detection,” write Fonagy et al. They continue thus:
One’s motor actions produce stimuli that are necessarily perfectly response-contingent (e.g., watching one’s hands as one is moving them), while the perception of stimuli emanating from the external world typically shows a lesser degree of response-contingency. [The degree of contingency] between efferent (motor) activation patterns and consequently perceived stimuli may serve as the original criterion for distinguishing the self from the external world.
In essence, Fonagy et al. are suggesting the following: actions initiated by the self are perfectly contingent, whereas actions perceived to be coming from the external world are at best nearly perfectly contingent. The subtle difference between perfectly contingent and nearly perfectly contingent perceptions is what helps to define the self boundary or self mental container. Pretty cool, eh? And, yes, infants have an innate system that is able to detect contingency states. Building on Watson’s work, Fonagy et al. argue that the mother helps set up her infant’s self mental container by engaging in what they call mentalization (MZ). MZ is a “mind-in-mind” operation. Psychiatrist Dan Siegel calls mind-in-mind operations “I know that you know that I know.” So, a mother will engage in “I know that you know that I know” with her infant. Let me see if I can draw a flow chart for you.
Baby signals his/her mother perfectly contingently (through a vocalization or facial expression):
Baby ===perfectly contingent===> Mother
Once the mother receives a signal from her infant, she has to create a mental container to hold the baby’s mind. She also has to have a coherent mental container to hold her own self. Now, here’s where the magic happens. The mother must now respond to her infant with a new, blended container [2]—part baby, part mother. In addition, the blend must be made in such a way that the “baby part” is perfectly contingent while the “mother part” is nearly perfectly contingent. Infant researchers, such as Daniel Stern, call this attunement—attunement between mother and infant. So, diagrammatically, here’s what happens when the mother signals back with a vocalization or facial expression:
Baby <===mental container blend=== Mother
And, again, the mental container blend will have a portion that is “perfectly contingent baby” (which came from the baby initially) and a portion that is “nearly perfectly contingent mother” (which is the mother’s attuned response back to the infant). The baby’s contingency detection system “decodes” the blend and says something like this: “Cool, I just received myself back from my mother, which helps me set up my self container. And I received something back that feels like a reflection but isn’t exactly me, so I’m assuming that this part is my mother. And because I can easily decode self from not-self, I get the impression that another mind has engaged in attuned reflection or blending.” In this way, the baby receives his or her first sensation of “I know that you know that I know.” When a mother engages in baby talk or motherese with her infant, this is the language of mentalization. It is this language that goes a long way toward setting up mental containers within the mind of the infant. [3] Over time, the mother continues to act as the cognitive scaffolding critically needed for the development of robust EF or executive function skills in the child. As a reminder, EF skills include empathy, mental time travel, delaying gratification, appropriately focusing attention, appropriately shifting attention, and perspective taking.
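If it helps to make the contingency idea concrete, here is a toy numerical sketch in Python. To be clear, this is my own illustration and not Watson’s or Fonagy et al.’s actual model; the signal representation, the scoring rule, and the thresholds are all assumptions chosen purely to show the “perfect versus nearly perfect versus weak” distinction at work.

```python
import random

random.seed(0)  # make the toy example repeatable

def contingency(action, perceived):
    """Crude contingency score: 1.0 means the perceived stimulus tracks
    the infant's own action exactly; lower scores mean weaker tracking."""
    error = sum(abs(a - p) for a, p in zip(action, perceived)) / len(action)
    return max(0.0, 1.0 - error)

def classify(score, perfect=0.99, near=0.85):
    # Thresholds are arbitrary illustration, not values from the literature.
    if score >= perfect:
        return "self (perfectly contingent)"
    if score >= near:
        return "attuned other (nearly perfectly contingent)"
    return "external world (weakly contingent)"

baby_action = [random.random() for _ in range(50)]    # e.g., a vocalization or gesture
proprioception = list(baby_action)                    # watching one's own hand move
mothers_reply = [a + random.uniform(-0.1, 0.1)        # attuned but imperfect echo
                 for a in baby_action]
street_noise = [random.random() for _ in range(50)]   # stimulus unrelated to the baby

for name, stimulus in [("proprioception", proprioception),
                       ("mother's reply", mothers_reply),
                       ("street noise", street_noise)]:
    score = contingency(baby_action, stimulus)
    print(f"{name:>15}: {score:.2f} -> {classify(score)}")
```

The only point of the sketch is the contrast: the baby’s own proprioception scores as perfectly contingent, the mother’s attuned reply scores as nearly perfectly contingent, and unrelated stimuli score much lower. Per Fonagy et al., that gradient is exactly what the infant’s innate detector exploits to carve out a self boundary.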
One reason eye contact is so intimate and makes us feel so vulnerable is that through eye contact we experience the most intense forms of “I know that you know that I know” or mentalization. It is also through eye contact that we come to see whether someone has our mind in mind or not. Mind-in-mind operations can be detected by reading the “perfectly contingent—nearly perfectly contingent” communications coming from someone else’s body language. When we take (or do not take) another person’s mind into our own, that “taking in” (or not taking in) can be read unconsciously through the body. As talked about in part I, persons on the autism spectrum have a hard time with eye contact, in all likelihood because they have a hard time with mind-in-mind operations or reading the mind as revealed through the body. As the Internet and social media do away with the need for eye contact, we should know what we are giving up. As Turkle puts it, “A lack of eye contact is associated with depression, isolation, and the development of antisocial traits such as exhibiting callousness. And the more we develop these psychological problems, the more we shy away from eye contact.”
So, to return to my earlier question: Is the Internet minded? The answer is no. The Internet cannot engage in mind-in-mind operations. It cannot help us become minded or develop mental container skills. As a result, the Internet cannot help us become empathetic. For the moment, we need other people, other people’s minds, to help us become empathetic. Now, it is possible that Internet architects will eventually come up with some way of making machines minded. But for the moment, the Internet has no drives, no biology, [4] no body. It has no body from which images can arise. It has no need for a self or to help others develop a self. It certainly has no need for humor. Sure, it can create humor, but it has no need for humor, for the delight of humor—the delight of taking the mind of another into its own mind. It has no need to be attuned to the mind of another. Frankly, it has no need at all. If there is such a need, it usually takes the form of us humans projecting need onto machines, as we did when MIT AI (artificial intelligence) researcher Joseph Weizenbaum created the ELIZA program back in the 1960s, a program that created the illusion of reflective listening. And that’s the key here.
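For a sense of just how thin that illusion can be, here is a minimal ELIZA-style toy in Python. This is not Weizenbaum’s actual program; it is just a sketch of the pattern-match-and-reflect trick that made people feel listened to: the “therapist” does nothing but hand your own words back to you with the pronouns swapped.

```python
import re

# A few reflection rules in the spirit of ELIZA's "DOCTOR" script.
# Pure pattern matching and pronoun swapping: there is no mind here, only a mirror.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i need (.+)", re.I), "What would it mean to you if you got {0}?"),
    (re.compile(r"(.+)", re.I), "Please tell me more about that."),
]

def reflect(fragment):
    """Swap first- and second-person words so the echo sounds like a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement):
    cleaned = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

The responses feel contingent on what you said because, in a literal sense, they are nothing but what you said.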
AI researchers are masters at creating the illusion that the Internet is minded, has a mind, and is interested in our minds. At best, the Internet can give us perfectly contingent responses. As Eli Pariser points out in his 2011 book The Filter Bubble: What the Internet Is Hiding from You, the Internet is a master at introducing you to you in perfectly contingent ways. Without the contrast between “perfectly contingent” and “nearly perfectly contingent,” it will be hard for us to engage in the self–other operations that empathy and mind-in-mind operations require. Maybe over time our innate contingency detection system will wither and die from lack of use. And you know, transhumanists would probably be OK with that. I’ll stick with the humanists: we need other biologically embodied minds to know our own mind. In this way, we will know that we are minded by others. Just remember, you’ll never see your eye twinkling in the eye of mother Internet. If you do, it’s an illusion, or, to use Freud’s concept, a projection of an unmet need deep within. I guess you could say that Internet architects are masters at accessing and manipulating unmet needs deep within ourselves. But then again, isn’t that true of all propaganda? As Turkle puts it, “Our life has been ‘mined’ for clues to our desires. But when our screens suggest our desires back to us, they often seem like broken mirrors.” That is a great way to frame the contingent relationships that the Internet forms with us: broken mirrors.
Personal Note: I’ll be taking a break until after the holidays. Have a great Holiday Season and I’ll see you in the New Year.
Notes:
[1] Back in 1987, Mark Johnson, mentioned in part I, wrote a book entitled The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. So, this idea that the mind harbors a map or model of the body has been around for almost 30 years.
[2] See part I for more on blending mental containers.
[3] Over the years we have supported attachment-related research conducted by Dr. Karlen Lyons-Ruth and her colleagues. One of the papers coming out of these efforts is entitled Disorganized Attachment in Infancy Predicts Greater Amygdala Volume in Adulthood (Behavioural Brain Research 308 (2016) 83–93). The authors describe mothers who display low maternal responsiveness (LMR). Behaviors associated with LMR include (quoting the article): “failure to greet [the infant], backing away from the infant, interacting from a distance, [and] interacting silently.” The authors point out that LMR is also associated with contradictory communications from the mother to the infant such as “failure to respond appropriately to clear infant cues; mixed communications such as sweet voice but negative message.” It is these types of actions and miscommunications that make it difficult for the infant to properly set up mental containers. Carried to an extreme, Fonagy et al. suggest that an infant could even set up a mental container that contains an “alien self.” This is not unlike one of Jung’s “dark side” archetypes. I would suggest that communications with the Internet have a lot in common with communications coming from mothers displaying LMR. If this hypothesis is true, then it may help to explain why online bullying takes such a huge toll on the developing brains (and lives) of kids and teens.
[4] Apparently digital architects are playing with the idea of using DNA to store digital information. This approach is known as “wetware”—using biological materials as part of a digital structure or process. It remains to be seen whether using increased levels of wetware will make machines more biologically embodied. Who knows, maybe wetware will ultimately lead to such things as drives and desires in machines. This is a topic that science fiction writer Philip K. Dick explores in his 1968 novel Do Androids Dream of Electric Sheep? (which was adapted into the 1982 movie Blade Runner).