Before I start this blog post, allow me to do some housekeeping. After my last post, entitled John Bowlby and the Glass Cage of Automation, a reader asked if I could provide references on the rise of posthuman or postbiology thought. On the rise of posthuman thought I would direct the reader to Francis Fukuyama’s 2002 book entitled Our Posthuman Future: Consequences of the Biotechnology Revolution. On postbiology thought I’ll mention two books: the first is Steven Pinker’s 2002 book entitled The Blank Slate: The Modern Denial of Human Nature; the second is James Barrat’s 2013 book entitled Our Final Invention: Artificial Intelligence and the End of the Human Era.
I’ve mentioned the above books in earlier blog posts. The theme that ties them together is simple: the belief that innate behavioral systems (such as attachment, caregiving (receiving), and sex) in specific, and biology in general, can and should be transcended. This is not a new idea. As Barrat points out in Our Final Invention, the current embrace of the Singularity—the idea that human minds will merge with computer minds in the near future—shares much in common with the Biblical idea of End Times. It would appear that people have expressed a desire to be posthuman or postbiology since the beginning of recorded history. Today this desire is captured by such movements as AI (artificial intelligence), End Times, and even behaviorism (à la the continuum of Pavlov, John Watson, and B. F. Skinner), which has led to the widespread use of behavioral drugs. As Pinker points out in The Blank Slate, behaviorism attempts to deny the existence of biologically mediated innate behavioral systems (again, attachment, caregiving (receiving), and sex) while promulgating the idea that all motivation can and should be “programmed” through some process of conditioning (especially social conditioning).
Now, as I was writing the above I ran across an appropriate article: Amoebas Are Still More Intelligent Than Our Most Powerful Computers by Mark Strauss. Strauss starts his article thus:
If you’re one of those people who believe the singularity is imminent, you might want to pack a lunch. Our machines just aren’t that smart, says Alva Noë, a philosopher at the University of California at Berkeley. What we call artificial intelligence is actually best described as pseudo-intelligence.
What caught my attention was this quote by Strauss (which apparently came from a Bloomberg Businessweek article):
It’s striking that even the simplest forms of life — the amoeba, for example — exhibit an intelligence, an autonomy, an originality, that far outstrips even the most powerful computers. A single cell has a life story; it turns the medium in which it finds itself into an environment and it organizes that environment into a place of value. It seeks nourishment. It makes itself — and in making itself it introduces meaning into the universe.
See, even an amoeba has innate behavioral systems, that is to say, it seeks nourishment, it seeks to organize, it places value. The question becomes: Can we artificially create motivational systems? Behaviorists and believers in the Singularity will answer: Yes we can and should! But what will these artificial behavioral systems look like? What kind of motivational systems will robots have, I wonder? Barrat convincingly argues that once robots acquire an autonomous behavioral system (or systems), they will be “naturally” motivated to devalue and eliminate all biologically based lifeforms (which would include us). While End Timers believe that only the chosen ones will exist as spiritual beings housed in heaven once all material organic life has been eradicated, Singularity fanatics believe that the chosen ones will exist as spiritual beings housed in machines. Regardless, we are living in a time when believers in biologically mediated behavioral systems such as attachment, caregiving (receiving), and sex are having to do battle with those who believe in mechanically mediated behavioral systems such as those that guide missiles, make Amazon.com recommendations, and allow us to believe that Apple’s Siri is a real person who cares about us.
So, I hope the above references will get you started as you study the areas of posthumanism and postbiology. Allow me to throw in a bonus reference that I also found helpful: Finn Bowring’s 2003 book entitled Science, Seeds, and Cyborgs: Biotechnology and the Appropriation of Life. In the rest of this post I’ll bring a few of Nicholas Carr’s observations—delivered in The Glass Cage—back to Bowlbian attachment theory. Let’s get started.
Early on Carr makes the following observation:
Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate [my emphasis] our ends without replicating our [biologically mediated] means.
Suffice it to say that many followers of John Bowlby have now moved into such areas as mentalization (i.e., Peter Fonagy and colleagues) and mindfulness (i.e., Dan Siegel) because on some level they see early attachment relationships as the foundation upon which being a minded or mindful person rests. As I have blogged about before, mindedness and mindfulness are higher-order cognitive abilities that are held by the larger rubric known as EF or executive function. Other EF skills would include mental time travel, planning, focusing attention, shifting attention, perspective taking, and the like. Effectively, Carr cautions us not to mistake an act of apparent mindfulness (such as Siri knowing what food we like to eat) for true cognition, for true executive functioning. As the mythologist Joseph Campbell used to say (and I paraphrase): “Don’t eat the menu for the food that it represents.” In other words, artificial forms of mindfulness represent humans wishing to create mindfulness, not mindfulness itself. As Strauss makes clear in the aforementioned article, humans at IBM used the supercomputer Watson to play the game show Jeopardy; Watson did not play Jeopardy itself. Do not take Watson’s playing for actual playing: Watson’s playing represents the play of humans, not machines. From a Bowlbian perspective, do not take attachment to machines for actual attachment: machine attachment represents the attachment patterns of humans, which, I would argue, represent a form of distancing attachment, a wish to distance oneself from biological motivations. This is a topic that Sherry Turkle takes up in her 2011 book entitled Alone Together: Why We Expect More from Technology and Less from Each Other.
Carr makes a very important observation concerning the rise of automation during WWII. As I mentioned in my previous post, John Bowlby witnessed the birth of modern automation. Bowlby even talked about the rise of guided missile systems in his trilogy on attachment. Bowlby saw systems engineers putting purpose into their creations, so he thought that scientific discussions centered on biological purpose would now be permissible. Sadly, this did not turn out to be the case. Today we revere search engine systems such as Google and Amazon while denying biological systems by feeding our kids massive amounts of behavioral drugs. Carr also talks about the rise of guided missile systems during WWII. Here’s how Carr summarizes that discussion:
Sensory organs, a calculating brain, a stream of messages to control physical movements, and a feedback loop for learning: there you have the essence of automation, the essence of a robot. And there, too, you have the essence of a living being’s nervous system. The resemblance is no coincidence. In order to replace a human, an automated system first has to replicate a human, or at least some aspect of a human’s ability.
What Bowlby saw firsthand (and Carr is pointing to above) are attempts on the part of systems engineers to replicate some aspects of biologically mediated innate behavioral systems. Starting during WWII, these behavioral system replications formed the core of such machine systems as antiaircraft guns and guided missile systems. As Carr puts it (quoting Norbert Wiener, arguably the father of cybernetics), “The new [WWII] technologies, while designed with weaponry in mind, gave rise to ‘a general policy for the construction of automatic mechanisms of the most varied type.’ ” While Ludwig von Bertalanffy wrote about General Biological Systems Theory back in the 1950s, Wiener was writing about what amounted to General Mechanical Systems Theory. Today we revile the former and revere the latter.
One last example. Carr spends a lot of time discussing the topic of flying. Carr suggests that extensive use of automation (in the form of automatic pilot systems) could potentially degrade the flying skills of pilots. He points to several well-publicized airline crashes as evidence. Carr goes on to suggest that at the heart of degraded flying skills are degraded mental models.
Well, Bowlby spent a lot of time looking at mental models and how they are formed. Bowlby used the term Inner Working Models. Bowlby put forth the idea that early safe and secure attachment relationships (if all goes well) form the foundation upon which open and flexible Inner Working Models rest. Suffice it to say that open and flexible Inner Working Models are part and parcel of Executive Function. Bowlby championed the idea that we need open and flexible Inner Working Models in order to successfully navigate the world of social relationships.
Carr writes: “As [the pilot’s flying] experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings.” The same is true of attachment relationships. Allow me to paraphrase Carr using Bowlbian attachment theory: “As [the child’s attachment] experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his social surroundings.” These mental models, if they are robust, give rise to expectation fields. Expectation fields are a key component of the EF skill of mental time travel: being able to see into the future and plan for certain expectancies. Carr continues: “The models enable [the pilot or even the child] to interpret and react to stimuli intuitively, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying [and navigating the social terrain] becomes second nature.”
To sum up, as automation in such areas as flying and parenting continues to degrade our mental models, flying and being a social being will no longer feel natural, will no longer feel like second nature. As we continue to create mechanical behavioral systems while denying biological behavioral systems, we should expect to see such things as airline crashes due to an overuse of automated systems. Sadly, as we continue to turn to myriad forms of parent substitutes, we should expect to see more and more of our kids crash and burn. I would point to the many school shootings of late as evidence. For more on this topic see Mary Eberstadt’s 2004 book entitled Home-Alone America: The Hidden Toll of Day Care, Behavioral Drugs, and Other Parent Substitutes.