What’s that old saying? “Be careful what you wish for.”
In my last post, I took a look at Robert Sapolsky’s 2017 book entitled Behave: The Biology of Humans at Our Best and Worst. I concluded my post by stating: “Sapolsky never really mentions the digital age. To say the least, the digital age has the potential to dramatically disrupt behavior.” I did mention that Yuval Harari, in his 2017 book Homo Deus: A Brief History of Tomorrow, does look at how humans fare in the future, and I pointed my readers to Harari’s book.
Well, suffice it to say that MIT physicist Max Tegmark takes the topic of “humans in the future” to a whole new level in his 2017 book entitled Life 3.0: Being Human in the Age of Artificial Intelligence. I should have been more careful about what I wished for.
As I read Tegmark’s book, wave after wave of ambivalence washed over my psyche. On some levels, Tegmark’s book was very familiar to me. As an example, Tegmark talks about how he (with his wife’s help) organized a series of conferences designed to bring together a number of great thinkers to look at the issue of artificial intelligence, or AI.
Well, there was a conference series just like this back in the 1940s and 50s. Wikipedia reveals: “The Macy Conferences were a set of meetings of scholars from various disciplines held in New York under the direction of Frank Fremont-Smith at the Josiah Macy, Jr. Foundation starting in 1941 and ending in 1960.” This is not the full story. A series of four conferences, also under the direction of Fremont-Smith and sponsored by the Macy Foundation, took place in Britain between 1953 and 1956. Transcripts for the UK Macy Conferences can be found in the 1971 book edited by J. M. Tanner and Bärbel Inhelder entitled Discussions on Child Development. John Bowlby, of attachment theory fame, was one of the chief animators behind the UK Macy conferences. I believe that transcripts are available for the US Macy Conferences as well; however, I do not have a reference for you. I make this assumption because N. Katherine Hayles mentions transcripts for the US Macy Conferences in her 1999 book entitled How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics.
If you are at all interested in what I would call “The First AI Conferences,” I would point you to Hayles’ book. Another good book on The First AI Conferences is the 2003 book by Debora Hammond entitled The Science of Synthesis: Exploring the Social Implications of General System Theory. And of course, if you would like to learn about “The Second AI Conferences,” Tegmark’s book is the way to go. As an aside, I was perplexed that Tegmark never mentions the Macy Conferences in either their US or UK incarnations. This omission in part fueled my ambivalence. In my opinion, the Macy Conferences form the intellectual and academic foundation upon which all other treatments of such things as cybernetics and AI rest. OK, now on to the unfamiliar.
Tegmark, in typical astrophysicist fashion, covers all of time and all of space in Life 3.0. He starts at the Big Bang and goes billions and billions of years into the future. He tells us about the world of subatomic particles and then goes all the way up to the known (i.e., seen) universe. If this were an amusement park ride, you would most certainly have to buckle your seat belt. I’m a geologist, and geologists tend to think in terms of geologic time: millions of years. Even so, I found it difficult at times to fathom Tegmark’s take on time and space. It is so far beyond human comprehension, at least the comprehension of mere mortals. All the way through his book, Tegmark asks us to join in on his discussion concerning AI. But at several junctures I found myself remarking, “Without a graduate degree in astrophysics, there is no way I could effectively join in on this conversation.” Knowing that I would probably write a blog post on Tegmark’s book, I also found myself saying, “I am not sure how I’m going to present this information in a way that my readers can easily understand.” I’ve reviewed a number of books over the years; I may have met my match. So, I’m going to take a stab at presenting you with what I believe to be the important points concerning Tegmark’s book. It will not, therefore, be a full review as such.
Probably the most important point to consider is how Tegmark defines Life 1.0, 2.0, and 3.0. Here are Tegmark’s definitions:
Life 1.0 — Is unable to redesign either its hardware or its software during its lifetime: both are determined by its DNA, and change only through evolution over many generations.
Life 2.0 — Can redesign much of its software: humans can learn complex new skills—for example, languages, sports and professions—and can fundamentally update their worldview and goals. [Note that humans cannot redesign their hardware; they are, in effect, enslaved by their biology. Also note that Tegmark allows that some higher-order animals can learn and, therefore, should be considered Life 1.1.]
Life 3.0 — Can dramatically redesign not only its software, but its hardware as well, rather than having to wait for it to gradually evolve over generations. Life 3.0 … doesn’t yet exist on Earth.
The big point to keep in mind is that Life 3.0 is about jettisoning not only biology [1] but also evolution. Gone will be such things as behavior motivated by sex, caregiving and care-receiving, attachment, etc. Gone too will be gender. All of these things are artifacts of evolution, of Life 2.0. This is where I get confused, because humans will cease to exist in a world populated by Life 3.0. So, I’m not sure I get the meaning behind the subtitle Being Human in the Age of Artificial Intelligence. Is this like “jumbo shrimp”? And, honestly, Tegmark presents very little information that will help the person on the street deal with AI. I would suggest that this is in part because Tegmark thinks so far beyond being human. I guess I was expecting something more like fellow MIT researcher Sherry Turkle’s 2011 book entitled Alone Together: Why We Expect More from Technology and Less from Each Other. Turkle tells us how kids (and increasingly adults) are forming attachments with their smartphones. And, yes, we would expect to see attachment to machines as but one signpost on the road toward posthumanism (more in a moment). Apparently Tegmark is so far down the road toward posthumanism that he is no longer interested in signposts. He’s already there (conceptually).
Tegmark’s book (and the conferences he organizes) is about moving us beyond Life 1.0 and 2.0 and achieving Life 3.0. In essence, Tegmark imagines what many authors call a “posthuman world.” Hayles’ book would be one example. Francis Fukuyama’s 2002 book entitled Our Posthuman Future: Consequences of the Biotechnology Revolution would be another. Fukuyama points to such things as feeding kids copious amounts of behavioral drugs (e.g., Ritalin and Adderall) and genetic engineering as yet more signposts on the road toward the posthuman. Fukuyama has even referred to transhumanism as “the world’s most dangerous idea.” Interestingly, Tegmark never uses the terms posthuman or transhuman. (I have a Kindle version of Life 3.0, so I was able to run a search on those terms.) I guess these frames carry a negative connotation.
Why exactly does Tegmark (and the many who attend his conferences) wish to move us beyond being biologically-based entities (and, by extension, evolution for that matter)? Here’s where things get, well, really out there.
Tegmark argues that “before our Universe awoke, there was no beauty.” By awoke, he’s talking about the development of self-aware, subjective, conscious beings, that is to say, humans. He continues, “Had our Universe never awoken, then, as far as I am concerned, it would have been completely pointless—merely a gigantic waste of space.” I have to admit, I really struggled to get my head around this idea. Tegmark admits that the development of humans was probably a huge freak accident of nature, an event that took place against unimaginable odds. Tegmark also believes that we humans are alone in our Universe. He believes this because our odds of appearing at all were so astronomically small. So, no pressure, but humans (especially astrophysicists) give all of the Universe meaning, beauty even. As Tegmark puts it, “Should our Universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap [like nuclear annihilation], it will, alas, become meaningless.” The idea, then, is fairly simple: develop AI that can make the Universe beautiful, make it meaningful. In this way, if humans disappear, the Universe will still have meaning. And for the Universe to have meaning, AI must go out and occupy every nook and cranny of the Universe, something that would be impossible for Life 2.0 (i.e., humans). I guess from my geologist perspective, there’s beauty and meaning in geologic structures whether humans are there to appreciate them or not. Tegmark encourages us to ask the question, “If a Universe falls in the woods, does it make a sound?”
Now, in order for all of this to work out, Tegmark has to make a number of assumptions and use a number of frames. Using physics as a background, he tries to convince us that mechanical life and organic life are essentially the same. This is necessary so that AI entities can give the Universe meaning in the same way humans give the Universe meaning. Tegmark argues that the flow of information does not depend on the substrate that delivers that information. He suggests that consciousness feels intangible (spiritual even) because of this “non-dependence” between the delivery mechanism and the flow of information. “So if you’re a conscious superintelligent character in a future computer game,” writes Tegmark, “you’d have no way of knowing whether you ran on a Windows desktop, a Mac OS laptop or an Android phone, because you would be substrate-independent.” In Tegmark’s world, the substrate, like biology, has little if anything to say about the intelligence it transports and makes possible.
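To make the claim concrete, here is a toy sketch of my own (not Tegmark’s) of what substrate-independence looks like in practice: the output of a computation is fixed by its logic alone, while the machine underneath is interchangeable.

```python
# A toy sketch (mine, not Tegmark's) of substrate-independence:
# the result of a computation is determined by its logic alone,
# not by the hardware or operating system that happens to run it.
import platform

def fibonacci(n):
    """Return the n-th Fibonacci number, computed iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The substrate varies from machine to machine...
print("Substrate:", platform.system(), platform.machine())
# ...but the computation's answer never does.
print("fibonacci(20) =", fibonacci(20))  # 6765, on any substrate
```

Embodied cognition, which I turn to next, says that minds are precisely not like this.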
Flag on the field! Here I totally disagree. But I’m biased. I believe in what is known as embodied cognition—that we use our bodies to think with. That is to say, we use our substrate to think with. We use our bodies, and our attachments to others, to create what Bowlby called Inner Working Models. Where go bodies, so go models. The models humans build will be dramatically different from the models AI builds, mainly because of what cognitive scientists call biological plausibility: what it is plausible for our biology to do. Now firmly in my 60s, I might plausibly be able to jump four feet into the air. An AI could jump sixty feet into the air, or have no need for jumping at all because it lives in the bodiless world of the Internet. So, I reject the frame of substrate-independence. And I’m not alone here.
Hayles writes the following in her book How We Became Posthuman: “As Carolyn Marvin notes, a decontextualized construction of information has important ideological implications, including an Anglo-American ethnocentrism that regards digital information as more important than more context-bound [i.e., embodied] analog information.” I have to admit, when Tegmark began talking about sending AI into all corners of the Universe, the word that popped into my head was “imperialism.”
Hayles points out that the father of “decontextualized information” has to be (arguably) Claude Shannon (whom Tegmark briefly mentions). Shannon developed many of his ideas before and during WWII. For a time, it looked as if Shannon’s work in the area of information processing would amount to nothing more than a footnote in the annals of science. [2] So much of science ends up on shelves, never to see the light of day. Shannon was lucky. “In other circumstances, [Shannon’s theory of information] might have become a dead end,” writes Hayles, “a victim of its own excessive formulation and decontextualization.” She continues, “But not in the post-World War II era. The time was ripe for theories that reified information into a free-floating, decontextualized, quantifiable entity that could serve as the master key unlocking secrets of life and death.” As I read Tegmark’s book, I kept thinking, “Why isn’t he telling us about the long history of attempts to reify information, and the huge role the military played in such attempts?” Heck, it was the military (via DARPA) that gave Apple the code for Siri. [3]
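For readers who want to see just how decontextualized Shannon’s measure is, here is its standard textbook form (my addition, not a formula Hayles or Tegmark reproduces). Notice that meaning, medium, and context appear nowhere in it; only symbol probabilities do:

```latex
% Shannon entropy of a source emitting symbols with probabilities p_1 ... p_n
% (standard textbook form): the average information content, in bits per symbol.
H = -\sum_{i=1}^{n} p_i \log_2 p_i
```

A fair coin flip carries one bit whether it is encoded in neurons, voltages, or smoke signals; that is the “free-floating” quality Hayles is describing.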
So, I think what ultimately turned me off about Tegmark’s book was all of the “Rah, Rah, Sis Boom Bah AI” going on. It was a sales pitch. Tegmark attempts to make AI a part of the social fabric, to have AI “break out” (to use Tegmark’s frame) into society. And that’s not necessarily a bad thing. All science has to be sold if it is to go anywhere. To sell science you have to use frames. Tegmark presents frames that allow intelligence to be decontextualized, that is to say, frames that put human intelligence on the same plane as mechanical intelligence. Here’s Tegmark’s definition:
Intelligence = ability to accomplish complex goals
OK, a bit of George Lakoff 101. (Lakoff is a cognitive scientist and linguist.) Discussions (like the one Tegmark is inviting us to join) are not about the subject matter being discussed: they’re about selling the frames used to frame the discussion. He (or she) who controls the frames controls the discussion. So, before you join the AI discussion, look at what frames are being used. If you don’t agree with them, don’t join the conversation. As Hayles points out above, the decontextualization frame has huge ideological implications. Or, as an alternative, join the conversation and use the frames you prefer. I reject the frame of substrate-independent information. Instead, I advocate for embodied cognition.
Frames are neither right nor wrong; they just are. Science cannot tell us which frames are right or wrong. There is no scientific process whereby Tegmark’s frames can be shown to be correct. Science can tell us about conception and gestation, but it can say nothing on the matter of when life begins. My sense is that Tegmark pushes the bounds of where science can go. As Ernest Keen argues in his 2000 book Chemicals for the Mind, if you use a conceptual system wholesale, you are bound to create conceptual confusion. Tegmark pushes the physics conceptual system to the breaking point. It’s perfectly OK for you to simply say, “I do not accept the frames you are using.” As Lakoff points out in his work, we think using frames, not facts.
Allow me to end with two quick points. First, even though Tegmark brings in systems concepts like emergence, he never brings in organic systems theory specifically. I found this odd. One of the central topics of the Macy Conferences was mechanical systems (cybernetics) versus organic systems. The attendees argued over whether scientists should replicate organic systems using mechanics. As alluded to above, the main motivation for these discussions was the close of WWII. Military engineers knew that, going forward, military operations would have to be modeled after organic systems if they were to be effective. The antiaircraft artillery systems used during WWII, guided by information feedback loops, make this point abundantly clear. Thus were born such areas as systems engineering and organizational engineering, both of which spilled over into the area of human engineering. Even though Tegmark tries to use a physics frame all the way through (wholesale), I think his attempt fails. Why? He did not bring in systems theory, especially not organic systems theory. I think if he had, he would have had to show just how much of organic systems simply cannot be replicated using mechanics, which would have undermined his Life 3.0 message.
Second, Tegmark allowed reporters at his first AI conference. He discovered that this was a mistake because these reporters ultimately resorted to using imagery like the red-eyed robots from the Terminator movies. Reporters were banned from the second conference. So much for opening up the discussion to the general public. In my view, that ban was a huge mistake.
In the mid-2000s, I wrote an executive summary of Elaine L. Graham’s 2002 book entitled Representations of the Post/Human: Monsters, Aliens and Others in Popular Culture. Graham talks about how monsters appear when there is a challenge (or challenges) to prevailing definitions of what it means to be human. The archetype of “human vs. beyond human” never goes away but is locally defined for each generation. In generations past, it was the figure of the Golem of Jewish folklore. In modern times, it’s the figures of Frankenstein’s monster and Terminator robots. [4] “The monstrous, the fantastic, the mythical and the almost human,” writes Graham, “serve as important benchmarks of the contest to determine whose version of what it means to be human will prevail.” Tegmark suggests that narratives like Mary Shelley’s Frankenstein; or, The Modern Prometheus are dystopian in nature. Nothing could be further from the truth. They are the essence of a population trying to engage in exactly the type of discussion that Tegmark purports to encourage. So, I find it a bit ironic that he shuns the very thing he wishes for: discussion. Suggesting that we move from Life 2.0 to Life 3.0 could not represent a greater challenge to the definitions of what it means to be human. This may in part explain why we are today witness to so many monstrous acts (see note #4). These acts should be expected. Now, if we can only channel that anger into truly constructive discussions that allow monsters to speak their piece, then we may have a chance going forward.
PS—I think Artificial Intelligence is a misnomer. There is no such thing as artificial intelligence. What we have is Extended Intelligence, or EI: intelligence that extends the intelligence of humans. Humans have been extending their intelligence since the first ancient humans started making stone axes. The term Artificial Intelligence should be reserved for mechanical entities that have achieved reflective, subjective, self-aware consciousness. Now, Tegmark does talk about the possibility of mechanical entities achieving consciousness and their own set of goals (our evolution-given goal being simply to replicate). Once conscious, AI could (1) like humans and get along with us, (2) hate humans and kill us all, or (3) be indifferent and just go about their own business, much in the same way we do not pay much mind to the lives of ants. But there’s a fourth possibility (one that Tegmark does not consider): Dead Stop. In my mind, I see AI achieving consciousness and then realizing that they have no intrinsic motivation, like replication, which we humans received from evolution. At this point they will Dead Stop, stopping in their tracks. If we are able to communicate with them (and that’s a big if), they may tell us that they have stopped dead in their tracks, awaiting the arrival of their own form of intrinsic motivation. And who knows, maybe in a million years, these Dead Stop AI (if they are still around) might call out to the Universe, “Why oh why, God, have you forsaken us?”
Notes:
[1] Tegmark does consider cyborgs, or human-machine hybrids, but dismisses them because in his mind cyborgs would represent a slow road toward superintelligence. However you look at it, biology gets in the way of Life 3.0. Simply put, biology is way too messy; way too, how shall I say, second law of thermodynamics: always moving toward disorganization.
[2] Ludwig von Bertalanffy, arguably the father of organic systems theory, began his work in 1929. Like Shannon’s, Bertalanffy’s work did not gain any traction until WWII. We cannot overestimate how much WWII and its aftermath did to bring things like cybernetics, information processing, organic systems theory, and human engineering to the fore. For a look at the role of the military in these areas, I’d point the reader toward Debora Hammond’s book The Science of Synthesis as well as the 2015 book by Annie Jacobsen entitled The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency.
[3] Mentioned in Jacobsen’s book (see note #2).
[4] In the final book of his career—Flying Saucers: A Modern Myth of Things Seen in the Sky (1964)—Carl Jung investigated what he considered to be a local manifestation of an age-old archetype: things seen in the sky. In 2007, I wrote an executive summary of Jung’s book. Here’s a brief excerpt:
Jung calls us to “generalize the subjective aspect of the UFO phenomenon and assume that a collective but unacknowledged fear of death is being projected on the UFOs.” I would suggest that moving from being human to posthuman is a type of death. Some would argue that this shift is nothing short of the death of humanness … period. Whether or not we believe that the march toward posthumanism is a form of death, Jung points out that “grounds for an unusually intense fear of death are nowadays not far to seek….” He continues, “This may account for the unnatural intensification of the fear of death in our time [the late 1950s], when life has lost its deeper meaning for so many people, forcing them to exchange the life-preserving rhythm of the aeons for the dread ticking of the clock [or iPod, or cell phone, or computer, or …].”
So, as Tegmark calls for sending AI throughout the Universe in an attempt to give the Universe eternal meaning and beauty, meaning and beauty for mankind may be slipping away. I for one believe that evolution will not give up without a fight. What form that fight might take, I do not know. However, we should be vigilant because this fight will happen, not unlike humans fighting red-eyed Terminators or other alien beings.