In my dual roles as psychotherapist (currently inactive) and philanthropist (currently active), I regularly attend workshops and conferences. As an example, the Association of Small Foundations recently wrapped up its annual meeting here in Albuquerque on October 3rd, 2013. (Our Foundation sponsored bringing in William Powers to speak as a part of our RYOL Lecture Series. Powers spoke on his book Hamlet’s BlackBerry, which talks about how people across time have found strategies that allow them to relate to technology in ways that bring greater insight, depth, meaning, and purpose to life. I summarized Powers’ book in an earlier blog series.)
At these workshops and conferences speakers will often use words and phrases such as systems or systems perspective or changing systems or acting systematically. As an example, back in 2009 I attended a workshop on Maximizing Grant Impact hosted by the Conference of Southwest Foundations. One of the slides presented was entitled “What’s Hot” and included the following bullet point: Systems Theory—Learning (continuous, peer, strategic, emergent, organizational). The presenter told us that systems theory was a hot topic in philanthropy but then did not go into any detail.
My overall impression is that systems concepts are often presented and talked about in vitalistic ways that do capture essences (à la Bergson's élan vital). Unfortunately, these essences are then framed in vague and often confusing ways. I believe these presenters mean well; however, their lack of definition and specificity often diminishes their message concerning systems theory, systems thinking, and even systems change.
In this blog post series I’d like to take a first pass at bringing a bit of clarity to the topic of systems. It will be a quick, cursory attempt at best. And I’ll use Bowlby’s work as a backdrop because he spent considerable time looking at systems worldviews and concepts. For more on the connection between John Bowlby and systems theory, contact the Foundation for a copy of the article we commissioned by Gary Metcalf entitled John Bowlby: Rediscovering a Systems Scientist.
Within the world of systems, there are two overarching divisions: organic systems and mechanical systems (the latter is often referred to as cybernetic systems). So, it is not enough to just say "systems"; one must first say which systems worldview one is using: biological–organic versus mechanical–cybernetic. In academic circles Norbert Wiener's name is often associated with mechanical or cybernetic systems, and Ludwig von Bertalanffy's name is associated with biological or organic systems. Even though John Bowlby was very influenced by organic systems theory as he developed his theory of attachment (Bowlby did have the opportunity to meet with Bertalanffy), Bowlby was well aware of cybernetic systems. In earlier blog posts, I have used the following quote by Bowlby to support my claim that Bowlby was aware of cybernetics:
At one time to attribute purposiveness to animals or to build a psychology of human behavior on the concept of purposefulness was to declare oneself a vitalist and to be banned from the company of respectable scientists. The development of control systems of increasing sophistication, such as those that control a homing missile, has changed that. Today [the mid-1960s] it is recognized that a machine incorporating feedback can be truly goal-directed. Thus it comes about that nowadays to attribute purposiveness to behaviour and to think, if not teleologically, at least teleonomically is not only common sense, as it always was, but also good science.
Given the space we have, it would be very difficult for me to adequately unpack the above quote by Bowlby. Let me give you these bullet points, which, I hope, will shed some light on systems thinking back in the 1960s:
- To be labeled a "vitalist" was the kiss of death for a scientist because the overarching goal of science back in the 1960s was to remove vitalism entirely, or else to reduce metaphysical essences (e.g., hopes, wishes, desires, and even consciousness) to simple cause-and-effect chains. Back in the 1960s, the goal of science was to reduce all systems to billiard ball physics. Bowlby fought this reductionism by saying that organic purpose and motivation can and should be investigated using scientific methods informed by a systems perspective. This was Bertalanffy's message as well.
- Guided missile control systems are an example of cybernetic systems (as is your home heating thermostat). Bowlby effectively argued that if cybernetic systems are important (and they were very important to the war efforts of WWII), then, by extension, biological or organic systems should also be important. Sadly, today we are surrounded by cybernetic systems—TiVo, Netflix, Amazon.com, Google, smartphones, the Internet, hypertext, iTunes—yet examples of organic systems are hard to find in popular spaces. I would suggest that this is one reason speakers (and even grant writers) have to talk about organic systems in such vitalistic ways.
- Suffice it to say that when Bowlby uses words like "purposiveness" and "goal-directed" he's referring to and advocating for an organic systems framework. So, yes, politically speaking, Bowlby was shrewd enough to firmly place an organic systems perspective on the coattails of cybernetic systems because, in my opinion, Bowlby recognized that a cybernetic systems perspective was poised (in the 1960s) for a meteoric rise. He was right about the rise of cybernetic systems, but, unfortunately, an organic systems perspective was buffeted from its coattail seat early on. A good book on all of this is N. Katherine Hayles's How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. And, yes, there is a close association between cybernetics or mechanical systems and posthumanism—the desire to move biological systems over to mechanical systems. In turn, there is a close association between posthumanism and AI or artificial intelligence.
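The thermostat mentioned above is the canonical cybernetic system: a sensor, a set point (the goal), and a feedback loop that acts to shrink the gap between the two. Here is a minimal sketch of that loop in Python; all of the numbers (temperatures, hysteresis band, heating and heat-loss rates) are illustrative assumptions, not parameters of any real device.

```python
# Minimal sketch of a cybernetic (feedback-controlled) system: a thermostat.
# All numeric values are illustrative assumptions, not real device parameters.

def thermostat_step(temp, set_point, heater_on, hysteresis=0.5):
    """Decide the heater state from the gap between sensed temp and the goal."""
    if temp < set_point - hysteresis:
        return True          # too cold: turn the heater on
    if temp > set_point + hysteresis:
        return False         # warm enough: turn the heater off
    return heater_on         # inside the dead band: keep the current state

def simulate(steps, temp=15.0, set_point=20.0):
    """Run the feedback loop: the system's output feeds back into its next input."""
    heater_on = False
    history = []
    for _ in range(steps):
        heater_on = thermostat_step(temp, set_point, heater_on)
        temp += 1.0 if heater_on else -0.3   # heating vs. heat loss per step
        history.append(round(temp, 1))
    return history

print(simulate(10))
```

The point of the sketch is Bowlby's point: nothing vitalistic is happening here, yet the system is genuinely goal-directed. The "goal" is just the set point, and the "purposive" behavior falls out of the feedback loop.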
One reason I feel that most presenters at the conferences and workshops I attend are using organic systems as a background stems from the fact that they talk about humans and animals as if they are, and should remain, organic systems. For instance, they will speak against such things as GM (genetically modified) foods; junk food, overeating, obesity, and diabetes; and pumping kids full of behavioral drugs. (For more on this last item, see the article entitled Wider ADHD Definition Risks Unnecessary Medication, Say Experts.) In contrast, they will speak out in favor of being conscious, mindful, and empathetic. Often, however, their message concerning the connection between the organic world and systems is vague, confusing, or, in some cases, nonexistent. In contrast, authors who write and talk about cybernetic systems are very open, up front, and clear about their use of mechanistic systems. Additionally, mechanistic systems advocates are very up front about their desire to move organic systems over to mechanical systems. I would point to futurist Ray Kurzweil and his work in the area of the Singularity—that point in time around 2045 when human minds will begin to shift over to mechanical minds or computer minds—as an example of this "up front" attitude concerning systems. I think cyberneticists can be open, up front, and clear because they have AI (or artificial intelligence) on their side and they are surrounded by such current cybernetic systems as the aforementioned TiVo, Netflix, Amazon.com, Google, smartphones, the Internet, hypertext, iTunes. Heck, Facebook just announced that they are opening an AI division to better understand what we are thinking and how we are behaving. I was shocked because I assumed that Facebook had an AI division from the start. (They may well have had one in secret all along and now feel that they can be open and up front about it.)
If you are getting the impression that I'm suggesting that cybernetic systems have won the hearts and minds of most people and there's no chance for organic systems, you would be right. It may well be a bit of a defeatist position, but I also have to be realistic. As I wrote about in my October 23rd, 2013, blog post (using philosopher John Searle's work as a background), there are simply way too many "nothing but" ideologies out there within psychology. Simply put, "nothing but" ideologies are centrally about reducing or eliminating systems. Here they are again (from Searle's book Mind, Language and Society: Philosophy in the Real World) because I feel they are important to keep in mind. Once you can identify one of these "nothing buts," then an organic systems framework is probably off the table:
- Behaviorism — mind is nothing but behavior and “dispositions to behavior.”
- Physicalism — mental states are nothing but brain states.
- Functionalism — mental states are nothing but their [direct] causal relations [which rules out any systemic causation].
- Strong Artificial Intelligence — minds are nothing but computer programs implemented in brains, and perhaps in other sorts of computers as well.
Again, here’s how Searle defines the materialism worldview that ties together the above psychological ideologies:
In spite of this variety, all contemporary forms of materialism known to me share the objective of trying to get rid of mental phenomena in general and consciousness in particular … by reducing them to some form of physical materialism. Each of the forms of materialism I have mentioned is a "nothing but" theory: each denies, for example, that pains are inner, qualitative, subjective mental phenomena and claims, to the contrary, that they are "nothing but"—behavior, computational states, and so on.
Here are four real world examples of materialism:
- Behavioral therapy views minds as nothing but behavior
- Psychopharmacology views minds as nothing but the flow of neurotransmitters
- MRI brain studies view minds as nothing but functional brain centers
- Artificial Intelligence views minds as nothing but computer programs
So, as an example, if you (or your child) are receiving behavioral therapy, are on behavioral medications (like Ritalin or Adderall), and use all manner of screen technologies, you (or your child) are well on your way to becoming "nothing but" a reduced organic system. And, yes, in the not too distant future, people will be regularly shoved into brain scanners as part of a psychological diagnosis process. Consider this quote from the aforementioned article entitled Wider ADHD Definition Risks Unnecessary Medication, Say Experts and notice the reduced frame that is being used, namely, behavioral brain condition:
Less restrictive diagnostic criteria have contributed to a steep rise in diagnoses for the behavioral brain condition—particularly among children—the researchers said, and in the use of stimulant drugs to manage it.
In his book Executive Functions: What They Are, How They Work, and Why They Evolved, ADHD expert Russell Barkley frames ADHD as a dysfunction of Executive Function systems (which tend to be "housed" in upper brain regions such as the prefrontal cortex). Barkley also makes it clear that behavioral drugs do nothing to improve EF. Barkley frames behavioral drugs used to treat ADHD as "crutches" and "wheelchairs." So, yes, we are reducing our kids (and many adults) to a life filled with chemical crutches and wheelchairs with little to no hope for any kind of substantive healing and improvement (especially in the area of improved EF functioning). Barkley places EF into an expanded evolutionary systems worldview that includes such systems levels as evolution, biology, innate behavioral systems, attachment, development, extended phenotypes, and brain organization. For a deeper take on all of this, see Ernest Keen's book entitled Chemicals for the Mind: Psychopharmacology and Human Consciousness. See, psychopharmacology = reduced systems, and human consciousness = expanded systems. To paraphrase a political saying, if you don't speak out concerning reductionism (as Keen, Barkley, and others do), then you get the consciousness you deserve.
Allow me to wrap up part I with a few positive comments concerning organic systems or an organic systems worldview. Probably the most important thing to keep in mind is that organic systems are conceptualized as consisting of levels of description. Here are Bertalanffy’s organic systems levels (pulled from his 1969 book General System Theory, which I summarize in Bowlby’s Battle):
- static structure — atoms, molecules, etc.
- clock works — clocks, conventional machines, etc.
- control mechanisms — thermostats, servomechanisms, etc.
- open systems — cells and organisms in general
- lower organisms — “plant-like” organisms
- animals — increasing importance of traffic in information, beginnings of consciousness
- man — symbolism; past & future, self & world, self-awareness, etc.
- socio-cultural systems — populations of organisms (humans included); symbol-determined communities (cultures) in man only
- symbolic systems — language, logic, mathematics, sciences, arts, morals, etc.
Believe it or not, the above systems levels agree nicely with the systems levels that Searle presents in his books The Construction of Social Reality and Making the Social World. (1) (Actually, much of Searle's work fits nicely with Bowlby's work, but that's a story for another day.) Searle uses the metaphor of an accordion. When one encounters "nothing but" thinking, Searle suggests, the accordion of systems levels is compressed. So, when you encounter narratives concerning behavioral therapy, brain scan studies, psychopharmacology, or AI, simply play the sound of an accordion with all of its air being squeezed out. That's the sound of systems levels collapsing.
We’ll pick back up in part II by asking the question, “So, what good are systems levels, especially organic systems levels?”
(1) – Even though John Searle does not specifically mention organic systems theory, I find these concepts reflected in his work at many turns. Putting aside the exact meaning for now, consider the following passage from Searle’s book Making the Social World. I would suggest that this passage reflects an organic systems perspective. Note that Searle talks about systems levels as “floors in a building.” In this passage Searle is trying to place “we intentionality” or collective intentionality (a topic that seems to be popping up in philanthropy) within an organic systems perspective.
You do not need a promise in order to have collective intentionality: indeed, the very conversation in which the promise is made, and is accepted or rejected [i.e., establishing a grant agreement], is already a form of collective intentionality. The conversation presupposes a Background capacity to engage in conversation, and the Background capacity depends on having a more fundamental prelinguistic form of collective intentionality [perhaps the attachment behavioral system]. The initiation of the conversation is itself a high level of collective intentionality [my emphasis]. So from my point of view, creating a commitment by making a promise is already two floors up in the building of collective intentionality. You have to have a prelinguistic form of collective intentionality [again, possibly the universal experience of attachment] on which the linguistic forms are built, and you have to have the collective intentionality of the conversation in order to make the commitment.
Here are the organic systems levels or floors that I see in the above passage surrounding collective intentionality:
- prelinguistic form of collective intentionality (possibly the universal experience of the attachment behavioral system)
- linguistic forms (language)
- collective intentionality (which requires minds knowing minds, or a Theory of Mind)
Hmmm … what's that passage from Matthew? "For where two or three are gathered together in my name, there am I in the midst of them" (Matthew 18:20). Sounds like collective intentionality, pretty "systemy," to me. I have often thought that looking at the organic systems themes reflected in the Bible would be a fascinating Ph.D. topic. Any takers?