Why Humans Innately Distrust Other Forms Of Intelligence


Credits

Michael Levin is a distinguished professor and the Vannevar Bush chair in the biology department at Tufts University, as well as the director of the Allen Discovery Center at Tufts and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University.

We, as humans, seem driven by the desire to create a sharp line between us — or at best, living things — and “mere machines.” In a bygone era, it was easy to draw that line around the idea that human beings possess an immaterial soul that defies the laws of physics, while machines do not.

Modern science has changed that. We and our synthetic brethren are all equally subject to the laws of physics, and maintaining a sharp distinction, in the face of progress in cybernetics, bioengineering, and computational cognitive science, is much more difficult.

But what drives this desire for such distinctions? Many humans have a visceral, energetic resistance to frameworks that emphasize a continuity of degrees of cognition across highly diverse embodiments, even though developmental biology and evolution show that we were all single cells once — little blobs of chemistry and physics. Many yearn for a clean, categorical separation between “real beings” and artifacts, or “as-if” minds that are convincing, yet still fake, simulations.

In pre-scientific times, a crisp line was easier to imagine — we had rules of thumb for determining the moral and intellectual status of any system we encountered, and this guided policy and personal ethics. But those old fallbacks — “What do you look like?” “What are you made of?” and “Where did you come from?” (Are you natural or engineered?) — no longer work as guides to rich, effective, and ethical relationships. Figuring out how to relate to minds of unconventional origin — not just AI and robotics but also cells, organs, hybrots, cyborgs and many others — is an existential-level task for humanity as it matures.

In a recent piece for Noema, I tried to make two main points about the deeper questions raised by such diverse intelligence: First, many of the issues that disturb us in the age of AI are mere reflections of the profound philosophical and practical questions we have faced since the dawn of thought. Second, the challenge and opportunity facing us is not really about AI at all; AI is just one component of the massively important emerging field of Diverse Intelligence.

I argued that simplistic discussions of mere machines, prompted by the limitations of today’s language models, suppress the potential for humanity to flourish in the coming age. The hyper-focus on large language models and current meager AIs distracts us from the need to address more difficult and important gaps in our wisdom and compassion. Our real mission is to understand minds in unfamiliar embodiments and develop principled and defensible concepts to guide ethics and technology in an age of radically expanding biological, technological and hybrid beings.

That’s why AI research and debates around its use and status must be located within the broader framework of Diverse Intelligence. Our deep questions today are not about software language model architectures but about bigger unknowns, like defining and recognizing terms we throw around with abandon: minds, understanding, goals, intelligence, moral consideration, purpose, creativity, etc.

The space of possible beings (including cells, embryos, chimeras and hybrids of mixed biological and technological provenance, embodied robotic AI, cyborgs, alien life, etc.) is vast — and we don’t have a sure footing for navigating our relationships with systems that cannot be classified according to stale, brittle categories of “life vs. machine” that sufficed in pre-scientific ages — before developmental biology, evolutionary theory, cybernetics, and experimental bioengineering.

It’s premature to make claims about where any given AI fits along this spectrum because no one has good, constructive definitions of what the secret sauce is for true intelligence and the ineffable inner perspective that many feel separates humans from other, even synthetic, creations. It is critical to shift from the popular, confident pronouncements of what (today’s) AI does and doesn’t do, toward the humility and hard work of discovering how to recognize and relate to truly unconventional beings.

Resistance From Both Camps

This is not a popular view, and it triggers near-ubiquitous resistance from both the mechanist and organicist camps of thought, which otherwise agree on little else. Why? Overall, I think such sentiment is driven by fear and insecurity — a subconscious recognition that we do not understand ourselves, that AI represents the tip of the iceberg of such exploration, and that it will bring to the fore deep questions that will crack the superficial, comforting stories we have told ourselves about why we are real and important. I believe the “human vs. machine” position feeds an “only love your own kind” mentality. We fear there is not enough love to go around, and so we hoard our compassion and concern.


There is good science being done to seek nuanced, defensible and useful views of the difference between possible engineered agents and naturally evolved ones. This is critical because “machines” are no longer boring, linear 20th-century constructs. Cybernetics gave us a non-magical science of physically embodied goal-directedness; it showed that unpredictability, open-endedness, self-reference and co-construction are implementable by the rational design of generative technology — not just by the vagaries of evolutionary groping around a fitness landscape. But many ignore these advances and cling strongly to a crumbling paradigm, despite lacking convincing criteria for the unique, magic oomph they are so sure humans have and other systems do not.

Many readers did not like my suggestion that ultimately, biological beings — including humans — could be categorized within the same space of possible embodied minds as what we call AI. However, my piece was really not about AI at all — it was about beings who are not like us, and about the relevant universal problems that were here long before AI was even discussed. Given how explicit I was about this, I take the resistance not to be about AI either; it was a general reaction to the Diverse Intelligence project writ large.

I suspect that the outrage at seeking commonalities between highly diverse intelligent systems is often driven by an innate dread about the shifting sands of the future — an insecurity about our ability to raise our game when old categories fail. This causes misunderstandings of the scientific and ethical goals of many of us in the field of Diverse Intelligence. Empirical testing of philosophical commitments is disruptive because one might be put in the uncomfortable position of having to give up long-held ideas that one cannot convincingly defend.

The Project Of Diverse Intelligence

A key risk of testing philosophical ideas against the real world through engineering is that some people will rush to see such an effort as an elevation of technology over humanity. This happens no matter how much one addresses the meaning crisis, the importance of broadening our capacity for love, and the centrality of compassion — all profoundly human concerns that are the very opposite of technology worship.

While the pushback applies to many efforts in the Diverse Intelligence and Artificial Life communities, I view engineering in a broader sense: taking actions in physical, social and other spaces, and finding the richest ways to relate to everything from simple machines to persons and beyond.

The cycle I support has three components: philosophize, engineer, and then turn that crank again and again, as you modify both aspects to work together better, facilitate new discoveries and create a more meaningful experience. Moreover, the “engineer” part isn’t just a third-person modification of an external system.

We also must engineer our selves — change our perspectives and framing, augment ourselves, and commit to enlarging our cognitive light cones of compassion and care. That’s because I believe the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent efforts in the present to change yourself.

I believe the goal of the Diverse Intelligence effort is fundamentally ethical and spiritual, not technological. I want us to learn to relieve biomedical suffering and lift limitations so that everyone can focus on their potential and their development. It is needlessly hard to work to enlarge one’s cognitive light cone — the spatial and temporal scale of the largest goal one could possibly pursue — when limited by the life-long effects of some random cosmic ray striking your cells during embryogenesis or some accidental injury that leaves you in daily pain.

We must also raise our compassion beyond the limits set by our innate firmware, which so readily emphasizes an imagined barrier between in-group and out-group. The first step of this task is to learn to recognize unconventional minds in biology and use that knowledge for empirical advances. That’s what my group, and others, are focused on now, which is why engineering and biomedicine are such a big part of the discussion — so that people understand how practical and impactful these questions are. But of course, there are also massive implications for personal and social growth.

“Whatever you say it is, it isn’t.” 

— Alfred Korzybski

Another misunderstanding of this research program is viewing it as a kind of computationalism. It is not. Crucially, I do not claim that cognitive beings (including living beings) are computers, mostly because nothing is any one thing. Computers, Turing machines, other kinds of machines, robots, learning agents, and creative, soulful beings — all of these terms refer to interaction protocols and frames that we use to organize our interactions with a system, not to some uniquely true, objective reality about it.

They are not statements about what something really is; they are statements made by a being, announcing the set of conceptual tools that they plan to use in having a relationship with a given system. The formalism of computation is useful for some interactions, but it certainly doesn’t cover everything that is of interest in life and mind. And it doesn’t need to — none of these frames have to pretend to capture the whole truth; they just have to be fruitful in specific contexts, spanning the spectrum from engineering control to love.

The goal of TAME (or the Technological Approach to Mind Everywhere) as a framework is not just the “prediction and control” of cognition. That’s what it looks like for the left side of the spectrum of minds, and that’s how it has to be phrased to make it clear to molecular biologists and bioengineers that talk of basal cognition is not philosophical fluff but an actionable, functional, enabling perspective that leads to new research and new applications.

But the same ideas work on the right side of the spectrum, where the emphasis gradually shifts to a rich, bi-directional relationship in which we enable ourselves to be vulnerable to the other, benefiting from their agency. What is common to both is a commitment to pragmatism, to shaping one’s perspective based on how well it’s working out for you and for those with whom you interact, whether in the laboratory or in the arena of personal, social and spiritual life. Efforts at working out a defensible way of seeing other minds should not be interpreted as anti-humanist betrayals toward technology.

The spectrum of “persuadability” for optimal interaction, from conventional engineering on the left toward greater reasoning or psychoanalysis on the right. It is impossible to know where something (like cells in the body) belongs on this spectrum by philosophical fiat; experimentation is needed. Image by Jeremy Guay of Peregrine Creative.


In the end, I think resistance to viewing cognition as a continuum boils down to two things. First, a widespread, difficult-to-shake belief that our current limitations aren’t the product of mere evolutionary forces but of some kind of benevolent creator (even if we won’t admit it), whose intelligence and magnificent methods we simply cannot (and perhaps should not) dip into.

Second, feeling threatened and buying into the idea of a zero-sum game for intelligence and self-worth: “My intelligence isn’t worth as much if too many others might have it too.” I doubt anyone consciously has this train of thought, but this is what I think underlies negative reactions to the concept of Diverse Intelligence.

People feel not only that love is limited and that one might not get as much if too many others are also loved, but also that one may simply not have enough compassion to give if too many others are shown to be worthy of it. Don’t worry; you can still be “a real boy” even if many others are too.

Why would natural evolution have an eternal monopoly on producing systems with preferences, goals and the intelligence to strive to meet them? How do you know that bodies whose construction includes engineered, rational input in addition to emergent physics, instead of exclusively random mutations (the mainstream picture of evolution), do not have what you mean by emotion, intelligence and an inner perspective?

Do cyborgs (at various percentage combinations of human brain and tech) have the magic that you have? Do single cells? Do we have a convincing, progress-generating story of why the chemical system of our cells, which is compatible with emotion, would be inaccessible to construction by other intelligences in comparison to the random meanderings of evolution?

We have somewhat of a handle on emergent complexity, but we have only begun to understand emergent cognition, which appears in places that are hard for us to accept. The inner life of partially (or wholly) engineered, embodied action-perception agents is no more evident from (or limited by) the algorithms their engineers wrote than our inner life is derivable from the laws of chemistry that reductionists see when they zoom into our cells. The algorithmic picture of a “machine” is no more the whole story of engineered constructs, even simple ones, than the laws of chemistry are the whole story of human minds.


A Better Path Forward

The reductive eliminativist view, which holds that consciousness is an illusion for all, is wrong and impoverished, but at least it is egalitarian and fair. The “love only your own kind” wing of the organicist and humanist communities, which talks frequently of “what machines can never be,” is worse, because it paints indefensible lines in the sand that the public can use to support terrible ethical conclusions (as “they are not like us” views always have, since time immemorial). It is a protective reaction from people who read calls to rationally expand the cone of compassion but only hear “machines over people, pushed by tech-bros who don’t understand the beauty of real relationships.”

Other, unconventional minds are scary if you are not sure of your own — its reality, its quality and its ability to offer value in ways that don’t depend on limiting others. Having to love beings who are not just like you is scary if you think there’s not enough love to go around. How might we raise kids who do not have this scarcity mindset?

We should work to define the kind of childhood that would make us feel that we didn’t have to erect superficial barriers between our magic selves and others who don’t look like us or who have a different origin story. Basic education should include the background needed to think about emergent minds as a deep, empirical question, not one to be settled based on feelings and precommitments.

Letting people have freedom of embodiment — the radical ability to live in whatever kind of body you want, not the kind chosen for you by random chance — is scary when your categories demand everyone settle into clean, pre-scientific labels. Hybridization of life with technology is scary when you can’t quite lose the unspoken belief that current humans are somehow an ideal, crafted, chosen form (including their lower back pain, susceptibility to infections and degenerative brain disease, astigmatism, limited life span and IQ, etc.).

Our current educational materials give people the false idea that they understand the limits of what different types of matter can do. The protagonist of “Ex Machina” cuts himself to determine whether he is also a robotic being. Why does this matter so much to him? Because, like many people, if he were to find cogs and gears underneath his skin, he would suddenly feel lesser, rather than consider the possibility that he embodied a leap forward for non-organic matter. He trusts the conventional story of what intelligently arranged cogs and gears cannot do (but randomly mutated, selected protein hardware can) so much that he’s willing to give up his personal experience as a real, majestic being with consciousness and agency in the world.

The correct conclusion from such a discovery — “Huh, cool, I guess cogs and gears can form true minds!” — is inaccessible to many because the reductive story of inorganic matter is so ingrained. People often assume that though they cannot articulate it, someone knows why consciousness inhabits brains and is nowhere else. Cognitive science must be more careful and honest when exporting to society a story of where the gaps in knowledge lie and which assumptions about the substrate and origin of minds are up for revision.

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned for us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want.

The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all these fears under macho facades of protecting real, honest-to-goodness humans and their relationships, but it’s transparent and it won’t hold.

Everything — not just technology, but also ethics — will change. Thus, my challenges to all of us are these. State your positive vision of the future — not just the ubiquitous lists of the fearful things you don’t want but specify what you do want. In 100 years, is humanity still burdened by disease, infirmity, the tyranny of deoxyribonucleic acid, and behavioral firmware developed for life on the savannah? What will a mature species’ mental frameworks look like?


Clarify your beliefs: Make explicit the reasons for your certainties about what different architectures can and cannot do; include cyborgs and aliens in the classifications that drive your ethics. I especially call upon anyone who is writing, reviewing or commenting on work in this field to be explicit about your stance on the cognitive status of the chemical system we call a paramecium, the ethical position of life-machine hybrids such as cyborgs, the specific magic thing that makes up “life” (if there is any), and the scientific and ethical utility of the crisp categories you wish to preserve.

Take your organicist ideas more seriously and find out how they enrich the world beyond the superficial, contingent limits of the products of random evolution. If you really think there is something in living beings that goes beyond all machine metaphors, commit to this idea and investigate what other systems, beyond our evo-neuro-chauvinist assumptions, might also have this emergent cognition.

Consider that the beautiful, ineffable qualities of inner perspective and goal-directedness may manifest far more broadly than is easily recognized. Question your unwarranted confidence in what “mere matter” can do, and entertain the humility of emergent cognition, not just emergent complexity. Recognize the kinship we have with other minds and the fact that all learning requires your past self to be modified and replaced by an improved, new version. Rejoice in the opportunity for growth and change and take responsibility for guiding the nature of that change.

Go further — past the facile stories of what could go wrong in the future and paint the future you do want to work toward. Transcend scarcity and redistribution of limited resources, and help grow the pot. It’s not just for you — it’s for your children and for future generations, who deserve the right to live in a world unbounded by ancient, pre-scientific ideas and their stranglehold on our imaginations, abilities, and ethics.


