A Team of Rivals and Co-pilots: Lessons From David Eagleman
By Marcus Weldon
Marcus Weldon, a Newsweek senior contributing editor, is the former president of Bell Labs and a leader with the ability to connect pioneering research to innovative product development and novel business strategies. Previously, he was the chief technology officer at Nokia and at Alcatel-Lucent and Lucent Technologies. He has a Ph.D. in physical chemistry from Harvard University, and he served as the Neil Armstrong Visiting Professor at Purdue University in 2023 and 2024.
Despite the current uncertainty and concerns about AI, we are undoubtedly on a path to create and leverage AI systems in nearly every domain of human existence.
The most striking statement David Eagleman made during our recent conversation was almost the first thing he said: we still don't have a good definition of one of the defining attributes of humans, intelligence. I find it remarkable that this is the case despite centuries of analysis by a host of brilliant philosophers, psychologists and anthropologists, and more recently by biologists and neuroscientists like David. Accepting that a complete definition is not available, I think we can make progress with a "working definition."
I recently wrote that a reasonable definition of intelligence comprises three essential abilities:
1. Processing of inputs from multiple sources (and senses)
2. Providing an essential understanding or perspective
3. Producing original output of value/utility, e.g., a decision, judgment, artifact, or scenario/hypothesis
Importantly, it is the second ability that is most critical, as it is the one that moves us beyond the so-called Chinese Room scenario created by the philosopher John Searle in 1980. This scenario imagines a system that blindly maps inputs to outputs in a mechanical way, based on a comprehensive set of rules or mappings. The argument is that if the machine is only following a set of rules that allow it to mimic intelligent discourse, without exhibiting any understanding of what it is doing, it cannot be construed as intelligent. The scenario was devised to counter the standard Turing test, also known as "The Imitation Game," which essentially holds that as long as what the machine does is indistinguishable from intelligent human discourse, it is demonstrably intelligent (or "thinking").
Two other notable thinkers on the topic have suggested alternative framings. The psychologist and Nobel laureate Daniel Kahneman argues in his classic text, Thinking, Fast and Slow, that "intelligence is the ability to find relevant things in memory and to apply attention to those things" and, further, that "more intelligent people have richer representations of things."
The theoretical neuroscientist Jeff Hawkins, in his book A Thousand Brains, states that "intelligence is the ability to learn and use a model" and, furthermore, that "a system that learns a model of the world, continuously remembers the states of that model, and recalls past states of the model, is conscious."
In his Inner Cosmos podcast series, Eagleman highlights factors such as the ability to balance the exploration of diverse ideas and new concepts with the efficient selection of hypotheses for action, achieved by suppressing or filtering what he calls "distractors": ideas that are unlikely to result in the desired outcome based on the predictions of our world model. He also argues that there are many levels or, as Yann LeCun puts it, "hierarchies" of human intelligence, including social, emotional, moral, practical, engineering, creative, artistic, linguistic, literary and scientific intelligence, as well as the raw, general cognitive intelligence that typically comes to mind in any discussion of the subject.
Looking across all the different definitions and views, there is general agreement that human-level intelligence cannot be confined to a single dimension as our world model encompasses many aspects, with the relative mix being defined by a combination of genetics and the life experience of navigating and interacting with the physical world in all its complexity. This makes defining intelligence in terms of linguistic ability (the Turing test) or literary/narrative ability (the Lovelace 2.0 test) transparently inadequate and incomplete.
But the fact that there are multiple different (but complementary) definitions of intelligence also suggests an alternative approach: We should not attempt to come up with one general intelligence measure but rather assign levels of intelligence in each of the specific domains of interest. This would include the set of all human intelligences (examples above), as well as the set of "beyond human intelligences." These "ultra-intelligences," as I call them, are associated with domains for which humans have no real capability, for example in quantum computation, or protein structure prediction, or nonvisual, nontactile or nonauditory world navigation and modeling.
Each domain of intelligence would then potentially have a different metric or test, and a machine or system would be assigned a score in every domain for which it was supposed to have demonstrable intelligence. This would be the mental equivalent of the nutritional scoring and labeling that applies to what we physically consume, serving instead as a guide to intellectual "goodness."
Eagleman has proposed an intelligence test for what could be called "general information processing." In my formulation of his framework, there are three levels:
◼️ Level 0—Curating: A system is able to condense information from one or more sources to produce a coherent summary
◼️ Level 1—Creating: A system is able to synthesize information from a variety of sources to produce a novel perspective
◼️ Level 2—Conceptualizing: A system is able to propose a new model or medium that produces a new understanding or form of expression
This formulation extends Eagleman's framing, which focuses on scientific knowledge for Level 2, to also allow for other forms of novel creation, including all forms of artistic expression.
Looking at today's AI systems, LLMs are capable of Level 0 and Level 1 intelligence in linguistic information processing but not of Level 2, as they lack the requisite understanding of the physical world (or, alternatively, of human perception of this world) that would allow them to propose a potentially valid new model or concept by anything other than a "statistical fluke."
I think this formulation could be quite generally applicable to higher-order information and perception processing across many different domains. For example, the Turing test could be described as a Level 1 test of language processing and the Lovelace 2.0 test as a Level 1 test of narrative creation. Moreover, AlphaGo and chess-playing machines would be Level 1 intelligent in their respective domains. It is interesting to contemplate whether AlphaFold's prediction of protein-folding structures is Level 1 or Level 2 intelligence; I would argue that it, too, is Level 1 since, although it produces a protein "model," it is not a new conceptual model but rather one synthesized from its training data and rules, albeit with ultra-intelligent results.
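To make the scoring idea concrete, here is a minimal sketch in Python of how the three levels and a domain-by-domain "intelligence label" might combine. The level assignments simply restate the examples above; the domain labels and everything else in the sketch are illustrative assumptions, not an established standard or anyone's actual implementation.

```python
from enum import IntEnum

class Level(IntEnum):
    CURATING = 0         # condense sources into a coherent summary
    CREATING = 1         # synthesize sources into a novel perspective
    CONCEPTUALIZING = 2  # propose a new model or medium of understanding

# Highest level demonstrated per (system, domain) pair; these entries simply
# restate the examples discussed above, and the domain labels are illustrative.
scorecard = {
    ("Turing test passer", "language processing"): Level.CREATING,
    ("Lovelace 2.0 passer", "narrative creation"): Level.CREATING,
    ("AlphaGo", "Go playing"): Level.CREATING,
    ("AlphaFold", "protein-structure prediction"): Level.CREATING,
    ("Today's LLMs", "linguistic information processing"): Level.CREATING,
}

for (system, domain), level in scorecard.items():
    print(f"{system}: Level {int(level)} ({level.name.title()}) in {domain}")
```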
Leveling Up and Looking Forward
Now that we have a proposal for how we might measure machine intelligence, let's turn to the question of what we might expect in terms of machines being able to achieve human-like Level 2 intelligence across different domains of experience and existence. A key component of any path forward is clearly to be able to build models that represent the human world. So, let's do a quick recap of what we know about how human brains model our world, from the collective insights of Jeff Hawkins, Max Bennett and David Eagleman, which can be summarized as follows:
◼️ It's All About Movement: Brains developed to manage movement in the physical world, so we evolved a neocortical substrate that supported sensorimotor-based understanding and learning.
◼️ Thinking Is Internal Movement: We have repurposed and extended this substrate to not just support modeling of what we perceive in reality, but to also predict possible outcomes for imagined movements or "thoughts," allowing us to explore and exploit future potential states of the world.
◼️ Theory of Mind: We have further repurposed this same substrate to allow us to understand others' worlds, to learn from them and to form larger social groups with their collaborative advantages.
◼️ Sharing Is Caring: Language was also developed on this substrate to optimize the sharing of world models and knowledge across different groups and across time. Language plays a unique role in human cognition. As Eagleman describes it, "language is a super-compressed, low bandwidth package of meaning that unpacks a whole mental world that provides rich mental and emotional experiences."
◼️ The Beating Heart: We have a unique resonance with other humans; as Eagleman says, "One of the only requirements for literature is that the reader can feel the beating heart pulsing back at them from the other side of the page. The creator is a fellow traveler with us on the human journey."
◼️ Thinking Is Multisensory Exploration: Thinking describes the brain operating in internal mode, exploring mental space and possible courses of action. It is a multisensory experience involving language (about 25 percent of thoughts use language centers) but predominantly sensory information (75 percent) and so is highly subjective.
◼️ All Thinking Is Equal: Cognitive thinking and creative thinking use the same substrates to "bend, break and blend" our knowledge and experiences to create new scenarios or potential models.
◼️ Innies and Outies: We are in a perpetual state of tension between the brain trying to recognize and react to a known scenario in the external world (the "Outie" in Severance terminology) versus thinking about alternative scenarios that encourage exploration of new possible outcomes in an internal world (loosely like the severed corporate "Innie").
◼️ Team of Rivals: As Eagleman said in our interview, our brain is a form of "neural parliament," with the different processing regions of the brain communicating over long distances to influence each other and "vote" on proposed analyses to determine the consensus, which is then elevated into conscious "thought."
◼️ Feelings Are Error Corrections: Emotion is an integral part of this processing, as it is the response to the difference between what actually happens and the anticipated affect, as well as a predictive forewarning or alert mechanism for danger.
◼️ More Is Better: We are nearly unique among animals in our ability to change how we react to any given set of inputs. This is a consequence of the large neocortical distance between inputs and outputs, which allows multiple processing sub-areas to modify our decision-making and override instinctive or previously learned behaviors.
◼️ The Big Question: Consciousness can be thought of as the process of managing the interplay between mental and physical "motions"; it is the decision engine or process that allows more control or attention to be given to one or the other, based on current needs or intents.
That's a massive oversimplification, of course, but it represents a reasonable "working consensus" on how our intelligence was derived and how it appears to us. Given the obvious richness of the human cognitive condition, and its clear dependence on our unique substrate, the question naturally arises of whether we can really expect to create human-level intelligence (HLI) on a non-neocortical substrate. The consensus from Eagleman, Hawkins, LeCun, Brooks and many other eminent thinkers seems to be that the answer is "yes."
Jeff Hawkins and his company Numenta have made significant progress toward understanding the operation of the cortical columns that make up the neocortex, and they are currently building AI systems predicated on this model using inorganic substrates.
Yann LeCun is advancing a new AI model, known as the Joint Embedding Predictive Architecture (JEPA), that has similarities to the high-level architecture of the brain; more on this in our forthcoming interview with Yann.
And there are many other approaches using neurosymbolic architectures and processors, as well as conventional neural network and transformer-based architectures that are evolutions of today's LLMs, incorporating additional expert models (so-called Mixture of Experts) and multipath or "beam" analyses (so-called Chain of Thought reasoning).
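For readers unfamiliar with the Mixture-of-Experts idea mentioned above, the sketch below shows the basic routing mechanism in Python: a lightweight "router" scores a set of expert sub-models for a given input, and only the top-scoring experts are consulted. The expert names, the toy scoring function and the top-k selection rule are assumptions for illustration, not any particular system's implementation.

```python
import math
import random

random.seed(0)

# Hypothetical expert sub-models; in a real system each would be a trained network.
EXPERTS = ["math_expert", "code_expert", "prose_expert", "vision_expert"]

def router_scores(token_embedding):
    """Toy router: a random linear projection of the input, one score per expert."""
    weights = [[random.gauss(0, 1) for _ in token_embedding] for _ in EXPERTS]
    return [sum(w * x for w, x in zip(row, token_embedding)) for row in weights]

def softmax(scores):
    """Convert raw scores into mixing probabilities."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(token_embedding, top_k=2):
    """Pick the top_k experts and the weights used to blend their outputs."""
    probs = softmax(router_scores(token_embedding))
    ranked = sorted(zip(EXPERTS, probs), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

# Prints the two highest-weighted experts and their mixing weights for a toy input.
print(route([0.3, -1.2, 0.7, 0.1]))
```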
But regardless of which approach or approaches ultimately prove to be the most viable for HLI, it is safe to say that significant differences between these systems and human wetware will remain, and that these AI systems will not think in the way that we think and will not be conscious in the way we are conscious. That is not to say that they won't think or be conscious; it will just be different from us for a number of reasons. For example, these AI systems will:
1) Not have the same experience of human life—neither that of an individual human nor of humanity as a whole, so their world models will necessarily differ from ours
2) Not have the same set of senses with the same minute sensory componentry as humans, so their continuous experience of the world will differ from ours
3) Not have the same emotional and affective mechanisms as humans, as these are rooted in the "old brain" and its survival needs, which will not be replicated in AI systems
4) Not have the same goals as humans, as these are rooted in our biologically hard-coded drive to reproduce, modified by neocortical oversight
It is therefore impossible for such AI systems to simulate our lived experience. However, in all likelihood, it will be possible for them to learn a sufficient amount about our world that they will be able to emulate domains of our world. And, in many cases, these emulations will be reasonably close approximations to our experience, and will also comprise extensions that go beyond our experiences or abilities to enable ultrahuman augmentation. Many of these AI systems will be Level 0 and Level 1 intelligent systems, but some will achieve Level 2, human-like intelligence in specific domains. This will likely require the development of foundational models by these systems that can be presented to humans for validation and verification. This explainability will allow us to build confidence in these systems by providing the required measure of understandability and transparency.
Kevin Kelly has proposed that the future will likely be a "periodic table" of elemental AIs, each specializing in ("emulating") a certain area of human assistance and existence. This can be thought of as an extension of the current move toward agentic architectures, but with a diverse set of complementary AIs based on different architectures, not just LLMs and transformer models.
However, the issue will remain that these AIs will operate at speeds that are completely incompatible with human operating timescales (thousands of times faster). But this is nothing new; we have always created machines that extend our capabilities and operate at speeds far greater than our own, whether in the physical world (e.g., planes, cars, trains, rockets or any mechanically advantaged machine) or the intellectual realm (e.g., computing systems and software execution). The primary difference is that these new AI systems may be opaque to any understanding: they are not programmed by a human to follow a prescribed algorithm or logical flow, so their behavior may "emerge" from the training regime. Even for those AIs built around a model representation that can be interrogated and understood, the difference in operating speed will make real-time understanding of the exact decision-making process impossible. One potential solution to this speed/opacity conundrum that Eagleman sees is that each AI system could have a paired AI whose job is to probe and critique its partner, providing a check and challenge on behalf of a human. This paired critic AI could then provide summary explanations in human world-model terms and on human timescales.
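As a rough illustration of the paired-critic arrangement Eagleman suggests, the sketch below pairs a fast, opaque "worker" AI with a critic that probes it after each decision and condenses the answers into a human-timescale summary. The class names, probe questions and placeholder responses are hypothetical.

```python
# A minimal sketch of the paired-critic idea: a fast "worker" AI makes
# decisions, while a paired "critic" AI probes it after the fact and
# produces an explanation on human timescales. All names are hypothetical.

class WorkerAI:
    """Stands in for a fast, possibly opaque decision-making system."""
    def decide(self, situation: str) -> str:
        return f"action chosen for: {situation}"

    def answer_probe(self, question: str) -> str:
        # A real system might expose internal state, counterfactual runs, etc.
        # Here it is only a placeholder response.
        return f"worker's response to '{question}'"

class CriticAI:
    """Paired AI that checks and challenges the worker on a human's behalf."""
    PROBES = [
        "What alternatives were considered and rejected?",
        "What evidence most influenced the decision?",
        "Under what conditions would the decision change?",
    ]

    def review(self, worker: WorkerAI, situation: str, decision: str) -> str:
        findings = [worker.answer_probe(q) for q in self.PROBES]
        # Condense the probe results into a summary in human world-model terms.
        return (f"Decision '{decision}' for '{situation}'.\n"
                + "\n".join(f"- {f}" for f in findings))

worker, critic = WorkerAI(), CriticAI()
decision = worker.decide("reroute network traffic")
print(critic.review(worker, "reroute network traffic", decision))
```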
In summary, the conversation with David Eagleman did not disappoint. I ended up swayed by his argument that, despite all the current uncertainty and concerns about AI, we are undoubtedly on a path to create and leverage AI systems in nearly every domain of human existence. And, as we extend current AI systems beyond today's set of silicon substrates and build them on different substrates with different algorithms, he argues that there will be a positive feedback loop: we will learn more about how our brain works and, consequently, be able to build better and more human-compatible systems.
And this is probably the ultimate augmentation: Humans will create machines that augment human capabilities and augment our understanding of ourselves.