
The Story So Far: 8 Principles for the Future of AI

The Newsweek AI Impact series interviewees show a remarkable level of coherence and alignment in their views.


With the publication of the first three interviews in the Newsweek AI Impact series, it is a good time to reflect and distill the essence of what we have learned to date. The remarkable thing for me is the level of coherence and alignment among the views of the first three interviewees, despite their different backgrounds and focus areas: roboticist Rodney Brooks; neuroscientist David Eagleman; and AI innovator Yann LeCun.

But, on further reflection, this is perhaps not surprising given the intelligence of the three individuals and their innate curiosity, the combination of which leads to positions that are both broad in scope and deep in conception. As a result, I think we can already see a common thesis that has emerged and can be summarized as follows.

1. Magical Thinking

Humans are repeatedly seduced into thinking that any sign of intelligence is equivalent to our own intelligence; we engage in "magical anthropomorphism" of everything that appears to exhibit any human capability, and we delude ourselves about the real capabilities.

  • Rodney Brooks: "When we don't have a model and can't even conceive of the model, we of course say it's magic. But if it sounds like magic, then you don't understand...and you shouldn't be buying something you don't understand."
  • David Eagleman: "Often we will ask a question to the AI, and it will give us an extraordinary answer. We'll say, 'My God, it's brilliant! It has theory of mind!' But in fact, it's just echoing something that somebody else has already said."

2. Beyond the IQ Test

Human intelligence cannot be quantified by a single test or score as it is a complex interplay of cognitive, creative, social, moral and physical capabilities and developed expertise. Any evaluation of machine intelligence against human intelligence can only be valid for a specific domain for which the full array of human capabilities is evaluated and compared.

  • David Eagleman: "We don't have a single definition of intelligence. It's almost certainly one of those words that has too much semantic weight on it. Intelligence presumably involves many different things, ... [so] when we ask this question, is AI actually intelligent? We don't have some clear yardstick along which we can measure that."
  • Yann LeCun: "You could think of intelligence as two or three things. One is a collection of skills, but more importantly, an ability to acquire new skills quickly, with minimal or no learning."

3. Think Fast But Also Slow

Nobel Prize-winning psychologist Daniel Kahneman's framework for understanding how the human brain operates comprises a System 1 mode that is fast, automatic and intuitive and a System 2 mode that is slower, more deliberate and analytical.

Current LLMs exhibit only System 1 capabilities without a complementary System 2, as they lack reliable, accurate models of the world. The future of AI requires new models with a hierarchy of abstract representations of the real world in all its richness and with System 2 reasoning capabilities.

  • Yann LeCun: "An LLM produces one token after the other. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive...there's no reasoning."
  • Rodney Brooks: On System 1: "It seems to me that what LLMs have shown us is we can emulate language with that thoughtless part, which to me is a surprise." On System 2: "It's got social dynamics knowledge in it. It's got knowledge of the physical world. It's got a creative component to it for simulation of an unknown. It's sort of intrigued by the unknown."

4. Limits of Language

Written language is an insufficient basis for reliably representing the physical world and the human experience of it, as it is too highly compressed and insufficiently complete to describe this complex multidimensional, continuous reality. Therefore, the future of AI will not be about scaling, adapting or enhancing LLMs alone.

  • Yann LeCun: "Language, it turns out, is relatively simple probably because it's discrete and it has strong statistical properties. It's basically a serialized version of our thoughts."
  • David Eagleman: "The connection that we have via language is extraordinarily low bandwidth. We can throw just a few words over the transom, like I say the word justice or freedom, and I mean something by it. You might have a completely different view of what is meant by that one word, but we try as best we can to get by in the world communicating with this extraordinarily low bandwidth channel."
  • Rodney Brooks: "We've gotten used to different language models that we can interact with, but we know how shallow they are. Therefore, a model where we move to purpose-driven models is a better model than an omniscient LLM. I think we'll use the LLM capability for the generality of language to get to a core set of things that are done by other modules that obey guardrails."

5. A Society of Machines

The future of AI will comprise different systems, each with its own domain or level of representation of the world, interacting with one another. These machines will collaborate and compete to amplify human capabilities, as machines have always done, a complementarity captured by Moravec's Paradox: "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility."

  • Yann LeCun: "It's going to be an interactive Society of Machines. You're going to have some AI systems that are smarter than other systems and can take them down. So, it's going to be my smart AI police against your rogue AI."
  • Rodney Brooks: "After 50 years, there has not been much progress in building robot hands—they don't have good general-purpose picking...slightly more than 50 percent of the robots that are doing picking still have suction cups, so that's not dexterous like a human."

6. A New Societal Hierarchy

The future human society will be a hierarchy of humans and intelligent machines, with machines constrained to be below humans in the hierarchy as they will not have human-like "free will" and will be bound by embedded guardrails.

  • Yann LeCun: "Everybody will become a CEO of some kind, or at least a manager. The nature of human work is going to change...humanity is going to step up in the hierarchy. We're going to have a level below us, which is going to be these AI systems. They may be smarter than us, but they will do our bidding."
  • David Eagleman: "What we're going to have is competitive AI systems—and we're going to build AI systems to check on other AI systems...and operate a trillion times faster than me, and I can't possibly understand this level of complexity. But there will be another system that's adversarial in some way and we will say 'Hey, you've got to keep an eye on him.' And there will be a whole group of these competitive AI systems where everyone is watching everyone."

7. The Overestimated Power of Intelligence

The power of intelligence alone is overestimated, and we should be much more wary of the human tendency to submit to physical or psychological dominance, or to fall victim to physical-world forces such as natural disasters and disease.

  • Yann LeCun: "People give too much credit and power to pure intelligence. It's not the only force in the world—there are physical and biological forces, for example. Looking at the political scene today, it's not clear that intelligence is actually such a major factor. It's not the smartest among us that tend to be the leaders."

8. An Open and Predictable Future

AI systems need to be predictable and consistent, not just with our understanding of the physical world but also with our different social, moral and cultural landscapes so that they amplify our subjective personal worlds on our own chosen terms.

  • Yann LeCun: "There [must be] a collaborative effort to design these systems in ways that are aligned with human values...and I think the best way to do this is in an open, collaborative fashion...because [anyone] can build on top of it and establish their own sovereignty."
  • David Eagleman: "We only accept what we can reliably control."
  • Rodney Brooks: If an AI or robotic system "is a plug-in to my world model and it behaves in a consistent expected way, I will add it."

I think these are probably as close to a defining set of principles for the evolution of AI as Asimov's Laws of Robotics were for physical robot systems, which is already a notable accomplishment for the series. But before concluding, I want to return to the question of how to evaluate intelligence, which has cropped up throughout these conversations so far, as highlighted in principle No. 2 above.

From these interviews, two ways of evaluating the intelligence of AI systems emerged. First, there was the framework proposed by Eagleman, which tries to establish the level of intelligence demonstrated in any given domain by identifying whether a system is merely curating information, creating knowledge or generating new conceptual or creative frameworks in different domains of expertise. Second, the Kahneman framework looks at the type of intelligence process that the AI uses to "think" about a problem. The two are complementary: one focuses on the "what" (what was demonstrated) and the other on the "how" (how it was produced). But there is clearly more thinking to be done to create a general-purpose methodology for accurately judging the intelligence of any system in any domain, and to eliminate hyperbolic or "magical" conjecture.

[Figure: Framework for evaluating the intelligence of AI systems, by Marcus Weldon]

So that's where things stand at this point in the series; it will be fascinating to see how this evolves as we extend the scope of inquiry to include different disciplines and domains, looking across specific industries at the forefront of AI focus such as healthcare and medicine, as well as in the creative arts and industries.
