Mathematician on AI Dystopia and Human Superiority Over Machines

As computing technology rapidly advances, there has been much discussion of the potential threats posed by artificial intelligence. But the author of a book that explores the nature of machine versus human intelligence has said some of these debates over the future of AI have been overhyped and may be distracting from more pressing issues.

Junaid Mubeen, a research mathematician turned educator and author of Mathematical Intelligence: A Story of Human Superiority Over Machines, which will be published November 1, told Newsweek one of the reasons he wrote the book was that AI has generated significant amounts of publicity recently.

"Some of it may be justified because there are exciting developments coming through, but much of it, I think, is overhyped," Mubeen said. "And I think there's a real risk that we're going to rush to judgment, exaggerate the capabilities of AI in the process and undermine our own human intelligence.

"It was Arthur C. Clarke who said, 'Any sufficiently advanced technology is indistinguishable from magic,' and we're seeing that now," he said.

Mubeen pointed to the example of the Google engineer who made headlines earlier this year after saying that a chatbot the company developed, called LaMDA (Language Model for Dialogue Applications), had acquired sentience.

At the core of this kind of machine learning-based artificial intelligence, which is being used in a huge variety of applications—everything from medicine and agriculture to astronomy and robotics—is pattern recognition. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed.
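The idea of "learning without being explicitly programmed" can be made concrete with a toy sketch (not from the book or the article): a nearest-neighbor classifier that encodes no rules at all, only stored labeled examples, and classifies new inputs by pattern similarity.

```python
# Toy illustration of learning from examples rather than explicit rules:
# a 1-nearest-neighbor classifier "learns" solely by storing labeled data.

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    return min(train, key=lambda example: abs(example[0] - point))[1]

# Labeled examples: (measurement, label) pairs -- the "training data".
train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

print(nearest_neighbor(train, 1.5))  # small
print(nearest_neighbor(train, 8.5))  # large
```

No rule for "small" or "large" is ever written down; the behavior emerges entirely from the data, which is why the quality of that data matters so much, as discussed below.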

"That is just one aspect of intelligence," Mubeen said. "They have the appearance of intelligence—when you're engaging with a chatbot, it can feel like you're conversing with a human. But we tend to then overreach and ascribe these other human qualities to them, like sentience and consciousness. Or we say: Because they're so capable, it doesn't really matter if they're lacking in emotion or that they don't have consciousness."

He continued: "I think because machine learning has proven to be really useful over the last 10 years, we've kind of jumped the gun and assumed that we're already at the point of what people refer to as artificial general intelligence, which is the ability to navigate the world and be able to solve a whole range of problems. But at the moment, all of the examples of AI that we have are very narrow in their focus, which is fine until we overreach and then suggest that they're ready to replace humans."

The machine learning approach is very data hungry, requiring large amounts of data to be fed into the given system. This raises a number of questions about how that data is collected and how reliable it is.

"If you haven't made the effort to separate truth from mistruth, to filter through that data, there is a real risk that you're going to unleash discriminatory technologies on the world, and we've seen lots of examples of that," Mubeen said. "We're seeing examples of technologies that spew out lots of hatred, that spread misinformation. And so I think it's good to rein them in and remember that at the core is pattern recognition.

"There are some patterns that are truly meaningful, but others are just misleading. And humans are very easily duped by false patterns," he said.

Mubeen pointed to prominent figures from Silicon Valley and beyond, who discuss potential threats that are perhaps a few years or decades away while not paying the same attention to the potential harms of artificial intelligence technologies today. Some examples of these potential harms include the use of AI in mass surveillance programs or how some algorithms reinforce social biases based on the data they have been trained on.

"My concern about these speculations regarding where AI will end up in the future and the apocalyptic doomsday scenarios is that they distract from the present-day issues," Mubeen said. "The present-day issues may not be as glamorous, they may not lend themselves to Hollywood scripts—you can't make a Terminator-type movie out of these issues of bias that exist in our data today—but they are issues that are affecting people now. And I think that a disproportionate amount of attention is focused on future threats."

He continued: "Now, I'm not saying we should ignore them completely—I think some time should be spent thinking through the consequences of how AI might develop, but we have to start by facing up to ethical issues today."

Intelligence is multifaceted, and depending on your definition it is possible to argue that certain technologies have already achieved it in some respects. Computers can already beat the best chess players in the world, and increasingly they are able to outperform human experts on individual tasks. One properly trained AI system, for example, was shown to outperform humans at reading chest X-rays for signs of tuberculosis.

But according to Mubeen, the jump from there to general intelligence has not yet been achieved, and as a result humans still have the edge in many respects.

"There are the aspects of intelligence that, for now at least, are uniquely human and that we would all do well to embrace as we now usher in this new era of smart machines," Mubeen said. "We have these amazing systems of thinking that we've developed over thousands of years, and one of those is mathematics. That's the lens through which I have examined this whole notion of intelligence."

Many people might be surprised by the idea that humans have an advantage over computers in the field of mathematics, given how good machines are at crunching numbers or performing complex calculations. But Mubeen said much of this perception comes down to how mathematics is taught in schools and often portrayed in the wider society—a field purportedly dominated by memorizing formulas, performing calculations and executing algorithms.

"All that stuff computers do very well, and humans often struggle with these skills," he said. "It turns out calculation doesn't come very naturally to us—some people have a knack for it, but many people don't."

But in his book, Mubeen outlines several aspects of human mathematical intelligence that move beyond our very rigid views of the subject and show where humans have an advantage.

A stock illustration shows a robot arm holding a human skull. Is the hype surrounding artificial intelligence overblown? iStock

"For example, the computers can be relied on to crunch through numbers, to perform calculations with speed and precision. But what they don't have is a grounding of the world. They don't have a sense of whether those calculations are meaningful and whether an answer is plausible, or whether it even makes sense to do that calculation within a particular context," he said.

Another example is questioning. Computers increasingly have the ability to answer questions. But it is humans who are driven by an innate curiosity, according to Mubeen.

"One of the claims that I make in the book is that the curiosity of humans and our ability to ask interesting questions will always outpace the ability of computers to answer them," he said. "In many ways, that may come to be the defining trait of mathematicians—that mathematics will evolve into a subject that consists of questions that can't be reduced to computation or the things that computers can do.

"We're already seeing examples of this," he continued. "There are lots of problems that have arisen by just interrogating the limits of what computers can do. In fact, even the story of how the modern computer was invented has its roots in mathematical inquiry—people about a hundred years ago were thinking about how to develop algorithms to solve abstract problems. And that's what led Alan Turing and others to then formally define what we mean by a computer and an algorithm."

Another area is imagination. Because computers have become very good at games like chess and Go, some have speculated that mathematics could be outsourced to computers because the field is based on rules of logic, similar to a game.

"But the point I make is that, contrary to the public view, [mathematics] isn't about following rules—it's as much about tinkering with those rules and creating alternative realities," he said.

One notable example many children are taught in school is that you cannot take the square root of a negative number. But Mubeen said if you go far enough into the subject, you find out that it is possible to do this, producing so-called imaginary numbers.

"Up until the 17th century, this was forbidden; it was accepted convention that you could not take the square root of minus one," he said. "Then a handful of mathematicians explored the idea and asked, What's the worst that can happen? Sometimes you do this and you get ridiculous outcomes. But other times you actually end up with some really powerful constructs. Imaginary numbers now underpin our understanding of electronics and quantum mechanics. They're incredibly useful across a wide range of applications."
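The once-forbidden operation is now routine: Python's built-in complex-number support (a small illustration, not drawn from the book) treats the square root of minus one as an ordinary value.

```python
import cmath

# Python writes the imaginary unit i as 1j and treats it as a number.
i = cmath.sqrt(-1)   # the once-"forbidden" square root of minus one
print(i)             # 1j
print(i ** 2)        # (-1+0j): squaring i recovers -1

# One payoff of admitting imaginary numbers: Euler's formula, e^(i*pi) = -1,
# which connects exponentials to rotation and underpins AC-circuit analysis.
print(cmath.exp(1j * cmath.pi))  # approximately (-1+0j)
```

Nothing "breaks" when the old rule is discarded; instead, a larger and more useful number system appears, which is Mubeen's point about tinkering with the rules.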

He continued: "I would certainly suggest the way AI works today is very much bound by its programmatic instructions—it is given rules and then it has to follow them. And it can do some very creative things within that set of rules—it produces artwork, it wins games of Go, etcetera. But what I'm talking about is a different form of creativity. It's not just combining rules, it's breaking out of them. And that is something we've always done, it's very much in our human nature."

Despite the differences between human and machine intelligence, mathematicians have always found ways to work with technology. There is a rich history, for example, of humans inventing tools to extend our mathematical skills—everything from the abacus to the slide rule and now the modern computer.

As for the future, this spirit of human-machine collaboration is set to continue, which is something we may all benefit from, according to Mubeen.

"As with any collaboration, it forces you to reflect on what you bring to the table, what skills you have that complement your partner, which in this case is computers," he said. "When we think of computers as collaborators, then we're less likely to overreach and assume too much of them.

"I worry that with the adversarial framing of human versus machine, it leads to the very binary notion that one is better than the other, whereas we can think of them in more collaborative terms. I think that leaves more room for scrutiny to understand how the two can work more closely together."

About the writer

Aristos is a Newsweek science and health reporter with the London, U.K., bureau. He is particularly focused on archaeology and paleontology, although he has covered a wide variety of topics ranging from astronomy and mental health to geology and the natural world. Aristos joined Newsweek in 2018 from IBTimes UK and had previously worked at The World Weekly. He is a graduate of the University of Nottingham and City University, London. You can get in touch with Aristos by emailing a.georgiou@newsweek.com. Languages: English, Spanish
