Humans are usually quite good at recognizing their own mistakes, but artificial intelligence systems are not. According to recent research, AI has intrinsic limitations stemming from a century-old mathematical paradox.
AI systems, like some people, often display a degree of confidence that far exceeds their actual ability. And, like an overconfident person, many AI systems do not know when they are making mistakes. In fact, it is often harder for an AI system to recognize that it is making a mistake than to produce a correct answer in the first place.
According to researchers from the University of Cambridge and the University of Oslo, instability is modern AI’s Achilles’ heel, and a mathematical paradox demonstrates AI’s limits. Neural networks, the state-of-the-art tool in AI, loosely mimic the connections between neurons in the brain. The researchers show that there are problems for which stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in certain specific cases can algorithms compute stable and accurate neural networks.
The researchers propose a classification theory describing when, under particular conditions, neural networks can be trained to provide a trustworthy AI system. Their findings have been published in the Proceedings of the National Academy of Sciences.
Deep learning, the leading AI technique for pattern recognition, has received a great deal of attention recently. Examples include diagnosing disease more accurately than physicians and preventing road accidents through autonomous driving. Many deep learning systems, however, are untrustworthy and easy to fool.
“Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen of Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are employed in areas where they can do significant harm if they go wrong, trust in those systems must be the top priority.”
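The instability the researchers describe can be illustrated with a toy example (not from the paper itself): a tiny, targeted nudge to the input flips a classifier’s decision. Here the “classifier” is a hypothetical random linear model, but the same sensitivity is what makes far larger deep networks easy to fool.

```python
import numpy as np

# Hypothetical two-class linear classifier with random weights.
rng = np.random.default_rng(0)
dim = 10_000
W = rng.normal(size=(2, dim))          # one weight vector per class

def classify(x):
    scores = W @ x
    return int(np.argmax(scores)), scores

x = rng.normal(size=dim)               # an arbitrary input
label, scores = classify(x)

# Nudge the input in the direction that favours the other class, just
# enough to overturn the score gap (a gradient-style perturbation).
other = 1 - label
direction = W[other] - W[label]
gap = scores[label] - scores[other]
eps = 1.1 * gap / np.dot(direction, direction)
x_adv = x + eps * direction

adv_label, _ = classify(x_adv)
rel_change = np.linalg.norm(x_adv - x) / np.linalg.norm(x)
print(f"prediction {label} -> {adv_label}, input changed by {rel_change:.2%}")
```

The perturbation is only a small fraction of the input’s size, yet the predicted class flips: the model is accurate on the original input but unstable under a minuscule change.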
The researchers traced the paradox back to two twentieth-century mathematical giants: Alan Turing and Kurt Gödel. At the turn of the twentieth century, mathematicians sought to justify mathematics as the ultimate consistent language of science. Turing and Gödel, however, demonstrated a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be solved by any algorithm. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.
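Turing’s result that some computational problems admit no algorithm can be sketched in a few lines of Python. Everything here is illustrative, not from the paper: `halts` is a hypothetical oracle that, by Turing’s diagonal argument, no correct program can implement.

```python
def halts(program, argument):
    """Supposed to return True iff program(argument) would halt.
    Turing proved no correct implementation of this can exist."""
    raise NotImplementedError("the halting problem is undecidable")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts: loop forever if
    # `program` halts on its own source, halt immediately otherwise.
    if halts(program, program):
        while True:
            pass
```

Asking whether `diagonal(diagonal)` halts traps any would-be oracle: whichever answer `halts` gives, `diagonal` does the opposite, so a correct `halts` cannot exist. Some well-posed problems simply have no algorithm, which is the template for the neural-network result described here.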
Decades later, the mathematician Steve Smale proposed a list of 18 unsolved mathematical problems for the twenty-first century. The 18th problem concerned the limits of intelligence for both humans and machines.
“The paradox first identified by Turing and Gödel has now been brought forward into the world of AI by Smale and others,” said co-author Dr Matthew Colbrook of the Department of Applied Mathematics and Theoretical Physics. “There are fundamental limits inherent in mathematics and, similarly, AI algorithms cannot exist for certain problems.”
According to the researchers, because of this paradox there are cases where good neural networks exist, yet an inherently trustworthy one can never be built. “No matter how accurate your data is, you can never get the perfect information to build the required neural network,” said co-author Dr Vegard Antun from the University of Oslo.
The impossibility of computing a good neural network, even though one exists, holds regardless of the quantity of training data. No matter how much data an algorithm can access, it will not produce the required network. “This is similar to Turing’s argument: there are computational problems that cannot be solved regardless of computing power and runtime,” Hansen said.
According to the researchers, not all AI is inherently flawed, but it is only reliable in specific areas, using specific methods. “The issue is with areas where you need a guarantee, because many AI systems are a black box,” said Colbrook. “It’s completely fine in some situations for an AI to make mistakes, but it needs to be honest about it. And that’s not what we’re seeing with many systems; there’s no way of knowing whether they’re more or less confident about a decision.”
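The point that a network’s output score is not the same as honest uncertainty can be shown with a toy example (again, an illustration rather than anything from the paper): a softmax “confidence” can be pushed arbitrarily close to 1 without the prediction, or the evidence behind it, changing at all.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()

logits = np.array([2.0, 1.5, 0.5])     # hypothetical network outputs

for scale in (1, 5, 20):
    p = softmax(scale * logits)
    print(f"scale {scale:2d}: prediction {int(np.argmax(p))}, "
          f"reported confidence {p.max():.4f}")
```

Scaling the logits never changes the prediction, yet it drives the reported “confidence” toward 1: the number a black-box system emits is not, by itself, a trustworthy measure of how certain it really is.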
“At the moment, AI systems can sometimes have a touch of guesswork about them,” Hansen remarked.
“You try something, and if it doesn’t work, you add more in the hope that it will. You’ll eventually grow tired of not getting what you want and try a different approach. It is critical to understand the limitations of different techniques. We have reached a point where AI’s practical successes have far outpaced theory and understanding. A program on understanding the foundations of AI computing is needed to bridge this gap.”
“When 20th-century mathematicians identified different paradoxes, they didn’t stop studying mathematics. They just had to find new paths, because they understood the limitations,” Colbrook said. “For AI, it may be a case of modifying existing approaches or developing new ones to build systems that can solve problems in a trustworthy and transparent way, while understanding their limitations.”
The researchers’ next step is to combine approximation theory, numerical analysis and the foundations of computation to determine which neural networks can be computed by algorithms, and which can be made stable and trustworthy. Just as the paradoxes of Gödel and Turing about the limits of mathematics and computers led to rich foundational theories describing both the limitations and the possibilities of mathematics and computation, perhaps a similar foundational theory will blossom in AI.
Matthew Colbrook is a Junior Research Fellow at Trinity College, Cambridge. Anders Hansen is a Fellow of Peterhouse, Cambridge. The research was supported in part by the Royal Society.