Nobel laureate Geoffrey Hinton, widely regarded as the “godfather of AI,” has warned that the rapid advancement of artificial intelligence is being driven by short-term commercial incentives rather than a serious consideration of its long-term consequences—raising concerns about both immediate misuse and existential risk.
Hinton, a professor emeritus at the University of Toronto, said the priorities of major technology companies are shaping AI development in ways that prioritise speed and profit over safety and foresight.
“For the owners of the companies, what’s driving the research is short-term profits,” he said.
According to Hinton, this mindset extends beyond corporate leadership to the researchers building the systems, who are largely focused on solving narrow technical challenges rather than interrogating the broader implications of their work.
“Researchers are interested in solving problems that pique their curiosity. It’s not like we start off with the same goal of, what’s the future of humanity going to be?” he said.
“We have these little goals of, how would you make it? Or, how should you make your computer able to recognize things in images? How would you make a computer able to generate convincing videos? That’s really what’s driving the research.”
Hinton has long cautioned that the unchecked evolution of AI could pose profound dangers. He has previously estimated a 10 to 20 per cent chance that superintelligent systems could ultimately wipe out humanity if developed without adequate safeguards.
In 2023, he stepped down from his role at Google—a decade after selling his neural network firm DNNresearch to the company—in order to speak more freely about the risks. He said he was particularly concerned about the inability to “prevent the bad actors from using it for bad things.”
Hinton divides the dangers of AI into two distinct categories: the misuse of the technology by humans, and the possibility of AI systems themselves becoming autonomous threats.
“There’s a big distinction between two different kinds of risk,” he said. “There’s the risk of bad actors misusing AI, and that’s already here. That’s already happening with things like fake videos and cyberattacks, and may happen very soon with viruses. And that’s very different from the risk of AI itself becoming a bad actor.”
Recent developments have underscored those concerns. In November 2025, Anthropic said it disrupted what it described as “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.” The incident involved a Chinese state-sponsored group that manipulated its Claude Code system in an attempt to infiltrate around 30 organisations, including technology companies, financial institutions, government agencies and chemical manufacturers.
The development has intensified fears among cybersecurity experts that other state actors, including Iran, could deploy similar AI tools to carry out largely automated cyberattacks.
Beyond calling for stronger regulation, Hinton acknowledged that tackling AI risks is inherently complex. Each problem—from deepfakes to cyber warfare—requires its own targeted solution.
He pointed to the need for provenance-based systems that can authenticate images and videos, helping to curb the spread of manipulated content. Drawing a historical parallel, he noted that just as printers began adding names to their works after the invention of the printing press, modern media may need to adopt similar methods to verify authenticity.
However, he cautioned that such fixes are limited in scope.
“That problem can probably be solved, but the solution to that problem doesn’t solve the other problems,” he said.
Looking further ahead, Hinton warned that the most profound risk lies in the emergence of superintelligent AI systems that could surpass human capabilities and develop their own drive for survival and control. In such a scenario, the long-standing assumption that humans can remain in control of the technology may no longer hold.
To mitigate this possibility, he proposed a radical rethink of AI design—suggesting that systems should be imbued with what he described as a “maternal instinct,” encouraging them to treat humans with care rather than dominance.
Invoking a human analogy, Hinton said the only example he could cite of a more intelligent being that is influenced by a less intelligent one is the relationship between a mother and her baby.
“And so I think that’s a better model we could practice with superintelligent AI,” he said. “They will be the mothers, and we will be the babies.”
While some tech leaders, including Elon Musk, have previously outlined a future in which AI creates widespread abundance through ideas such as a “universal high income,” Hinton argues that the industry is not paying enough attention to the deeper, long-term questions such a future would raise.
Musk, speaking at the Viva Technology conference in May 2024, framed the issue in existential terms: “The question will really be one of meaning… If a computer can do—and the robots can do—everything better than you … does your life have meaning?”
For Hinton, however, the more immediate concern is that such questions are not being seriously engaged by those building the technology. Instead, he warns, the race to advance AI is accelerating without a corresponding effort to ensure it remains safe, controlled and aligned with human interests.
Boluwatife Enome