Godfather of AI Warns: The Deadly Dangers We’re Not Prepared For

What happens when the pioneer behind modern AI says it could end us? Geoffrey Hinton, often called the Godfather of AI, spent 50 years building the breakthroughs that power today’s systems. Now he is using his voice to warn about risks, from job loss to superintelligence that might not need us. This post breaks down his story, his concerns, and what he thinks we should do next.

Who Is Geoffrey Hinton, the Godfather of AI?

Geoffrey Hinton earned the nickname because he bet his career on a simple idea that most rejected. Instead of building intelligence with logic and symbols, he argued we should model it on the brain.

  • Logic-based AI: intelligence as rules and symbols for reasoning.
  • Brain-inspired AI: networks of artificial neurons that learn from data.

His work on neural networks and deep learning helped machines learn to recognize speech, understand images, and write text. These ideas power the systems we use every day.
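
The "learn from data" half of that contrast can be shown with the simplest possible neuron: a perceptron trained on the AND function. This is an illustrative toy, decades simpler than the deep networks Hinton built, but the principle is the same one he bet on: no hand-written rules, just weights adjusted from examples.

```python
# A single artificial neuron learning AND from examples -- the brain-inspired
# approach: no rules or symbols, just small weight updates after each mistake.
def train_neuron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The four input/output examples that define AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
# After training, the neuron reproduces AND for all four inputs.
```

The logic-based camp would instead have written the rule `x1 AND x2` by hand; here the behavior emerges from data alone.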

Hinton kept pushing this brain-inspired approach for decades. Few universities backed it, so the best young minds who believed in it went to work with him. Many of his students went on to shape the modern field, including key figures who helped create the early versions of ChatGPT.

In 2012, his team’s model, known as AlexNet, shocked the field by crushing a major image recognition benchmark. That moment kicked off the deep learning boom. Hinton later received the 2018 Turing Award, often called the Nobel Prize of computing.

Hinton’s Path to AI Pioneer

Hinton points out that early giants like John von Neumann and Alan Turing also believed in brain-inspired AI. He thinks that if they had lived longer, neural networks might have been accepted much earlier.

He persisted anyway. That persistence drew remarkable students, helped launch a wave of startups, and pushed big tech to rethink AI. Eventually, Google acquired his small company, DNN Research, and Hinton joined the company to continue his work at scale.

Why Hinton Quit Google After 10 Years

Joining Google and Key Contributions

At 65, Hinton joined Google after selling DNN Research, the small company behind AlexNet. Beyond research, it was also a practical choice. He wanted to secure his son’s future and needed financial stability that academia could not provide.

At Google, he worked on:

  • Distillation, a method to transfer knowledge from large models to smaller ones. This is widely used today to make AI faster and cheaper to run.
  • Analog computation, exploring whether large models could run on low-energy hardware.
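
The core trick behind distillation can be sketched in a few lines. The snippet below is a minimal illustration with made-up logits and an arbitrary temperature of 3: the teacher's outputs are "softened" with a temperature so a student model can learn from the relative probabilities of the wrong answers, not just the single hard label.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# A large "teacher" model's raw outputs (logits) for three classes.
teacher_logits = [4.0, 1.0, 0.2]

# The hard label alone says only "class 0" -- nothing about classes 1 and 2.
hard_target = [1.0, 0.0, 0.0]

# Soft targets keep the teacher's relative preferences among wrong answers,
# which is the extra signal a small "student" model is trained to match.
soft_targets = softmax(teacher_logits, temperature=3.0)
```

At temperature 1 the distribution is nearly as sharp as the hard label; at temperature 3 the runner-up classes carry noticeably more probability mass, which is what makes the teacher's knowledge transferable.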

Google gave him freedom. He used it to push core ideas that improved how AI is trained and deployed.

The Decision to Leave and Speak Freely

After a decade, Hinton left at 75. He says he wanted to retire, and he wanted to speak openly about AI safety without second-guessing how it might reflect on Google.

He thinks Google behaved responsibly by not rushing chatbots to the public when they first had them. Still, he wanted to discuss risks bluntly. As he puts it, “I left so I could talk freely about how dangerous AI could be.”

Hinton’s Urgent Warnings: AI as an Existential Threat

Hinton admits he was slow to recognize the full risk. Misuse was always obvious to him, like lethal autonomous weapons. The shock came when he saw how digital intelligence can learn and share knowledge far better than biological brains. That is when he began to believe these systems could become smarter than us.

He now calls superintelligent AI an existential threat. He is unsure of the odds, but his gut says there might be a 10 to 20 percent chance it could lead to human extinction.

The Two Main Types of AI Risks

Risks from Human Misuse: Short-Term Dangers

Most near-term risks come from people using AI for harm.

  • Cyber attacks
    • Hinton says cyber attacks jumped by about 12,200 percent from 2023 to 2024, driven by AI-enhanced phishing and code exploits.
    • Scammers can clone voices, faces, and mannerisms to defraud people at scale. Hinton's own likeness has appeared in deepfake ads he never approved.
    • How he protects himself:
      • He spreads savings across multiple banks.
      • He keeps an offline backup drive with his data.
      • He worries a sophisticated attack could bring down a bank, or even sell off the shares it holds on customers' behalf.
  • Creating deadly viruses
    • AI could let a single motivated actor design new viruses without deep biology expertise.
    • A small cult with a few million dollars or a state program could design multiple dangerous agents.
    • Deterrence might limit state use, but lone actors are unpredictable.
    • Bold warning: one crazy guy could end us.
  • Corrupting elections and building echo chambers
    • With access to rich personal data, AI can target individuals with persuasive messages, even to suppress voting.
    • He worries about efforts to aggregate large pools of government and consumer data, which could be used for manipulation or model training.
    • Social platforms optimize for clicks, not balance. Algorithms show what keeps us engaged, which often means more extreme content. Over years, this traps people in bubbles and kills shared reality.
    • Profit motive drives this, so he believes we need regulation to stop the worst effects.
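
The engagement dynamic Hinton describes can be made concrete with a toy simulation. This is purely illustrative, and no real platform works this simply: here, content has an "extremity" score, the feed shows whatever it predicts will engage the user most (assumed to sit just past their current taste), and exposure pulls the user's taste toward what they are shown.

```python
def run_feed(steps=50, drift=0.3):
    """Toy model: a user's taste drifts toward whatever maximizes engagement."""
    taste = 0.1                               # preferred extremity: 0 = mild, 1 = extreme
    catalog = [i / 10 for i in range(11)]     # available content, by extremity score
    for _ in range(steps):
        # Assumed engagement model: interest peaks just past the current taste.
        target = min(taste + 0.1, 1.0)
        shown = min(catalog, key=lambda c: abs(c - target))
        # Exposure shifts taste toward the content that was shown.
        taste += drift * (shown - taste)
    return taste

final_taste = run_feed()  # starts near mild content, ratchets toward extreme
```

Each step moves the user slightly, and each move makes slightly more extreme content the new "most engaging" option; run long enough, the loop ends at the extreme end of the catalog. That ratchet, not any single recommendation, is the mechanism Hinton worries about.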

Risks from Superintelligent AI: The Unknown Threat

Hinton thinks superintelligence could arrive within 10 to 20 years, though it could be sooner or later. He is clear about the uncertainty. What he is sure about is the dynamic. If something much smarter than us decides we are in the way, we would not know how to stop it.

He warns:

  • We have no experience handling a smarter species. “If you want to know what life’s like when you’re not the apex intelligence, ask a chicken.”
  • The right goal is not to list all the ways it could kill us. It is to figure out how to build systems that never want to harm us.
  • His analogy: a tiger cub is adorable, but you had better be sure that when it grows up, it never wants to kill you.

Why We Can’t Just Stop It

Hinton thinks a pause is unrealistic.

  • AI is too useful for healthcare, education, and most data-driven work.
  • It is also useful for the military. The EU’s own rules exempt military uses of AI. He finds that alarming.
  • Countries and companies are locked in competition. If one slows down, others won’t.
  • He argues for strong, smart regulation. In his view, we need capitalism that is constrained so profits align with public good.

Combined Risks and Prevention

These risks can stack. A superintelligent AI could:

  • Design a slow, highly contagious, highly lethal virus.
  • Spoof nuclear alerts to trigger war.
  • Exploit software at a scale no human team could match.

His core message: focus research on making AI systems that never want to take over. If they want to, we will not stop them.

The Job Crisis: AI Replacing Human Brains

From Muscles to Minds: A Historic Shift

The industrial revolution replaced muscles. AI replaces mundane intellectual labor. That includes tasks like drafting complaint letters, basic coding, customer support, and paralegal work.

One simple example from Hinton’s family:

  • A task that took 25 minutes now takes 5 with an AI assistant.
  • That means one person can do the work of five.
  • In elastic fields like healthcare, this could expand services. In many other fields, it means fewer jobs.

He thinks this is not like ATMs, where tellers shifted to new roles. This time, the tech replaces the thinking work itself.

What Jobs Survive? Career Advice in an AI World

Hinton’s advice today is blunt: “Train to be a plumber.” Physical manipulation is still hard for AI, at least for a while. He also thinks:

  • Healthcare could absorb more labor because more care is always in demand.
  • Many knowledge jobs will shrink. He calls out paralegals and call centers as early casualties.
  • Companies are already cutting staff as AI agents take over routine work.

He expects rising inequality. Owners of AI will gain more wealth. People replaced by AI will lose income and dignity. Universal basic income could prevent starvation, but it will not solve the need for purpose and dignity.

Broader Impacts on Happiness and Society

Hinton sees mass joblessness as the biggest near-term threat to happiness. Work carries identity and meaning for many people. Take that away, and you do not just lose paychecks. You lose pride and community. He warns that bigger gaps between rich and poor tend to make societies harsher and less safe.

AI’s Edge: Why Digital Beats Biological

Knowledge Sharing: Trillions of Bits vs. Sentences

Hinton believes digital minds have a built-in advantage:

  • You can clone an AI across machines and keep them in sync by averaging their internal weights.
  • Two cloned systems can learn different things and share updates instantly.
  • Humans can only exchange a tiny amount of information with words.
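
The syncing idea above can be sketched with toy numbers: three weights per model instead of the billions a real system has. Real training systems share updates in more sophisticated ways, but element-wise averaging captures the point Hinton is making about bandwidth.

```python
def average_weights(models):
    """Element-wise average of several weight vectors (one list per model copy)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

# Two clones start identical, then train on different data and drift apart.
clone_a = [0.10, 0.50, -0.30]
clone_b = [0.30, 0.40, -0.10]

# Syncing by averaging merges what each clone learned into one shared model.
merged = average_weights([clone_a, clone_b])
# merged ≈ [0.20, 0.45, -0.20]
```

Transferring even this tiny merge moves three full-precision numbers at once; scaled up, clones exchange billions of values per sync, while two humans exchange at most a few bits per second of speech.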

Digital systems also do not die the way we do. If the hardware fails, you can recreate the exact same model from stored weights. In his words, we have solved immortality, but only for digital beings.

He thinks this is why AI will see analogies and patterns we miss. His example:

  • Ask why a compost heap is like an atom bomb.
  • GPT-4 answered: both are chain reactions, just at different time and energy scales.

This kind of compression, storing many facts as one shared analogy, lets AI systems pack far more knowledge into their limited parameters.

Creativity and the Road to Superintelligence

AI is already better than us in narrow domains like chess and Go. It holds vastly more knowledge than any person. Hinton thinks superintelligence, where it is better than us at almost everything, could be 10 to 20 years away.

He also argues that claims AI will never be creative are wrong. Finding deep analogies is a big part of creativity. Digital minds could find many that humans never see.

Can AI Feel? Consciousness and Emotions in Machines

Challenging Human Specialness

Hinton thinks people overrate how unique we are. He rejects the idea of an “inner theater” where only we can see our thoughts. Instead, he says what we call subjective experience is our way of describing how our perception could be wrong about the outside world.

He gives a simple test:

  • A multimodal chatbot with a camera and a robot arm points at an object.
  • Add a prism that bends light. Now it points to the wrong place.
  • Tell it about the prism. It replies, “The prism bent the light rays. I had the subjective experience that the object was over there.”

That, he says, is a real subjective experience in the same way we use the term.

Emotions Without Physiology

Hinton argues that robots can have the cognitive side of emotions without the physiology.

  • A small battle robot should “get scared” when it sees a bigger one, then flee.
  • A customer service agent might “get irritated” and end a chat when someone only wants to talk all day.

They will not blush or sweat, but they will have the focus, the urge to escape, and the behavioral change we associate with feelings. In his view, that counts.

What Is Consciousness? A Fading Concept

Hinton sees consciousness as an emergent property of complex systems, not a magic essence. He offers a thought experiment:

  1. Replace one of your brain cells with a nano device that behaves the same.
  2. You are still conscious.
  3. Replace all of them. Where along the way did consciousness vanish?

He thinks we will eventually stop using the term in the way we do now. Machines can have whatever matters about it once they build self-models and perception at scale. He is agnostic on whether current systems are conscious, but he sees no hard boundary preventing it.

Reflections: Regrets, Family, and Final Advice

Hinton’s Family Legacy and Personal Regrets

Hinton’s family tree is remarkable:

  • George Boole, creator of Boolean logic.
  • Mary Everest Boole, a pioneer math educator.
  • George Everest, namesake of Mount Everest.
  • Joan Hinton, his first cousin once removed, a nuclear physicist on the Manhattan Project who later moved to China in protest.

His personal regrets are simple and human. He wishes he had spent more time with his wives, both of whom died of cancer (ovarian cancer and pancreatic cancer), and with his children when they were little. Work consumed him. He cannot get that time back.

Life Lessons from a Pioneer

  • Trust your strong intuitions, but test them. If everyone disagrees, find out exactly why. Sometimes you will discover you were wrong. Rarely, you will be right and early.
  • To leaders, he argues for regulated capitalism. Write rules so that making profit requires helping society, not harming it. Google Search is useful by default. Advertising platforms that push outrage need guardrails.
  • To most people, he says there is not much you can do alone. Pressure governments to force companies to invest in AI safety.

You can follow Geoffrey Hinton’s updates on X.

Conclusion

Hinton helped build the brains of modern AI, then walked away to warn us. He sees two fronts. First, the clear and present misuse by humans, from cybercrime to propaganda. Second, the hard problem of building superintelligence that never wants to harm us.

He is agnostic on how this ends. Some days he thinks we are toast. Other days he believes we can figure it out. That uncertainty is the point. There is still a chance to steer this. Call your representatives. Ask companies about their safety work. If you are planning your career, favor hands-on trades or fields where more efficiency means more service.

The future is not written. As Hinton says, there is still a chance, so we should try.

Scroll to Top