Demis Hassabis, CEO of Google DeepMind, warns of the risk of "bad actors" abusing AI
  • Elena
  • February 18, 2026

Demis Hassabis, chief executive officer of Google DeepMind, has warned that biosecurity and cybersecurity risks from rapidly advancing artificial intelligence systems require urgent global attention, as capabilities move closer to artificial general intelligence (AGI).

Speaking at an AI summit in New Delhi, Hassabis said current AI models are already becoming highly capable in cyber-related tasks, increasing the risk of misuse if safeguards do not keep pace. He stressed that defence mechanisms must evolve faster than offensive capabilities as systems grow more powerful.

He said AGI — broadly defined as AI with human-level general problem-solving ability — could emerge within the next five to eight years. According to him, the industry is entering an “agentic” phase in which AI systems are becoming more autonomous and capable of taking multi-step actions with less human supervision. That shift, he noted, will intensify both opportunity and risk.

Hassabis cautioned that advanced AI could be repurposed by malicious actors, whether individuals, organised groups, or nation-states, because many of these systems are inherently dual-use: the same capabilities that serve legitimate ends can be turned to harmful ones. He said stronger guardrails and oversight frameworks are necessary to prevent systems from being pushed into unsafe or unintended applications.

He identified two major challenge areas: social and technical. On the social side, he called for international dialogue and a baseline set of global standards for powerful AI systems. On the technical side, he emphasised the need to make models more robust, reliable, and aligned with human intent.

Highlighting current technical gaps, Hassabis said today’s leading AI systems lack continual learning. Most models are trained and fine-tuned, then effectively frozen before public release, limiting their ability to adapt from real-world experience. Future systems, he said, should be able to continuously learn, personalise, and adjust to changing contexts.
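
To make that distinction concrete, here is a minimal sketch in Python using a toy linear model. The class, data, and update rule are illustrative stand-ins (not any DeepMind system): it contrasts today's train-then-freeze deployment pattern with a model that keeps updating on post-deployment experience.

```python
# Toy illustration: a "frozen" model versus one that keeps learning after release.
# All names and numbers here are hypothetical, chosen only to show the contrast.

class ToyModel:
    def __init__(self, weight: float = 0.0):
        self.weight = weight
        self.frozen = False

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, target: float, lr: float = 0.1) -> None:
        """One gradient-style step on squared error; a no-op once frozen."""
        if self.frozen:
            return  # the current pattern: weights fixed at public release
        error = self.predict(x) - target
        self.weight -= lr * error * x

# Pre-release: train and fine-tune, then freeze.
served = ToyModel()
for x, y in [(1.0, 2.0), (2.0, 4.0)]:
    served.update(x, y)
served.frozen = True  # effectively frozen before deployment

# A continual learner would instead keep calling update() on real-world
# feedback after deployment, adapting as conditions change.
online = ToyModel(weight=served.weight)
online.update(3.0, 9.0)  # post-deployment experience changes the weights
print(served.weight, online.weight)  # the frozen model stays put; the online one adapts
```

Real continual learning is far harder than this sketch suggests (stability, forgetting, and safety of live updates are open problems), which is why deployed systems today stop learning at release.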

He also pointed to weaknesses in long-term planning and consistency. While present-day models can handle short-horizon planning, they struggle with coherent strategies spanning months or years. He described modern systems as “jagged intelligences” — highly capable in some domains but unreliable in others.

Looking ahead, Hassabis said AI is poised to significantly boost scientific productivity, especially in cross-disciplinary research, by helping experts discover patterns and connections that are difficult to detect through traditional methods.