For years, security experts have warned that artificial intelligence would eventually give hackers a dangerous new edge. That moment has now arrived. Google’s Threat Intelligence Group has published a report confirming that a criminal hacking group used an AI model to discover a zero-day vulnerability and nearly pulled off a mass cyberattack. Google says it caught and stopped the attack before the hackers could deploy it at scale.
The exploit targeted a popular open-source web-based system administration tool – the kind businesses use to remotely manage servers, employee accounts, and security settings. Had it gone undetected, it would have let hackers bypass two-factor authentication, which is often the last line of defense protecting accounts. The attackers planned to deploy it in a mass exploitation event targeting multiple organizations at once. Google alerted the tool's developer in time for a patch to be issued before any damage was done.
The company declined to name the hacking group, the specific software targeted, or which AI model was used, but confirmed it was not Google’s own Gemini. According to Google, groups linked to China and North Korea have also shown significant interest in using AI tools like OpenClaw for vulnerability discovery.
This attack is alarming, but it is far from isolated. Researchers at Georgia Tech recently uncovered VillainNet, a hidden backdoor that embeds itself inside a self-driving car's AI and works 99% of the time when triggered. Meanwhile, a Korean research team showed that AI models can be reverse-engineered remotely using a small antenna through walls, with no system access needed. Recently, a group of Discord users bypassed access controls to reach Anthropic’s restricted Mythos model through a third-party vendor environment.
On the defense side, a growing discipline called AI pentesting is emerging to stress-test how language models behave when exposed to adversarial inputs, but the field is still in its early stages. The implications of these developments are profound. AI-powered cyberattacks are no longer theoretical; they are happening now. Hackers are leveraging AI to automate vulnerability discovery, craft more convincing phishing emails, and even generate malware code that can adapt to defensive measures. The use of AI in cybersecurity represents a paradigm shift, moving from traditional manual attacks to intelligent, automated attacks that can learn and evolve.
Zero-day vulnerabilities are particularly dangerous because they are unknown to the software vendor and have no available patch. When AI is used to discover such vulnerabilities, the speed and scale of exploitation increase dramatically. In the past, discovering a zero-day might take weeks or months of manual reverse engineering. Now, with the right AI model, a hacker could find a critical flaw in a matter of hours. This is what Google’s Threat Intelligence Group observed: a group using an AI model to identify a previously unknown vulnerability in a widely used administrative tool.
The tool in question is used by thousands of organizations around the world. A successful attack would have allowed hackers to compromise sensitive systems, steal data, and move laterally within networks. Two-factor authentication, often considered a gold standard for security, would have been rendered useless. The attackers could have gained persistent access to corporate networks, government agencies, or critical infrastructure.
Google’s role in thwarting this attack highlights the importance of proactive threat intelligence. The company has invested heavily in AI-based defenses, including its own security AI models that can detect unusual patterns and flag potential threats. However, the asymmetric nature of cyberwarfare means that defenders must be perfect every time, while attackers only need to succeed once. The fact that a criminal group was able to use an AI model to find a zero-day suggests that the barrier to entry for sophisticated cyber attacks is lowering.
The connection to state-sponsored groups is particularly concerning. According to Google, groups linked to China and North Korea have shown significant interest in using AI tools like OpenClaw for vulnerability discovery. This indicates that nation-states are also investing in AI-powered cyber capabilities, potentially for espionage, sabotage, or other malicious purposes. The use of AI by state actors could lead to an escalation in cyber conflicts, where attacks become more frequent, more targeted, and more damaging.
Beyond zero-day discovery, AI is being used in other aspects of cyberattacks. For example, deepfake technology is being used to impersonate executives in voice phishing scams. AI-generated text is being used to craft emails that perfectly mimic the writing style of a target’s colleagues, making social engineering attacks more convincing. Machine learning models are being trained to identify weaknesses in network defenses and recommend the most effective attack paths.
On the defensive side, organizations are scrambling to catch up. AI-driven security information and event management (SIEM) systems are becoming more common, but they still rely on quality data and well-trained models. AI pentesting, which involves using AI to test the robustness of other AI systems, is an emerging field. However, as the attack on Google showed, defenders need to be faster than attackers in detecting and patching vulnerabilities. The time between vulnerability discovery and exploitation is shrinking, and AI is compressing it further.
The incident also raises questions about the ethical use of AI in cybersecurity. While AI can be a powerful tool for defense, it can also be misused. The same technology that helps security teams find and fix bugs can be repurposed by hackers to find and exploit them. There is a growing call for industry-wide standards and regulations around the use of AI in security. For now, companies must rely on collaboration, as Google did by alerting the tool’s developer, to mitigate risks.
Looking ahead, the cybersecurity community must adapt to this new reality. Traditional security training, patch management, and network segmentation are still important, but they are no longer sufficient. Organizations need to incorporate AI-specific defenses, such as adversarial training for their own AI models, monitoring for AI-generated attacks, and investing in AI-driven threat intelligence. The attack thwarted by Google is a wake-up call that AI is being weaponized at industrial scale, and the only way to respond is with equally advanced AI defenses.
Source: Digital Trends News