AI Fuels Rise in Scams, but Google Uses Advanced AI Tools to Fight Back
The rapid advancement of artificial intelligence is transforming the digital landscape, but it is also fueling a sharp rise in online scams and spam. From misleading advertisements promoting miracle cures to highly realistic fake videos mimicking celebrities, AI-generated content is making fraudulent activities more convincing and widespread. However, technology companies like Google are deploying their own AI-powered systems to counter this growing threat.
Experts say that while online scams are not a new phenomenon, the introduction of generative AI has significantly amplified their scale and speed. Fraudsters can now create large volumes of convincing content in a fraction of the time it once took, making it more difficult for users to distinguish between legitimate and malicious material. This shift has led to a surge in AI-related cybercrime, with authorities reporting a substantial increase in complaints and financial losses.
According to recent data from the Federal Bureau of Investigation, more than 22,000 complaints related to AI-driven scams were recorded in the past year, resulting in losses exceeding $893 million. These figures highlight the growing impact of AI-enabled fraud and the urgent need for effective countermeasures.
In response, Google has strengthened its defenses by integrating advanced AI into its advertising safety systems. The company’s generative AI model, Gemini, plays a central role in identifying and blocking harmful ads. According to Google’s latest ads safety report, Gemini detected more than 99% of policy-violating advertisements at an early stage, before they were ever shown to audiences.
The scale of enforcement is significant. In 2025 alone, Google blocked or removed more than 8.3 billion advertisements, including over 600 million ads linked to scams and policy violations. This marks a sharp increase over the previous year, reflecting both the growing volume of malicious content and the company’s improved detection capabilities. In addition, nearly 25 million advertiser accounts were suspended, millions of them tied directly to fraudulent activity.
One of the key advantages of AI-driven defense systems is their ability to analyze vast amounts of data in real time. Gemini processes hundreds of billions of signals, including account behavior, campaign patterns, and historical data, to assess whether an advertisement is legitimate or potentially harmful. This level of analysis allows Google to identify subtle indicators of malicious intent that may not be immediately obvious to human reviewers.
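The signal-weighing process described above can be sketched as a simple risk-scoring function. To be clear, this is a purely illustrative toy example: the signal names, weights, and threshold below are invented for the sketch and bear no relation to Google’s proprietary models, which operate at a vastly larger scale.

```python
# Toy illustration of signal-based ad screening. All signal names, weights,
# and the blocking threshold are hypothetical, invented for this sketch.

def ad_risk_score(signals: dict[str, float]) -> float:
    """Weighted sum of normalized signals, each expected in [0, 1]."""
    weights = {
        "new_account": 0.3,            # recently created advertiser account
        "payment_anomaly": 0.4,        # unusual billing behavior
        "landing_page_mismatch": 0.5,  # ad copy diverges from destination
        "celebrity_likeness": 0.6,     # possible impersonation content
    }
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

def should_block(signals: dict[str, float], threshold: float = 0.8) -> bool:
    """Block the ad when the combined risk score reaches the threshold."""
    return ad_risk_score(signals) >= threshold

# A new account paired with an impersonation flag scores about 0.9 and is blocked;
# a new account alone scores 0.3 and passes.
suspicious = {"new_account": 1.0, "celebrity_likeness": 1.0}
print(should_block(suspicious))
print(should_block({"new_account": 1.0}))
```

In a real system the interesting work lies in learning such weights from historical data rather than hand-coding them, which is where models like Gemini come in.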
Speed is another critical factor. Tasks that previously required several seconds or even minutes can now be completed in milliseconds, enabling the system to stop harmful content before it reaches users. This proactive approach is essential in an environment where scammers can generate and distribute content at unprecedented speed.
In addition to AI-based detection, Google employs multiple layers of defense, including advertiser verification programs and policy enforcement teams. Thousands of personnel work alongside automated systems to ensure compliance with advertising standards and to respond to emerging threats. This combination of human oversight and machine intelligence helps maintain a balance between security and fairness, reducing the risk of incorrectly penalizing legitimate advertisers.
The use of AI has also improved enforcement accuracy. Reports indicate that wrongful suspensions of legitimate advertisers have fallen significantly, a sign that advanced models can better interpret context and intent. That precision matters for maintaining trust among businesses that depend on digital advertising platforms.
Despite these advancements, experts believe the battle between AI-powered scams and AI-driven defenses is far from over. As technology continues to evolve, both attackers and defenders are likely to become more sophisticated. Some analysts suggest that the future of cybersecurity may increasingly involve automated systems competing against each other, with minimal human intervention.
The growing reliance on AI in both offensive and defensive roles underscores the complexity of the digital ecosystem. While generative AI offers immense opportunities for innovation, it also introduces new challenges that require continuous adaptation and vigilance.
As companies like Google continue to invest in advanced technologies, the focus remains on protecting users and maintaining the integrity of online platforms. The ongoing evolution of AI will shape not only how scams are conducted but also how effectively they can be prevented, making it a critical area of focus for the technology industry.