AI Tools Promoted by Threat Actors

Source: Cyber Security News

The rise of artificial intelligence in 2025 marks a pivotal shift in the cybercrime landscape, as AI becomes integral to operations run by malicious actors on underground forums. According to Google’s Threat Intelligence Group, the illicit AI tools marketplace has expanded dramatically, with mentions of such tools rising 200% from 2023 to 2024. Notable offerings like WormGPT and FraudGPT exemplify this evolution, with capabilities tailored for email compromise, phishing attacks, and even malware creation.

WormGPT, recognized as one of the first malicious AI tools, has enabled less technically skilled criminals to execute sophisticated phishing scams by generating convincing emails. Similarly, FraudGPT offers an extensive suite of capabilities, including writing malicious code and finding vulnerabilities, and operates on a subscription basis that mimics legitimate SaaS models. By 2025, newer tools such as Xanthorox AI illustrate the shift toward multifunctional platforms that significantly lower the barrier to entry for novice cybercriminals. Alarmingly, AI-driven phishing attacks have surged by 1,265%, making them one of the most serious threats to organizations worldwide. The accessibility of these tools allows financial criminals to craft deceptive campaigns with ease, dramatically reshaping the cybersecurity threat landscape.

👉 Read the original: Cyber Security News