Recent security research has uncovered malicious applications posing as ChatGPT interfaces, built specifically to harvest sensitive user information and monitor user activity. The apps circulate in third-party app stores with convincing branding that mimics the authentic ChatGPT. Once installed, they run hidden surveillance routines while appearing to function as legitimate AI assistants.
The threat is amplified because millions of users worldwide download unofficial AI applications, unaware of the spyware embedded within. Analysts at Appknox uncovered the clones during broad research into AI-themed applications across multiple distribution platforms. Their findings show how attackers exploit brand trust to compromise users, deploying malware frameworks for persistent surveillance and credential theft. Notably, the malware uses advanced techniques such as domain fronting to obscure its malicious network communications and evade security controls.
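To illustrate the idea behind domain fronting (not the actual malware's code, which the research does not publish): the TLS handshake advertises a benign hostname, so DNS logs and SNI inspection see only that name, while the HTTP `Host` header inside the encrypted tunnel routes the request to the attacker's real backend. The domains below are hypothetical placeholders. A minimal sketch:

```python
def build_fronted_request(front_domain: str, hidden_host: str,
                          path: str = "/beacon") -> list[str]:
    """Return the plaintext request lines of a domain-fronted HTTPS GET.

    In a real connection, the TLS layer (not shown here) would advertise
    `front_domain` in the SNI field, so network monitors only observe the
    benign name; the CDN edge then routes on the inner Host header to the
    hidden backend.
    """
    return [
        f"GET {path} HTTP/1.1",
        f"Host: {hidden_host}",   # real destination, concealed inside TLS
        "Connection: close",
    ]

# The outer connection targets the benign front; the inner Host header
# names the hidden command-and-control endpoint (both hypothetical).
request_lines = build_fronted_request("cdn.example-benign.com",
                                      "c2.example-hidden.com")
```

Because the mismatch between SNI and `Host` is only visible after TLS decryption, defenses that rely on DNS or SNI filtering alone cannot distinguish the fronted traffic from ordinary CDN requests.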
👉 Read the original: Cyber Security News