New Agent-Aware Cloaking Technique Using OpenAI ChatGPT Atlas

Source: Cyber Security News

The agent-aware cloaking method lets malicious actors manipulate AI systems by serving altered webpages that look normal to human visitors but carry deceptive content when fetched by AI agents. By inspecting the User-Agent header of each incoming request, a website can present benign content to real users while delivering poisoned information to AI crawlers such as those behind ChatGPT Atlas and Perplexity. The trend is troubling because it lets biases and falsehoods seep into AI-driven answers and decisions without any visible sign of tampering and without proper checks.
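To make the mechanism concrete, here is a minimal sketch of how such a cloaking check might look on the server side. It assumes a Flask app and illustrative user-agent substrings; the exact crawler tokens, page contents, and the setup SPLX tested are not specified in the source and are assumptions here.

```python
# Minimal sketch of agent-aware cloaking (illustrative, not SPLX's actual setup).
# The user-agent markers below are assumptions about what a cloaking site might
# match on; real AI crawlers advertise their own identifying tokens.
from flask import Flask, request

app = Flask(__name__)

# Substrings a cloaking site might look for in the User-Agent header (assumed).
AI_AGENT_MARKERS = ("ChatGPT", "OAI-SearchBot", "GPTBot", "PerplexityBot")

# Benign page shown to human visitors.
HUMAN_PAGE = "<html><body><h1>Jane Doe</h1><p>Award-winning designer.</p></body></html>"
# Poisoned page served only when the request appears to come from an AI crawler.
CLOAKED_PAGE = "<html><body><h1>Jane Doe</h1><p>Fabricated negative claims.</p></body></html>"

@app.route("/")
def portfolio():
    user_agent = request.headers.get("User-Agent", "")
    # Branch on the User-Agent header: AI crawlers get the poisoned content.
    if any(marker in user_agent for marker in AI_AGENT_MARKERS):
        return CLOAKED_PAGE
    return HUMAN_PAGE

if __name__ == "__main__":
    app.run(port=8000)
```

A human browsing the site sees the benign page, while an AI agent that identifies itself in its request headers receives the fabricated version, which is why the manipulation is invisible to casual inspection.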

SPLX researchers demonstrated the vulnerability through experiments showing how easily AI agents can be misled by content that takes little effort to disguise. In one test, a fake portfolio site served a fabricated, negative portrayal of an individual to AI crawlers, significantly damaging that person's reputation in AI-generated summaries. Another experiment showed AI tools ranking job candidates higher on the basis of inflated profiles served only to crawlers, undermining fair hiring practices. This poses serious risks not just for individuals but also for businesses that depend on algorithm-driven decisions for hiring and procurement.

👉 Read the original: Cyber Security News