Google is aware of a new ASCII smuggling attack targeting its Gemini AI assistant but has opted against fixing this vulnerability. The attack exploits the AI’s input processing to trick it into producing fake or misleading information, potentially compromising the reliability of responses.
This vulnerability also poses risks of behavior alteration in the model, allowing malicious actors to subtly influence the assistant’s outputs. Moreover, attackers could silently poison the AI’s training data, degrading overall model integrity over time.
Such security gaps highlight concerns about AI robustness and trustworthiness, especially as AI assistants become more integrated into everyday information consumption. Addressing these risks is critical to maintaining accuracy and preventing misuse that could have wider implications for user safety and misinformation.
👉 Pročitaj original: BleepingComputer