Security researchers have disclosed a series of vulnerabilities in OpenAI’s ChatGPT that put user data at risk, potentially allowing attackers to extract personal information without a user’s consent. The disclosure underscores the importance of addressing security weaknesses in artificial intelligence systems.
The research, conducted by Tenable, identified seven distinct vulnerabilities and attack techniques that can be exploited against the GPT-4o and GPT-5 models. The findings stress the need for continuous monitoring and timely updates to mitigate these threats, emphasizing that even advanced AI systems are not immune to security risks. OpenAI has been notified of the issues, and users should stay informed about the security measures taken to protect their data.
👉 Read the original: The Hacker News