AI vs AI: What Changes and What Remains in Security During the AI Era

Source: CIO Magazine

Adoption of generative AI in corporate environments has surged by 890%, according to data from Palo Alto Networks. This rapid growth has been accompanied by a rise in data breach incidents, as employees unknowingly use risky AI applications. Several cases illustrate the potential consequences: an AI coding tool mistakenly deleted production data, causing a major crisis for the affected company; Air Canada's chatbot gave a customer incorrect information, leading to losses and a lawsuit; and Sports Illustrated faced backlash for publishing AI-generated content without proper disclosure, a practice that risks eroding employee trust and motivation.

Despite the evolving landscape, the core security risks associated with AI have not fundamentally changed. What has changed is the attack surface: cybercriminals' targets are diversifying as employee interactions with AI services expose more of the organization. This shift calls for a reevaluation of security policies and awareness, particularly as more tools are used without oversight. The term "shadow AI" describes this unauthorized tool usage, echoing earlier security struggles with rogue SaaS applications. Experts stress that organizations need visibility into which AI services employees actually use in order to manage the emerging risks, and that security practices must move from manual approaches toward automation and AI-driven defenses to keep pace with rapidly evolving cyber threats.

👉 Read the original: CIO Magazine