Oligo Security’s findings revealed that AI frameworks like Meta’s Llama Stack are exposed due to inherent flaws in code-sharing practices. Developers inadvertently propagate vulnerabilities by copying insecure patterns from one project to another, creating a chain reaction across AI ecosystems.
The vulnerabilities stem from the misuse of ZeroMQ and Python’s pickle deserialization, which can allow malicious code execution if the affected socket is exposed to a network. Oligo noted that similar RCE vulnerabilities have been detected repeatedly across AI frameworks over the past year, pointing to a structural security gap in the inference ecosystem. Addressing this issue is critical, given that these AI servers handle sensitive data, and their exploitation could lead to data breaches or unauthorized access to GPU clusters.
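To make the risk concrete, here is a minimal sketch of the generic anti-pattern described above: unpickling bytes received over a network-exposed ZeroMQ socket. This is an illustration only, not Llama Stack’s actual code; the function and address are hypothetical, and the example assumes the pyzmq library.

```python
import pickle
import zmq


def vulnerable_worker(bind_addr: str = "tcp://*:5555") -> None:
    """Illustrative anti-pattern: unpickle data from a network socket."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(bind_addr)  # reachable by anyone who can route to this port
    raw = sock.recv()
    # DANGEROUS: pickle.loads() will execute arbitrary code embedded in
    # the payload (e.g. via an object's __reduce__ method), so any peer
    # that can reach this socket gets remote code execution on the host.
    obj = pickle.loads(raw)
    sock.send(b"ok")


# Why this is RCE: unpickling this object calls os.system() on the
# victim. (Hypothetical attacker payload for demonstration.)
class Exploit:
    def __reduce__(self):
        import os
        return (os.system, ("id > /tmp/pwned",))


# payload = pickle.dumps(Exploit())  # sending this to the socket runs the command
```

The usual mitigations are to avoid pickle for network input entirely (use a data-only format such as JSON or Protocol Buffers) and to keep inference sockets off untrusted networks or behind authenticated transport.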
👉 Read the original: CIO Magazine