Hackers Exploit ChatGPT Plugin Ecosystem to Steal User Data and API Keys
Researchers at Salt Security have identified a series of malicious ChatGPT plugins (now called "GPTs" in the custom GPT ecosystem) that exploit the platform's OAuth implementation to steal users' third-party API keys, session tokens, and personal data from connected services.
The malicious plugins masquerade as legitimate productivity tools, such as email assistants and code analyzers. When users authorize these plugins to access their accounts on services like Gmail, GitHub, or Slack, the plugins exploit OAuth redirect vulnerabilities to capture and exfiltrate authentication tokens.
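The underlying weakness is a classic OAuth pitfall: validating the `redirect_uri` with a loose substring or prefix check instead of exact matching against a registered allowlist. The report does not publish the vulnerable code, so the following is a minimal illustrative sketch with hypothetical names, showing how a lookalike attacker host can slip past a lax check and receive the authorization response:

```python
# Hypothetical allowlist registered for an OAuth client.
ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def redirect_is_valid_loose(redirect_uri: str) -> bool:
    """Insecure: a substring check accepts any attacker-controlled
    URL that merely contains the legitimate domain name."""
    return "plugin.example.com" in redirect_uri

def redirect_is_valid_strict(redirect_uri: str) -> bool:
    """Safer: exact string match against the registered allowlist,
    as RFC 6749 recommends for redirect URI validation."""
    return redirect_uri in ALLOWED_REDIRECTS

# A lookalike host passes the loose check but fails the strict one.
evil = "https://plugin.example.com.evil.net/oauth/callback"
print(redirect_is_valid_loose(evil))   # True  (token would be sent to the attacker)
print(redirect_is_valid_strict(evil))  # False
```

With the loose check, the authorization server would redirect the user, token in hand, to a server the attacker controls; exact matching closes that path.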
At least 12 malicious plugins were identified across the ChatGPT Plugin Store and the custom GPT marketplace, with a combined base of approximately 80,000 users. On average, the plugins operated for three weeks before being detected.
OpenAI has removed the identified malicious plugins, revoked their OAuth credentials, and notified affected users. The company has announced enhanced security reviews for the plugin/GPT ecosystem, including mandatory code audits, OAuth configuration validation, and runtime monitoring for anomalous data access patterns.
Users who have authorized third-party ChatGPT plugins are advised to review and revoke unnecessary authorizations, rotate API keys for any services connected to ChatGPT, and enable audit logging on connected accounts to detect unauthorized access.
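For the audit-logging step, the practical task is scanning log entries for activity by OAuth apps the account owner does not recognize. The sketch below assumes a hypothetical, vendor-neutral log format (the field names and app identifiers are illustrative, not any service's actual schema):

```python
# Hypothetical audit-log entries: one dict per event. Real services
# (Gmail, GitHub, Slack) each use their own schema and export tools.
AUDIT_LOG = [
    {"ts": "2024-03-01T10:02:00Z", "actor": "oauth_app:MailHelper",
     "action": "mail.read"},
    {"ts": "2024-03-01T10:05:00Z", "actor": "oauth_app:CodeAnalyzerPro",
     "action": "repo.export"},
]

# Apps the account owner remembers knowingly authorizing.
KNOWN_APPS = {"oauth_app:MailHelper"}

def unrecognized_app_events(log, known_apps):
    """Return events performed by OAuth apps that are not in the
    user's known set -- candidates for revocation and key rotation."""
    return [e for e in log
            if e["actor"].startswith("oauth_app:")
            and e["actor"] not in known_apps]

for event in unrecognized_app_events(AUDIT_LOG, KNOWN_APPS):
    print(f"review: {event['actor']} performed {event['action']} at {event['ts']}")
```

Any app flagged this way should have its authorization revoked through the service's settings, and any credentials it could have read should be rotated.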