OpenAI needs no introduction: the consumer-AI trailblazer is behind the ChatGPT chatbot, the DALL-E image generator, and the Sora video model. The company has recently drawn attention for two concerning security incidents, a reminder that even the most prominent systems are far from invulnerable.
Two security issues involving OpenAI came to light this week:
- An engineer identified a flaw in the ChatGPT app for Mac.
- In 2023, a hacker breached OpenAI’s internal systems and accessed confidential information.
A flaw in ChatGPT’s Mac application
This week, Pedro José Pereira Vieito, a Swift developer, uncovered a troubling security flaw in ChatGPT's Mac app: user conversations were being stored locally in plain text, without encryption. Because the app is distributed from OpenAI's website rather than the Mac App Store, it is not subject to Apple's stringent security requirements, such as sandboxing. In response, OpenAI swiftly issued an update that encrypts locally stored conversations.
For those unfamiliar, sandboxing is a security mechanism that isolates an application from the rest of the system, so that a compromise of one app cannot easily spread. Storing files without encryption exposes sensitive information to any malware or other program with read access to the disk. OpenAI's oversight in protecting personal data raises significant concerns about user privacy and the security of its tools.
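To make the risk concrete, here is a minimal, hypothetical sketch of the difference between writing a conversation to disk in plain text and writing it encrypted. The function names and the toy one-time-pad XOR scheme are illustrative assumptions, not OpenAI's actual code; a real application should use a vetted cipher (such as AES-GCM) and store keys in the OS keychain.

```python
# Illustrative sketch only: toy one-time-pad encryption, NOT production crypto.
import secrets
from pathlib import Path

def save_plaintext(path: Path, conversation: str) -> None:
    # Any process that can read this file sees the conversation verbatim.
    path.write_text(conversation)

def save_encrypted(path: Path, conversation: str) -> bytes:
    # XOR the data with a random key of equal length (a one-time pad).
    # The key must be kept separately, e.g. in the OS keychain.
    data = conversation.encode()
    key = secrets.token_bytes(len(data))
    path.write_bytes(bytes(a ^ b for a, b in zip(data, key)))
    return key

def load_encrypted(path: Path, key: bytes) -> str:
    # Reverse the XOR with the same key to recover the conversation.
    data = path.read_bytes()
    return bytes(a ^ b for a, b in zip(data, key)).decode()
```

With `save_plaintext`, a malicious program only needs file-read access to harvest conversations; with the encrypted variant, the file on disk is useless without the separately stored key.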
A 2023 breach that reveals broader vulnerabilities
The second incident dates back to 2023, but its repercussions are still being felt. In the spring of that year, a hacker breached OpenAI's internal messaging systems and stole confidential company information.
According to the New York Times, Leopold Aschenbrenner, a technical program manager at OpenAI, raised these security issues with the board of directors, warning that the breach exposed internal weaknesses that foreign adversaries could exploit.
Aschenbrenner alleges that he was fired for sharing information about OpenAI and voicing these security concerns. An OpenAI spokesperson told the Times that his departure from the company was unrelated to his whistleblowing.
Doubts persist, however. OpenAI has long been criticized for the opacity surrounding its operations and technologies: the public rarely gains insight into its development processes or strategic decisions.