OpenAI, the developer of ChatGPT, is experiencing its largest talent exodus since its founding. As key researchers and executives, including co-founders, depart one after another, observers say internal rifts are deepening. In particular, with an ethics controversy erupting over a recent artificial intelligence (AI) technology contract with the U.S. Department of Defense (the Pentagon), some predict the outflow of talent will accelerate further.
◇ Hardware chief Kalinowski resigns… Raises concerns over "autonomous killing and surveillance"
According to industry sources on the 9th, only two of OpenAI's 11 founding members remain at the company: CEO Sam Altman and President Greg Brockman. Most of the other founding leaders have departed over the past one to two years, moving to competitors or startups.
Most recently, Caitlin Kalinowski, who oversaw hardware, announced her resignation on the 7th. A hardware veteran who worked on MacBook design at Apple and led development of the augmented-reality (AR) glasses "Orion" at Meta, she joined OpenAI in 2024 to lead robotics and consumer hardware strategy.
Kalinowski said on social media, "While I agree that AI can contribute to national security, large-scale domestic surveillance without judicial oversight and autonomous lethal systems operating without human authorization require thorough debate." She was particularly critical of the Department of Defense contract, saying it was announced hastily before technical safety guardrails had been clearly defined.
Analysts say the resignation carries significance beyond a routine executive change. Kalinowski had been regarded as a key figure in OpenAI's recent strategy to develop AI hardware products in collaboration with Jony Ive, who led iPhone design at Apple. The industry expects her departure could deal a blow to OpenAI's long-term hardware roadmap.
The controversy over the Department of Defense contract is spreading into a broader conflict across the Silicon Valley AI sector. The U.S. Department of Defense initially negotiated with OpenAI's rival Anthropic, but talks collapsed when Anthropic sought to include clauses in the contract banning the use of AI technology for large-scale domestic surveillance and fully autonomous lethal weapons. The Department of Defense then designated Anthropic as a "supply chain risk company," and OpenAI signed a separate contract with the department.
During this process, pushback also emerged within OpenAI. Some employees, in an open letter, said the company should "refuse to allow AI to be used for large-scale surveillance and the development of autonomous lethal weapons," expressing concern over leadership's decisions.
The conflict also affected consumer reactions. According to market research firm Sensor Tower, the ChatGPT app deletion rate surged 295% in a single day immediately after news of the Department of Defense contract broke. Meanwhile, Claude, Anthropic's AI model that has emphasized "AI safety," topped the U.S. App Store's free app rankings, overtaking ChatGPT.
◇ Core researchers continue to leave after Sutskever and Murati
Most of those who have recently left OpenAI are moving to competitors or new AI startups. Andrea Vallone, who worked on model policy and safety research, left OpenAI in February this year to join Anthropic, and Vice President of Research Jerry Tworek also departed earlier this year. Tworek was one of the key researchers who led development of OpenAI's reasoning model "o1."
Core researcher departures continued in 2024 and 2025. Former Chief Technology Officer (CTO) Mira Murati left in 2024 to found the artificial intelligence startup "Thinking Machines Lab," and many OpenAI researchers moved to the company. Co-founder and chief scientist Ilya Sutskever also left the same year to establish a new research organization called "Safe Superintelligence (SSI)."
Another co-founder, John Schulman, moved first to Anthropic and then to Murati's startup, and Vice President of Research Barret Zoph is serving as CTO of Thinking Machines Lab. Jan Leike, co-lead of the Superalignment project, also moved to Anthropic, criticizing OpenAI for letting "safety research be pushed aside by product launch pressures."
As key talent leaves one after another, OpenAI's internal research structure is being rapidly reshaped. The industry says more than 50 researchers and engineers have moved over the past year to competitors such as Meta and Anthropic. In particular, Meta has launched a "Superintelligence Lab" and is aggressively hiring researchers from OpenAI.
Some in Silicon Valley also assess that OpenAI is shifting away from its early philosophy of "safe AI development" toward government cooperation and commercialization. CEO Sam Altman recently apologized to employees over the Department of Defense contract controversy, saying it had "a very negative impact on the brand image in the short term."
Altman, however, has emphasized that whether AI is used for defense should be decided by democratically elected governments, not companies. At a recent event, Altman said, "Where AI can be used in the defense sector is a matter for public officials to decide, not corporate executives."