
OpenAI is facing mounting internal turmoil as key personnel continue to leave the ChatGPT developer. With most founding members already departed and ethical controversies surrounding a recent U.S. Department of Defense artificial intelligence contract, observers warn the talent drain could accelerate further.
Most Founders Gone, Key Executives Resign
According to industry sources on January 9 (local time), only two of OpenAI's 11 founding members remain at the company: CEO Sam Altman and President Greg Brockman. The rest have moved to competitors or startups over the past one to two years.
On January 7, Caitlin Kalinowski, who led the hardware division, announced her resignation. A hardware specialist who worked on MacBook design at Apple and led augmented reality glasses development at Meta, she joined OpenAI in 2024 to oversee robotics and consumer hardware strategy.
Having led major hardware development projects at Apple and Meta, Kalinowski is considered to have played a central role in OpenAI's AI hardware venture with Jony Ive, the former iPhone design chief.
As head of robotics—a key growth area for OpenAI—she is estimated to have been promised total annual compensation of $2 million to $5 million, including base salary, substantial stock awards, and a signing bonus.
Industry analysts say her departure could impact OpenAI's long-term AI hardware strategy beyond a simple personnel change.
"Issues such as large-scale domestic surveillance without judicial oversight and autonomous lethal systems operating without human approval needed more thorough discussion," Kalinowski explained on social media regarding her resignation.
"The Department of Defense contract was announced hastily without clearly defined technical guardrails," she added.
AI Safety Researchers Migrate to Competitors
Many researchers who recently left OpenAI are heading to competitors or emerging startups.
Andrea Vallone, who handled model policy and safety research, left the company in February to join Anthropic. Research Vice President Jerry Tworek, who led development of the reasoning model "o1," also departed OpenAI earlier this year.
The exodus of key researchers began in 2024. Former Chief Technology Officer Mira Murati left that year to found AI startup Thinking Machines Lab, drawing several researchers with her. Co-founder and chief scientist Ilya Sutskever also departed that year to launch a research organization called Safe Superintelligence (SSI).
Another co-founder, John Schulman, worked at Anthropic before joining Murati's startup. Former Research Vice President Barret Zoph now serves as CTO at Thinking Machines Lab, overseeing technology development. Jan Leike, co-lead of the superalignment project, also moved to Anthropic, saying that "safety research was sidelined by product launch pressure."
Industry observers estimate more than 50 researchers and engineers have moved to competitors including Meta and Anthropic over the past year. Meta has been particularly aggressive in recruiting OpenAI talent, establishing a new "Superintelligence Lab."
Defense Contract Controversy Triggers User Backlash
The recent talent exodus stems directly from controversy over the AI collaboration contract with the U.S. Department of Defense.
Anthropic, which had been supplying its AI model "Claude" to the Pentagon, clashed with the Donald Trump administration over demands for "unrestricted military use" of AI. The Department of Defense subsequently designated Anthropic a "supply chain risk company," removed it from contracts, and signed a separate agreement with OpenAI.
This triggered internal pushback at OpenAI. Some employees issued an open letter opposing the use of AI technology for mass surveillance or autonomous lethal weapons.
Consumer reaction was swift. According to market research firm Sensor Tower, ChatGPT mobile app deletions surged 295% in a single day after news of the defense contract broke. Meanwhile, Anthropic's Claude—which has emphasized "AI safety"—rose to the top of the U.S. App Store free app rankings.
CEO Sam Altman recently acknowledged to employees: "We shouldn't have rushed to announce the contract on Friday."
"The matter was very complex and required clear communication. I genuinely tried to defuse the situation, but the result came across as opportunistic and sloppy," he apologized.
However, he maintains that decisions about AI's defense applications should be made by democratically elected governments, not corporations.
With the defense collaboration controversy, key talent departures, and user backlash converging, observers expect internal conflicts and ethical debates surrounding OpenAI to continue for some time.
