Sam Altman, CEO of OpenAI. / Courtesy of Yonhap News

OpenAI, the developer of ChatGPT, is reportedly considering a contract with the North Atlantic Treaty Organization (NATO), following its recent contract with the U.S. Department of Defense. That defense contract, however, has sparked consumer backlash, including "star-rating attacks" on app stores and user defections.

OpenAI Chief Executive Officer Sam Altman said at an all-hands meeting on the 3rd (local time) that the company is reviewing a contract to deploy its artificial intelligence (AI) models on NATO's classified network, the Wall Street Journal (WSJ) and CNBC reported.

Altman acknowledged the controversy that erupted after OpenAI signed the contract with the Department of Defense shortly after the department designated rival Anthropic a "supply chain risk," saying it brought "a very negative brand effect in the short term."

He said, "We tried to do the right thing, but it felt like getting completely trampled," and apologized to employees. He explained that the Department of Defense contract includes clauses banning large-scale domestic surveillance and autonomous lethal weapons, but emphasized, "The company does not have decision-making power over how the Department of Defense uses AI."

Consumer reaction, however, has been cold. According to market research firm Sensor Tower, the ChatGPT app's deletion rate surged 295% in a single day immediately after news of the Department of Defense contract. One-star reviews jumped 775% on Feb. 28 and rose another 100% the next day. In contrast, five-star ratings fell 50%.

Web analytics firm Statcounter also found that ChatGPT's market share fell 5.5 percentage points in February, while Anthropic's "Claude" rose 2.7 percentage points. Claude topped the U.S. App Store's free app rankings and saw downloads surge.

According to Bloomberg, Anthropic's annualized revenue has recently increased to $19 billion (about 27.4 trillion won).

The debate is spreading inside Silicon Valley as well. About 100 OpenAI employees and about 830 Google employees, more than 900 people in total, demanded in an open letter that their management refuse the use of AI for military and surveillance purposes. About 180 tech industry figures also urged the withdrawal of Anthropic's "supply chain risk" designation.

By contrast, U.S. Federal Communications Commission (FCC) Commissioner Brendan Carr criticized Anthropic, saying the company "made a mistake," while Palantir CEO Alex Karp indirectly rebuked Anthropic, emphasizing technology companies' responsibilities in the military sector.

As OpenAI seeks to broaden its cooperation to NATO, the controversy over whether AI corporations should cooperate with the military is intensifying.

※ This article has been translated by AI.