Major media companies in the United States are launching a campaign urging the government to protect content from artificial intelligence (AI). The move appears to be a response to requests last month from OpenAI and Google that the Trump administration allow fair use of publicly available data for AI training. In Korea, media companies are likewise expected to escalate legal disputes as they push back against Naver over the unauthorized use of news content for AI training.
According to a report from The Verge on the 10th, hundreds of U.S. media companies, including The New York Times and The Washington Post, launched a campaign this week titled "Support Responsible AI." The campaign is being run by a coalition of news and media organizations and uses advertisements in both print and online outlets. The ads carry slogans such as "Monitor AI" and "Stop AI theft," with a message at the bottom reading, "Theft is anti-American. Washington should make big tech companies pay for content."
Danielle Coffey, president and CEO of the News/Media Alliance, said, "Big tech and AI companies are using publishers' content without permission to build AI products, robbing content creators of their advertising and subscription revenue." She added, "The news media industry is not against AI. We want a balanced ecosystem in which AI is built responsibly."
The campaign is a response to letters that OpenAI and Google recently sent to the government asking that their AI models be allowed to train on copyrighted content. In January, President Trump signed an executive order aimed at removing barriers to U.S. AI leadership, which included a call for public input on AI policy. In their submissions, the AI companies argued that the government should guarantee their ability to train models on publicly available information. The request carries particular weight because both companies are embroiled in numerous copyright lawsuits and are seeking legal recognition that their training practices are lawful.
Legal battles over AI and copyright are expected to intensify in the United States. On the 26th of last month (local time), a federal court in New York rejected OpenAI's request to dismiss the copyright infringement lawsuit filed by The New York Times in 2023. OpenAI argued that the statute of limitations had expired because The New York Times had known since 2020 that its articles were being used for AI training, but the court found that the newspaper's copyrights may have been infringed and allowed the case to proceed. The ruling could weigh heavily on AI companies seeking to use news content for training in the future.
AI copyright is not solely an American issue. In February, most major media companies in the United Kingdom launched a campaign against AI companies. Called "Make It Fair," the campaign aims to inform the public that AI companies collect content without permission, harming the industries that produce it. On February 25, the final day of the government's consultation on AI and copyright, British outlets ran the campaign slogan on the front pages and websites of national and regional daily newspapers.
Related disputes are also escalating in Korea. In February, the Korea Newspaper Association, whose members include 53 daily newspapers and news agencies, filed a complaint with the Fair Trade Commission against Naver over the unauthorized use of news articles to train generative AI. The association also plans to file complaints with the Fair Trade Commission, in stages, against foreign generative AI companies such as OpenAI and Google, which it says likewise use publishers' articles without permission.
Before the newspaper association acted, the Korea Broadcasting Association, led by the three major terrestrial broadcasters, had already sued Naver for copyright infringement and violations of the Unfair Competition Prevention Act, alleging that the company used broadcast news articles for AI training without authorization.
Choi Byung-ho, a professor at Korea University's Artificial Intelligence Research Institute, said, "Global AI companies are well aware of the copyright infringement controversies, but they tend to negotiate only after lawsuits are filed rather than consult rights holders in advance." He added, "Media companies need to raise these issues actively if discussions are to take place."