NC AI said on the 30th that it has officially applied its in-house artificial intelligence (AI) safety technology Safeguard to NCSOFT's customer service chatbot Answer (NCER).
The Safeguard technology, developed by NC AI's AI Safety team, operates on a security framework composed of three teams working in a cyclical structure: the red team studies new malicious attack patterns such as AI jailbreaks; the blue team develops defensive technologies against them; and the purple team integrates the two teams' results and reflects them in policy.
NC AI, in collaboration with NCSOFT's Publishing Coordination Center, has established a negative-content policy for the chatbot that reflects the characteristics of the gaming industry. The policy covers inappropriate content related to game services, exploits involving paid items, and prohibited behavior, implementing industry-tailored security that goes beyond general AI safety standards.
With Safeguard applied, Answer meets the general standards for blocking inappropriate content (such as discrimination, hate, profanity, and obscenity) set out in international benchmark studies by the nonprofit MLCommons, while additionally reflecting safety requirements unique to the gaming industry.
In Lineage W and Lineage 2M, where NC AI's technology is applied, a spam filtering system detects and blocks advertising patterns in 13 languages, playing a key role in maintaining a healthy in-game chat environment.
Lee Yeon-su, head of NC AI, said, "The application of the Safeguard technology is an important starting point for NC AI to secure a technological edge in AI safety and to solidify its position as a responsible AI development company."