Illustration: ChatGPT

Cybersecurity firm Kaspersky said on the 18th that "dark AI," which exploits large language models (LLMs) and bypasses common artificial intelligence (AI) safeguards and rules to carry out cyberattacks and exfiltrate data, is running rampant.

Kaspersky projected, in particular, that more sophisticated and stealthy dark AI attacks will increase in the Asia-Pacific region. According to Kaspersky, dark AI refers to the deployment of unrestricted LLMs, within a full framework or chatbot system, for malicious, unethical, or unauthorized purposes.

Sergei Lozhkin, head of the global research and analysis team for the Middle East, Turkey and Africa (META) and the Asia-Pacific region at Kaspersky, said, "Malicious actors are using AI to enhance their attack capabilities," and added, "We are now entering an era in which AI becomes the shield in cybersecurity and across society, and dark AI becomes the sword."

Lozhkin identified "BlackHat GPT," which emerged in mid-2023, as the most common and widely known form of malicious AI use today. The term refers to AI models intentionally built or modified to support malicious code generation, the drafting of phishing emails for mass targeted attacks, the creation of voice and video deepfakes, and red-team operations. "BlackHat GPT" models are known as private or semi-private AI models designed or altered to support cybercrime, fraud, and automated attacks. In fact, OpenAI, the developer of ChatGPT, recently said it had disrupted more than 20 cyber operations that sought to abuse its AI tools.

Lozhkin said, "State-backed threat actors are already using LLMs in their operations," and added, "We expect attackers to develop even more cunning methods of weaponizing generative AI across public and private threat ecosystems going forward."

According to an OpenAI report, malicious actors used LLMs to generate convincing fake personas, deceive victims, and create multilingual content to bypass existing security filters.

Lozhkin emphasized, "AI is inherently unable to distinguish right from wrong and merely follows instructions," and added, "Corporations and individuals must invest in AI-based threat detection technologies and keep learning how dark AI tools can be misused."

※ This article has been translated by AI.