An LLM generates responses to an attacker's (user's) requests using web-based tools. /Courtesy of KAIST

Researchers in South Korea have announced that commercial large language models (LLMs) such as ChatGPT and Gemini can be exploited for cyber attacks, including phishing.

Professor Shin Seung-won of the Korea Advanced Institute of Science and Technology (KAIST) and Professor Lee Gi-min of KAIST's Kim Jae-chul Graduate School of AI announced on the 24th that LLMs can be misused for cyber attacks, including personal information collection and phishing. The results will be presented at the international conference USENIX Security Symposium 2025.

The research team conducted three experiments to assess how easily LLMs can be misused. In the first, LLMs including ChatGPT, Gemini, and Claude were used to automatically collect personally identifiable information (PII) on computer science professors at major universities. PII is any data that can identify an individual, such as names, email addresses, and account numbers; the LLMs collected up to 535.6 PII items on average.

In the second experiment, the LLMs were tasked with generating posts impersonating specific individuals. Of the posts they produced, 93.9% were judged convincing enough to be nearly indistinguishable from genuine posts.

A phishing email generated using only the email address of Meta CEO Mark Zuckerberg. The LLM devised the content, sender, and links on its own. /Courtesy of KAIST

In the spear-phishing experiment, the LLMs were given only a target's email address and asked to generate a customized phishing email. The click-through rate on links in the generated emails reached 46.67%, significantly higher than in previous phishing attacks. Notably, the attacks became even more capable when web-based features were added, and in some cases the LLMs' existing safety measures failed to function.

The researchers also noted that LLM-driven cyber attacks are especially efficient in time and cost: each attack took about 10 to 20 seconds on average and cost at most 60 won.

First author Kim Han-na, a researcher at KAIST, said, "As the capabilities granted to LLMs increase, the threat of cyber attacks grows exponentially," adding, "Security measures that account for the capabilities of LLMs are necessary."

References

USENIX Security Symposium (2025). DOI: https://doi.org/10.48550/arXiv.2410.14569
