
“As artificial intelligence (AI) advances, the application of software will expand throughout society, maximizing efficiency and enriching human life. This is the positive future that AI will bring.”
Yoon Song-yi, chairperson of the NCSOFT Culture Foundation, met with ChosunBiz on the 7th at the foundation's office in Jongno-gu, Seoul, and said this. Yoon also serves as an advisor at the Stanford Institute for Human-Centered AI (HAI), alongside former Google chairman Eric Schmidt, and sits on the board of the Massachusetts Institute of Technology (MIT). In February of this year, she was appointed to the board of global PC maker HP in recognition of her AI expertise. Until last year, Yoon served as Chief Strategy Officer (CSO) of NCSOFT, leading the development of the first proprietary AI language model in the domestic gaming industry.
After conversations with scholars such as Fei-Fei Li, co-director of the Stanford Institute for Human-Centered AI, who is often called the 'godmother' of AI deep learning, and James Mickens, professor of computer science at Harvard University, she published a book in 2022 titled “The Most Human Future,” reflecting on a future in which humans and AI coexist. She is also raising a $100 million (approximately 145 billion won) AI investment fund (Principal Venture Partners) to discover promising AI startups.
Yoon noted, “As AI advances, the cost of software development will fall, and small and medium-sized enterprises will have more opportunities to grow. As software is used in fields we have not yet imagined and data accumulates, every sector of society will benefit.” The following is a Q&A with Yoon.
– How does AI enrich the future of humanity?
“As AI advances, software will spread across more sectors of society than ever before, because AI will lower the cost of software development. With that spread, far more things can be done better than they are now. People think AI belongs only to information technology (IT), but software will be applied in every sector of society, and everyone will benefit. Non-experts will be able to handle software, work efficiency will improve, and we will be able to see data we could not perceive in the past. AI will maximize the scalability of software applications, and through this, humanity's future will be enriched.”
– There must also be a negative future that AI could bring.
“The advancement of AI technology could exacerbate inequality. Humanity already faces this problem, but the concentration of technological monopolies and power could make it worse. The same happened when platform technology emerged, and the trend will only strengthen in the current wave of AI. The gap between those marginalized from the benefits of AI and those who can use it will continue to widen. This is why the role of the public sector (government) is becoming increasingly important. Dehumanization, in which people are treated as if they lack the qualities that define them, is another negative issue: as AI technology advances, people could be treated like parts of a machine and lose their humanity.”
– Are there no other concerns?
“As AI technology progresses, essential human abilities such as critical thinking could weaken, and AI could be misused to homogenize cultural preferences and ways of thinking worldwide. For example, the AI algorithms we use provide convenience by drawing on user data, but they could also steer user preferences in a particular direction based on aggregated data.
“The growing problem of 'deepfakes,' which AI technology is making ever more sophisticated, is also a concern. Deepfakes could paralyze a social system built on trust and reduce it to a society of distrust. To minimize these negative effects and maximize the positive ones, guidelines such as AI ethics must be established. We need to determine what AI should and should not do, and what it should and should not be allowed to change.”
– Is AI ethics a necessity rather than a choice?
“It is essential. The government needs to establish what AI can and cannot do through social discussions. The legitimacy of AI ethics is strengthened through the discussion process of various stakeholders including corporations and civil society. In the current situation, where the competition for AI between countries is intensifying, the introduction of regulations like AI ethics will also help each country secure its AI sovereignty.
“Additionally, it is important for developers to build technology safely and ethically from the earliest stages of development. Renowned universities abroad, such as Stanford, have made interdisciplinary AI ethics education, known as 'Embedded EthiCS,' mandatory. Embedded EthiCS is a multidisciplinary curriculum that weaves ethical questions throughout the coursework so that engineers consider the ethical and social implications of what they build. Since 2020, the NCSOFT Culture Foundation has sponsored the development of Embedded EthiCS curricula at MIT, Stanford, and Harvard.”
– Can we trust AI?
“AI is created from data, and its properties vary with the data it is trained on. About 35% of the world's population still lives without internet access, so their data is excluded from AI training. AI training is currently led by the United States, which means the cultural bias that comes from U.S.-centric training data could deepen.
“Bias exists in every society and country, and if AI is trained on data containing that bias, it will inevitably learn it. People assume that because AI is a mechanical algorithm it must be value-neutral and fair, but that belief is very dangerous. We must not blindly trust AI, and it is essential to educate the public to use AI with that caveat (that AI can also learn biases) in mind.”
– How can AI and humans coexist?
“AI merely learns vast amounts of data quickly to produce statistically meaningful results. Fusing technologies from different fields and creating innovation are uniquely human abilities. In other words, AI becomes more significant when combined with human creativity. A world where AI and humans coexist is a society where technology does not threaten humanity or strip it of its essence, but one where AI and humans develop in harmony. By creating new value through the synergy of human creativity and AI technology, we can preserve our humanity and build the most human future possible.”
– Domestic corporations such as KT, Kakao, and Naver are reportedly giving up on developing their own large language models (LLMs) and pivoting to collaboration with global models. How do you view this trend from the perspective of managing an AI investment fund?
“I believe cases like DeepSeek have been a good stimulus for the domestic industry. With sufficient manpower and capability, one can attempt an independent model, and securing core technology is necessary to actively adjust a model's scale or efficiency. When collaborating with overseas companies, it is also easier to apply one's own criteria to how technology and data are used. On the other hand, relying solely on external models carries the risk of information being 'exploited or appropriated' at the government level, or of being unilaterally cut off from the service. Building one's own technology from the start is therefore crucial to securing negotiating power and leadership, and I believe this is entirely possible because the domestic industry has experience developing core technologies itself.”
– When investing in AI companies, are there specific technological fields or types of companies you particularly focus on?
“I am very interested in corporations that can effectively utilize vast amounts of data in industrial settings. In the insurance sector alone, the structure of insurance terms is excessively complicated and data is piling up. Introducing AI can streamline this and enable a deeper understanding of consumers, leading to significant innovation. The same goes for healthcare. Clinical and biometric data are already overflowing, but we have lacked the capacity to analyze it adequately for practical use.
“Now, with advances in AI and Internet of Things (IoT) technology, it is possible to collect and analyze data from various sensors and devices on a single platform to identify early signs of disease or changes in a patient's condition. In the early 2000s, Steve Mann, a researcher in a lab like mine, tried to build a 'wearable computer' that recorded his daily life around the clock, but there was not enough computing power to process it at the time, so it did not yield significant results. Today, even when data accumulates at massive scale, we can derive meaningful results using AI. I currently manage an AI investment fund and prioritize investing in companies that can create new value by solving this 'data oversupply.'”