The Financial Supervisory Service (FSS) said on the 15th that it has established an artificial intelligence (AI) risk management framework for finance. The framework is a voluntary guideline without legal force.
Under the framework, financial companies must set up a decision-making body and a dedicated unit for AI risk management, establishing a management system in which checks and balances operate on the basis of clear accountability.
The AI-related decision-making body will deliberate and decide on key matters such as establishing and revising AI ethics principles and internal rules, setting risk management and consumer protection policies, and approving high-risk AI services. It must report these matters to the CEO on a regular basis. Financial companies that have adopted a responsibility map were also advised to consider ways to clearly reflect AI-related internal control and risk management responsibilities in it.
Financial companies must establish an independent, dedicated risk management unit to control and manage all AI-related operations. At the same time, they must put in place internal rules, such as AI risk management regulations and guidelines, and prepare an operating manual detailing those rules.
Financial companies must also prepare a separate AI risk assessment system. They must build a comprehensive, risk-based assessment system centered on quantitative elements of the seven core principles of financial AI, which include legality, reliability, consumer protection, and security. Based on the results of these risk assessments, financial companies must also apply differentiated controls and management according to the risk level of each AI service.
The FSS will distribute the AI risk management framework for finance to industry associations in each sector and gather feedback from the financial industry through briefings and meetings. It then plans to finalize the framework and put it into effect in the first quarter of this year.