Authors: Wei QUAN | Kevin DUAN | Chuo Ming LEONG | Song HONG
Introduction
On November 17, 2025, the Monetary Authority of Singapore ("MAS") released the Consultation Paper on Guidelines on Artificial Intelligence Risk Management (the "Guidelines"). As a consultation paper, the Guidelines will not be finalized before the consultation period closes on January 31, 2026. Once officially promulgated, the Guidelines will set out MAS's regulatory expectations for financial institutions, and the industry will be given a 12-month transition period to adjust to and implement them.
Currently, AI (especially generative AI) technologies are sweeping the global financial sector, and regulators in many countries are actively exploring how to balance financial innovation against risk prevention. The Guidelines released by MAS are an important step in this exploration. Unlike some jurisdictions that are still debating principles or macro-level frameworks, the Guidelines build a full-lifecycle AI risk management framework for financial institutions, from top-level governance down to specific technical implementation. The Guidelines not only provide a clear compliance roadmap for Chinese financial institutions that operate in Singapore or intend to expand into new markets; their regulatory philosophy, framework design, and specific requirements also carry important implications for Chinese companies establishing and refining their own financial AI compliance systems.
This article provides an analysis of the core contents of the Guidelines and, on that basis, explores their potential impact on PRC financial institutions, as well as their implications for the future development of financial AI regulation in China.
Core framework and main contents of the Guidelines
I. Implementation of the risk-based approach and proportionality principle
The core regulatory philosophy of the Guidelines is a risk-based approach and proportionality, applied consistently throughout. MAS clearly recognizes that AI application scenarios in the financial sector are complex and diverse: the potential risks of optimizing auxiliary back-office operations cannot be equated with those of a core risk control model that determines a customer's creditworthiness. A one-size-fits-all approach to regulation would therefore not only stifle innovation but also fail to channel limited regulatory and compliance resources to where they are most needed.
To this end, the Guidelines clearly state that financial institutions should determine the extent to which they should adopt the requirements of the Guidelines depending on the size and nature of their own operations, the breadth and depth of their AI applications and their overall risk exposure. The implementation of this principle mainly depends on the risk materiality assessment mechanism set out in the Guidelines.
This mechanism requires financial institutions to assign a risk rating to each AI application across three dimensions: Impact, Complexity, and Reliance.
Impact: The possible consequences to the financial institution (e.g., financial, operational, regulatory, or reputational), its customers, or other interested parties (e.g., fairness, ethical breaches, consumer protection) in the event of failure, malfunction, or poor performance of AI systems or models. The nature and sensitivity of the data processed by the AI system or model should also be considered.
Complexity: Derived from the nature of the AI technology used, the novelty of its application, or the data used. This risk dimension may change as the understanding of AI technologies evolves: for example, with more research and greater familiarity, a new AI technology that is initially poorly understood may come to be rated less complex.
Reliance: The degree of autonomy given to the AI system or model, the degree of human involvement or oversight in the processes it supports, and the availability of alternatives.
For example, a high-impact, highly complex AI system on which the business relies heavily (such as an AI risk control model for reviewing loan applications) would undoubtedly require the most stringent governance and controls. Conversely, a lower-risk AI application (such as code completion assistance for bank programmers) would warrant less stringent compliance measures. This differentiated approach reflects MAS's pragmatism as a mature financial regulator and guides financial institutions in striking a balance between compliance and efficiency.
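To make the three-dimension assessment concrete, the mapping from dimension ratings to control strength could be sketched as follows. Note that this is purely illustrative: the Guidelines do not prescribe any scoring scale, tier names, or aggregation rule, so the ratings, thresholds, and the "highest dimension wins" logic below are our own assumptions.

```python
# Hypothetical sketch only: the Guidelines prescribe no scoring formula.
# All names, scales, and tiers here are invented for illustration.
from dataclasses import dataclass

RATINGS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AIApplication:
    name: str
    impact: str      # consequences of failure to the FI, customers, or others
    complexity: str  # novelty of the technique, its application, or the data
    reliance: str    # autonomy granted and degree of human oversight

def risk_tier(app: AIApplication) -> str:
    """Map the three dimension ratings to an overall control tier.

    Illustrative rule: the highest-rated dimension drives the tier,
    so a single "high" rating is enough to trigger stringent controls.
    """
    highest = max(RATINGS[app.impact],
                  RATINGS[app.complexity],
                  RATINGS[app.reliance])
    if highest == 3:
        return "stringent"
    if highest == 2:
        return "standard"
    return "baseline"

# The two examples from the text above:
credit_model = AIApplication("loan review model", "high", "high", "high")
code_assist = AIApplication("code completion", "low", "medium", "low")
print(risk_tier(credit_model))  # stringent
print(risk_tier(code_assist))   # standard
```

In practice an institution would document its own methodology; the point of the sketch is only that the assessment must be objective and repeatable, so the same inputs always yield the same tier.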
II. From the FEAT principles to operable risk management systems
Back in 2018, MAS joined with the industry to publish the FEAT principles (Fairness, Ethics, Accountability and Transparency), laying the groundwork for the responsible use of AI in the financial sector. To support financial institutions in implementing the FEAT principles in practice, MAS has launched a series of industry collaborations. For example, the Veritas Initiative, launched in 2019, aims to develop assessment methodologies and toolkits to help institutions test the fairness of their AI systems. With the rise of generative AI, MAS has also supported the industry in establishing the MindForge project consortium, dedicated to studying the risks and opportunities of generative AI, which is developing an AI Risk Management Manual as a companion industry reference.
The Guidelines' core value is that they successfully translate the abstract spirit of the FEAT principles into an operational and auditable risk management system, which includes the following core requirements:
1. "Visibility" – Establish an AI identification mechanism: A financial institution's first obligation is to "see" the risk. To this end, clear definitions, criteria, and processes must be established to systematically identify all AI systems in use or planned for use within the institution, whether internally developed, procured, or open-source. This process should be undertaken and documented by an independent control function, such as the risk or compliance department. All identified AI applications must be recorded in a centralized inventory that is kept accurate and up to date. We understand that this inventory will form the core basis for MAS's regulatory inspections.
2. "Clarity" – Implement risk materiality assessments: Financial institutions must establish an objective, consistent methodology for assessing the risk materiality of each AI application in the inventory. As mentioned earlier, the assessment centers on three dimensions: impact, complexity, and reliance. The results not only determine the strength of the controls appropriate for each AI application, but also serve as a basis for reporting the overall risk profile of AI applications to senior management.
In this way, the Guidelines provide financial institutions with an "operational manual" containing detailed examples and help them define appropriate AI compliance concepts (see the figure below), enabling them to internalize ambitious AI ethics goals into concrete steps of internal risk management, compliance review, and technology development, truly making the leap from principle to practice.

III. Closed-loop control across the AI lifecycle
The Guidelines establish a risk control framework covering the full lifecycle of AI applications. MAS emphasizes that AI risk management is not a one-time review before a model goes live, but a continuous, dynamic process throughout the life of an AI system. This end-to-end approach ensures that AI risks remain effectively monitored and managed as models iterate and the external environment changes. The closed loop consists of the following key phases:

IV. Emphasize top-level governance and high-level oversight
The Guidelines place the "brain" of AI risk management at the highest level of a financial institution's governance. They repeatedly emphasize that the board of directors and senior management are the ultimate "gatekeepers" of AI risk and must assume primary and ultimate responsibility for overseeing the entire AI risk management framework. This requires them not only to approve the institution's AI strategy and risk appetite, but also to actively build their own AI expertise to ensure effective oversight.
For a financial institution whose AI risk exposure is assessed as significant, the Guidelines further suggest establishing a cross-functional committee comprising experts from risk, compliance, technology, business, and other functions to coordinate and proactively manage AI risks. This top-down governance design aims to ensure that AI risk management receives adequate attention and resources and aligns with the institution's overall strategy; it is the fundamental guarantee that the entire risk management system operates effectively.
V. Special compliance concerns for emerging AI technologies
Of particular interest are the strong emphasis on emerging technologies, such as generative AI and AI agents, and the proactive compliance requirements set out in the Guidelines. For generative AI applications, institutions should focus on assessing and controlling the risks of "hallucinations", output of inaccurate or harmful content, disclosure of sensitive information from training data, and exposure to new types of attacks such as prompt injection. For AI agents with higher autonomy, risk control focuses on constraining their unpredictability and ensuring that their autonomous decisions and actions always remain within preset, safe boundaries and are consistent with the institution's business objectives and the best interests of its clients.
Impact on PRC financial institutions and recommendations
The issuance of the Guidelines not only binds local financial institutions in Singapore but also poses a direct compliance challenge for PRC financial institutions with branches in Singapore or with operations closely linked to the Singapore market. Meanwhile, for financial institutions operating only within mainland China, this world-class regulatory document serves as a benchmark and a valuable reference for enhancing their own AI risk management capabilities. We suggest that relevant institutions assess its impact and formulate corresponding strategies in the following respects.
I. Chinese financial institutions operating in Singapore
For Chinese banks, securities firms, insurance companies, and other institutions that have already obtained licenses and conduct business in Singapore, complying with the Guidelines will be an important obligation. Such institutions should promptly undertake the following actions during the 12-month transition period:
1. Initiate an AI inventory and risk assessment: Conduct, as soon as possible, a stocktake of all AI systems used in the Singapore business and establish an AI inventory that meets the requirements of the Guidelines. In addition, complete risk materiality assessments of existing and new AI applications under the three-dimensional framework provided by MAS.
2. Conduct a comprehensive gap analysis: Set up a dedicated working group involving local Singapore compliance, risk control, technology, and business departments as well as the relevant functional departments of group headquarters, to benchmark the existing AI governance structure, policies and processes, technical tools, and personnel capabilities against the Guidelines and comprehensively identify compliance gaps within the organization.
3. Amend local policies and transform processes: Based on the gap analysis and risk assessment results, amend relevant existing policy documents and formulate dedicated AI risk management policies. Meanwhile, adjust the development, testing, deployment, and monitoring processes of AI projects to incorporate the control requirements of the Guidelines.
The Guidelines also raise the market access threshold for PRC companies planning to apply for financial licenses in Singapore. A sound and credible AI risk management plan is likely to become an important component of license application materials submitted to MAS. Applicants may need to demonstrate not only innovative technologies and business models but also commensurate risk management capabilities. We recommend that relevant enterprises plan and establish their AI governance and risk management systems in accordance with the Guidelines before commencing the license application process.
II. Financial institutions in mainland China that do not have operations in Singapore
While the Guidelines are not directly legally binding on financial institutions operating mainly in mainland China, their value as a benchmark of international best practice should not be overlooked. The application of generative AI in the domestic financial industry has begun to emerge. The technology has great potential, from intelligent customer service and marketing copy generation to code-writing assistance, but the associated risks of "hallucinations", data security issues, and bias amplification are also becoming increasingly prominent.
At present, the regulation of financial AI in China can be characterized as multi-departmental, multi-level, and piecemeal. For example, the financial industry standard Criteria for Evaluating Financial Application of Artificial Intelligence Algorithms focuses on assessing technical indicators such as the security, interpretability, and accuracy of AI algorithms; the Measures for the Supervision of Information Technology Outsourcing Risks of Banking and Insurance Institutions address the management of outsourcing risks of banking and insurance institutions; and the Interim Administrative Measures on Generative Artificial Intelligence Services, issued by the Cyberspace Administration of China, focus on a specific application form of AI. These standards play important roles in their respective areas, but they lack a unified, top-level governance framework to integrate them.
With the continued opening up and internationalization of PRC financial markets, it is an inherent requirement for financial institutions to meet internationally advanced risk management standards. We recommend that domestic financial institutions:
1. Take the Guidelines as a "health check" for internal AI risk management: Proactively use the Guidelines to review and evaluate their maturity in AI governance, risk culture, and technical controls, and identify potential weaknesses and risk areas.
2. Take the Guidelines as a "reference book" for improving internal systems: When formulating or revising internal AI-related management systems, financial institutions should draw fully on the specific practices in the Guidelines regarding AI lifecycle management and control, third-party risk management, and responses to emerging technology risks, so as to make their internal systems more rigorous and forward-looking.
3. Take the Guidelines as a "textbook" for talent training: Organize senior executives, risk managers, compliance officers, and technical personnel to study the Guidelines and raise AI risk awareness across the entire organization, to prepare for more complex AI applications and a more stringent regulatory environment in the future.
Conclusion
Undoubtedly, the Guidelines released by MAS constitute an important step into the deep waters of global financial AI governance. Rigorous yet flexible, comprehensive yet pragmatic, they send a clear regulatory signal to the market: embrace innovation, but strictly observe the bottom line. For PRC financial institutions riding the wave of digital transformation, a thorough study of, and reference to, the Guidelines will not only help them understand advanced overseas regulatory practice but also provide an opportunity to enhance their own AI risk control capabilities.
Important Announcement

This Legal Commentary has been prepared for clients and professional associates of Han Kun Law Offices. Whilst every effort has been made to ensure accuracy, no responsibility can be accepted for errors and omissions, however caused. The information contained in this publication should not be relied on as legal advice and should not be regarded as a substitute for detailed advice in individual cases. If you have any questions regarding this publication, please contact:

Wei QUAN Tel: +86 21 6080 0946 Email: wei.quan@hankunlaw.com

Kevin DUAN Tel: +86 10 8516 4123 Email: kevin.duan@hankunlaw.com

Chuo Ming LEONG Tel: +65 6013 2968 Email: chuoming.leong@hankunlaw.com