Financial Regulators Urged to Embed Ethics in AI Systems

Financial regulators around the world are increasingly relying on artificial intelligence (AI) for economic oversight, yet many are doing so without established ethical guidelines. According to the **State of SupTech Report 2025**, over **67%** of financial authorities now use AI technologies, but **37%** lack any formal governance or ethical framework to guide that use. This gap raises serious concerns about market integrity, financial inclusion, and public trust.

The reliance on AI spans a variety of applications, from detecting complex money laundering patterns to predicting systemic banking shocks. Central banks, securities commissions, and market conduct regulators are adopting these technologies to enhance their monitoring capabilities. Implemented well, AI can promote financial inclusion by identifying gaps in access to finance, and it can help manage climate-related financial risks. Without robust governance, however, the same tools can erode trust and harm the public.

Current State of AI in Financial Supervision

The **State of SupTech Report 2025** highlights a critical disconnect between the rapid adoption of AI and the governance frameworks in place to regulate its use. More than half of the surveyed authorities admit to lacking clear governance structures for AI-enabled supervisory technology (suptech). Alarmingly, only **3%** have developed dedicated internal frameworks for suptech applications, and merely **4%** align their practices with international standards such as the **OECD AI Principles** or the **EU AI Act**.

Furthermore, the report reveals that only **6%** of agencies conduct regular ethical audits, and a mere **5%** publish transparency reports detailing how AI influences their supervisory decisions. Recognition of ethical risks is also strikingly limited: just **8.8%** of authorities view ethical concerns as a significant barrier to AI deployment, and even fewer (**8.1%**) consider algorithmic bias a challenge worth addressing, despite AI's potential to exacerbate existing inequalities.

Marlene Amstad, chair of the **Swiss Financial Market Supervisory Authority (FINMA)**, emphasized the importance of accountability in supervisory decisions. She stated, “Supervisory decisions must remain explainable and accountable,” advocating for a “human in the loop” approach in significant interventions to ensure that responsibility does not shift from human judgment to algorithms.
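The "human in the loop" principle Amstad describes can be sketched as a simple routing rule: low-stakes model outputs are logged for later audit, while any significant intervention is always escalated to a human supervisor. The threshold, field names, and decision rule below are illustrative assumptions, not the design of any regulator's actual system.

```python
from dataclasses import dataclass

# Illustrative "human in the loop" gate. The threshold and names are
# hypothetical assumptions, not any supervisory authority's real rules.
MATERIALITY_THRESHOLD = 0.7  # assumed score above which a human must decide

@dataclass
class Recommendation:
    entity: str
    action: str          # e.g. "flag_for_review", "suspend_license"
    model_score: float   # model's confidence that intervention is warranted

def route(rec: Recommendation) -> str:
    """Decide who acts: significant interventions always go to a human,
    so accountability stays with human judgment, not the algorithm."""
    if rec.model_score >= MATERIALITY_THRESHOLD:
        return "human_review"   # significant: escalate to a supervisor
    return "auto_log"           # low-stakes: recorded for later audit

print(route(Recommendation("Bank A", "suspend_license", 0.92)))  # human_review
print(route(Recommendation("Bank B", "flag_for_review", 0.35)))  # auto_log
```

The point of the sketch is that explainability and accountability are properties of the *process*, not the model: the gate guarantees a named human decides every consequential case.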

The Need for Strong Data Governance

A critical aspect of ethical AI deployment lies in data governance. Approximately **64%** of financial authorities cite fragmented or inconsistent data as a major challenge. Poor-quality or incomplete data can lead to biased outputs, particularly affecting consumer protection and financial inclusion efforts. Ethical failures often originate from inadequate data practices, highlighting the need for strong governance frameworks that encompass data ownership, documentation, and quality controls.
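The link between data quality and biased outputs can be made concrete with a minimal validation gate: before a supervisory model consumes a dataset, simple completeness checks either pass it or block it with documented issues. The field names and the 5% missing-data threshold are assumptions for illustration only, not a standard any authority has adopted.

```python
# Minimal, hypothetical data-governance gate: block datasets with too many
# missing values before they reach a supervisory model. Thresholds and
# field names are illustrative assumptions.

def quality_report(records, required_fields, max_missing_rate=0.05):
    """Return (ok, issues) for a list of dict records."""
    issues = []
    if not records:
        return False, ["dataset is empty"]
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        rate = missing / len(records)
        if rate > max_missing_rate:
            issues.append(f"{field}: {rate:.0%} missing exceeds {max_missing_rate:.0%}")
    return (not issues), issues

records = [
    {"institution": "Bank A", "region": "North", "loans": 120},
    {"institution": "Bank B", "region": None, "loans": 95},
]
ok, issues = quality_report(records, ["institution", "region", "loans"])
print(ok, issues)  # prints: False ['region: 50% missing exceeds 5%']
```

A gate like this makes ethical failures auditable at the source: a dataset with half its regional coverage missing is rejected with a written reason, rather than silently skewing an inclusion analysis.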

Bernard Nsengiyumva of the **National Bank of Rwanda** stressed that robust data governance is essential for ethical AI, stating, “Without it, even the most well-intentioned AI systems can reinforce blind spots.” The urgency for strong foundations becomes even more pronounced as regulators move towards more autonomous AI systems. These “agentic” systems offer efficiency but also introduce new vulnerabilities that can undermine supervisory control if not properly managed.

Some authorities are already taking proactive steps to embed ethical considerations into their AI frameworks. The **Financial Conduct Authority (FCA)** in the UK has established a data and AI risk hub, which requires independent evaluations of every use case prior to deployment. This initiative fosters a culture of assurance and encourages supervisors to consider the implications of their AI tools.

Similarly, the **Bank of Tanzania** has set up an AI and data innovation hub focused on creating explicit guidelines centered on transparency, fairness, and accountability. Such initiatives demonstrate that it is possible to integrate ethics into AI-driven regulatory practices.

In conclusion, the path forward for financial regulators is clear: AI systems must be deployed alongside robust accountability frameworks. As supervision comes to rely more heavily on AI, regulators must prioritize governance structures that ensure ethical oversight. With over **60%** of authorities moving towards AI-driven solutions while lacking basic accountability, the risk of discriminatory outcomes and a loss of public trust looms large. To maintain their role as guardians of financial stability, regulatory bodies must make ethical governance a cornerstone of their supervisory infrastructure.