
Navigating the intricacies of AI in suptech

This is the second post of the Cambridge SupTech Lab blog series "Artificial intelligence (AI) in financial supervision: Promises, pitfalls, and incremental approaches". The first post of the series is available here.

 

While the transformative potential of artificial intelligence (AI) in suptech is apparent, integrating such sophisticated technology is a complex process. In the endeavor to decode the intricacies of AI in suptech, we double click on two pivotal themes that emerged in the first post of our series:

  • The “human in the loop” paradigm, ensuring human oversight in AI-driven processes
  • Addressing the “black box” phenomenon, which shields AI’s internal workings from straightforward interpretation

In this post, we also introduce a third theme into the conversation: "accountability" in AI. By bringing characteristics of AI systems like explainability and transparency to the foreground, this conversation around accountability aims to demystify and mitigate AI's potential for opaque decision-making.

While keeping a human in the loop is an imperative for all AI systems, it is particularly so for the “black box” case we elaborate below. For these particularly complex systems, accountability via human in the loop can only be unlocked through processes and tools that ensure explainability and transparency to the humans responsible for the system.

Interweaving these themes, the overarching narrative illuminates the underlying goal of making AI in suptech both technically adept and ethically aligned.

Human in the loop: supervisors as strategic compasses in AI applications

The concept of having a human in the loop, ensuring human oversight, is a foundational principle in the AI realm. Humans have a critical role in the governance and management of AI: the involvement of experts to actively monitor and manage AI, fine-tune the system, and make ethical decisions is crucial. Skilled professionals are required to examine the AI's intricacies, add context, and surface gray areas for discussion and decisioning. This human touch is particularly critical for injecting contextual understanding into areas where a single mistake can be costly, such as risk management and financial crime detection. As with all technologies, AI systems must serve to augment rather than imitate or fully automate our capabilities, while also respecting the parameters, benchmarks, and boundaries set by the humans who ultimately hold responsibility for actioning the outputs of such systems.

Supervisors and data scientists alike must embody and exemplify this "human in the loop" concept. This is especially vital as financial ecosystems and technologies evolve, requiring continuous adaptation of supervisory methodologies. Humans thereby serve not merely as overseers or bystanders, but as strategic compasses for AI. Those leveraging these systems are both responsible and increasingly accountable for ensuring that suptech tools align with regulatory and ethical expectations. This must be embedded in roles, processes, and strategies, and prioritized as part and parcel of managing the intricacies and risks of financial supervision itself.
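To make this oversight concrete rather than aspirational, it helps to encode the hand-off to humans directly in the workflow. The minimal Python sketch below, assuming a hypothetical alert structure and an illustrative confidence threshold, routes any model output that falls below that threshold to a human reviewer instead of acting on it automatically; none of the names or values come from an actual suptech deployment.

```python
from dataclasses import dataclass

# Illustrative threshold: cases scored below this must be reviewed by a supervisor.
REVIEW_THRESHOLD = 0.90  # hypothetical value, set by the supervisory team, not by the model

@dataclass
class Alert:
    case_id: str
    score: float      # model-estimated probability that the case is suspicious
    rationale: str    # human-readable explanation attached by explainability tooling

def triage(alert: Alert) -> str:
    """Route a model-generated alert: only high-confidence cases are queued directly,
    and even those remain subject to human action downstream."""
    if alert.score >= REVIEW_THRESHOLD:
        return "queue_for_investigation"
    return "route_to_human_review"  # the supervisor decides; the model only advises

# Hypothetical usage
print(triage(Alert(case_id="TX-1042", score=0.72, rationale="unusual counterparty pattern")))
# -> route_to_human_review
```

The point of the pattern is that the threshold, the routing rules, and the final decision all remain artifacts owned by humans, which is what embedding the "human in the loop" into roles and processes looks like in practice.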

Navigating the "black box" problem

The "black box" problem further emphasizes this need for the human element. Unlike humans, who can typically explain their actions, some machine learning models and other AI systems operate as a "black box": they perform calculations and make predictions in ways that are often not directly interpretable by humans, and can consequently inform or even make decisions that humans do not understand. Thus, the imperative for the human in the loop factors prominently in these scenarios.

This "antagonistic" relationship grows as human supervisors seek to reap the benefits of AI while also comprehending, controlling, and validating the often obscure, non-linear decisions it makes. For all implementations of AI, and particularly for models most susceptible to these "black box" concerns, it is crucial to establish a nuanced dynamic wherein the comprehensibility and ethical use of AI are recurrently scrutinized. More directly, the antidote to this problem of opacity is to shine a light into these models by methodically leveraging tools and approaches that ensure the interpretability and explainability of results.

Ensuring the accountability of AI through explainability and transparency

Thus the theme of accountability, comprising explainability and transparency, is paramount in AI. The obscured nature of AI, or the "black box", complicates the supervisors' task of ascertaining how fair and certain these models' predictions are, and the outcomes that derive from them. This is absolutely critical in AI-augmented supervision, where ensuring compliance with regulatory standards (e.g., non-discrimination in lending) is essential. Furthermore, the balance between transparency and accuracy becomes pivotal: achieving optimal, reliable outcomes (accuracy) without compromising understandability (transparency) is indispensable.

Explainability and transparency in AI are pivotal in navigating through the “black box” problem’s complexities. While the black box phenomenon challenges the understanding of AI’s decision-making processes, explainability aims to demystify these algorithms’ decisions, ensuring that the outcomes or predictions generated by an AI system are comprehensible and, therefore, accountable. The puzzle is complex: How do we achieve clear insights, keeping the human in the loop without sacrificing the predictive power of an algorithm, especially when this algorithm leverages methods that are intrinsically difficult to explain?

Take, for example, deep learning models, which can identify patterns and trends in vast financial datasets. Yet they often do so without providing a straightforward rationale for their decisions in the manner we have come to expect from statistical and even classical ML models. If an AI-based system flags a seemingly legitimate transaction as suspicious for money laundering, the financial supervisor would need to trace back through potentially thousands of nonlinear, high-dimensional parameters to understand the decision. The challenge? Ensuring that the model's decision is not just accurate but also transparent to both the developer and the end-user. While powerful tools such as LIME and Shapley values do mitigate some of these challenges, it remains the unfortunate case that it is easier to deploy a complex model than to ensure its interpretability.
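As a concrete, hedged illustration of the mitigation mentioned above, the sketch below uses the open-source `lime` package to produce a per-transaction explanation for a generic scikit-learn classifier. The feature names, the synthetic data, and the model choice are illustrative assumptions for demonstration only, not a description of any particular suptech system.

```python
# Requires: pip install numpy scikit-learn lime
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["amount", "n_prior_txns", "country_risk", "hour_of_day"]  # illustrative only

# Synthetic stand-in for labelled transaction data (1 = suspicious)
X_train = rng.normal(size=(500, len(feature_names)))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["legitimate", "suspicious"],
    mode="classification",
)

# Explain a single flagged transaction: which features pushed its score up or down?
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature:>30s}  {weight:+.3f}")
```

Each (feature, weight) pair approximates that feature's local contribution to this one prediction, which is exactly the kind of artefact a supervisor can interrogate; Shapley-value methods play a similar role with stronger theoretical guarantees.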

Another question permeates these considerations: Does building one's own model aid in opening the black box? In other words, does migrating toward the "build" end of the "build vs buy" spectrum necessarily solve the problem? The answer is nuanced. While building a proprietary AI system does allow greater control, and thus greater potential for transparency into the model's internal workings, it does not inherently resolve the black box problem, particularly for complex models like neural networks, where the relationship between input and output is not immediately apparent without augmentative tools. Thus, whether an AI solution is procured from a vendor or built in-house, it is the responsibility of those deploying and maintaining it to do the hard work of ensuring explainability and interpretability, along with strategic human oversight.

In the realm of financial supervision, especially where a risk-based approach is pivotal, machine learning applications will play a crucial role. By identifying complex patterns from massive financial datasets, they indisputably are already offering invaluable insights. But as decisions hinge on evaluating potential risks, understanding these complex patterns within financial data is essential. The challenge for the supervisors: ensuring that the decisions derived from AI systems are both explicable and justifiable within the framework of existing regulatory and supervisory methodologies.

ML algorithms learn from the data and develop their own rules, which may not be immediately understandable to the human supervisor. These ‘black box’ decisions need to be translated into a format of reasoning that human supervisors and stakeholders can comprehend.
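One common way to perform this translation is a global surrogate: fit a deliberately shallow, human-readable model to the black-box model's own predictions and inspect the rules it recovers. The self-contained sketch below uses synthetic data and an illustrative random forest as the stand-in black box; it demonstrates the pattern under those assumptions, not a production approach.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["amount", "n_prior_txns", "country_risk", "hour_of_day"]  # illustrative only

X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 1).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow, readable tree to the black box's own predictions (not the raw labels)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on the same data
print(f"surrogate fidelity: {surrogate.score(X, black_box.predict(X)):.2%}")

# A human-readable approximation of the decision logic the black box has learned
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score is an honest caveat: a surrogate that agrees with the black box only part of the time is an approximation of its reasoning, not a faithful translation.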

In AI, the most accurate model is not always the most explainable, and vice versa. The goal of maximal precision and reliability of the output must be balanced against the imperative of ensuring an ultimately Secure, Accurate, Fair, and Ethical (SAFE) AI system. Accurate models will precisely identify patterns or anomalies within financial data, thereby aiding in robust, data-driven decision-making. However, the models used in financial supervision must also comply with SAFE principles. The outcomes of AI are not only the intended ones, such as improved efficiency, but also the unintended, such as algorithmic biases. In the context of suptech, this implies that AI systems must not only make precise predictions, but also adhere to regulatory norms as well as documented and enforced ethical principles. Both technological and human-centric outcomes must be thoroughly examined and understood to ensure the successful integration of such a solution.

Inbuilt biases are a particularly fundamental threat to the accuracy – and ethics – of AI. Humans design and train AI systems on data that necessarily, if unwittingly, echo human biases, and such systems can suffer from the 'garbage in, garbage out' problem: if the data used to train the algorithm is flawed, the outcome will be jeopardized. The quality of the data that humans provide to the algorithms ultimately determines the outcome. While data is by no means the only source of bias, it is a crucial lever for ensuring downstream impartiality, fairness, and transparency in AI's decision-making processes, and attending to it must therefore be an essential imperative for supervisors and for the financial authorities within which they operate.
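A simple first check on this risk is to inspect the training labels themselves before any model is fit. The pandas sketch below computes a demographic-parity style gap in historical flag rates across groups on a small hypothetical dataset; the column names and the 10-percentage-point tolerance are illustrative assumptions, not regulatory thresholds.

```python
import pandas as pd

# Hypothetical labelled training data; 'flagged' is the historical outcome a model would learn from
df = pd.DataFrame({
    "flagged": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "region":  ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"],  # proxy / protected attribute
})

# Positive-outcome rate per group, and the largest gap between groups
rates = df.groupby("region")["flagged"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"demographic parity gap: {gap:.2f}")

# Illustrative tolerance only: acceptable gaps are a policy decision, not a library default
if gap > 0.10:
    print("warning: training labels are skewed across groups; investigate before training")
```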

Data and privacy

Given AI's significant dependence on data, concerns about data privacy and security also take center stage. This concern goes beyond a fear of data breaches, including breaches of sensitive personal or financial information. AI systems require access to structured and unstructured financial data, encompassing transaction records, customer information, trading data, and more, to formulate insights and predictions. The variety of data ingested by the solutions developed during the 2023 cohort of the Lab's Launchpad accelerator serves as a practical example.

For these suptech solutions and for AI solutions in general, ensuring the secure processing and storage of these data is imperative to preserve confidentiality and adherence to data protection policies and regulations. As AI systems continually analyse vast datasets, skewed or flawed data can lead to unintentional biases becoming embedded within AI models, potentially producing discriminatory or unjust outcomes in financial decisions.
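As one small, hedged example of data-protection hygiene at the ingestion stage, the sketch below pseudonymizes direct identifiers with a keyed hash before records enter an analytics pipeline. The environment-variable key handling is a placeholder assumption; a real deployment would follow the authority's own key-management and data-protection policies.

```python
import hashlib
import hmac
import os

# Placeholder: in practice the key comes from a managed secret store, never a hard-coded default
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "CUST-000123", "amount": 4250.00}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)  # identifier replaced; amount preserved for analysis
```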

Further, the technical nature of data governance and management for AI requires expertise in data science and analytics, which can be difficult to source and retain. Recent research by the Cambridge SupTech Lab identified that 55% of financial authorities face the challenge of an insufficient number of staff with data analytics skills. The FCA and the Bank of England also highlight this issue, suggesting that the lack of technical expertise could limit the further exploration and adoption of AI in supervisory activities. This scarcity is amplified by the high stakes involved in financial oversight, wherein algorithmic errors or biases have significant and widespread repercussions.

Navigating the integration of AI in suptech

To alleviate these challenges, a measured, thoughtful, and timely approach to AI adoption is a requisite for financial authorities navigating the integration of AI into suptech. With a steadfast commitment to furthering the responsible adoption of AI, strong interagency collaboration, and continuous learning, supervisors can reap the benefits of their AI-powered transformation while mitigating and managing the risks that such innovation presents.

The Financial Stability Board and the European Central Bank have also emphasized the need for robust collaboration between the public and private sectors to ensure responsible AI adoption in financial services. Public agencies can increasingly leverage the innovative capabilities of the private sector by forming partnerships with AI providers that offer advanced, proprietary technologies and expertise beyond their own capabilities. These partnerships may encompass not only procuring AI solutions from the private sector, such as the aforementioned Launchpad examples, but also engaging in co-creation initiatives, such as hackathons, workshops, and joint development projects. The intention behind such collaborations is to meld the innovative prowess and technical capabilities of the private sector with the regulatory insight and standards upheld by public agencies. Thus, supervisors provide private entities with insights into regulatory expectations and standards. In this synergistic manner, each organization sticks to its core competencies while supporting the other's success.

The increasing legal clarity around the adoption of AI will also support adoption by financial authorities. In a pioneering initiative, the European Parliament recently adopted the proposed AI Act, a comprehensive legal framework aimed at promoting trustworthy and safe AI applications while also ensuring robust protection of users' rights. The AI Act provides the legal framework for AI deployment across many sectors, including financial services, and is therefore a guide for the supervisors that use suptech to oversee them. Recent developments in the adoption of AI regulation in the USA should also be highlighted, reflecting the growing recognition of the importance of governing AI technology responsibly. We will dive more deeply into the implications of the interplay between public sector regulation and private sector policy in the fifth post in this series.

Overall, the responsible development, deployment, and maintenance of AI-based suptech solutions will be delivered by keeping the "human in the loop," shining a light into "black boxes" through a commitment to formal accountability via explainability and transparency, and protecting all increasingly digital financial citizens through adherence to data and privacy regulations and best practices.

What’s next

In the third post of this series, "Generative AI in financial supervision: a brief history of a revolution in progress," we will delve into the world of Generative AI and how it impacts financial supervision. We will examine Generative AI through the lens of financial supervision, which can at least serve to de-hype the conversation within this space.

