
Artificial intelligence in suptech: The need for public sector adoption and adaptation

This is the first post of a series dedicated to “Artificial intelligence (AI) in financial supervision: Promises, pitfalls, and incremental approaches.”

Artificial Intelligence (AI) has already reshaped the financial services sector. The industry has embraced AI across nearly all of its facets, including data analytics, risk management, compliance, and customer service.

But as this technology develops, so does the risk landscape.

AI has introduced novel forms of financial fraud, cyber threats, and other regulatory challenges at a faster pace than ever before. This dizzying speed of transformation within the private sector means the public sector – primarily financial authorities, as well as consumer protection agencies – needs to keep up and, ideally, stay a step ahead.

In addition to the speed of technological change, financial authorities face compounding risks and challenges like inflation, climate change, and inequality.

Financial authorities thus find themselves on a twin path: they must discover and embrace the value of AI while simultaneously tackling the novel and newly magnified challenges it brings. Luiz Pereira da Silva, Deputy General Manager of the BIS, recently reflected on this crossroads:

“AI poses unprecedented challenges for all of us. For central banks, one can see increased powers to monitor price and financial stability with the help of AI. At the same time, there is a possibility that AI might take over key decisions on price and financial stability and challenge what has been so far the “art” of central banking: a reliance on many models…Your generation will have to reflect on these new challenges sooner than later.”

Successfully traversing this path requires developing new skills, fostering a culture of innovation, and building systems that can effectively integrate AI technologies.

AI in financial services, while transformative, brings forth new frontiers in risk and regulation. The public sector’s approach to these issues will largely influence the future direction of AI in suptech and the wider financial services sector. The road ahead is as exciting as it is daunting. It will require public agencies to embrace innovation.
And even a little bravery.

AI in suptech: early adoption, early concerns

AI’s increasingly prominent role in financial services has further stoked an already lively discussion of its pros, cons, and appropriate guardrails across the public and private sectors. While AI continues to climb the hype cycle, policymakers iterate on nascent regulation (e.g., in the EU and USA) and researchers continue to surface approaches that address the very real challenges.

For example, credit scoring is now powered by AI, automating the evaluation process by examining vast amounts of labeled or unlabeled data, including credit history and even social media activity and other digital footprints. This type of real-time analysis results in better risk detection while holding the promise of more inclusive credit rating systems.
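To make the mechanics concrete, here is a minimal sketch of the kind of scoring model involved: a logistic regression trained by gradient descent. The feature names and figures below are invented for illustration, not drawn from any real lender or supervisor.

```python
import math

def train_logreg(rows, labels, lr=0.1, epochs=2000):
    """Train a tiny logistic-regression credit scorer by gradient descent.

    rows: one feature vector per applicant; labels: 1 = repaid, 0 = defaulted.
    """
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted repayment probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Probability of repayment for a new applicant."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy features: [on_time_payment_ratio, credit_utilization]
applicants = [[0.95, 0.20], [0.90, 0.30], [0.85, 0.40],
              [0.40, 0.90], [0.30, 0.80], [0.20, 0.95]]
outcomes = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(applicants, outcomes)
good = score(w, b, [0.92, 0.25])   # resembles the reliable profiles
risky = score(w, b, [0.25, 0.90])  # resembles the defaulting profiles
```

Real AI scoring systems ingest far richer data, but even this toy makes the bias concern tangible: a skewed training set would skew every score the model emits.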

Yet it raises questions about biases that are perpetuated, and possibly amplified, by the algorithms, producing discrimination in lending. Since AI systems can decide who gets a loan and how risks are assessed, it is crucial to understand how these decisions are made.

Meanwhile, like most public sector agencies, supervisory authorities are typically under-resourced and under-funded compared to the private sector and struggle to keep pace with its innovation. Additionally, as research from the Cambridge SupTech Lab shows, the public sector lags in deploying innovative tools, technologies and techniques to enhance its own capabilities. Due to skill deficits, agencies often depend on external expertise, which can lead to misunderstandings of processes and distrust in results. Legal and operational challenges related to AI, including algorithmic biases, poor data quality, maintenance issues, and the lack of transparency in AI systems (the “black box” problem), make supervisors cautious about AI and call for a careful approach to its implementation.

Despite these concerns, the public sector has made steps towards incorporating AI into its supervisory toolkit and is reaping the associated benefits. Responsibly deployed AI has significantly improved efficiency in decision making, resource allocation and customer services, resulting in enhanced risk management procedures and customer interactions.

The Monetary Authority of Singapore (MAS), for example, is actively exploring AI solutions for risk detection and regulatory compliance – also the focus of Project Aurora, a BIS-led experiment that uses privacy-enhancing technologies and advanced analytics to enable collaborative analysis and learning (CAL). Through a recent partnership with Google Cloud, MAS is also furthering its exploration of responsible generative AI applications.

Meanwhile, the US Federal Reserve uses machine learning to analyze textual data to identify potential financial crises.

In Europe, the European Central Bank has adopted a Digitalization Roadmap and established the SupTech Hub to explore AI capabilities to enhance its supervisory activities. The European Insurance and Occupational Pensions Authority (EIOPA) has outlined its approach to suptech and implementing AI applications in its 2020 Suptech Strategy.

According to the Cambridge SupTech Lab, 71% of financial authorities are engaged in suptech activities, but the granularity of AI’s penetration reveals a varied landscape. When viewed through the lens of the SupTech Generations 2.0 framework, the divergence in maturity and adoption levels becomes evident – ranging from manual to tech-enabled to AI-driven solutions. Advanced economies, often buoyed by substantial resources and technical expertise, tend to exhibit higher maturity in AI-driven suptech, while emerging markets might still be grappling with earlier generations of suptech implementations. This disjunction reflects not merely a technological gap but underscores a divergence in funding, human resources and capacity in navigating the financial supervisory landscape.

Distilling the data to satiate supervisors with insights

The complexity of financial data flowing from financial institutions to supervisors threatens to leave supervisors drowning in data while thirsting for insights. Monitoring regulatory compliance and risk management through unstructured, text-based reports is time-consuming and error-prone. Collaborating across borders to detect and prevent financial crime is difficult without a common dataset on which to train models.

But there is a better way.

Imagine a world where lengthy financial documents transform into concise summaries and raw data becomes easily explained reports driven by meaningful insights. AI is poised to help realize this vision while addressing the challenges that financial supervisors face.
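As a toy illustration of that vision, the sketch below ranks a report's sentences by word frequency – a naive extractive-summarization heuristic, far simpler than the AI systems discussed in this post. The sample report text is invented.

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Naive extractive summary: keep the sentences whose words are most frequent."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    stop = {"the", "a", "an", "and", "of", "to", "in", "is",
            "are", "that", "for", "on", "with"}
    freq = Counter(w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop)
    ranked = sorted(
        sentences,
        key=lambda s: -sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())),
    )
    top = set(ranked[:n_sentences])
    # Preserve the original ordering of the selected sentences
    return " ".join(s for s in sentences if s in top)

# Invented sample report text
report = ("Liquidity risk rose sharply this quarter. "
          "Liquidity buffers fell below the regulatory minimum. "
          "The cafeteria menu was updated.")
summary = summarize(report, n_sentences=2)  # keeps the two liquidity sentences
```

Modern summarization uses large language models rather than word counts, but the goal is the same: surface the sentences that carry the signal and drop the rest.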

AI’s influence on suptech stems from its transformative ability to process and interpret vast amounts of data at exceptional speed and accuracy. AI applications use algorithms, machine learning, and advanced analytics to train and ‘learn’ from data, discern patterns, forecast outcomes, and automate decision making.

That list makes it easy to see why AI will make a significant impact on financial supervision practices.

Supervisory bodies can now access the data and resources needed to monitor the financial system on a wider scale and at a level of detail previously unattainable. According to the Financial Stability Board, the potential benefits of the insights that AI-based suptech surfaces from this data include:

  • Enhanced decision-making in supervisory measures
  • Enhanced extraction of more meaningful data through data collection and visualization techniques
  • Cost reduction and real-time monitoring of threats that impair market integrity and financial stability.

AI applications can reduce the time required for data analysis tasks from days to mere seconds, dramatically amplifying the efficiency and responsiveness of supervisory authorities. Moreover, machine learning models employed in risk prediction can improve predictive accuracy, providing early warnings and facilitating timely interventions in the financial system. For instance, the central bank of Brazil has used AI and machine learning to develop a predictive model that forecasts inflation and consistently outperforms traditional econometric models on accuracy metrics, thereby helping to identify potential market manipulations and ensure a stable economic environment.
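A minimal sketch of one early-warning heuristic: flag observations that deviate sharply from a rolling baseline. The indicator series, window, and threshold below are hypothetical; real supervisory models are far richer.

```python
from statistics import mean, stdev

def early_warnings(series, window=5, threshold=2.0):
    """Flag observations deviating more than `threshold` standard
    deviations from the rolling mean of the previous `window` points."""
    alerts = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Invented monthly risk indicator: stable readings, then a sudden spike
indicator = [1.00, 1.10, 0.90, 1.00, 1.05, 0.95, 1.00, 3.50, 1.00, 1.02]
alerts = early_warnings(indicator)  # flags index 7, the spike
```

The value of automating even a rule this simple is timeliness: the alert fires the moment the data point arrives, rather than at the next scheduled manual review.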

The Asian Development Bank has highlighted the potential of AI to scrutinize and analyze millions of transactions, revealing patterns that might indicate market manipulation or fraud. AI’s potential in boosting consumer protection is also significant. It can comb through enormous amounts of customer data to detect potential instances of unfair practices or market manipulation. This ensures the protection of consumer interests while fostering a fair and transparent financial services ecosystem. The FCA and the Bank of England have underlined this deep-seated potential of AI.

Among the emerging supervisory developments, one in particular stands prominently as a potentially transformative revelation accelerated by adoption of AI: The shift from the ‘traditional’ rules-based approach to a smarter, proactive risk-based one.

Risk-based supervision (RBS) is a comprehensive, formally structured system that promotes the shift from traditional, often static, rules-based models towards dynamic, model-based statistical inference. AI empowers the risk-based approach by enhancing predictive analytics, offering richer, data-driven insights, and ensuring that supervisory efforts are not just compliant but future-ready and robustly informed. Under RBS frameworks, financial authorities assess risks within the financial system and prioritize the resolution of those identified as high, increasing the effectiveness of supervisory outcomes while also increasing efficiency by better focusing their limited resources.

For example, rather than manually evaluating and tagging supervised entities based on a static set of rules, supervisors can use AI to cluster financial institutions with similar characteristics (risks) together. This could be based on the characteristics of their assets and portfolios, as well as the quality of risk management.
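The clustering step described here can be sketched with plain k-means. The institutions and risk features below are hypothetical; actual supervisors would use far richer, validated data.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means: group feature vectors into k clusters of similar risk."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance)
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            groups[nearest].append(p)
        # Move each center to the mean of its group (keep it if the group is empty)
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Invented features per institution: [non_performing_loan_ratio, capital_ratio]
institutions = [[0.02, 0.15], [0.03, 0.14], [0.02, 0.16],   # sound profiles
                [0.12, 0.06], [0.15, 0.05], [0.11, 0.07]]   # strained profiles
centers, groups = kmeans(institutions)  # separates sound from strained
```

A supervisor could then direct scarce examination resources toward whichever cluster exhibits the riskier profile, which is precisely the reallocation RBS is meant to achieve.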

In many countries, this AI-powered risk-based approach is already becoming a foundational part of the supervisory framework. Despite facing technological and financial constraints, these countries are taking strides towards integrating AI to enhance the quality and efficacy of their supervisory activities. For example, South Africa, among other African countries, is leveraging a risk-based approach and applying it via AI and ML suptech solutions within the insurance sector.

With its capacity to analyze, learn, predict, and make decisions, AI opens vast opportunities to the industry and financial authorities alike.
In the Lab’s forthcoming AI in Suptech Report, we discuss whether it might be appropriate to restore the risk-based approach to its core and move beyond simply managing the risks of financial institutions with AI suptech tools.

Don’t forget the human in the loop

In all of this, we can’t forget the human touch.

A “human in the loop” strategy remains essential for overseeing AI activities and preventing undesirable outcomes. This approach includes both the management of AI within the agency and establishing the legal guidance and guardrails for the financial industry to deploy AI. Regulators and supervisors must possess the appropriate skills and understand AI to fully reap its benefits, while also effectively addressing the associated risks. They must be capable of providing timely and smart responses to address the new challenges that will inevitably arise as the use of AI in the market and in suptech continues to become more prevalent.

However, the journey does not end there. With AI, new challenges will arise unceasingly, and the corresponding use of AI in suptech will necessarily continue to develop.

As we earnestly examine this evolving set of questions in order to deploy such systems responsibly and mindfully, the result will be more inclusive and efficient financial ecosystems that help individuals, households and businesses reach their full potential.

What’s next

In the second post of this series, “Navigating the intricacies of AI in suptech,” we delve into the key insights for financial supervisors while exploring solutions at the crossroads of technological innovation and financial supervision. 

Authors

Cambridge SupTech Lab: Jose Miguel Mestanza Hirakata, Juliet Ongwae, Kalliopi Letsiou, Maryeliza Brasa, Samir Kiuhan-Vasquez, Matt Grasser, Simone di Castri