I. INTRODUCTION 

Thank you to the OECD and FSB for organizing today's roundtable, "Artificial Intelligence in Finance," and for inviting me to speak.

Artificial intelligence or “AI” has the potential to transform many aspects of our lives and society and, in the last year and a half, has captured our imaginations. But, like earlier waves of technology that offer great opportunities, it also brings challenges and fears.

Financial firms have been using some kinds of AI for many years. Yet recent advances in computing capacity and the latest developments in AI – like generative AI, or GenAI – represent a dramatic step up in its capabilities. New AI models can ingest a wide range of data and generate content, and they have a greater capacity to evolve on their own and to automate decision making. These technologies are developing rapidly, and firms and regulators alike are still in the early stages of understanding how the newest AI models could be used by financial institutions. It is crucial that we continue to deepen our understanding of the potential benefits and risks and to ensure that the regulatory framework keeps pace. This is a big undertaking. Events like today's are an important part of building our understanding.

My remarks today will focus on how financial policymakers are learning about the use of new AI tools by financial firms, and what kinds of risks these tools could introduce to the financial system. Adoption of new technologies in finance is not new. Financial firms are innovating continuously in order to increase efficiencies and offer new services. Policymakers have experience with changing technologies and have developed regulatory frameworks focused on building guardrails, regardless of the underlying technology used. In other words, we are not starting from scratch in thinking about how to address the risks of AI while also allowing for its opportunities to be realized. The primary question today is whether new AI models are fundamentally distinct from existing technology, or whether they will be used in such a different way that the current regulatory frameworks are not sufficient or do not apply.

With that framing in mind, I will start my remarks today with a characterization of the technology and how financial institutions use AI today. These current uses can help us think about how financial firms perceive their opportunities and how they may want to use AI in the future. I will then consider the potential risks and our financial regulatory framework for assessing and addressing these risks. I will end with some questions for this group to consider.

II. DEFINING AI 

Artificial intelligence is a broad concept and resists a precise definition. Here, I will use AI to describe any system that generates outputs – which can be forecasts, content, predictions, or recommendations – for a given set of objectives. From this conceptual framing of what AI systems do, we can think about the underlying technology as falling into three categories: “early” artificial intelligence, machine learning, and newer generative AI models. These categories roughly track the order in which they were developed, but many AI models combine elements across these three categories.

First, early AI describes rule-based models. Many computer programs are essentially rule-based AI, and such systems have been in use since the 1970s. Generally, these systems solve problems by applying specific rules to a defined set of variables. We have all experienced customer service that uses rule-based AI: we ask a question that leads to pre-defined follow-up questions, until we get a pre-packaged answer or press zero enough times that we can talk to a human. Internal loss-forecasting models or early algorithmic trading, for example, also might be considered forms of early artificial intelligence. We are very familiar with these kinds of tools in finance.
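
To make the idea concrete, here is a minimal illustrative sketch in Python of a rule-based system of the kind just described; the menu options and responses are hypothetical.

```python
# A minimal sketch of a rule-based ("early AI") customer-service menu.
# The options and responses are hypothetical illustrations.
RULES = {
    "1": "Your current balance is shown in the mobile app under 'Accounts'.",
    "2": "To report a lost card, we will cancel it and mail a replacement.",
    "0": "Connecting you to a human representative...",
}

def respond(choice: str) -> str:
    # Every input maps to a pre-defined answer; the system cannot
    # produce anything outside the rules it was given.
    return RULES.get(choice, "Sorry, I did not understand. Please press 1, 2, or 0.")

print(respond("2"))  # a pre-packaged answer
print(respond("7"))  # falls back to the default prompt
```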

Second, in contrast to rule-based systems, machine learning identifies relationships between variables without explicit instruction or programming. In machine learning, data are the key input, and the system identifies patterns in the data. Learning can be reinforced, for example, by providing feedback to the system about whether an output is good or bad. From this feedback, the machine learning model can learn to perform better in the future. Machine learning is also embedded in many existing processes at financial institutions. For example, it has long been used to develop fraud detection tools, and it enables the mobile banking app on your phone to read handwritten checks.
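
As a concrete contrast with the rule-based sketch above, the following is a minimal sketch, using synthetic data and the scikit-learn library, of how a fraud-detection pattern can be learned from data rather than written as explicit rules; the features and labels are illustrative assumptions, not a production fraud model.

```python
# Minimal sketch of learning a fraud pattern from data rather than
# from hand-written rules. All data here are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: transaction amount and hour of day.
amount = rng.lognormal(mean=4.0, sigma=1.0, size=n)
hour = rng.integers(0, 24, size=n)
# Synthetic labels: large late-night transactions are more often "fraud".
fraud = ((amount > 200) & ((hour < 6) | (hour > 22))).astype(int)

X = np.column_stack([amount, hour])
model = LogisticRegression().fit(X, fraud)  # the pattern is learned, not programmed

# Score a new transaction: $500 at 3 a.m.
print(model.predict_proba([[500.0, 3]])[0, 1])  # estimated fraud probability
```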

Third, the latest AI models can be characterized by their ability to generate new content – from text to images to videos. Instead of being limited to a defined set of potential responses in a defined format, GenAI can produce a range of responses in a range of formats. For example: "Give me recommendations for where to eat in Paris, but composed as a poem in iambic pentameter." These models are flexible and often dynamic, learning from experience both in generating responses and in ingesting new information. More advanced AI systems, which are still being developed, aim to be highly autonomous, with capabilities that match or exceed human abilities.
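
For illustration, a request like the Paris example above can be sent to a generative model with only a few lines of code. This sketch assumes access to OpenAI's Python SDK and an API key; the model name is illustrative and may differ in practice.

```python
# Minimal sketch of calling a generative AI model with the prompt from
# the text. Assumes the OpenAI Python SDK and an API key in the
# environment; the model name below is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Give me recommendations for where to eat in Paris, "
                   "but composed as a poem in iambic pentameter.",
    }],
)
# Unlike a rule-based menu, the output is newly generated text.
print(response.choices[0].message.content)
```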

III. USE OF AI

Our experience with earlier technological changes can help us understand how financial institutions might be approaching newer artificial intelligence and give us insights into its potential benefits and opportunities.

I’ll start with a few thoughts on where we might see these benefits.

First, what are the best use cases? Many use cases stem from AI's ability to better process volumes and types of information that otherwise may be impractical or impossible to analyze. In the context of financial firms, AI can be used for two purposes. First, it can reduce costs or risks and increase productivity, such as by automating some back-office functions or providing routine customer service; for example, by analyzing a broader array and amount of data, AI may be able to identify patterns that suggest suspicious activity. Second, it can be used to develop new products, such as more tailored services; for example, by processing more data from more sources to better understand their customers, AI may enable greater customization of customers' online financial experiences.1

That said, because AI is largely a function of the data it is trained on, not all AI tools are equally well suited to each kind of task. For example, large language models, by definition, are trained primarily on language. As a result, they may be better suited to language-based tasks, like customer service, than to a task like assessing value at risk, and many financial institutions are exploring the use of large language models to support customer chatbots.2
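
For a sense of what such a quantitative task involves, here is a minimal sketch of a one-day 99 percent value-at-risk estimate using historical simulation; the return history is synthetic, where a real desk would use market data.

```python
# Minimal sketch of a quantitative task of the kind referenced above:
# a one-day 99% value-at-risk (VaR) estimate via historical simulation.
# The return history is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(loc=0.0, scale=0.01, size=500)  # synthetic P&L history

portfolio_value = 1_000_000
# 99% VaR: the loss exceeded on only 1% of historical days.
var_99 = -np.percentile(daily_returns, 1) * portfolio_value
print(f"1-day 99% VaR: ${var_99:,.0f}")
```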

Second, what are the development and adoption costs? AI tools may require significant investment – for example, to develop or purchase the tool, or to acquire computational resources. While some operational costs have fallen and computing capacity has expanded, resource demands will be an important consideration in the kinds of use cases that firms pursue or prioritize. At the same time, there could be costs to falling behind competitors who use AI to improve their services.

In addition, financial institutions need to work within appropriate governance structures and risk appetite boundaries. For example, if a tool were to replace human operators, how costly would a mistake be without active human intervention? If a tool does not replace a human operator, would there still be productivity gains?

We do not have a full picture of the ways that various financial institutions are using AI. The FSB's work in 2017 highlighted many now relatively mature uses, such as credit underwriting and trading execution. More recently, many authorities are conducting surveys and requesting public input, as well as talking directly to firms. From this work, we see AI being used in three primary areas. First, AI is being used to automate back-office functions and aid compliance; fraud and illicit finance detection tools are an example. Second, AI is being used for some customer-facing applications, like customer service chatbots. Finally, some financial institutions are looking for ways to incorporate AI into their product offerings. Some of these applications are familiar – like trading strategies – but are relying more on AI than they have in the past.3 Some hedge funds advertise that their strategies are based entirely on predictive AI, where previously AI might have been used only to inform human decisionmakers. In other cases, AI has resulted in new types of products. For example, some insurance brokers are experimenting with new software to help their clients manage supply chain risk, based on satellite imagery processed by AI.4

It is still early days, but firms are pursuing a wide range of strategies for how to use new AI tools. They appear to be proceeding cautiously, especially when experimenting with GenAI, and at the same time making changes to internal governance.5 Some fintech firms that are subject to less regulation may be proceeding more quickly.

IV. RISKS

As these use cases highlight, AI may offer significant benefits to financial institutions in reducing costs and generating revenue. But as financial institutions explore new ways to benefit from AI, policymakers and financial institutions alike must consider the potential broader risks. We can consider these risks across a few categories: first, risks to individual financial institutions; second, risks to the broader financial system; third, changes in the competitive landscape; and finally, implications for consumers and investors.

Risks to Financial Institutions – Microprudential Considerations

A key risk for financial institutions using AI tools is model risk, which refers to the consequences of poor design or misuse of models. Addressing model risk includes managing data quality, model design, and governance, each of which is a critical component of developing and using AI effectively and safely. For example, it is important to consider where limitations in the data can skew a model's outputs. Models trained on historical data will, by definition, be informed only by the historical examples of stress or outlier events contained in the underlying data. While these types of events stand out in our memories, they are relatively few and unlikely to be repeated in the same ways. This limitation means that some models that could be used for trading may be less robust or predictive in future periods of stress.
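
A small worked example illustrates the point about historical data. In this sketch, a loss model fitted only to synthetic calm-period returns assigns essentially zero probability to a crisis-scale move that never appeared in its training window.

```python
# Minimal sketch of the historical-data limitation: a loss model fitted
# only to calm years assigns almost no probability to a crisis-scale move.
# The return history is a synthetic illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
calm_returns = rng.normal(loc=0.0, scale=0.008, size=1000)  # no stress events

mu, sigma = calm_returns.mean(), calm_returns.std()
crisis_move = -0.10  # a 10% one-day drop, absent from the training window
print(norm.cdf(crisis_move, mu, sigma))  # effectively zero probability
```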

It is also critical to consider how the model is being used. Even if a model is well designed, it can present risks if used or interpreted inappropriately. As firms become more comfortable with AI models and their outputs, it may become easy to stop questioning the models' underlying assumptions or conducting independent analysis. We have seen these kinds of dependencies in the past. For example, prior to the financial crisis, banks and market participants relied on credit rating agencies to an extent that reduced their capacity for independent assessments.6 Newer AI tools may create or exacerbate some of these existing challenges for governance and oversight. These tools can be less clear in their reasoning, more dynamic, and more automatic. For example, the speed and independence of some AI tools exacerbate the problem of overreliance, as the window for human intervention may be very short. This is particularly true for applications like trading strategies because of the speed required.

Relatedly, use of AI tools may increase reliance on vendors and critical service providers. While the use of third parties can offer financial institutions significant benefits, these dependencies can introduce risks. For example, AI tools require significant computing power and may increase reliance on a relatively small number of cloud service providers. And there is likely less visibility into the AI tools developed by vendors than into those developed in-house.

Operational risks related to AI may also come from outside the financial institution. These include AI-enabled cyberattacks, fraud, and deepfakes. Widely available GenAI tools are already expanding the pool of adversaries and enabling all adversaries to become more proficient. While the tactics are often not new – like phishing – they have become more effective and efficient in the last year. For example, in a reported incident earlier this year, an employee of a multinational financial institution was tricked into transferring $25 million after attending a video conference call with an AI deepfake of the firm's chief financial officer.7

Financial Stability and Macroprudential Considerations

We should also consider whether AI use by financial firms could present financial stability risks – that is, risks to the broader financial system. For example, AI models may introduce or amplify interconnections among financial firms if model outputs are more highly correlated because they rely on the same data sources, or if firms are using the same model. In some cases, one model’s output may be an input to another model. These interconnections may exacerbate herding behavior or procyclicality. Where models inform trading strategies that are executed automatically, incidents like flash crashes may be more likely. Complexity and opacity are also of concern. To the extent that models are not transparent in their reasoning or rely on a wider range of data, it is difficult to predict how models might perform.
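
The interconnection concern can be illustrated with a toy simulation: two hypothetical firms whose trading signals are driven largely by the same shared data feed end up trading the same way most of the time. Everything below is a synthetic illustration.

```python
# Toy sketch of the interconnection concern: two hypothetical firms'
# trading signals built largely on one shared data feed become highly
# correlated, so both tend to buy and sell at the same time.
import numpy as np

rng = np.random.default_rng(2)
shared_feed = rng.normal(size=250)                  # common data source
firm_a = shared_feed + 0.2 * rng.normal(size=250)   # small idiosyncratic view
firm_b = shared_feed + 0.2 * rng.normal(size=250)

signals_a = np.sign(firm_a)  # +1 = buy, -1 = sell
signals_b = np.sign(firm_b)
print("signal correlation:", np.corrcoef(signals_a, signals_b)[0, 1])
print("fraction of days trading the same way:", np.mean(signals_a == signals_b))
```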

Changes in Competitive Landscape

AI has the potential to change the competitive landscape of financial services, and this could happen in several ways. First, the significant investments of computing power and data required to develop AI models may advantage certain institutions over others. Small institutions with less access to data may be disadvantaged in their ability to develop or access AI, and institutions that have not migrated to cloud services may be less able to access or use it. Alternatively, it is possible that the investments needed to develop AI models are so significant that financial institutions converge on a single model, thus leveling the playing field between large and small institutions.

In addition, competitive dynamics outside of financial institutions may be relevant. AI tools and the cloud services that these tools depend on are being developed most intensively by a handful of companies that are not themselves financial institutions. AI may also bring new entrants into financial services, including technology providers that may be looking to make use of data collected in other contexts.

Consumers and Investors

In my remarks today, I have focused mostly on the impact of AI on financial institutions and the financial system, but I would like to spend a few moments on the potentially significant implications for consumers and investors, as well.

We can think of these implications along two lines. First, while data have always been critical to financial services, AI further intensifies the demand for data. As a result, AI may amplify existing concerns regarding data privacy and surveillance. And if data are collected, they must be stored, raising data security concerns.

Second, the outputs of AI tools have significant implications for consumers and investors. Lenders using AI models may be able to develop a more comprehensive picture of creditworthiness by using many times more variables, including data from less traditional sources.8 Investment advisors are also experimenting with the use of machine learning or predictive AI to provide more tailored advice. A particular area of concern, however, is the potential for AI tools to perpetuate bias. Historical data – whether used in traditional modeling or in AI – embed historically biased outcomes. A lender's reliance on such historical data may be particularly problematic if the reasoning of a model is not clear and if a decision may wrongfully result in a consumer being denied service or credit. In addition, with more and varied data being used, consumers will face challenges in correcting inaccuracies in their data. It is also important to take care that alternative sources of data, which may be less transparent and may obscure embedded biases, are not proxies for race, gender, or ethnicity.
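
One simple safeguard, sketched below with synthetic data, is to test whether a proposed alternative-data feature is strongly correlated with a protected attribute before using it; a high correlation flags the feature as a potential proxy. The feature and attribute here are hypothetical.

```python
# Minimal sketch of a proxy check: test whether a hypothetical
# "alternative data" feature is strongly correlated with a protected
# attribute in synthetic data, which would make it a potential proxy.
import numpy as np

rng = np.random.default_rng(3)
protected = rng.integers(0, 2, size=1000)  # synthetic protected attribute
# Hypothetical alternative-data feature that partially encodes the attribute.
feature = protected * 0.8 + rng.normal(scale=0.5, size=1000)

corr = np.corrcoef(protected, feature)[0, 1]
print(f"correlation with protected attribute: {corr:.2f}")
# A high correlation would flag the feature for fair-lending review before use.
```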

V. FRAMEWORK FOR ADDRESSING RISKS

Many of the risks that I have described are familiar to financial regulators. When we consider how best to assess and mitigate financial risks posed by new AI developments, again, we are not starting from scratch. For example, principles of model risk management establish a framework for model design, governance, audit, and data quality. Principles of third-party risk management address risks associated with vendors and other critical service providers. Fair lending, fair credit, and data privacy laws are designed to address risks to consumers, and securities laws are designed to protect investors. Likewise, AI tools used for compliance, such as anti-money laundering and countering the financing of terrorism (AML/CFT) compliance, must meet these regulatory requirements. While this framework is not specific to AI technology, it applies to AI and is designed to address risks regardless of the technology used.

It is from this starting point that we consider whether AI presents risks that are not adequately addressed in the existing framework. These risks could be of the same type, but of greater magnitude, or they may be entirely new types of risks. The technology is developing rapidly, and we should work to ensure that the policy framework is keeping pace. To that end, I would like to conclude by posing several questions to help guide this discussion:

First, where might AI amplify some known, familiar risks? For example, we have long understood the importance of data quality in modeling credit and market risk. Because AI relies on more data, and on more different types of data, these concerns may be amplified.

Second, does AI present different kinds of risks? AI may present categorically different types of risks. For example, AI acts according to defined objectives, and it may be challenging to specify all relevant objectives: you may want AI to maximize profit, but within legal and ethical boundaries that can be difficult to define completely. And if an AI model acts on its own and is capable of updating its reasoning with no human intervention, this may undermine accountability for wrongdoing or errors.

Third, are there changes in the competitive landscape that could have implications for the regulatory framework? AI may change the competitive landscape. These changes could occur among financial companies – for example, firms with greater access to data or to computational power may be better positioned to compete. They could also occur between financial and non-financial companies, as some non-financial firms already have significant access to data and computing power and have shown some interest in providing financial services directly. If this shifting landscape affects the ability to address risks in the financial sector, what adjustments should be considered, for example, for certain kinds of institutions or certain kinds of relationships?

Finally, what are the opportunities for financial regulators and other authorities to use AI? It is still early days for policymakers, too. We are exploring opportunities to identify data anomalies to counter illicit finance and fraud, and to find better ways for the private sector to build more comprehensive databases to improve fraud detection.9 This is a high-value proposition with manageable risk if we work together across the public and private sectors. We might also consider what other use cases are possible and what considerations should guide them.

There are certainly other questions that could be asked now, and more to come for policymakers as AI technology continues to develop. Events like these and the ongoing work of the FSB and OECD are critical to deepening our understanding of the potential uses of AI by financial institutions and to ensuring that our policy framework keeps pace with technological change.

Thank you.

  1. See, e.g., Dynamic Customer Embeddings for Financial Service Applications.
  2. See Chatbots in Consumer Finance, Consumer Financial Protection Bureau.
  3. See, e.g., MIT-UBS Generative AI Report.
  4. See, e.g., Marsh McLennan Launches AI-Powered Solution to Transform Supply Chain Risk Management.
  5. See 2023 IIF-EY Survey Report on AI/ML Use in Financial Services (Public Report).
  6. See Reducing Reliance on Credit Ratings, Financial Stability Board.
  7. See Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.
  8. See Assessing the Impact of New Entrant Non-bank Firms on Competition in Consumer Finance Markets.
  9. See Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.