By: Naomi Kent, Managing Director and Advisor, The C-Suite Advisory, Felix Global
I had the pleasure of sitting down with Barbara Mace, a respected expert, for an engaging discussion about the intersection of Artificial Intelligence and the Financial Services Industry. Throughout this Q&A, Barbara draws on her experience and awareness of industry trends, exploring topics ranging from AI’s transformative impact on financial risk management to the challenges posed by biases in training data. Join us as we examine the opportunities, navigate the risks, and consider the evolving relationship between artificial intelligence and the financial realm.
Q1: How does AI contribute to the financial services sector, and what areas are expected to see improvements?
Barbara Mace: AI plays a crucial role in the financial services sector, with its impact expanding across institutions such as banks, fintechs, and hedge funds. According to the International Monetary Fund (IMF), AI spending by financial institutions is projected to double, reaching $97 billion by 2027. This growth is attributed to advancements in areas such as fraud detection, compliance with AML rules, risk management, stress testing, regulatory automation, and enhanced lending decisions.
Q2: How is AI transforming business practices in financial risk management, and can you provide a specific example?
Barbara Mace: AI is revolutionizing financial risk management, exemplified by companies like Delfi Labs Inc. in New York City. By leveraging AI and machine learning algorithms, Delfi can simulate thousands of scenarios to assess a bank’s balance sheet sensitivity to potential interest rate volatility. This technology automates financial optimization, making it computationally feasible to create ideal hedging strategies and value various financial instruments accurately. I am proud to currently serve on their Advisory Board.
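To make the idea of scenario simulation concrete, here is a minimal, purely illustrative sketch in Python. It is not Delfi's actual methodology; the positions, rate model, and figures are all hypothetical. It simply draws thousands of random interest-rate shocks and measures how the net present value of a toy balance sheet moves under each one.

```python
import numpy as np

# Illustrative only: simulate flat interest-rate shocks and measure the change
# in a simplified balance sheet's net economic value. All figures are made up.
rng = np.random.default_rng(42)

# Hypothetical positions: (cash_flow, years_to_maturity) for assets and liabilities.
assets = [(5_000_000, 2.0), (8_000_000, 5.0), (4_000_000, 10.0)]
liabilities = [(6_000_000, 1.0), (7_000_000, 3.0)]

def present_value(positions, rate):
    """Discount fixed cash flows at a single flat rate."""
    return sum(cf / (1 + rate) ** t for cf, t in positions)

base_rate = 0.04
n_scenarios = 10_000

# Draw thousands of rate shocks (here, normally distributed around zero).
shocks = rng.normal(loc=0.0, scale=0.01, size=n_scenarios)

def net_value(rate):
    return present_value(assets, rate) - present_value(liabilities, rate)

# Change in net economic value under each simulated rate scenario.
deltas = np.array([net_value(base_rate + s) - net_value(base_rate) for s in shocks])

print(f"Mean change in net value: {deltas.mean():,.0f}")
print(f"5th-percentile (stress) change: {np.percentile(deltas, 5):,.0f}")
```

In practice, systems like the one Barbara describes layer far richer rate curves, instrument models, and optimization on top of this basic scenario-simulation idea.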
Q3: What challenges arise from bias in AI related to training data, and how does it affect different sectors?
Barbara Mace: Bias in AI training data is a growing concern, as historical data often contain embedded biases from past human decisions. This bias can manifest in various sectors, including unfair lending practices, healthcare disparities, college admissions, recruitment biases, and criminal justice algorithms. Addressing these biases is crucial for ensuring fair and ethical AI applications in the future.
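As a rough illustration of how such bias can be screened for, the sketch below computes approval rates by group on a hypothetical loan-decision dataset and reports a simple disparate-impact ratio. The column names and data are invented for illustration, and this is only one coarse screening metric, not a full fairness audit.

```python
import pandas as pd

# Hypothetical loan-decision data; column names and values are illustrative.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group, and the ratio of the lowest to the highest rate.
rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
# Ratios well below ~0.8 (the "four-fifths rule" heuristic) often prompt closer review.
print(f"Disparate-impact ratio: {ratio:.2f}")
```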
Q4: How reliable is AI in predictive analytics, and what considerations should be taken into account?
Barbara Mace: AI-enabled technology has significantly improved predictive models, assuming reliable inputs and unbiased training data. However, performance is less certain in rare or unusual situations where historical data may not be informative. Trust in AI accuracy depends on the reliability of inputs and the ability to handle unforeseen circumstances.
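One practical way to act on that caution is to flag inputs that look unlike anything the model was trained on before trusting its prediction. The sketch below is a deliberately simple, assumed approach (a per-feature range check against the training data); real systems would use more sophisticated out-of-distribution detection.

```python
import numpy as np

# Minimal sketch: flag inputs outside the range observed in (synthetic) training data,
# where a model's prediction deserves extra scrutiny.
X_train = np.random.default_rng(0).normal(size=(1000, 3))

lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def out_of_range(x):
    """True if any feature lies outside the training data's observed range."""
    return bool(np.any(x < lo) or np.any(x > hi))

print(out_of_range(np.array([0.1, -0.3, 0.5])))  # typical input -> False
print(out_of_range(np.array([8.0,  0.0, 0.0])))  # extreme input -> True
```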
Q5: What opportunities and risks are associated with the use of AI in finance, and how are regulatory bodies responding?
Barbara Mace: The use of AI in finance presents opportunities for efficiency, improved credit decisions, and innovative solutions. Regulatory bodies like the Basel Committee on Banking Supervision and the Bank for International Settlements recognize benefits in areas like lending efficiency and money laundering detection. However, they also acknowledge risks such as model opacity, bias, and increased cyber threats. Concerns about potential financial fragility and herding behavior in stock markets have been raised, emphasizing the need for responsible AI use. President Biden’s Executive Order on AI in 2023 reflects a commitment to ensuring the safety and security of AI systems through standardized evaluations and risk mitigation measures.