
Given regulatory uncertainty, banks will take tentative steps to embrace AI


Michael Barr, vice chair for supervision at the Federal Reserve, said Tuesday at the DC Fintech Week conference that artificial intelligence could offer banks new efficiencies, but noted that “risks in terms of fairness, in terms of concerns about market manipulation and in terms of efficiency on the other side” make regulators cautious about what applications they might allow for AI.

Bloomberg News

WASHINGTON — Experts say the ever-evolving use cases for generative artificial intelligence could spur banks to ramp up their adoption of the technology exponentially in coming years. But vague regulatory concerns about the technology may hamper that adoption until there is some clarity about what is permitted and what is not.

Federal Reserve Vice Chair for Supervision Michael Barr noted in remarks at the DC Fintech Week conference Tuesday that the Fed is watching how financial institutions use generative AI in trading and deliberating over its potential trade-offs.

“Today, most algorithmic trading is very widespread as a form of using machine learning to engage in trading activities,” he said. “That can generate, again, benefits in terms of efficiencies, but also risks in terms of fairness, in terms of concerns about market manipulation and in terms of efficiency on the other side, so we just have to watch — in a very long-term way — the potential risks and benefits of artificial intelligence.”

Acting Comptroller of the Currency Michael Hsu also spoke at the conference, and said he believes the lively public discussion of AI in the financial industry may be getting ahead of the effective reality on the ground at banks.

“To the extent that banks are engaging or have been engaged — especially on machine learning — it’s really been a crawl, walk, run approach, with good controls, really focused on use cases that make sense and that meet the safety and soundness and fairness standards that we have,” he said. “We expect banks to continue to take that approach because it seems like the right one.”

Davis Polk partner Gabe Rosenberg, who helps banks and other institutions adapt to the changing AI regulatory environment, said that until regulators fully understand the back-end mechanics of AI models and their uses, they will be unlikely to give their blessing.

“If you think about regulation and examinations, generally they are a study of what financial institutions do and how to incentivize them to do the right things, but if you don’t really know why they’re doing what they’re doing, that becomes really difficult,” he said. “I think there are concerns that the AI would be making decisions based on the wrong things, like that they’d be looking at correlation between events rather than causation.”

Clifford Chance lawyer Young Kim agreed, saying that regulators will need more clarity around the mechanics of generative AI models before those models can gain widespread adoption.

“The ability to explain how an AI model arrives at a particular result is a foundational question that needs to be answered before a company can build a proper risk management framework around its usage,” he wrote in an email. “We think this is one of the primary reasons why AI deployment has been primarily relegated to back-office automation — [for example,] complaints management — or front-end applications [such as] chatbots rather than across the entire bank supply chain.”

Kim also raised another widely discussed concern for policymakers: the potential for AI to replicate human biases, something he thinks regulators are still wrapping their heads around.

“The banking agencies have paid token acknowledgment to the benefits and use cases of AI but remain principally focused on the risks they pose to consumers and the banking system more generally,” he wrote in an email. “Of particular concern is the potential for AI to be used for credit decisioning and account onboarding in a manner that heightens existing biases and furthers the exclusion of underbanked groups.”

Hsu said during the conference that while AI holds plenty of opportunity, regulators are eyeing various risks that come with predictive models trained on existing data, including bias, discrimination and consumer protection issues.

“Those [concerns] are very live, and the banks know that, supervisors know that,” he said.

“It’s a prediction machine … based on historical data and so to the extent that that historical data has biases in it, we want to make sure that we’re not just moving into a world that’s locked everybody into the past. That’s why we focus on things like explainability and governance.”

Rosenberg noted the potential that a wide swath of banks make similar decisions as a result of relying on similar AI models — a phenomenon often…


