Many of the world’s biggest banks lack transparency in how they are developing the artificial intelligence (AI) that could be used in future decision-making and risk management.
A study of the 23 largest banks in Europe, the US and Canada found eight that have not publicly reported their responsible AI principles.
Benchmarking company Evident assessed banks in four areas of responsible AI, using public data to create its AI Index. It looked at the banks’ creation of AI leadership roles, publication of ethical principles, collaborations with other organisations and publication of original research.
Evident CEO Alexandra Mousavizadeh said AI could provide better risk management and decision-making across the global banking sector. But she added that it was vital for banks to develop AI in a way that meets high ethical standards and minimises unforeseen consequences.
“Our research found a worrying lack of transparency around how AI is already used – and how it may be used in the future – which could damage stakeholder trust and stifle progress,” said Mousavizadeh. “In this highly regulated sector, the reality is that many institutions are taking proactive steps to address AI concerns and developing internal programmes to address responsible AI.”
AI is already used by banks to authenticate customers, to sift through data far more quickly than humans can, and to model risk.
“The problem is that there is no standard for responsible AI reporting, and many banks withhold the details of their efforts,” added Mousavizadeh. “At this critical time for the sector, the banks need to show leadership and start reporting publicly on their AI progress.”
With the collapse of Silicon Valley Bank and Credit Suisse’s troubles, decision-making in the banking sector is under the spotlight.
The Evident research found that European banks were the least transparent in reporting their policies for responsible AI.
JPMorgan Chase, Royal Bank of Canada and Toronto-Dominion Bank were the only researched banks found to have “a demonstrable strategic focus on transparency around responsible AI”, according to the Evident report. Each of these banks could show evidence of creating specific responsible AI leadership roles, publishing ethical principles and reports on AI, as well as partnering with relevant universities and organisations.
The report said European banks differed from North American banks in how they chose their AI teams. Banks in North America are more likely to hire specific responsible AI roles, usually from Big Tech firms, whereas European banks more often lead responsible AI within their data ethics teams.
Evident co-founder Annabel Ayles said: “European banks which view responsible AI through a lens of data ethics, potentially due to the dominance of General Data Protection Regulation legislation, are perhaps missing a trick by not creating AI-specific roles and thinking holistically about the broader risks posed by AI.”