2021 marked a year of heightened dependence on technology. As companies shifted to remote working, some tasks switched to being run automatically, often with Artificial Intelligence coming into play. However, artificial intelligence doesn’t automatically know what to do; it must be programmed to perform tasks correctly, appropriately, and sometimes better than a human employee attempting the same task. AI must follow specific rules, guidelines, and a framework to best help human employees in businesses big or small. According to Deutsche Telekom, AI must be responsible, careful, supportive, transparent, secure, reliable, trustworthy, cooperative, and illustrative.
Furthermore, guidelines issued by the EBA and the FCA, along with the requirements of the SMCR, support businesses that use AI and help ensure operations are safe, secure, and beneficial.
EBA
At the end of 2020, the EBA released its latest annual Risk Assessment Report, which found that Artificial Intelligence technologies had been adopted by 64% of European banking institutions. Over the past two years, European banks have continued to invest in AI and big data analytics. The same report found that 12% of EU banks have moved from pilot testing and early stages of development to the official implementation of AI tools in financial services. While these statistics may surprise some, professionals expected them: the pandemic accelerated a digital transformation, not just in the EU but around the world, that was already underway before COVID-19.
Banks compete not only with one another but also with the entry of GAFA (Google, Apple, Facebook, and Amazon). The acceleration in 2020 also changed financial institutions’ budgets: 60% of EU banks increased funding for digital innovation, the development of new technologies, and partnerships with tech companies. This began a rapid roll-out of advanced, AI-powered financial solutions.
As demand continues to grow, the EBA has set forth guidelines for enhanced security and trustworthiness when banking institutions use AI. Those who use AI are also expected to follow ethical principles such as respect for persons and prevention of harm. AI systems should also be “explainable,” which brings us to the definition of “explainable AI.”
Defining AI is anything but transparent and straightforward, which is why the EBA uses the term “explainable AI.” The definition is simple: an AI model is explainable when it is possible to create explanations that allow humans to understand how the results were reached and on what grounds the answer is based.
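As a concrete illustration, consider a simple linear credit-scoring model, which is explainable in exactly this sense: every feature’s contribution to the result can be listed for a human reviewer. The sketch below is minimal and hypothetical; the feature names, weights, and threshold are invented for illustration and do not come from any EBA material.

```python
# Minimal sketch of "explainable AI" for a linear credit-scoring model.
# All feature names, weights, and the threshold below are hypothetical.

FEATURES = {"income": 0.4, "debt_ratio": -0.6, "years_at_bank": 0.2}
BIAS = 0.1
APPROVAL_THRESHOLD = 0.5


def score(applicant: dict) -> float:
    """Weighted sum of normalised applicant features plus a bias term."""
    return BIAS + sum(w * applicant[name] for name, w in FEATURES.items())


def explain(applicant: dict) -> list[tuple[str, float]]:
    """Per-feature contributions, largest impact first, so a human
    reviewer can see exactly which inputs drove the result."""
    contributions = [(name, w * applicant[name]) for name, w in FEATURES.items()]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)


applicant = {"income": 0.8, "debt_ratio": 0.9, "years_at_bank": 0.5}
s = score(applicant)
print(f"score={s:.2f} -> {'approve' if s >= APPROVAL_THRESHOLD else 'decline'}")
for name, contribution in explain(applicant):
    print(f"  {name}: {contribution:+.2f}")
```

A real bank model would be far more complex, but the regulatory bar is the same: the institution must be able to produce this kind of human-readable account of how a decision was reached.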
The EBA guidelines center on trust, transparency, and security. Banking clients should never have to doubt those three aspects, and they should be able to ask questions and understand what processes are taking place. The need for explainability is much higher when the decisions being made have a direct impact on customers. With the inevitable progression of technology, explainability will play a vital role in the ongoing competition between institutions. Click here to learn how Druid’s AI-driven chatbots can help to create a cognitive banking experience for customers.
FCA
In 2019, the FCA and the Bank of England published a report on machine learning (ML) in the UK financial services sector. This joint report assessed the ongoing adoption of AI and ML by UK financial institutions and considered the risks and industry views regarding their progression.
While the report did not outright create new regulations for AI and ML, it did call for improved guidelines. It acknowledges that the ongoing use of ML and AI could pose significant risk to financial institutions that are not adequately prepared. Specifically, there must be a regulatory framework that treats the use of ML as a factor affecting the nature, scale, and complexity of a firm’s activities. Tying into the EBA guidelines, the FCA also recommends a standard methodology and an “explainability” requirement for any processes involving ML and/or AI.
It is up to each organization to use ML or AI in accordance with all rules and regulations, and the FCA has made it very clear that this must be a top priority. It also stresses that institutions should be transparent with clients about their use of ML or AI in any decision-making process, and about whether ML or AI makes decisions on behalf of a customer or the institution itself.
The FCA report contained a vast amount of information and answered various questions regarding the ongoing progression of organizational use of AI or ML. Below are several highlights from the report:
- Broad Adoption of AI and ML: Many customers are already familiar with AI and ML in some form. The two are also being used for front-office tasks, not just tedious back-office ones. For more examples of use cases for conversational AI in banking, click here.
- Risks: Organizations that are still hesitant to adopt AI or ML fear risks such as poor data quality, lack of explainability, or poor performance.
- Risk Management Framework: Most businesses govern their use of technology through risk management frameworks, while some have established dedicated committees. The report notes that risk management frameworks should evolve just as the technology does.
- Minimum explainability: As requirements and regulations expand, many businesses find it challenging to explain their models’ decisions to customers.
- Human Intervention: Some form of human oversight serves as a risk management tool for firms. Should the machine flag something as wrong, a human employee is alerted; it is essential to have a human who can step in and put things right when they go wrong (see the sketch after this list).
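To make the human-intervention point concrete, here is a minimal human-in-the-loop sketch: decisions the model is unsure about are routed to a human reviewer instead of being finalized automatically. The confidence threshold, case IDs, and queue are hypothetical illustrations, not part of the FCA report.

```python
# Minimal human-in-the-loop sketch: decisions the model is unsure about
# are escalated to a human reviewer rather than finalised automatically.
# The threshold, case IDs, and queue are hypothetical illustrations.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # below this, a human must review


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def escalate(self, case_id: str, prediction: str, confidence: float) -> None:
        # Alert a human employee; here we simply queue and log the case.
        self.pending.append((case_id, prediction, confidence))
        print(f"ALERT: case {case_id} needs human review "
              f"({prediction} @ {confidence:.0%})")


def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                  # confident enough to automate
    queue.escalate(case_id, prediction, confidence)
    return "pending_human_review"          # uncertain: hand off to a human


queue = ReviewQueue()
print(decide("A-101", "approve", 0.97, queue))  # -> approve
print(decide("A-102", "decline", 0.62, queue))  # -> pending_human_review
```

In production, the escalation step would feed whatever alerting and case-management tools the firm already uses; the essential design choice is that uncertain automated decisions never complete without a human in the loop.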
SMCR
The Senior Managers and Certification Regime (SMCR) replaced the Approved Persons Regime (APR) for banks, building societies, credit unions, and dual-regulated investment firms. The SMCR requires each senior manager to fully understand what is expected of them and to take responsibility for any compliance breach within their area of responsibility, unless they can adequately prove that they took all reasonable steps to prevent it.
One of the main goals of the SMCR is to ensure that specific responsibilities within institutions are more clearly defined and transparent. The SMCR also requires all staff members to be trained on the technologies (AI and ML) the organization is using; they must fully understand the code of conduct and how it affects their roles. Furthermore, organizations must provide detailed references for staff who leave the firm to work elsewhere. Record keeping and transparency are vital to the SMCR.
SMCR Requirements
- Enhanced firms have 17 Senior Management Functions, each with defined responsibilities, as well as some responsibilities that all staff are required to uphold.
- Core firms have 6 Senior Management Functions: 4 governing functions (Chief Executive, Executive Director, Partner, and Chair) plus 2 other required functions, typically Compliance Oversight and Money Laundering Reporting Officer.
- Limited firms have 3 Senior Management Functions that depend on the specific activities and permissions the business requires; no prescribed responsibilities apply to limited firms.
A commonality among the EBA, the FCA, and the SMCR is the strong need for transparency. AI and ML are no longer just for high-tech scientists and computer programmers; these technologies are used daily by the businesses we rely on (like banks). We as consumers have a right to know what is happening behind the scenes, to be informed in ways we can understand, and to make educated decisions.