How AI is impacting financial services and what it means for the future
In the week that Theresa May challenged health services to use Artificial Intelligence (AI) to pool medical data to identify the early stages of cancer, we take a look at how AI is impacting financial services and what it means for the future.
What it looks like now
A perfect storm of increasing cyber-crime and the consequent regulatory requirements, competition, customer expectations, technological capability and access to data has driven a rapid rise in the use of AI.
Financial services is leading the way in applying AI in business. It is being used across all functions, in areas such as assessing credit quality, analysing market impact for trading, predictive forecasting and automated client interaction.
In our world of Risk and Compliance, AI is helping to make huge advances, especially in surveillance, screening, credit decision-making and fraud detection. One of the most influential AI technologies is machine learning. This processes huge volumes of data from different sources (images, text or numerical data) and is able to flag both regularities and irregularities that can highlight important opportunities, and indeed threats, at a capacity and speed far beyond human capability.
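As a purely illustrative sketch of the idea behind this kind of irregularity-flagging, the toy example below picks out a transaction amount that deviates sharply from the rest of a payment stream using a robust "modified z-score". Real fraud-detection systems use far richer models and many more features; the function name, data and threshold here are invented for illustration.

```python
# Toy anomaly detector: flags values far from the median using the
# modified z-score (based on the median absolute deviation, MAD),
# which is robust to the very outliers it is trying to find.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return the amounts whose modified z-score exceeds the threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    # 0.6745 scales MAD to be comparable with a standard deviation.
    return [a for a in amounts
            if mad and abs(0.6745 * (a - med) / mad) > threshold]

# A stream of typical payments with one outlier slipped in.
transactions = [52.0, 48.5, 50.2, 49.9, 51.3, 47.8, 50.6, 5000.0]
print(flag_anomalies(transactions))  # -> [5000.0]
```

The point of the sketch is simply that, given enough data, a statistical model can surface the handful of irregular items a human would take far longer to find.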
What the future holds
The era of AI is still in its infancy and smart companies have to embrace and run with the advancement in order to stay ahead of the curve.
Steve Culp, Senior Managing Director of Accenture Finance and Risk Services, predicts the pairing of Virtual Reality (VR) with AI technologies such as robotics, natural language processing and machine learning to create an Extended Reality (ER) ‘where participants become part of a virtual eco-system and can interact and dissect data within their real-world field of view’.
This could see future Risk and Compliance Managers ‘walking through’ a 3D showcase of reports. Systems currently flag issues, but it then falls to the compliance team to look at a range of other sources to fully identify the size and scope of the situation and ultimately solve the problem. ER would be able to pull every piece of information together (traders, locations, history, associates) in one place, with easy-to-visualise graphical representations. Colleagues in other countries could even join the ‘environment’ for a closer level of interaction and discussion. The appropriate course of action still lies with the team, but the information they need to make that decision is more thorough, timely and clear.
The pros and cons
While this technology aids innovation, protection and growth, it is in itself vulnerable. The vast layers of interfaces, devices and connections involved present a greater surface area to be protected from cyber criminals and from glitches such as the Flash Crash of 2010. Technological advancement increases the speed at which tasks can be executed, but it also increases the speed at which risks can spread. Constant progression means the frontier of AI will always be in a kind of experimental phase, with the risks that entails.
There is also the threat of Big Tech firms entering the market, combining the knowledge they gain from banks’ use of their platforms with their own technological capabilities, and putting a major strain on competition. This would open up a whole other risk: the monopolisation of power.
Security and privacy will always be at the core of consumers’ needs, and with critical data and money at stake the role of cyber risk management is complex. Building resilience to a rapidly evolving host of threats has to be the priority.
What this means for regulation
The rising use of AI challenges traditional risk management thinking. Regulators, who have faced criticism in the past for reacting slowly to change, now face the challenge of responding quickly to the disruptive nature of technological advances. We can certainly expect technology and cyber risk to take an ever-increasing position in future regulation.
Financial services firms turning to AI to protect themselves from risk find themselves in a position where AI also puts them at risk. The speed of change means the impact is difficult to measure and control, and this unknown creates unease with regulators. A recent paper by consultancy firm Parker Fitzgerald warns that regulators could respond by forcing banks to hold even more capital.
Transparency is key. AI systems need to be able to show regulators how and why they have made decisions. The visibility of the analytical processes, and a human’s ability to understand and participate in them, have to be factored into their development.
Parker Fitzgerald also calls for ‘greater global coordination and standardisation’. Technology and transactions know no borders, so regulators are going to have to address how to provide a consistent international approach.
How will humans and AI work together and what does this mean for recruitment?
AI is going to be part of our future, and it will be able to fulfil roles currently undertaken by humans far quicker and more thoroughly. This doesn’t mean we should prepare for mass redundancies making way for an army of robots.
AI’s ability to learn and adapt is impressive, but the tools can only be as strong as the direction, data, instruction, checks, maintenance and upgrades they are given. For the foreseeable future at least, it is hard to see a world where humans won’t need to be the ones making decisions (outside of certain parameters) or checking them. Humans will also be needed to identify and guide what technology should be built to address new issues as they arise, and to offer a level of testing and fixing that sits outside of the potentially compromised technology itself.
AI will remove more clerical jobs but pave the way for a new set of skills and roles. I recently attended an event where David Craig, President of Thomson Reuters, predicted that within the next five years, half of all Chief Risk Officers and Chief Compliance Officers will have advanced academic backgrounds in data science and machine learning.
As the differences between what is working and what needs to be done differently become more apparent, firms are adjusting their workforce strategy. This is likely to include training, development, encouraging an innovative culture and partnering, as well as new recruitment requirements. Firms can also look beyond the financial services industry to the wider technology space, with practical experience of applying machine learning techniques becoming a key requirement. This will bring in a new generation of talent, and hirers will need to be mindful that the motivating factors that attract and retain them may shift.
To conclude
AI will only continue to evolve at an increasingly rapid rate, and it is for us humans to shape how this technology can work in harmony with a newly skilled workforce to strike the right balance of risk and reward for the benefit of society.
If you would like to add the skills required for AI development to your team, please get in touch.