More than two-thirds of UK investors are comfortable with artificial intelligence (AI) being used in investment decision-making, according to research by Avaloq.
Only 15% of respondents said they would be comfortable with an entirely AI-driven analysis of their portfolios, but more than half were happy with a blended approach combining AI and human involvement.
“Our research reveals that investors are more open to using AI in the investment process but still want the human touch, indicating natural opportunities for wealth managers to integrate AI into their offerings in a way that augments the service they provide,” Gery Zollinger, head of data science at Avaloq, said.
A separate study by CoreData found that 32% of UK financial advisers think AI will “revolutionise” the sector, a figure that rises to 40% among advisers focused on high-net-worth clients.
Lingering concerns
It is no surprise, then, that three in ten advice firms say they will be “competitively disadvantaged” if they don’t embrace the technology.
Lingering concerns remain, however. Some 42% of advisers believe AI poses serious risks for advice firms in terms of client confidentiality and data protection, and over a third do not trust the information it produces.
An emerging technology, known as explainable AI, may help quell these fears.
Explainable models?
Explainable AI, or XAI, allows users to understand and trust the results produced by machine learning algorithms. It stands in contrast to the ‘black box’ models that are currently the norm.
‘Black box’ AI offers no transparency: not even the engineers or data scientists who create the algorithm can explain exactly what is happening inside it or how it arrived at a specific result.
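To make the contrast concrete, the sketch below is a minimal illustration using Python’s scikit-learn library on entirely synthetic data; the portfolio feature names and figures are illustrative assumptions, not drawn from Avaloq’s or CoreData’s research. It shows how an interpretable model exposes the reasoning behind its predictions, and how a model-agnostic technique such as permutation importance can probe even a black-box model.

```python
# Minimal sketch of the idea behind explainable AI (XAI).
# The feature names and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = ["equity_weight", "bond_weight", "volatility", "fees"]
X = rng.normal(size=(500, 4))
# Synthetic "portfolio return": driven mainly by equity weight and fees.
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)

# An interpretable model exposes its reasoning directly: each coefficient
# states how a one-unit change in a feature moves the predicted return.
for name, coef in zip(features, model.coef_):
    print(f"{name}: coefficient {coef:+.2f}")

# Permutation importance is a model-agnostic explainability technique:
# it measures how much predictive accuracy drops when one feature's
# values are shuffled, so it can probe even a black-box model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```

The same principle applies at production scale: XAI tooling attaches this kind of per-feature reasoning to whatever model an advice firm deploys, rather than asking clients to accept its output on faith.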
In a sector as highly regulated as financial services, transparency is essential. XAI could go a long way towards building trust in the industry’s use of AI.