Speaking at the MAB conference, Yuval Dvir, global director of online partnerships at Google, said that from a financial decision perspective, the “trust level to an algorithm is still not there”.
“Ultimately, I think for financial decisions the trust level to navigate them is still not there. You can ask questions [of the AI]…but when you make that financial decision you still want to make it in front of a person,” he said.
He continued that what AI could do was “augment” the broker’s capabilities, and that the “strength [of the broker] is building the relationship, the empathy, the trust and having that ease with those customers”.
Dvir explained that having technology in place would lead to more professional and accurate documentation and better data. With this, brokers could show a customer what other customers have done in similar circumstances, using a trove of data to back up their relationship and knowledge.
He noted that it could also open the door to “additional services”, as people looking for a mortgage may also be looking for other services such as insurance and protection.
“You need to find them [customers] in the right moment, maybe when they’re looking for a mortgage, that’s the right route for other stuff as well. If you can provide that complete package, based on the strong relationship that you have, then everybody stands to benefit,” Dvir added.
He continued that using AI would mean having a “co-pilot with you on everything that you do, whether you’re a physician, financial institution or broker”.
Data quality issues will require collaboration in the next few years
On data quality, Dvir noted that AI requires huge amounts of data, and poor input data can skew outcomes; current AI algorithms, he said, do not yet have solutions for ensuring they have the “right information”.
“If you take your own data…then I think that’s where you limit the probability of having the wrong information and the wrong result. But I think it’s a lengthy sort of process and it’s a learning process by the machine.
“That’s why it can’t fully replace [a human], but it can be augmenting the human, in some cases,” he added.
Dvir continued that developing AI would need to be a “collaboration together that will…get us through the initial few years”.
“After that, I imagine, most of those quality issues would be corrected, that we have developed a mechanism to ensure that sort of doesn’t happen. But in the next few years, as it’s a very new technology, I think it’s going to be more collaboration than trusting everything on it,” he noted.
AI ‘hype is real’
Dvir said that while there was “hype” around AI, it was “slightly different” as there were already applications for the technology, and it could be explained more easily than blockchain or cryptocurrency.
He pointed to an example involving neurosurgeons, where AI was being used to identify and minimise mistakes during surgery: videos were uploaded and then analysed by the AI algorithm, and when a surgeon performed a similar operation, warnings would flash up to highlight if a similar mistake might be made.
He continued: “It does look like it will move to a natural language capability to allow more people to interact with it, which doesn’t require a lot of knowledge. So, from that perspective, I think we’re seeing that really some work out in the market.”
Natural language capability means AI systems and computers can understand text and spoken words in the same way humans do, so someone wanting to use an AI algorithm could simply type a normal sentence rather than learn to code.
“There is still hype because hype is something that we always do as humans, but I think it [the hype] is real. I know that at Google, Microsoft and other companies that generative AI is front and centre and everybody talks about it, everybody is trying to capture and use it, and I don’t think there is a clear winner yet,” Dvir said.