The financial services sector doesn’t tend to do technology well. Factors such as poor collaboration between major players, excessive and often stifling legislation, and a mistrustful and fatigued user base leave the sector ripe for some good old-fashioned disruption.
Consumer-facing robo-advice has slowly become an entertaining, if somewhat tired, case study in Luddism. On one side are the innovators who recognise an opportunity to modernise financial advice by creating and catering for new acquisition channels, servicing clients more cost-effectively.
Online interaction continues to increase, and new purchasing and servicing behaviours continue to emerge and evolve.
On the other side are those who claim that people are neither interested in nor willing to transact in anything but a face-to-face scenario.
Certainly, while individuals are willing to embrace new interactions online, the lack of mass adoption of robo-advice services points to an unproven model.
From my rather low vantage point, robo-advice within the intermediary sector can be broken into two approaches: a provider offering a simplified product set where the user journey is controlled, or a more rounded solution that is abstracted away from the providers. For the latter, machine learning is bandied around with the promise of a more intelligent option that is closer to the advice element than to the far more unpleasant robo.
Unfortunately, machine learning is ultimately a reactive tool, basing its decision-making on a set of probabilities and confidence levels where, certainly in financial services, the decision endpoints are continually evolving. The key is the curation and structuring of the data, but getting that right is a science in itself.
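To make the point concrete, here is a minimal sketch of what "a set of probabilities and confidence levels" amounts to in practice. The function name, outcomes, and thresholds are all hypothetical, not from any real robo-advice system: the automated decision is just a probability compared against a cut-off, with anything below the confidence bar deferred rather than decided.

```python
def robo_decision(approval_prob: float, confidence: float,
                  min_confidence: float = 0.8) -> str:
    """Illustrative only: return an outcome when the model is confident
    enough, otherwise defer the case. Thresholds are arbitrary."""
    if confidence < min_confidence:
        return "refer"  # endpoint unclear: defer to a human
    return "approve" if approval_prob >= 0.5 else "decline"

print(robo_decision(0.72, confidence=0.91))  # confident -> "approve"
print(robo_decision(0.72, confidence=0.55))  # unsure -> "refer"
```

The "refer" branch is where the continually evolving endpoints bite: as products, regulation, and client behaviour shift, the proportion of cases the model can confidently decide shrinks unless the underlying data is re-curated.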
But what happens when it doesn’t work? When we disagree? Or when we’ve been unfairly judged by the much-vaunted algorithm?
Shake and bake
Help is at hand from the General Data Protection Regulation, due in May 2018. (It’s worth noting that 2018 is dropping a load of legislation affecting the fintech sphere; more on that another time.)
Within the regulation is the right to object to automated decision-making, where there should be “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”.
Which, in my very unqualified opinion, means that robo needs to bake in the ability to qualify and justify its decisions, along with the ability to provide alternative methods of servicing. Why? Because software works in absolutes, and there’s always a goodly portion of grey between the black and the white.
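One way to picture what "baking in" those safeguards might look like is a decision record that carries its own plain-language justification and an escalation path to a human reviewer. This is a hypothetical sketch, not a reference to any actual GDPR tooling; the class and field names are mine.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Decision:
    """Hypothetical automated-decision record with a human-override path."""
    outcome: str
    reasons: List[str] = field(default_factory=list)  # justification shown to the subject
    contested: bool = False
    human_outcome: Optional[str] = None  # set only by a human reviewer

    def contest(self, reviewer_outcome: str) -> None:
        """The subject objects; a human reviews and decides."""
        self.contested = True
        self.human_outcome = reviewer_outcome

    def final_outcome(self) -> str:
        return self.human_outcome if self.contested else self.outcome

d = Decision("decline", ["debt-to-income ratio above limit"])
d.contest("approve")
print(d.final_outcome())  # the human intervention prevails: "approve"
```

The point of the sketch is structural: the recorded reasons give the "qualify and justify" part, and the contest path keeps a human, not the algorithm, as the final arbiter of the grey cases.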