The next generation of fraud identification
Fraud Aware Issue 3
The implementation of the Claims Portal and the LASPO Act has fundamentally changed the way in which personal injury claims are handled, and as a consequence many insurers are re-evaluating their approach to tackling insurance fraud. Keoghs believes the key to this is simple: by adopting a ‘find, focus, fight’ approach, compensators should be able to identify, validate and pursue the right cases. By doing so, only cases with a good prospect of success will be pursued. Crucially, processes must be quick and efficient, allowing insurers to make the right decisions at the right time and enabling them to take advantage of this new environment.
Dene Rowe, Director of Product Development at Keoghs, peers into the world of fraud identification and explains how the changes in this fast moving environment were the driving force behind Keoghs’ latest innovation, Advanced Data Analytics (ADA).
ADA as a solution started life in 2012. At that stage the Portal was very much in its infancy and LASPO, whilst in existence, had not yet made its presence felt. I remember internal discussions at the time speculating as to what the impact of these changes would be on counter-fraud. These discussions, whilst wide-ranging, had a common theme: compensators would need to make sure their decision making was quick, accurate and economic. The world of fraud detection was changing fast, with big advantages for those who could react quickly.
We also identified that the solutions helping the market to identify fraud would need to adapt to satisfy these criteria. Whilst the solutions had (and still have) their redeeming features, and many had matured with the fraud identification market, they were focussed on just that: identification. In the old world, counter-fraud measures always started with a ‘see how big you can get your funnel’ approach to fraud referrals, but in this new world, is that still the case?
The core question became ‘how can these tools improve speed, accuracy and cost-effectiveness when they offer little in terms of validation and investigation?’ More importantly, we began to focus on how we could help compensators become more efficient in validating referrals, eliminating the waste of investigating false positives and helping them focus their time on higher-risk claims.
We had heard comments in the market about compensators who had reached a stage where they were pouring ever more counter-fraud resource into investigating a burgeoning number of potential referrals from fraud ID systems. Rather than focussing on fighting fraudulent claims, their time was being spent containing false-positive rates that were fundamentally too high… and we therefore asked three key questions:
1) How does this approach stack up in a world where speed and accuracy are king?
2) Why is this the case when surely the investment should be in driving positive counter-fraud outcomes?
3) Does anyone calculate the true cost of fighting fraud?
It was against this backdrop that ADA was conceived – but that story is for another day!
All roads therefore led to one conclusion – the changes in the market demanded better identification technology and in turn, the thing that drives it all, data.
The problem with data, though, is that it is often misunderstood. So in a world of ‘big data’, how do we get better data: the right data, data that can actually help identify fraud rather than hinder it by creating the burden of eradicating the ‘noise’ of useless information?
Luckily the answers are out there, and they lie with technology and data providers. Technology to consume ‘big data’ has developed apace and, most importantly, there is now a recognition that in future a single system mining its own data will not cut it on its own without significant resource to filter the results – not when data integration and syndication are now commonplace. In simple terms, for personal injury fraud identification, immense value can be obtained if fraud identification and data matching rules are supplemented by a ‘call-out’ to a trusted automated identification or address validation system, an automated CUE check and, most importantly, a check against a data set of known fraud outcomes.
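As a rough illustration only – the function names, fields and risk bands below are hypothetical stand-ins, not Keoghs’, CUE’s or any provider’s actual API – the kind of automated supplementary checking described above might be sketched like this:

```python
# Hypothetical sketch of an automated enrichment pipeline. Each check
# is stubbed; in a live system each would be a 'call-out' to the
# relevant trusted external service.

def validate_address(claim):
    # Stand-in for an automated address/identity validation call.
    return claim.get("address_known", True)

def cue_prior_claims(claim):
    # Stand-in for an automated CUE (Claims and Underwriting
    # Exchange) check for prior claims history.
    return claim.get("prior_claims", 0)

def known_fraud_hit(claim):
    # Stand-in for a lookup against a known-fraud outcomes data set.
    return claim.get("linked_to_known_fraud", False)

def enrich_claim(claim):
    """Run every routine check up front, so the handler receives one
    enriched record instead of running a series of manual searches."""
    checks = {
        "address_valid": validate_address(claim),
        "cue_prior_claims": cue_prior_claims(claim),
        "known_fraud_link": known_fraud_hit(claim),
    }
    # Illustrative triage only: a known-fraud link is high risk; a
    # failed address check or prior claims history warrants review.
    if checks["known_fraud_link"]:
        risk = "high"
    elif not checks["address_valid"] or checks["cue_prior_claims"] > 0:
        risk = "review"
    else:
        risk = "standard"
    return {**claim, "checks": checks, "risk_band": risk}
```

The specifics are invented, but the shape is the point: the claim arrives in the handler’s inbox already enriched and banded, with the routine searches done.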
In practical terms, imagine a world where this was all done automatically, so that when a claim arrives in a counter-fraud handler’s inbox they are presented with all this information from which to make a decision. Remember, making accurate and timely decisions is what this new world is about, not consuming valuable claims handler time conducting a series of subsequent routine searches. Eradicating routine through automation leads to one thing – more time to drive the right cases to the right outcomes.
But what next?
Well, it’s straightforward. Firstly, why go through a secondary counter-fraud system overnight when surely the nirvana is to have real-time validation checks as the claim is entered into the claims system at FNOL or at a later stage? Is it beyond technology to click ‘save’ on a new claim record, for example in a Guidewire or ICE system, and have it call out to a specialist counter-fraud system, via the magical use of ‘web services’ and ‘XML’, to get an instant response that directs the claim down the right workflow channel, furnished with all that lovely quality data? The answer is no, of course it’s not beyond the capability of what is available.
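Mechanically, the ‘save’ hook described above is just a synchronous call-out and a routing decision. A minimal sketch, with the caveat that the payload fields, response codes and channel names are all hypothetical and this is not Guidewire’s or ICE’s actual integration API:

```python
# Hypothetical FNOL save hook: serialise the new claim as XML, call
# out to a counter-fraud web service, route on the instant response.
import xml.etree.ElementTree as ET

def build_payload(claim):
    # Serialise the new claim record as XML for the web-service call.
    root = ET.Element("claim")
    for key, value in claim.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def route_on_save(claim, transport):
    """On 'save' at FNOL, call out for an instant risk response and
    pick the workflow channel accordingly. 'transport' stands in for
    the real web-service client; live, it would POST the XML to the
    counter-fraud endpoint and return its response code."""
    response = transport(build_payload(claim))
    channels = {"clear": "fast_track", "refer": "counter_fraud_team"}
    # Anything unrecognised falls back to manual review.
    return channels.get(response, "manual_review")
```

With a stubbed transport, `route_on_save({"ref": "C1"}, lambda xml: "refer")` would route the claim to the counter-fraud team; the claims system never waits on an overnight batch.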
Secondly, would it not also be sensible to use the same technology before the policy is taken out, not just after the incident, to avoid frauds where the policyholder may be ‘in on it’? Yes, of course it would. Does the technology exist? Well, you know the answer. So where does this lead us in our quest to understand the next generation of fraud identification? I guess it’s two-fold:
1) In terms of quality data, it prompts the memory of a conversation with a previous mentor: “Dene, the secret is not you knowing all the answers, it’s you knowing the right questions to ask of the right people at the right time that matters.”
2) The technology to penetrate ‘big data’ exists, the technology to integrate with external specialist systems exists and more importantly there are many providers of quality data who make it efficient and easy to get this – so why not use it?
And I guess that brings us back full circle. What does all this mean to Keoghs? Well, that is simple: that is what ADA is about.