We’re in a bold era for the rise of InsurTechs. But does that rise mean the fall of insurers? Emphatically, no. By working together, insurers and InsurTechs can better detect fraud and proactively address emerging forms of claims and investigative bias.
The Problem With Fraud
Insurance fraud costs the average American $400–$700 per year[1] and the United States an estimated $80 billion overall.[2] Fraud is an issue of enough significance that 21 percent of insurers plan to invest in fraud prevention technology within the next two years.[3] In our experience at Owl, insurers with lines of business that cover bodily harm typically detect only 1 to 2 percent of the fraud on their books, yet our client relationships have shown that fraudulent claims can make up nearly 10 percent of an insurer’s claims block. Bridging this gap is a critical initiative and a noteworthy opportunity.
The Problems With (Human) Fraud Detection
Daniel Kahneman, the Nobel Prize–winning behavioral economist, argues that human judgment is far noisier than organizations assume: equally qualified professionals, given the same facts, reach widely varying conclusions. In 2015, Kahneman presented cases to 48 underwriters at a large insurance company to gauge the prices they would charge. Executives predicted a 10 percent variance between high and low quotes; the typical variance in the study was 55 percent.[4]
Human judgment in claims can be just as subjective as it is in underwriting. One of the most egregious ineligibility cases unearthed during an Owl client engagement involved a claimant whose adjuster had noted in the file that the claimant remained eligible and should not be investigated. External data analytics, however, found that the claimant ran a YouTube channel with 140,000 subscribers and an associated e-commerce business generating about $40,000 a month.
How Insurers Are Fighting Bias
One significant study found that insurers worldwide spend roughly $20 billion annually on advanced analytics.[5] Much of that spending goes toward predictive models, which use historical data to find correlations and forecast outcomes.[6] In insurance, that often means predicting potential losses.[7]
According to a senior executive at Chubb: “There is definitely an error rate, but the important question is not, ‘Is the AI [artificial intelligence] model 100 percent accurate?’ But, ‘Is it more accurate on a relative basis than the traditional manual processing of data?’ The answer is, ‘Absolutely.’”[8]
While predictive models may outperform traditional manual processing at scale, they have a concerning flaw: they can absorb biases from their training data, leading insurers to unfair and inaccurate decisions.
As mentioned, predictive models identify correlations. Experts at Everest Re argue that spurious correlations could result in an illegal practice such as “redlining,”[9] a racist lending doctrine the U.S. government’s Home Owners’ Loan Corporation used to systematically deny loans to Black-majority neighborhoods between 1934 and 1968.[10] Because of the vast amount of information incorporated into predictive models, the company argues, they may unintentionally include variables such as location. These could produce correlations that result in discriminatory practices, such as investigating low-income areas more heavily.
Further, the sheer volume of information predictive models require makes them difficult to audit, and insurers rarely have the access needed to perform one. Many insurers use third-party predictive models, and those vendors will often not allow their proprietary models to be audited: that level of access is akin to giving away the secret sauce, and most vendors would sooner walk away from an engagement than allow that degree of transparency.
Insurers are beginning to appreciate the reputational risk that predictive, score-based models can introduce.
Despite these risks, claimants will continue to malinger, and fraud will continue to occur. To stem the flow of ineligible claims, leading insurers are turning to a different kind of model.
How Insurers Can Control for Bias and Detect Fraud
Insurers are turning to deterministic, evidence-based models to complement or sense-check their predictive ones. While they can take different forms, deterministic models “always produce the same output from a given starting condition.”[11] Predictive models show lower variance in their judgments than humans do; deterministic models show none.
Deterministic models can function in a variety of ways. For example, Owl’s model works by surfacing evidence from public data sources. By only searching for publicly accessible data and not scoring based on biased data points, such as income level in various neighborhoods, Owl does not factor in protected class information and can produce an unbiased risk score.
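To make the contrast concrete, here is a minimal sketch of a deterministic, evidence-based risk score: each rule fires only on observed behavior, adds a fixed number of points, and the same evidence always yields the same score. The rules, weights, and field names are illustrative assumptions, not Owl's actual model.

```python
# Deterministic rule-based scorer: no randomness, no learned weights.
# Each rule is (description, predicate, points); rules key off observed
# behavior, never off who the claimant is. All rules here are hypothetical.

RULES = [
    ("claims total disability but operates an active business",
     lambda ev: bool(ev.get("claims_total_disability")
                     and ev.get("runs_active_business")), 50),
    ("public posts show activity inconsistent with the claimed injury",
     lambda ev: bool(ev.get("posts_inconsistent_with_injury")), 30),
]

def risk_score(evidence):
    """Sum the points of every rule whose predicate matches the evidence.
    Same input, same output, every time."""
    return sum(points for _, pred, points in RULES if pred(evidence))

claimant = {
    "claims_total_disability": True,
    "runs_active_business": True,   # e.g., a monetized YouTube channel
    "posts_inconsistent_with_injury": False,
}
print(risk_score(claimant))  # 50, and 50 again on every rerun
```

Note that nothing about the claimant's location, income, or demographics enters the score; only publicly observable behavior does. That is what keeps a score like this auditable line by line.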
A useful rule of thumb for insurers evaluating new third-party models: Does this model care about what people do rather than who they are? If the vendor’s model fails this test, there is likely far more reputational risk in the model than the vendor is letting on.
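One mechanical way to apply this rule of thumb is to screen a vendor's disclosed feature list for attributes that describe who a claimant is rather than what they do. The denylist below is a small illustrative sample; a real review would go further and hunt for proxies (e.g., ZIP code standing in for race or income).

```python
# Sketch of the "what people do, not who they are" test applied to a
# vendor's disclosed feature list. The denylist is illustrative only.

IDENTITY_FEATURES = {
    "zip_code", "income_bracket", "age", "gender", "race",
    "marital_status", "neighborhood",
}

def identity_features_used(model_features):
    """Return, sorted, the disclosed features that describe who a
    claimant is rather than observed behavior."""
    return sorted(set(model_features) & IDENTITY_FEATURES)

vendor_features = ["zip_code", "claim_amount",
                   "posts_inconsistent_with_injury"]
print(identity_features_used(vendor_features))  # ['zip_code']
```

A non-empty result does not prove the model is biased, but it tells the insurer exactly which features warrant a justification from the vendor.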
Insurers should also gauge how much transparency vendors are willing to provide about their models. If a vendor claims to have a deterministic model but will not let insurers see how its scores are calculated, the model is functionally a black box and no better than the predictive models to which it claims superiority. Without that transparency, insurers will hit the same cultural roadblocks they encountered when adopting predictive models—namely, users ignoring findings and going with their gut.
Increased transparency does not need to mean decreased effectiveness. Owl’s deterministic model has proven to be about five times more effective than some of the industry’s best claim monitoring processes. The return on investment (ROI) of deterministic models is high, and it is even higher if insurers are confident that the conclusions are unbiased.
Building on the Best Aspects
The “rise of the InsurTechs” necessitates neither the fall of insurers nor the rejection of intuition. A second opinion is always a good thing. Experts and machines must work closely together, drawing on both data science and human intuition, with neither dominating nor overruling the other.
By building the best aspects of human intuition and predictive and deterministic models into their workflows, insurers can succeed in paying the claims they owe and not paying the claims they don’t. Structuring their work in this way also puts insurers in the best position to stay ahead of advancing government regulations and retain trust by protecting their policyholders.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
1. Federal Bureau of Investigation. Insurance Fraud.
2. Coalition Against Insurance Fraud. Fraud Stats.
3. Ibid.
4. Carroll, Paul. Our Big Problem With Noise. InsuranceThoughtLeadership.com, May 17, 2021.
5. Banham, Russ. Down the Rabbit Hole. Leader’s Edge, July 20, 2020.
6. McKeon, Christopher. The Promise of Predictive Models. InsuranceThoughtLeadership.com, June 1, 2021.
7. Ibid.
8. Supra note 5.
9. Supra note 6.
10. NPR. Housing Segregation and Redlining in America: A Short History. YouTube, April 11, 2018.
11. Deterministic System. Wikipedia.
Copyright © 2022 by the Society of Actuaries, Schaumburg, Illinois.