Reaping the Benefits of AI While Avoiding Unfair Bias

Applications in DeFi, InsurTech and traditional insurance

By R. Dale Hall


Artificial intelligence (AI) seems to affect almost every facet of our lives. From algorithms that recommend movies and online purchases, to automated safety features in cars and programs that help diagnose diseases, AI has the potential to make our lives more convenient, safer and healthier. At the market level, AI’s influence spans many industries, helping to make companies more productive and efficient. In fact, a PricewaterhouseCoopers study shows that AI could contribute as much as $15.7 trillion to the global economy by 2030.

In the world of finance and insurance, AI has supported the evolution of decentralized finance (DeFi) and InsurTech, and traditional insurance is reaping the benefits of this technology as well. However, AI isn’t without risks—threats to data privacy and security along with data quality issues quickly come to mind. Unfair bias is yet another. According to Avoiding Unfair Bias in Insurance Applications of AI Models, a report from the Society of Actuaries (SOA) Research Institute, this is a risk with far-reaching consequences for large portions of society. It is important to look closely at how unfair bias occurs and how it can be prevented.

Automated Processes and Decision-Making

AI has played a role in the evolution of DeFi, a financial system powered by blockchain that acts as a secure, decentralized record of digital transactions. Over the last two years, DeFi’s growth has exploded, with more than $245 billion deposited in DeFi applications by the end of 2021. As DeFi continues to evolve, implementing more AI models promises to considerably improve its services, such as providing fraud protection and enabling smart contracts to generate a range of recommendations.

AI also is affecting InsurTech companies, including digital entities that provide insurance protection. Two examples are the companies Bought By Many, which offers pet insurance, and Lemonade, a peer-to-peer home, rental, car, life and pet insurance provider. There are many more InsurTech companies making headlines, and the number of startups that have launched initial public offerings (IPOs) points to this segment’s success. Despite the pandemic, 20 InsurTech companies launched IPOs in 2020, according to a survey on InsurTech and venture capital. And most respondents forecast between 10 and 19 additional InsurTech IPOs in 2022. AI, which helped spur InsurTech’s development, will play a role in its continued growth by streamlining processes such as onboarding and simplifying customer service.

In addition, AI has played a role in the traditional insurance market, where insurers have adopted the technology to enhance services and remain competitive. AI investments have reinvented customer service and claims processing with chatbots; underwriting has gained operational efficiency and more personalized assessments through AI-suggested risk categories; and automated fraud detection has reduced risk by identifying aberrations in claims data. These are just a few examples of AI’s benefits in the traditional insurance industry.

Challenges That Could Undermine AI’s Benefits

As with any new technology, AI’s positive outcomes are accompanied by some concerns, including unintentional discrimination. DeFi, InsurTech and traditional insurance organizations are becoming increasingly reliant on AI, and inadvertent bias can damage a wide range of business practices, significantly affecting large groups of people. For instance, AI-based smart contracts enable DeFi to automate a range of services, such as credit scoring, investment advice and underwriting. However, if the input data or algorithms reflect unfair bias, the output, including recommendations, might be discriminatory.

The same risks exist for InsurTech companies and traditional insurers. Take AI-generated forecasting for demand trends, for example. If insurers lack historical data for traditionally unexplored segments of the population, AI models unknowingly could exclude some customer groups. As a result, products may fail to meet all needs effectively, and marketing could miss communicating with entire groups of potential customers.

Likewise, as insurers increasingly use AI in customer service interactions, accessibility challenges could put some customers at a disadvantage. For example, customers without a smartphone or internet access would miss out on the convenience of using AI and image recognition to report automobile damage after a car accident.

No responsible business wants to overlook or alienate any of its customers. So, how do inadvertent exclusions happen? The culprit is unfair bias, and there are a couple of ways it can sneak into AI systems.

Algorithm design, data element types and end users’ interpretation of results are all part of the AI model’s decision-making process. If any of these elements isn’t clearly understood, bias can creep in. More specifically, bias can emerge from an AI risk model if a company is unaware its data sets are too simple or outdated. Additionally, the large amounts of data and multivariate risk scores used in micro-segmentation are not always transparent. Not understanding what drives a model’s decision-making can unintentionally result in discrimination.

For example, a data element that correlates with higher exposure to loss in insurance coverage also may be associated with a protected trait. As a result, the model’s decision-making could generate outcomes that are unfairly biased.
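As a simplified illustration of screening for such proxy variables (the variable names and the 0.7 review threshold here are invented for the example, not drawn from the SOA report), a modeler might check candidate rating variables for strong correlation with a protected attribute before training:

```python
# Hypothetical screen for "proxy" variables: features that correlate
# strongly with a protected attribute, even though the attribute itself
# is never fed to the model. Illustrative only; all names are made up.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_proxy_features(features, protected, threshold=0.7):
    """Return names of candidate rating variables whose correlation
    with the protected attribute exceeds the review threshold."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

# Toy data: a protected attribute (encoded 0/1) and two candidate features.
protected = [0, 0, 0, 1, 1, 1, 0, 1]
features = {
    "zip_code_density": [1, 2, 1, 9, 8, 9, 2, 8],   # tracks the attribute
    "vehicle_age":      [3, 7, 2, 5, 1, 6, 4, 2],   # roughly independent
}
print(flag_proxy_features(features, protected))  # prints ['zip_code_density']
```

A flagged variable would not be dropped automatically; it would be routed to human review to judge whether its predictive value reflects genuine loss exposure or merely stands in for the protected trait.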

Inoculating AI Against Unfair Bias: Ethical Practices for Actuaries

Actuarial tasks encompass sourcing and formatting input data, performing calculations, analyzing results, preparing reports, communicating recommendations and, ultimately, making decisions. According to the research paper Actuarial Technology: A Roundtable Discussion on Current Issues released earlier this year by the SOA Research Institute, new technology can benefit each of these tasks. And it introduces unknown risks as well.

As soon as AI is placed in a social context, ethical questions can come into play. An autonomous drilling machine has no social context. A human resources AI model using specific hiring parameters, on the other hand, does have a social context because it can affect who gets hired. Similarly, actuarial work involves a social context because it too affects people. As a result, the actuarial profession has experience balancing risk management with ethical considerations. Now, AI is providing opportunities for actuaries to further develop their ethical best practices, and a framework around the usage of AI will help prevent the infection of unfair bias.

The SOA Research Institute report Ethical Use of Artificial Intelligence for Actuaries discusses in detail five concepts, or pillars, that should underlie an ethical approach to AI:

  1. Responsibility—Accountability starts with identifying who is responsible for AI development and outcomes.
  2. Transparency—Some AI techniques are difficult to understand and explain. Others, like decision trees and Bayesian nets, are more transparent.
  3. Predictability—Technology works best when people feel comfortable using it. AI applications that provide a predictable environment support this.
  4. Auditability—Continuously tracking AI actions can help ensure ethical outcomes.
  5. Incorruptibility—Robust protection against manipulation mitigates risk and engenders trust.
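As a simplified sketch of how the transparency and auditability pillars might look in practice (the rules, field names and thresholds below are invented for illustration, not taken from the SOA report), consider a model whose decision logic is explicit and whose every decision is logged:

```python
import datetime

# Hypothetical transparent underwriting rule set. Every rule is written
# out explicitly, so the reason behind each decision can be explained to
# a customer and reviewed by an auditor. Names and thresholds are made up.
RULES = [
    ("claims_last_3y >= 3", lambda a: a["claims_last_3y"] >= 3, "refer"),
    ("property_age > 80",   lambda a: a["property_age"] > 80,   "refer"),
]

audit_log = []  # in practice: persistent, append-only storage

def decide(applicant):
    """Return (decision, reason) and record both in the audit log."""
    decision, reason = "accept", "no referral rule triggered"
    for text, test, outcome in RULES:
        if test(applicant):
            decision, reason = outcome, text
            break
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": applicant,
        "decision": decision,
        "reason": reason,
    })
    return decision, reason

print(decide({"claims_last_3y": 1, "property_age": 95}))
# prints ('refer', 'property_age > 80')
```

A real AI model is rarely this simple, but the pattern scales: whatever the technique, pairing each output with a human-readable reason and an immutable log entry serves the transparency and auditability pillars directly.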

Building Guardrails to Support Ethical AI

An ethical framework starts with a governance structure that is flexible enough to apply to current practices and future possibilities, such as new regulations and evolving customer expectations and market segments. Here are some recommendations for implementing AI governance:

  • Monitor the regulatory environment. Doing so will help ensure AI development aligns with the organization’s risk tolerance.
  • Engage stakeholders, and establish roles and responsibilities. The goal is a nuanced view of how AI is used across the organization and the related risks. Additionally, setting roles and responsibilities maintains accountability.
  • Equip employees with the necessary tools and skills. Ethics training can inform employees of regulatory and ethical requirements and encourage productive discussion. It also raises awareness of personal biases and how they can influence decision-making.
  • Conduct a model risk assessment. This assessment will help determine the necessary levels of scrutiny and control. Additionally, the AI model’s risk tier determined by the assessment will dictate the design and development, including risk mitigation strategies.
  • Integrate with model risk management. It can be beneficial to integrate AI governance with existing risk management practices for actuarial- or finance-related models. Individuals can collaborate to streamline compliance processes.
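The risk-tiering idea in the assessment bullet can be sketched as a simple mapping from assessment scores to tiers and minimum controls (the scales, thresholds and controls below are invented for illustration, not taken from any SOA guidance):

```python
# Hypothetical mapping from model risk assessment scores to risk tiers
# and the minimum controls each tier triggers. All values illustrative.
TIERS = [
    (8, "high",   ["independent validation", "bias testing", "quarterly review"]),
    (4, "medium", ["peer review", "annual bias testing"]),
    (0, "low",    ["documentation", "standard monitoring"]),
]

def risk_tier(impact, autonomy, data_sensitivity):
    """Score a model on three 1-3 scales and map the total to a tier."""
    score = impact + autonomy + data_sensitivity  # ranges from 3 to 9
    for floor, tier, controls in TIERS:
        if score >= floor:
            return tier, controls

# A fully automated, customer-facing pricing model on sensitive data:
print(risk_tier(impact=3, autonomy=3, data_sensitivity=3))
# prints ('high', ['independent validation', 'bias testing', 'quarterly review'])
```

The point is not the particular arithmetic but the governance discipline: the tier is assigned before development begins, and the controls it dictates travel with the model for its whole life cycle.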

Because AI governance practices are not a one-size-fits-all proposition, companies will want to tailor policies and practices to their organization’s specific needs. The Avoiding Unfair Bias in Insurance Applications of AI Models report presents a generalized model development framework with steps for AI model development across five stage gates, or phases, that serve as critical decision and evaluation points. The report explores each step and stage gate in detail.

As part of the SOA’s strategic plan, the organization included new learning opportunities for candidates and members, including topics involving predictive analytics and ethics. The curriculum for the associate of the SOA designation includes learning objectives on ethics as part of the Advanced Topics in Predictive Analytics (ATPA) assessment. To further address the ethical questions around AI models, the SOA developed a certificate program on the Ethical and Responsible Use of Data and Predictive Models. Its detailed content and hands-on training show how to incorporate an ethical framework of best practices when creating or deploying predictive models, including those created by AI.

Preparing for the Future

Like the rest of the world, insurance companies increasingly rely on AI, and this reliance will continue to grow. Actuaries may feel pressure to deliver insights derived from AI models more rapidly and across new use cases, increasing the potential for inadvertent discrimination. Additionally, regulators already have noticed the development of AI models. In fact, as early as 2023, Colorado will prohibit insurers from using external data that unfairly discriminates against an individual based on protected attributes. Therefore, the importance of implementing a robust set of processes and controls only grows. A framework of ethics can go a long way in mitigating the risks of unfair bias throughout all the stages of AI use and development.

R. Dale Hall, FSA, CERA, MAAA, CFA, is managing director of research at the Society of Actuaries in Schaumburg, Illinois.

Copyright © 2022 by the Society of Actuaries, Chicago, Illinois.