Ensuring Fairness in Insurance Pricing
Award-winning paper links fairness criteria and regulations with pricing models
December 2025
In recent years, insurers have turned to big data and advanced algorithms for pricing and underwriting. While this shift is intended to boost accuracy and efficiency, it also brings new challenges around fairness and bias. Complex models trained on large datasets often operate as “black boxes,” making it difficult to see whether indirect discrimination has crept into the results. Regulators and the public have raised concerns about hidden bias, since seemingly neutral factors such as ZIP codes or credit scores can inadvertently serve as proxies for race, gender and other protected attributes, producing indirect discrimination.
At the same time, machine learning (ML) experts have been developing fairness criteria. However, these criteria have focused on binary classification algorithms, such as deciding whether to hire a candidate or approve a loan, rather than on the regression models used for insurance pricing.


The North American Actuarial Journal (NAAJ) editorial board has selected “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models” as the recipient of the NAAJ Annual Prize for the best paper published in 2024. This standout research by Fei Huang, Associate Professor, School of Risk and Actuarial Studies, University of New South Wales (UNSW) Business School, and UNSW Business School Ph.D. candidate Xi Xin breaks new ground by introducing machine learning fairness criteria for insurance pricing using regression models.
The authors connect these criteria to antidiscrimination regulations and demonstrate how they can be applied across different pricing models, making a meaningful step forward for fairness in insurance pricing.
Tackling indirect discrimination
Artificial intelligence (AI) lets insurers analyze risks in much more detail, but algorithms can pick up patterns that act as stand-ins for protected traits like race or income. Because advanced AI models are complex and hard to interpret, it is difficult for actuaries, regulators, or consumers to spot hidden bias in their results.
“This raises the fear that such algorithms could be used, intentionally or unintentionally, to hide discrimination behind layers of statistical complexity,” stated Huang.
Motivated by conversations with insurance industry experts and regulators, Huang proposed that she and Xin explore the links between insurance pricing, antidiscrimination regulations and machine learning fairness concepts. Their collaboration began in 2020 as Xin prepared to start work on his Ph.D. at UNSW. This also happened to be around the time the insurance industry was adopting more sophisticated machine learning tools and regulators were asking questions about fairness and discrimination.
“That made it the perfect moment to launch a project at the intersection of actuarial science, law and data science,” said Huang.
Tradeoffs and balance
Their research found that fairness in insurance pricing can be approached in several ways, each with its own trade-offs:
- Connections: By linking applicable fairness criteria and antidiscrimination regulations and then embedding them into a range of pricing models, the study showed how these elements can work together in a framework.
- Tradeoffs: Achieving fairness often means sacrificing some predictive accuracy, and different fairness goals (like individual vs. group fairness) require different modeling choices.
- Tailoring models: The research applied a range of models, from generalized linear models (GLMs) to ML methods such as XGBoost, and the results showed that each can be adjusted to fit different definitions of fairness.
- Practicality: The research also demonstrated that fairness models are practical and measurable with metrics familiar to actuaries, such as prediction accuracy, calibration and adverse selection (see the sketch following this list).
- Economic implications: Some approaches may shift costs between groups, highlighting tension between actuarial fairness (“similar risks, similar premiums”) and solidarity (sharing risks more broadly across society).
- Sensitivity of methods: The choice of modeling technique itself can affect fairness outcomes, highlighting the need for careful regulatory oversight.
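As a rough illustration of the last two points, the following Python sketch works through synthetic data with hypothetical column names. It is not the authors’ methodology; the portfolio, the “group A/B” attribute and the crude mean-matching adjustment are assumptions chosen only to show how a group-level premium gap, and the accuracy cost of closing it, can be measured with ordinary statistics.

```python
import numpy as np
import pandas as pd

# Synthetic portfolio: true claim costs, model predictions and a protected
# attribute. All names and numbers are illustrative, not from the paper.
rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)
true_cost = rng.gamma(shape=2.0, scale=np.where(group == "A", 300.0, 350.0))
pred = true_cost * rng.normal(1.0, 0.1, size=n)  # stand-in for a pricing model

df = pd.DataFrame({"group": group, "true_cost": true_cost, "pred": pred})

# Group-level fairness check: compare mean predicted premiums across groups.
group_means = df.groupby("group")["pred"].mean()
print("Mean premium by group:")
print(group_means)
print("Gap:", group_means.max() - group_means.min())

# Accuracy before any fairness adjustment.
mse_before = ((df["pred"] - df["true_cost"]) ** 2).mean()

# Crude fairness adjustment, for illustration only: shift each group's
# predictions so the group means coincide, then re-measure accuracy to see
# the trade-off.
overall_mean = df["pred"].mean()
df["pred_adj"] = (
    df["pred"] - df.groupby("group")["pred"].transform("mean") + overall_mean
)
mse_after = ((df["pred_adj"] - df["true_cost"]) ** 2).mean()
print(f"MSE before adjustment: {mse_before:.1f}, after: {mse_after:.1f}")
```

The paper’s criteria and models are considerably richer than this mean-matching shift; the point is simply that fairness gaps and their accuracy costs reduce to quantities actuaries can compute and audit.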
“We see our work as offering a ‘menu’ of options,” stated Huang. “Different models correspond to different fairness notions and regulatory regimes.”
For actuaries, this means fairness can be built directly into technical models, providing greater transparency when complying with regulations and aligning with company values. For insurers and regulators, the paper shows that simply omitting protected traits may not be enough to eliminate bias, because other variables can act as proxies (as the sketch below illustrates). However, Huang and Xin demonstrate that there are models that actively mitigate bias while still maintaining accuracy. The paper also gives regulators a practical framework for turning fairness ideas into standards that can be monitored and audited, helping to balance fairness, predictive accuracy and market stability.
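To see why dropping a protected trait may not remove its influence, here is a small, self-contained sketch on synthetic data. The protected attribute never enters the model, yet a correlated proxy (think of a territory or credit-based variable) carries much of its effect. The variables and coefficients are invented for this example and are not drawn from the paper.

```python
import numpy as np

# Synthetic data: the protected attribute is excluded from the model, but a
# correlated proxy remains as a rating factor. All values are hypothetical.
rng = np.random.default_rng(1)
n = 20_000
protected = rng.integers(0, 2, size=n)             # 0/1 protected group
proxy = protected + rng.normal(0.0, 0.3, size=n)   # feature correlated with group
other = rng.normal(0.0, 1.0, size=n)               # a legitimate rating factor
cost = 200 + 40 * other + 60 * protected + rng.normal(0.0, 20.0, size=n)

# "Fairness through unawareness": fit ordinary least squares using only the
# proxy and the legitimate factor, never the protected attribute itself.
X = np.column_stack([np.ones(n), proxy, other])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

# A sizeable premium gap between the groups persists, because the proxy
# reconstructs much of the excluded protected attribute.
gap = pred[protected == 1].mean() - pred[protected == 0].mean()
print(f"Mean predicted premium gap between groups: {gap:.1f}")
```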
Future research
However, there’s still much to learn about fairness in insurance pricing. More research is needed, especially on which fairness criteria to use, since the right choice depends on the type of insurance and regulatory environment. Deciding where to apply fairness constraints—whether in cost modeling, market pricing, or both—also matters, as each option affects insurers, regulators and consumers differently.
Also, in many places, insurers can’t collect data on protected traits, making fairness hard to measure. Statistical models and machine learning can provide partial solutions, but there are questions about accuracy, transparency and unintended bias. So, more research is needed to adapt these methods to insurance and understand their implications. Finally, fairness interventions can shift costs and access across groups, so it’s important to understand how different groups are affected.
“Fairness in insurance is now a global, multidisciplinary effort that spans actuarial science, computer science, law and economics,” said Huang. “The community is steadily building practical tools, governance frameworks and regulatory guidance to ensure that fairness is not only an aspiration but also a reality in day-to-day insurance practice.”
FOR MORE
For a deeper understanding of how to prevent indirect discrimination in insurance pricing algorithms, review the fairness criteria and model comparisons presented in the award-winning NAAJ article, “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models.”
Read The Actuary article “Emerging Cyber Risks.”
Read The Actuary article “Disseminating New Ideas to the Actuarial Profession.”
Read The Actuary article “Prize-Winning Paper Tackles Machine Learning Actuarial Models.”
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
Copyright © 2025 by the Society of Actuaries, Chicago, Illinois.

