Generative AI and Actuarial Work

With the technology advancing so quickly, it can be tricky to know where to begin

R. Dale Hall

Generative artificial intelligence is rapidly evolving, with tools, methodologies and applications developing at an astonishing speed. By now, it’s almost a cliché to say generative AI is revolutionary, and it may seem difficult to keep up with the pace of change. How can actuaries leverage this powerful technology to increase their productivity and enhance their skills?

First Things First: The Relationship Between AI, Gen AI and LLMs

Generative AI builds on traditional AI, encompassing newer technologies that use algorithms to generate content. While older models categorize or label input data, generative AI models capture the patterns, structures and distributions in their training data, which enables them to produce new content.

Large language models (LLMs) are a subset of artificial intelligence (AI) models that focus on natural language processing (NLP). LLMs generate human-like text and perform tasks such as translation, summarization, question answering, text generation and some software coding tasks.

Guides on Using LLMs and Gen AI in Actuarial Work

Because LLMs offer the potential to enhance actuarial work, the Society of Actuaries (SOA) Research Institute published Operationalizing LLMs in early 2025 to provide actuaries with a practical guide for integrating LLMs effectively and responsibly. The report gives an overview of the major LLM providers currently available and discusses how to evaluate and compare them. It then offers practical guidance on accessing and deploying LLMs and on leveraging them through prompt engineering techniques, and it outlines a framework for managing risks and governance.

Because generative AI models excel at processing huge amounts of data, identifying patterns and generating content at great speed, innovative actuaries can leverage this technology to enhance their roles and gain new efficiencies. The SOA Research Institute report, A Primer on Generative AI for Actuaries, provides examples of tasks that could benefit from generative AI:

  • Coding-related tasks, such as generating code from natural language prompts, reviewing and improving code, and generating test cases and code comments.
  • Processing and analyzing far larger amounts of data than humans can within the same timeframe, potentially drawing new insights.
  • Generating and summarizing documents and reports, such as current or proposed regulations.
  • Creating scenarios and simulations that are crucial in climate modeling, stress testing and scenario modeling.
  • Maintaining consistency in repetitive tasks, in contrast to humans, who often tire from monotony.
  • Image generation and processing, which has already been used in home and auto insurance claims processing.
  • Analyzing personal preferences and behaviors to offer customization, with the potential to provide insurance products with customizable features and pricing.
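As a minimal illustration of the coding-related tasks above, the sketch below assembles a natural language prompt asking an LLM to generate test cases for a small actuarial function. The function name and prompt wording are hypothetical assumptions for illustration, not from the SOA reports or any specific LLM provider's API.

```python
# Hypothetical sketch: building a prompt that asks an LLM to generate
# test cases for a piece of actuarial code. The helper name and the
# prompt template are illustrative assumptions, not a provider's API.

def build_test_prompt(code: str, n_cases: int = 3) -> str:
    """Assemble a natural language prompt requesting unit test cases."""
    return (
        f"Generate {n_cases} unit test cases for the following Python "
        f"function. Cover edge cases such as zero and extreme inputs.\n\n"
        f"{code}"
    )

# Example: present value of an annuity-immediate, pmt * (1 - (1+i)^-n) / i
sample = "def annuity_pv(pmt, i, n):\n    return pmt * (1 - (1 + i) ** -n) / i"
prompt = build_test_prompt(sample)
# The resulting prompt string could then be sent to any LLM chat interface.
```

The same pattern extends to the other coding tasks listed, such as requesting code review comments or documentation, by swapping the instruction text in the template.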

Proceeding With Care: Technical Constraints and Ethical Concerns

Generative AI still has technical limitations, and ethical and security concerns remain, so it is important to keep in mind the need for human oversight. While some AI platforms have invested tremendously in improving quality, generative AI tools are known to occasionally "hallucinate," that is, to present inaccurate or fabricated information as fact. "Drift" is another accuracy issue, in which a generative AI model's outputs change over time even when it is given identical inputs.
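One simple way to watch for drift, sketched below under the assumption that a fixed set of prompts can be periodically re-run and their outputs stored, is to compare current responses against saved baselines with a similarity score. The function names, prompts and threshold here are illustrative assumptions, not a method from the SOA reports.

```python
# Illustrative sketch: flag possible model "drift" by re-running a fixed
# prompt set and comparing new outputs to stored baseline outputs.
from difflib import SequenceMatcher

def drift_report(baseline: dict, current: dict, threshold: float = 0.8) -> list:
    """Return prompts whose current output diverged from the baseline.

    A similarity ratio below `threshold` (on a 0..1 scale) is treated
    as possible drift and flagged for human review.
    """
    flagged = []
    for prompt, old_answer in baseline.items():
        new_answer = current.get(prompt, "")
        similarity = SequenceMatcher(None, old_answer, new_answer).ratio()
        if similarity < threshold:
            flagged.append(prompt)
    return flagged

# Identical outputs score 1.0, so nothing is flagged here.
baseline = {"Define IBNR.": "Incurred but not reported claims."}
current = {"Define IBNR.": "Incurred but not reported claims."}
print(drift_report(baseline, current))  # -> []
```

In practice the comparison could use embedding similarity or task-specific checks rather than character-level matching, but the monitoring pattern, a fixed benchmark re-run on a schedule with human review of flagged changes, stays the same.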

Balancing the potential gains from generative AI with the need to keep data confidential is another important issue. The choice of AI platforms and the deployment model both play important roles in securing data privacy. Additionally, auditability and transparency of AI models can be a challenge in understanding underwriting decisions and other results that affect policyholders.

Other complicated issues surrounding generative AI include questions about copyright, the quality of input and training data, fraudulent use, the training needed for prompt engineering, and ethical issues involving bias and accountability.

Choosing Wisely: Start by Asking the Right Questions

Before actuaries deploy generative AI tools, architecture and accessibility decisions are required, such as whether a tool should be hosted in-house or by an external provider. Other considerations include choosing from the growing list of generative AI tools now on the market, the training process and the quality of the data used. Decisions on each of these aspects affect how successful a generative AI tool will be within an organization.

A Primer on Generative AI for Actuaries gives a detailed list of questions that can serve as a checklist for decision-makers as they consider deploying a generative AI tool. The questions fall into these categories:

  • Tasks
  • Solutions
  • Data
  • Quality requirements
  • Commercial considerations about licenses and costs
  • Organizational readiness

SOA Resources to Help Actuaries Leverage AI

The SOA has made the adoption of this game-changing technology a key tactic in its strategic plan to help members and candidates adapt to rising challenges and, through a commitment to the ethical and responsible use of AI, do so safely. In keeping with the actuarial profession’s renowned integrity, the SOA joined the U.S. Commerce Department’s AI Safety Institute Consortium (AISIC) in 2024, along with more than 200 of the nation’s leading AI stakeholders, to support the development and deployment of trustworthy and safe AI.

The SOA Research Institute’s AI Research landing page provides a library of reports and resources, including the monthly Actuarial Intelligence Bulletin, which informs readers about advancements in actuarial technology, new AI research reports, updates on PD Edge+ webcasts, videos and on-demand webinars, and news about AISIC developments.

R. Dale Hall, FSA, CERA, MAAA, CFA, is managing director of Research, SOA Research Institute.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2025 by the Society of Actuaries, Chicago, Illinois.