AI the Ally

A look at modernizing actuarial systems with Large Language Models

Daniel Ramsay

While the landscape of actuarial modeling is constantly evolving, many life insurers find themselves tethered to legacy tools of the trade. Decades-old codebases (think COBOL or APL); vendor systems designed in the 1990s; and sprawling, complex networks of spreadsheets immediately come to mind. While these tools have served their purpose, they often, in my observation, lack the governance, scalability and efficiency demanded by today’s dynamic market and regulatory environments.

Migrating modeling work, whether handling valuation, asset and liability management (ALM) or intricate pricing logic, can present a significant hurdle. It’s a resource-intensive process fraught with challenges in understanding legacy code, documentation, validation and reconciliation. However, the advent of large language models (LLMs) offers a new set of capabilities, not as a complete replacement for actuarial expertise, but as a powerful accelerator in the modernization process.

Across the U.S. life insurance space, various functions rely on systems built over many years. Pricing models, for example, might originate in flexible tools like Microsoft Excel but grow unwieldy in complexity. Valuation or ALM models might be written in archaic languages, with the expertise required to extend or maintain them disappearing from the enterprise when the original developers retire or move companies. Other common issues include opaque logic, assumptions buried deep within code or cells, and institutional knowledge residing primarily with a few key individuals. All too often, documentation is missing, outdated or inaccurate, leaving models that are hard to decipher, maintain and migrate.

The business case for moving to modern, robust actuarial platforms is clear to me: enhanced control, governance and auditability, automated testing capabilities and access to elastic compute resources. Yet, in some instances, the sheer scale of a migration project, particularly the effort required to understand the source code, translate logic, and perform rigorous testing and reconciliation, is reported to be delaying these essential modernization initiatives.

Streamlining Migration with AI Power Tools

I believe the true value of LLMs could lie in their potential to accelerate key phases of the migration process by handling the complex, time-consuming task of initial code/logic translation and validation, freeing up actuaries to focus on further optimization and more strategic issues.

  1. Automated Model Documentation (AutoDoc): Documentation is frequently the Achilles’ heel of legacy models, whether they are spreadsheets, old codebases, or existing platform models. AutoDoc can ingest actuarial logic from various sources and generate comprehensive documentation, covering cash flows, assumptions, data sources and underlying business logic. For instance, analyzing a complex model might require the creation of dozens of pages of detailed documentation, swallowing up precious team resources. The AutoDoc tool provides a first draft, handling a substantial percentage of the “grunt work.” Actuaries can preserve invaluable institutional knowledge and create a solid foundation for the migration by reviewing, refining, and enhancing this output, focusing their expertise on critical nuances rather than reverse-engineering or transcription.
  2. Intelligent Auto-Reconciliation (AutoRecon): Translating or rebuilding the model in the new system is only part of the migration task. The remaining, often larger, part involves rigorous testing and reconciliation: ensuring the new model accurately replicates the results and logic of the old one. This phase can be meticulous and time-consuming. It’s been my experience that the AutoRecon tool can streamline this process. Given the new model and reference documentation (from the legacy system or a replicating spreadsheet), along with the respective outputs from both, this AI-powered process identifies where results diverge and works backward to pinpoint the specific step at which the logic of the new and legacy models differs. This could be anything from a missing variable in the new calculations to an assumption table lookup error or a subtle difference in syntax interpretation between the old and new systems. By automating this forensic analysis, the reconciliation phase of a migration can potentially be accelerated.
  3. Excel migration: Many real-world Excel workbooks are far from standardized, with logic distributed across cells, formulas, named ranges, VBA code and external links, making automated migration more challenging. GenAI can be applied to arbitrary Excel workbooks rather than only to models that follow a pre-defined template. This means less time is needed to untangle spaghetti code and formulas, and more time can be spent validating and refining the migrated models, ensuring accuracy and alignment with business objectives rather than manual cleanup and restructuring.
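To make the reconciliation idea concrete, here is a minimal, purely illustrative sketch in Python. The two toy projection functions, the step names and the tolerance are all hypothetical stand-ins for a legacy model and its migrated counterpart; the helper simply walks the shared intermediate results to flag the first point of divergence, the kind of forensic starting point an AutoRecon-style process would work backward from.

```python
# Purely illustrative sketch of the reconciliation idea behind an
# AutoRecon-style process; the model functions and step names are hypothetical.

def legacy_projection(premium: float, rate: float, years: int) -> dict:
    """Toy legacy model: premium paid, then interest credited each year."""
    results, fund = {}, 0.0
    for t in range(1, years + 1):
        fund = (fund + premium) * (1 + rate)
        results[f"fund_year_{t}"] = fund
    return results

def migrated_projection(premium: float, rate: float, years: int) -> dict:
    """Toy migrated model with a subtle bug: interest credited before premium."""
    results, fund = {}, 0.0
    for t in range(1, years + 1):
        fund = fund * (1 + rate) + premium  # order of operations differs
        results[f"fund_year_{t}"] = fund
    return results

def first_divergence(old: dict, new: dict, tol: float = 1e-9):
    """Return (step, old_value, new_value) at the first mismatch, else None."""
    for step, old_value in old.items():
        if step not in new or abs(old_value - new[step]) > tol:
            return step, old_value, new.get(step)
    return None

old = legacy_projection(100.0, 0.03, 5)
new = migrated_projection(100.0, 0.03, 5)
print(first_divergence(old, new))  # flags the first year the two models disagree
```

In a real migration the "intermediate results" would be the cash-flow components the two systems both expose; the value of the approach lies in localizing the mismatch to a single step rather than comparing only final reserves.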

Tailoring the Migration Approach

The optimal migration strategy, and the role AI plays within it, will depend on the source and target systems. Here are considerations for three scenarios:

  1. Open Code-Based Legacy Systems (e.g., COBOL, APL): This is likely the lowest-hanging fruit for AI-accelerated migrations. LLMs can assist in translating procedural code directly into modern languages like Python or C++, potentially reducing manual recoding efforts.
  2. Closed Code-Based Legacy Vendor Systems: Unlike open code-based systems, there may be contractual limitations on whether code can be exported from a proprietary modeling system into a new third-party provider’s platform. This limits the ability of AI to generate translations directly for the new platform. However, if model outputs and sufficiently detailed model logic for the legacy system exist, these together could be used by the AI to generate code in the new target platform.
  3. Proprietary/Black-Box Systems: For some valuation/GAAP models on older platforms, direct translation might be limited due to lack of access to underlying model logic or documentation. Reimplementation from model requirements might be necessary, with AI assisting primarily with generating code from documentation and accelerating the reconciliation of the new model.
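As a toy illustration of the first scenario, the sketch below shows the kind of translation an LLM might propose: a literal, loop-by-loop transcription of a procedural legacy routine alongside an idiomatic Python rewrite of the same annuity-certain present value. Both functions are hypothetical examples rather than any system’s actual code, and reconciling the two implementations remains an essential step.

```python
# Hypothetical example of translating a procedural legacy routine into Python.
# Neither function is production code; they contrast a literal transcription
# with an idiomatic rewrite of the same annuity-certain present value.

def annuity_pv_transcribed(payment: float, rate: float, n: int) -> float:
    """Literal transcription of the legacy accumulate-and-discount loop."""
    total = 0.0
    for t in range(1, n + 1):
        total += payment / (1 + rate) ** t  # discount each payment to time 0
    return total

def annuity_pv_idiomatic(payment: float, rate: float, n: int) -> float:
    """Same quantity via the closed-form annuity-certain factor."""
    v = 1 / (1 + rate)  # one-year discount factor
    return payment * v * (1 - v**n) / (1 - v)

# Reconciliation of old and new implementations is still essential:
diff = abs(annuity_pv_transcribed(100, 0.05, 10) - annuity_pv_idiomatic(100, 0.05, 10))
print(f"difference: {diff:.2e}")
```

Even for a routine this small, the two forms agree only up to floating-point noise, which is exactly why tolerance-based reconciliation, rather than exact matching, is the practical standard when validating a translated model.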

Navigating the Limitations

The potential of AI to accelerate migrations is clear, but it’s crucial to acknowledge its limitations. While these tools, in my experience, can perform the tedious, mechanical work of initial conversion, it’s important to understand they are accelerators, not fully autonomous solutions. The output generated by an LLM requires rigorous review, validation and refinement by experienced actuaries. LLMs operate on patterns learned from vast datasets, but they don’t possess the deep contextual understanding or capacity for expert judgment of an actuary with decades of financial modeling experience.

Moreover, AI systems are just as likely as humans to produce low-quality output when given low-quality input. While highly detailed and comprehensive documentation is ideal, in practice, model documentation can range from highly detailed pseudo-code to much more abstract high-level requirements—and there is no universally accepted standard. Typically, documentation describes the foundational aspects of a product or regulation—features that are often in the public domain—rather than the intricate implementation details of specific logic. Our goal is not to replicate proprietary logic, but rather to ensure that we all work from a shared understanding of common actuarial concepts such as cash flows and decrements, reconciling any nuanced differences as needed.

The day when an AI can build a high-quality pricing model from high-level requirements documents entirely from scratch has not yet arrived. This is primarily because current LLMs, in my experience, lack the in-depth actuarial domain knowledge needed for reliable, fully autonomous implementation. However, AI as a copilot rather than autopilot still has the potential to make actuaries more productive in their work, and migration ambitions more realistic.

The Future is Collaborative

LLMs are poised to fundamentally change how actuaries approach model migration and testing. By performing initial conversions and automating laborious tasks like documentation and reconciliation, AI could free up actuaries to focus on strategic design, validation, interpretation and managing the nuances of a transition, areas where human expertise remains irreplaceable. I believe the future isn’t about replacing actuaries with AI; it’s about empowering actuaries with AI, creating a collaborative environment that accelerates progress, enhances governance and ultimately unlocks greater value from actuarial systems.

Daniel Ramsay, FFA, is an associate director in the Research and Development team within WTW’s Insurance Technology segment. He also serves as co-chair of the IFoA Generative AI Working Party. He is based in Glasgow, Scotland.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2025 by the Society of Actuaries, Chicago, Illinois.