In the Year 2036…

One actuary’s decade-away prognosis for the profession

By Devadeep Gupta

Actuaries are trained in long-term planning, and yet, in my view, too often we still judge our performance against legacy benchmarks of what “actuaries are supposed to do.” Many activities we perform and consider to be “definitely actuarial,” such as valuation, pricing, business planning and modeling, still operate on long-established fundamental principles. While these approaches remain robust and appropriate in many contexts, they also invite reflection on how evolving tools and expectations may shape actuarial work moving forward.

Imagine a world where …

  • Live screens at your work desk have completely replaced static reports.
  • Fixed assumption-setting processes have been eliminated and replaced by adaptive learning systems.
  • Cloud platforms powered by graphics processing units (GPUs) can run millions of policies across thousands of scenarios and react to market movements in near-real time.
  • Tail risks, once invisible (and sometimes considered unknowable), are now fully quantified.

The tools that could enable aspects of this vision exist today, although their applicability varies with the regulatory environment, scale, governance and organizational risk appetite. The question is not whether every actuarial function should move in this direction, but which of these capabilities could add value in each actuary’s specific context over the coming decade.

From dice to data centers: How did we get here?

In my 2024 article for The Actuary, “The History of Actuarial Science,” I noted that the development of modern actuarial science may have taken a pivotal turn in 1654, when mathematicians Blaise Pascal and Pierre de Fermat corresponded over a gambling problem and probability theory was born. This was followed by the creation of mortality tables in the 18th century, and since then actuarial science has seen multiple computational breakthroughs, as shown in Figure 1. Each generational leap in technology did not just make calculations faster; it fundamentally broadened the scope and complexity of the problems actuaries could solve.

Figure 1. How each computing era expanded what actuaries could do

Era | Approximate Date | Computational Leap | Actuarial Innovation | Time Since Last Leap
Mechanical | 1800–1900 | Mechanical calculators (e.g., Arithmometer, Comptometer) | Mechanical calculators enabled actuaries to construct large reserve tables and industrialize premium calculations. | 100+ years
Mainframe | 1950s–1970s | Mainframes (e.g., IBM 700 series, System/360) | Mainframes allowed insurers to perform large-scale cash flow modeling and early scenario-based evaluations. | 30 years
Desktop | 1980s–2000s | Personal computers (PCs) (Intel 386/486) | PCs and vendor software enabled complex actuarial modeling. | 20 years
Parallel | 2010s–present | Cloud + Python + GPUs | Real-time analytics, interactive scenario analysis, and AI/ML-driven pricing and capital reporting within seconds are now technically possible. | 15 years

Source: Author

How does this all compute?

Many of the advances in actuarial science have coincided with major leaps in computational capability. The personal computer revolution of the 1980s and 1990s—along with the rise of spreadsheets and early actuarial modeling systems—expanded practitioners’ ability to analyze risk at scale, enabling the development of asset-liability management (ALM) frameworks, enterprise risk modeling (ERM) and increasingly sophisticated regulatory capital methodologies.

That era, as I see it, has defined the profession for nearly 40 years. Over the past 10–15 years, however, cloud computing, Python and distributed systems have unlocked scalable modeling. This is gradually transforming the industry, but, in my experience, real-time actuarial reporting is still far from the norm (although it is technically possible).

However, I’ve observed that today’s actuaries hold a range of views. Some are eager to become strategic drivers by eliminating manual reconciliation; others are fatigued by system projects and wish to revert to what they consider “traditional actuarial work,” with minimal changes or system transformation. These differing views underscore the challenge of ensuring responsible, rigorous change.

For example, in product pricing, actuaries face a choice: either invest the most effort in deriving increasingly refined theoretical assumptions from limited samples, or use modern systems (GPUs, cloud and stochastic engines) to test thousands of scenarios in the same time. The former feels more traditionally actuarial; the latter often delivers materially stronger governance because assumptions are stress-tested, not just debated. When teams resist modern tooling, the result is not just slower workflows, but weaker validation, narrower controls and a higher risk of being confidently wrong.

Is the technology really available today?

Below are examples of technologies that exist today and which could, under certain circumstances, support transformation in actuarial work, although adoption remains uneven and at times, constrained by cost, governance, data readiness and regulatory considerations. The examples that follow are illustrative only and are not intended to indicate preferred architectures, vendors, platforms or solutions. Technical feasibility does not imply regulatory appropriateness or professional endorsement; rather, it highlights capabilities that may be viable in specific contexts.

  • Computational explosion. A single modern, high-end GPU can deliver computing power equivalent to hundreds of traditional CPU cores for financial simulation. This means that comprehensive overnight or weekend actuarial runs, which can take hours (or even days in some cases), can now be completed in minutes.
  • Workflow and automation revolution. The actuarial workflow can take the form of a fragmented, opaque web of systems, spreadsheets and manual handovers. By adopting a Python-native platform, actuaries could gain instant access to the global ecosystem of AI tools (TensorFlow, PyTorch), positioning them to build an automated, end-to-end set of processes for business-as-usual (BAU) pricing, valuation and business planning.
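To make the "computational explosion" point concrete, here is a minimal sketch of the underlying idea: valuing a toy block of business for every policy under every stochastic scenario in a single vectorized array operation, rather than policy by policy. All figures (premiums, decrements, the rate distribution) and the function name `project_reserves` are illustrative assumptions, not a production method; on a GPU platform, the same pattern could run through a drop-in array library such as CuPy.

```python
import numpy as np

def project_reserves(n_policies=1000, n_scenarios=500, n_years=30, seed=0):
    """Toy vectorized reserve projection: one broadcasted array expression
    covers every policy under every stochastic interest-rate scenario."""
    rng = np.random.default_rng(seed)
    # Annual premium per policy (hypothetical level-premium book).
    premiums = rng.uniform(500.0, 2000.0, size=n_policies)
    # One flat stochastic discount rate per scenario (illustrative).
    rates = rng.normal(0.03, 0.01, size=n_scenarios).clip(0.0, None)
    years = np.arange(1, n_years + 1)
    # Flat 2% combined lapse/mortality decrement per year (illustrative).
    survival = 0.98 ** years                            # shape (n_years,)
    discount = 1.0 / (1.0 + rates[:, None]) ** years    # (n_scenarios, n_years)
    # Scenario-level PV factor for a unit annual premium, then broadcast
    # across the whole policy block in one step.
    pv_factor = (discount * survival).sum(axis=1)       # (n_scenarios,)
    return pv_factor[:, None] * premiums[None, :]       # (n_scenarios, n_policies)

pv = project_reserves()
print(pv.shape)  # every policy valued under every scenario
```

The design point is that the scenario and policy dimensions never appear in a Python loop: on larger hardware the same expression scales to millions of cells with no code change.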

Given all of the above, I believe the actuarial profession may be standing at a fork with multiple parallel universes possible in the coming decade. Here is an attempt to imagine a couple of scenarios of how a day or week at work might be for an actuary in the year 2030, or even 2036, depending on how quickly we embrace and integrate technological advancements in the coming years.

Scenario 1: An actuary’s week in 2036

In the first scenario, I examine an organization that has chosen not to undergo transformation programs over the past decade. In such an environment, an actuary’s week in 2036 may still involve many familiar tasks and constraints, but could look like this:

  • Investigating a crashed overnight CPU batch run.
  • Manually completing hundreds of complex, key-person-dependent spreadsheet and system adjustments to finalize off-model reserves.
  • Manually extracting information from multiple sources and reconciling numbers at different levels of granularity to fulfill an ad hoc management information request.

In addition, a number of new requests come up during the week, adding to the workload:

  • An unexpected announcement leading to a change in long-term interest rates, requiring additional system runs, off-model manual calculations and preparation of an ad hoc report.
  • The finance team has advised of a change in accounting policy related to IFRS 17 contract boundaries, which requires further system updates and manual adjustments.
  • New IT projects have kicked off, requiring sign-off on inputs from actuaries.

Scenario 2: Another actuary’s week in 2036

In the second scenario, I imagine an organization that has undergone transformation programs over the prior decade. While such an approach requires sustained commitment and oversight, it illustrates how actuarial work might evolve where advances are successfully integrated into existing control structures. An actuary’s week in the year 2036 may look like this:

The actuary’s morning begins by validating a new product scheduled to launch in a few hours—a same-day release, as rapid-cycle product development has become the norm. The product has already undergone several rounds of review, follows automated pricing and has been run through a large number of what-if scenarios using GPU-powered high-performance computing (HPC). A Python-driven orchestration pipeline streams millions of projection paths under thousands of stochastic scenarios within minutes.
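The orchestration idea in this scenario can be sketched at toy scale: fan independent scenario runs out across a worker pool and collect the results. The functions `run_scenario` and `run_batch` below are hypothetical illustrations under simplified assumptions; a real engine would draw calibrated economic paths and dispatch work to cloud or GPU workers rather than local threads.

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_scenario(scenario_id, n_years=30, base_rate=0.03):
    """One self-contained scenario run: deterministic here for
    reproducibility; a real engine would simulate a full economic path."""
    rate = base_rate + 0.0001 * (scenario_id % 50)
    # Present value of a 30-year unit annuity-certain at this flat rate.
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, n_years + 1))

def run_batch(scenario_ids, max_workers=8):
    """Fan scenarios out across a worker pool and gather the results --
    the same shape as a cloud orchestration job, at toy scale."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_scenario, scenario_ids))

pvs = run_batch(range(1000))
print(len(pvs), round(statistics.mean(pvs), 2))
```

Because each scenario is independent, the same structure parallelizes cleanly: swapping the local pool for a distributed scheduler changes the dispatch layer, not the actuarial logic.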

The next day, the actuary completes the following, each in a 30-minute slot allocated in their diary for tasks requiring human input:

  • Review capital and solvency surfaces, simulate millions of pricing permutations across economic and behavioral scenarios, and verify compliance—all in one day, enabled by GPU-powered HPC and LLM-assisted checks.
  • Review a new set of automated controls to be added to reporting result checks and additional automated commentary for the next day’s reports.
  • Collaborate with data scientists, software engineers, and risk teams via shared model repositories and automated testing frameworks to improve valuation run times via deployment of a new GPU type.
  • Before the end of the day, continue designing a climate-resilience coverage feature using live environmental data, and present capital-efficiency insights to the CFO regarding government incentives that reduce the company’s carbon footprint.

The actuary in Scenario 2 does not work any harder than the one in Scenario 1, but they and their organization benefit from the months and years invested in an ecosystem designed for efficiency and creativity, thereby enabling actuaries to create direct value and ROI for the business.

I believe most organizations are likely to fall somewhere between these two scenarios, blending legacy processes with select new capabilities. My intent here is not to suggest a single “correct” future, but to highlight a range of possibilities that may emerge.

Impact on people: New roles and adding value

By 2036, new roles may emerge that are explicitly designed to deliver measurable ROI, reduce friction across varying systems, and translate actuarial expertise into enterprise-level financial outcomes.

Any increased use of automation, advanced analytics or machine learning would most likely operate within governance frameworks. Regulatory approval processes, documentation standards, model validation and professional oversight remain essential, regardless of computational advances. In this sense, innovation complements the foundational controls of actuarial practice.

Figure 2. A look at potential new roles

New Actuarial Role | Core Function & Value Added | Quantifiable ROI & Impact
Systems Architecture Actuary | Designs, architects and governs the live, integrated capital and reserving engines. Oversees real-time systems and leads any major redesign or transformation projects. | May improve operational efficiency and reduce duplication across legacy systems, potentially leading to cost savings over time, depending on scale.
Product Dynamics Actuary | Uses real-time capital and profitability signals to iteratively tune product features, distribution levers and reinsurance structures, creating a direct link between actuarial action and incremental margin. | Business growth: drives an increase in new business profit (NBP) by quickly modeling and launching products with optimized capital consumption and risk-transfer structures to match market demand.
Assumption Intelligence Actuary | Manages behavioral learning systems that flag model drift, detect anomalies in lapse or claims data and maintain a continuously updated view of portfolio health. | May contribute to improved reserving accuracy and earlier identification of experience trends, supporting stable financial outcomes.
Regulatory Compliance Actuary | Ensures that all live models comply with ICS, RBC, Solvency II and IFRS requirements by embedding rules directly into automated processes. Enables interaction with dashboards showing traceable, executable controls. | Reduced regulatory risk: avoids fines and capital charges through continuous compliance monitoring, and cuts the time spent preparing complex regulatory returns, freeing actuaries to focus on value creation.
Enterprise Risk Stewardship Actuary | Reviews overall risk appetite and the actual quantification of risk against tolerances, codifying lessons from past events and maintaining cross-market scenario libraries and related model logic. | Risk management: prevents costly errors and protects billions in capital by ensuring that lessons from financial crises and past model failures are permanently incorporated into current risk assessments.

Source: Author

Four pillars of the profession

As computation continues to evolve, the actuary’s value, I assert, shifts even more toward trust and judgment. Technological leaps deliver the most value when they preserve the four pillars of the profession:

  1. Auditability: Every number, however fast the calculation, must be traceable to its source and assumption.
  2. Explainability: Complex AI/ML models must be interpretable to regulatory bodies and the Board.
  3. Prudence: Maintaining a consistent, conservative methodology for reserving and capital estimation.
  4. Professional judgment: The human actuary remains the final, indispensable validator of model output and the chief interpreter of risk to the business.

In closing

FOR MORE: Read “Forging Future-Ready Actuaries,” an SOA Career Development Community article.

Imagine 2036 under a scenario where nearly none of the transformations addressed here have come into being:

  • Insurers defer modernization, constrained by short-term cost pressures or fear of operational disruption.
  • Core models remain trapped in older or previously implemented domain-specific languages (DSLs) and desktop-bound architectures, limiting scale, flexibility and insight.
  • Regression testing and stochastic simulations continue to stretch overnight, leaving strategic decisions waiting for yesterday’s data.
  • Talent attrition: Actuaries pivot toward other opportunities that offer faster impact and more visible innovation.

As tools such as cloud computing, advanced analytics and automation continue to evolve, they present opportunities for actuaries to reconsider how best to apply their skills. The extent and pace of adoption will differ across organizations and roles, but the profession’s enduring value lies in applying judgment, prudence and accountability—regardless of the technologies employed.

Devadeep Gupta, FIAI, CERA, is a qualified actuary with two decades of experience in corporate life insurance and consulting roles with Aon, Prudential, HSBC, Deloitte and Towers Watson. His current role is Managing Director at Aon, where he leads business development for the Life Technology division for Asia Pacific. He is based in Singapore.

Copyright © 2026 by the Society of Actuaries, Chicago, Illinois.