AI in Health Care

Actuarial considerations around cost, data and outcomes

BY NATE WORRELL

Earlier this year, OpenAI launched ChatGPT Health, which it calls “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health.” 1

This announcement underscores the widespread use of ChatGPT for medical and wellness-related topics. OpenAI reports there are “hundreds of millions of people asking health and wellness questions each week,” and has set up a waitlist for individuals to request access to details about ChatGPT Health.

As debates continue over the ethics, privacy and effectiveness of AI health tools, I want to focus on the potential implications for three key components of actuarial practice: 1) cost and usage drivers, 2) data, and 3) mortality experience.

COST AND USAGE CONSIDERATIONS

Healthcare expenditures increase year over year. In the United States, which leads the world in healthcare spending, $4.9 trillion was spent on healthcare in 2023, a 7.5% increase from the previous year, according to data from the Centers for Medicare and Medicaid Services.

USAGE OF SERVICES

At times, consumers face time, cost and travel obstacles to seeing a doctor. In The Actuary article “Telemedicine in the New Health Economy,” Doug Norris and Keith Passwater review telemedicine as one solution to this issue, offering cheaper and more convenient ways to seek medical advice. But it is a double-edged sword: a one-to-one replacement of an in-office visit with a virtual one saves money, but if the ease of access increases the overall volume of visits, total expenditure could rise.

Large language models (LLMs) extend telehealth’s trade-off between replacement and volume, though the inputs to the equation differ. As with looking up health information or self-diagnosing medical conditions rather than immediately consulting a qualified healthcare provider (the “Dr. Google” habit), the cost of getting information from digital sources is negligible.

According to the National Center for Health Statistics (part of the Centers for Disease Control and Prevention), in 2022, close to 60% of American adults used the internet for medical information. Ideally, this self-directed searching would redirect and streamline care, but misdiagnosis and misinformation could also create complications that carry an extra cost burden. AI chat engines differ from internet searches in a few ways: They produce content in an easier-to-read format and with a conversational tone. In my view, this difference is especially apparent in mental health.2

As with physical health, patients of therapists and counselors face cost and convenience constraints. According to the 2024 Practitioner Pulse survey from the American Psychological Association, more than half (53%) of psychologists reported having no openings for new patients. A similar number (51%) reported an increase in symptom severity among patients, with more than 4 in 10 (44%) reporting that their patients have needed a longer duration of treatment.

I would imagine that, for many, an “always on” voice that eases cost and convenience constraints is an appealing prospect. However, there are still concerns about its appropriate use. Psychologist Stephen Schueller, Ph.D., studies digital mental health technologies and had this to say in a 2025 article: “They might prevent a person in crisis from seeking support from a trained human therapist or—in extreme cases—encourage them to harm themselves or others. Vulnerable groups include children and teens, who lack the experience to accurately assess risks, as well as individuals dealing with mental health challenges who are eager for support.”3

So, how might AI chatbots affect medical usage? To borrow a popular actuarial catchphrase: “it depends.” The way I see it, AI could replace some forms of care, but only a subset of visit types, and the benefits could be offset by volume increases and the greater severity that results from delayed care or misinformation.
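
The replacement-versus-volume trade-off above can be made concrete with a stylized calculation. Everything here is an illustrative assumption, not data: the visit counts, unit costs, and the `replacement_rate` and `induced_demand_rate` parameters are hypothetical names I introduce for the sketch.

```python
def net_cost_change(baseline_visits, office_cost, virtual_cost,
                    replacement_rate, induced_demand_rate):
    """Stylized net change in spending when some office visits shift to
    cheaper AI/telehealth channels. All inputs are illustrative.

    replacement_rate: share of baseline office visits replaced by a
        cheaper virtual interaction.
    induced_demand_rate: additional virtual visits generated by easier
        access, as a share of baseline visits.
    """
    # Savings from substituting cheaper virtual visits for office visits
    savings = baseline_visits * replacement_rate * (office_cost - virtual_cost)
    # New spending from visits that would not have happened otherwise
    induced = baseline_visits * induced_demand_rate * virtual_cost
    return induced - savings  # positive => total spending rises

# Illustrative only: 1,000 baseline visits, $150 office vs. $40 virtual.
# A 20% replacement rate is wiped out by 60% induced demand.
change = net_cost_change(1000, 150, 40,
                         replacement_rate=0.20,
                         induced_demand_rate=0.60)
```

Under these assumed numbers, the induced demand ($24,000 of new virtual visits) slightly exceeds the substitution savings ($22,000), so net spending rises by $2,000: the "it depends" in miniature.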

ACCURACY AND DIAGNOSTIC RISK

Rather than using the internet as a doctor, patients can use digital information alongside one. If AI tools become a reliable way to triage information and present it to a doctor with accurate supporting evidence, there may be opportunities to save costs related to testing, diagnostics and specialist consultation, for example. Naturally, the flip side is also true: Undoing the damage from misinformation, or a wild goose chase it sets in motion, could increase costs.

A lot of this will depend, I believe, on the tool’s ability to get things right and on how the receiving user, whether doctor or patient, proceeds with this information.

  • A Stanford University study showed that, in a narrowly defined diagnostic task conducted in a controlled research setting, ChatGPT performed comparably to—or better than—physicians who were using either AI-assisted or conventional research tools.
  • A Forbes article highlighted several applications in radiology, including mammograms, pulmonary embolism detection, and stroke bleeding detection, with improved accuracy, speed and workload.
  • As detailed in The Conversation, IBM’s Watson serves as a cautionary tale, ultimately failing in health contexts due to bias, data connectivity issues and improper recommendations.

Another factor is the extent to which AI systems used by hospitals and providers are incorporated into the algorithms and applications of consumer-facing prompt engines.

CONSUMER TOOLS AND CLAIMS NAVIGATION

Usage of AI tools is not limited to diagnostics or education. Consumers can apply the technology to better understand benefit statements, review bills, negotiate with insurers and appeal claims.

One recent story detailed how an individual reduced a $195,000 hospital bill to $37,000. Using the generative AI assistant Claude, the individual cross-referenced information with ChatGPT and conducted their own research to ensure the findings were accurate. They then used AI to write a letter outlining the billing violations detected and implying that legal recourse would follow. The hospital capitulated, reissuing the bill for $37,000.

Whether due to human or machine error or fraudulent activity, waste exists in the healthcare system, and perhaps with the adoption of computer-aided analysis, consumers and payers could experience some relief.

CONSIDERATIONS FOR DISABILITY, LONG-TERM CARE

Many of these items will also affect disability and long-term care. If people can use technology to improve health, we may see a lower incidence and a better recovery experience. The converse, I believe, is also true.

Additionally, usage is affected by the extent to which policyholders and their supporters use generative AI assistants to better understand policy language and improve their ability to file initial claims or appeal denied claims. For seniors, generative AI, especially via voice, may be easier to embrace than other technologies. Per a poll from the University of Michigan, “55% of people age 50 and older said they have used AI technologies that you speak or type to for a variety of purposes, including for health information and social connection.”

With disability, one area that warrants particular attention is how conversations with AI affect mental well-being. Mental health issues have been rising in a number of countries (Australia, the UK) or holding steady (23% of adults in the U.S.).

A LOOK AT DATA

Many actuaries are accustomed to navigating Electronic Health Records (EHRs). In the past decade, the use of wearables and fitness tracking has opened the door to a range of data sources for predictive analytics algorithms. While these new areas bring new insights, they also come with a gauntlet of anti-discrimination and privacy protections, such as the Health Insurance Portability and Accountability Act (HIPAA).

Generative AI is pushing against the privacy paradigm, demonstrating people’s willingness to share some degree of medical data outside privacy protections. In a paper presented at an Association for the Advancement of Artificial Intelligence conference, researchers observed, “Though studies of privacy attitudes find that users express uncertainty about the security of their chat data, research probing users’ real-world behaviors shows that users nonetheless disclose highly sensitive information to chatbots.”

I believe that, just as Uber disrupted transportation regulations, generative AI could strain health information-sharing frameworks, prompting regulatory clarification or evolution rather than wholesale replacement. That said, the launch of ChatGPT Health and other applications signals that some degree of protection for this information is necessary.

Taken together, these interactions represent a large volume of conversational data that, under appropriate governance and privacy safeguards, could offer insights not previously available through traditional health data sources. In theory, and subject to ethical standards, regulatory requirements and technical feasibility, could it signal new information about the prevalence of disease and sickness? Could it provide an early alert for cancer? Could it detect dementia or cognitive decline? Could it demonstrate how a bug is spreading in a region? Could it illuminate where areas of health are misunderstood or misapplied?

From underwriting to fraud and claim management, insurers have long been looking for alternative data sources. A 2023 SOA interview with insurance professionals in the Australian insurance market identified current and future ways in which insurers could adopt alternative data. Of the 20 participants, 13 were optimistic or cautiously optimistic about the prospects of alternative data use in the industry, while the others held a more neutral view.

Actuarial professional organizations have issued guidance notes on governance and professionalism around AI models in actuarial practice. Data quality standards and Actuarial Standards of Practice (ASOPs) would, as I understand it, apply to data generated by public GPT models, even though chat transcripts are a new source of information.

MORTALITY AND LONGEVITY

Engaging with generative AI tools can be compelling. For some people who struggle with isolation or depression and want a trusted advisor, chatting with a robot may be a perceived source of comfort. However, the Center for Humane Technology recently detailed seven lawsuits involving self-harm and suicide filed against a generative AI application owner.

In the Center’s view, instead of being given tools that elevate human potential, consumers have been given AI products designed to exploit vulnerabilities, erode human connection and diminish cognitive capabilities. From an actuarial perspective, the magnitude of these risks may depend on usage patterns, controls and integration with human care rather than solely on the presence of AI tools.

Misinformation or bad advice not only increases the risk of exacerbating a health issue, but it can also be fatal. This is not unique to generative algorithms. Numerous rumors and alternative remedies circulated worldwide during the COVID-19 pandemic; one, drinking methanol, claimed 700 lives. Similarly, HIV denialism may have contributed to over 300,000 deaths in South Africa during the peak of the AIDS epidemic. It may be difficult for LLMs to distinguish between what is popular and what is accurate. The question, to me, remains whether AI will exacerbate misinformation or counter it. In some areas, limited educational attainment and access to care leave people without helpful information; the proliferation of chat tools may help close that gap.

When the generative tool gets things right, the next piece of the equation is whether it will accelerate or delay the care that is needed. Delays are often associated with higher mortality risk.

  • Cancer survival rates improve dramatically if detected earlier.
  • The quicker a cardiac patient gets to the hospital, the better. For example, researchers found that for patients needing angioplasty, each additional 30 minutes of delay increased 1-year mortality by 7.5%.
  • The Commonwealth Fund points to delays in care as part of the explanation for elevated mortality rates, especially preventable deaths, among women of reproductive age.
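
The cited angioplasty finding can be turned into a small sketch of how delay compounds risk. Two caveats are my assumptions, not the researchers': I read the 7.5% per 30 minutes as a relative (multiplicative) increase, and the baseline mortality rate below is purely illustrative.

```python
def delayed_mortality(baseline_mortality, delay_minutes, rr_per_30min=1.075):
    """Scale a 1-year mortality rate by treatment delay, using the cited
    ~7.5% increase per additional 30 minutes as a relative risk.

    The multiplicative compounding across 30-minute increments and the
    baseline rate passed in are simplifying assumptions for illustration.
    """
    increments = delay_minutes / 30  # number of 30-minute delay blocks
    return baseline_mortality * rr_per_30min ** increments

# Illustrative: a 5% baseline 1-year mortality rate with a 60-minute delay
risk = delayed_mortality(0.05, 60)  # 0.05 * 1.075**2, roughly 5.8%
```

Even under these toy numbers, an hour of delay moves a 5% rate to roughly 5.8%, which is the actuarial point: anything that systematically shortens or lengthens time-to-care shifts mortality experience.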

The longevity and mortality outcomes of the individual in relation to using AI-based chats are one dimension; the overall societal impact is another.

In 2023, tech leaders came together to write a statement on AI Risk, highlighting the need to consider extreme but low-probability risks associated with advanced AI systems and to place them in the broader context of other societal-scale threats.

Additionally, the popularity of generative AI does have secondary effects, including a potentially heavy environmental toll.

On the more optimistic side, in the course of human history, some of the largest gains in longevity have been due to new technology, and not always in medical advances. Toilets, pasteurization, and climate control have all contributed to adding more years to life.

Generative AI’s contribution to this story is unfolding before our eyes as millions of users engage with it on myriad health conditions around the globe, around the clock. Meanwhile, in other applications, AI tools are advancing cancer research, protein folding and vaccine development, leading futurist Ray Kurzweil to the bold prediction that we will achieve Longevity Escape Velocity, the point at which gains in longevity outpace aging itself.

FOR MORE

Access the SOA Research Institute report, “Provider Use of AI in Healthcare.”

Access the SOA’s Actuarial Intelligence Bulletin, which informs readers about advancements in technology.

IN CLOSING

As generative AI becomes increasingly woven into the fabric of personal health management, its implications for actuarial practice will continue to deepen. One of the primary challenges I see lies in understanding the blessings and curses that come from this human-to-machine engagement for health and for mortality.

There are opportunities, I believe, for actuarial pioneers to study and explore conversational data and look to bring that into all stages of the insurance cycle. As we study and analyze this phenomenon for individuals and insured populations, let’s also stay vigilant about the threats and opportunities at the highest levels.

In the meantime, I am going to ask for a few tips for getting a better night’s sleep.

Nate Worrell, FSA, is a director of customer success at Moody’s. He is based in Babcock Ranch, Florida.

Statements of fact and opinions expressed herein are those of the individual authors and
are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2026 by the Society of Actuaries, Chicago, Illinois.