Insights: AI Insurability
Taking a look at generative AI and the emerging risks associated with it
October 2025
Generative AI marks a significant advancement in artificial intelligence (AI). It is distinguished from traditional AI by its ability to create new content in the form of text, images, code and voice. Unlike traditional AI, which analyzes patterns in data for forecasting or classification, generative AI produces outputs not previously found in the data. Generative AI has great potential to transform business operations, but its integration also introduces cybersecurity, ethical, legal and reputational risks.
Generative AI models, such as ChatGPT and DeepSeek, are being adopted rapidly by both consumers and businesses. According to a 2025 McKinsey survey, 71% of firms were using generative AI tools by early 2025, up from 33% just two years earlier. Applications range from automating internal workflows to powering customer interfaces. This widespread use underscores the urgency of developing sound risk management frameworks, including insurance-based risk transfer, and strengthening industry-wide education to raise awareness of these new technologies.
Six areas of increased risk
To gain insights into AI adoption and risk management, the Geneva Association surveyed 600 business insurance decision-makers across six major markets (U.S., China, Japan, U.K., France and Germany). The report identifies six risk areas intensified by or related to generative AI:
- Operational
- Cybersecurity
- Reputational
- Regulatory
- Ethical
- Workforce related
Unlike traditional IT systems, generative AI often operates with opaque decision-making processes and a degree of unpredictability, making it vulnerable to errors, hallucinations and malicious exploitation.
Cybersecurity, according to the Geneva Association report, is the most cited concern among businesses. Generative AI can lower the barrier for cybercriminals through automated phishing or deepfake generation and become a vector for model manipulation attacks. Liability concerns follow closely—if AI-generated outputs cause financial or reputational harm, determining accountability becomes complex. Copyright infringement, biased outputs and customer misinformation are just a few of the exposures that complicate the assignment of legal liability.
Internally, the integration of generative AI may lead to systemic errors, overdependence on automated systems or undertrained staff lacking AI literacy. Generative AI may amplify workforce displacement or strain governance frameworks that are not fully prepared for rapid technological change. These risks underscore a fundamental gap in organizational AI literacy—highlighting the critical need for broad educational and upskilling initiatives.1
Risk perception and insurance demand
The findings of the Geneva Association report identified cybersecurity as the leading insurable risk (cited by more than 50% of respondents), followed by liability and operational disruptions. Interestingly, reputational risk, while recognized, was deemed less immediately insurable. Over 90% of businesses expressed the need for specific insurance coverage for generative AI, and two-thirds were willing to pay at least 10% more in premiums.
Demand varies by company size, industry and geography. Medium-to-large enterprises, particularly in the technology and financial sectors, are most willing to invest in insurance. Businesses in China and the U.S. exhibit the strongest generative AI adoption and insurance demand, reflecting their advanced digital ecosystems. In contrast, European and Japanese firms are more cautious, citing regulatory uncertainty and cultural hesitancy.
It is worth noting that many respondents acknowledged a lack of internal understanding of generative AI risks. The survey findings point to a critical education gap: Many decision-makers feel ill-equipped to evaluate generative AI exposure or insurance needs. This further reinforces the importance of insurer-led educational efforts to help clients navigate this emerging risk landscape.
Assessing insurability: Applying Berliner’s framework
Applying Berliner’s classic insurability framework, which includes actuarial, market, and societal criteria, reveals substantial challenges for risks related to generative AI.2
From an actuarial standpoint, the predictability of generative AI losses is low. The creative nature of generative AI outputs means losses can arise from unique and unforeseen interactions. High potential severity (e.g., misinformation campaigns or regulatory fines) further complicates pricing. The average loss per incident is expected to be significant, yet historical data to model such events is scarce.
Information asymmetry adds another layer of complexity. Insurers lack full visibility on how businesses develop and govern their generative AI tools. This heightens moral hazard and adverse selection: Companies with riskier systems are more likely to seek coverage, while insurers face difficulty verifying risk controls.
Market conditions are similarly problematic. Premiums may need to be high to account for uncertainty, deterring uptake by smaller firms. Findings also indicate that insurers are hesitant to offer high coverage limits due to tail risk and legal ambiguity.
Societal and legal factors present further friction. Emerging AI regulations, such as the EU AI Act, require compliance frameworks that are still evolving. Legal permissibility is also uncertain—generative AI outputs may violate privacy or IP laws in ways that challenge contract enforceability. Public policy concerns also arise if insurance inadvertently enables reckless AI use.
To overcome these hurdles, we believe insurers and policyholders alike would be well-served by strengthening their mutual understanding of generative AI technologies. This demands not just actuarial refinement, but robust educational initiatives targeting risk governance, regulatory alignment and ethical AI deployment.
Market responses: Product innovation and pilots
Despite these challenges, insurers are innovating. Many are adapting existing policies—particularly cyber and professional liability lines—to explicitly include generative AI-related losses. AXA XL, for instance, has introduced endorsements to cover AI-induced data contamination and regulatory penalties.
Standalone AI insurance is also emerging. PICC is piloting a generative AI liability product covering third-party claims related to AI-generated content. Munich Re has launched aiSure, a policy that combines expert-driven due diligence with parametric-style triggers. Startups like Armilla AI are offering performance warranties for AI models, while managing general agents backed by Lloyd’s of London are designing multi-risk generative AI policies.
Industry reports indicate that parametric triggers are also gaining traction, offering greater clarity and speed in claims settlement. For example, policies may pay out if an AI-generated error exceeds a financial threshold or triggers regulatory scrutiny.
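The mechanics of such a parametric trigger can be illustrated with a minimal sketch. The threshold values, payout amount and field names below are hypothetical assumptions for illustration, not drawn from any actual policy:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A hypothetical AI-related loss event reported by an insured."""
    financial_loss: float    # direct loss attributed to the AI-generated error
    regulatory_action: bool  # whether the incident triggered regulatory scrutiny

def parametric_payout(incident: Incident,
                      loss_threshold: float = 250_000.0,
                      fixed_payout: float = 100_000.0) -> float:
    """Pay a pre-agreed amount when an objective trigger condition is met.

    Unlike an indemnity claim, no loss adjustment is needed: the payout
    depends only on whether the trigger conditions are satisfied, which is
    what makes settlement faster and clearer.
    """
    if incident.financial_loss >= loss_threshold or incident.regulatory_action:
        return fixed_payout
    return 0.0

# A loss below the threshold with no regulatory action pays nothing;
# crossing either trigger pays the pre-agreed fixed amount.
print(parametric_payout(Incident(50_000.0, False)))   # → 0.0
print(parametric_payout(Incident(300_000.0, False)))  # → 100000.0
```

The design choice worth noting is that both triggers are objectively verifiable, which reduces the information asymmetry and claims-disputes problems discussed earlier.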
What unites these innovations, in our view, is the growing recognition that insurance cannot succeed in isolation. Each policy implicitly relies on insureds maintaining a baseline of AI literacy and governance. Education is not an optional add-on, but a foundational enabler of generative AI insurance.
Embedding education in AI risk management
We believe education is central to the insurability of AI, with insurers, regulators and technology providers jointly investing in educating underwriters and insureds.
For insurers, this means building internal capabilities to understand AI technologies, their vulnerabilities and governance best practices. For insureds, it means training staff on AI ethics, data privacy and operational safeguards.
Moreover, insurers can embed education into policy design: Coverage terms that require periodic AI audits, documented governance practices or third-party validation can incentivize good behavior while reducing underwriting uncertainty.
Education also underpins trust. By demystifying generative AI technologies and communicating their risks, insurers can position themselves as partners in responsible AI adoption. Cross-functional training programs, joint risk dialogues and industry knowledge hubs could be scaled across the sector.
Beyond risk mitigation, education also plays a strategic role. It enables insureds to better articulate their risk needs and fosters innovation by clarifying what can and cannot be insured. In the long term, we believe education will be as important as underwriting in shaping the trajectory of generative AI insurance.
A collaborative future
To unlock generative AI’s full potential without amplifying systemic risk, stakeholders could collaborate on shared risk frameworks, including these three:
- Risk taxonomies. Establishing clear categories of AI-related risk that align with potential insurance coverages.
- Cross-industry standards. Co-developing ethical and technical guidelines for safe generative AI deployment.
- AI education platforms. Co-funding training centers, online courses, and insurer certification programs tailored to generative AI governance and risk literacy.
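A shared risk taxonomy of the kind proposed above could start as a simple mapping from risk categories to candidate coverage lines. The categories below echo the six risk areas identified in the Geneva Association report, while the coverage names are illustrative assumptions, not actual products:

```python
# Illustrative mapping from the report's six generative AI risk areas to
# candidate insurance coverage lines. The coverage names are assumptions
# for discussion purposes only.
AI_RISK_TAXONOMY: dict[str, list[str]] = {
    "operational":   ["business interruption", "errors & omissions"],
    "cybersecurity": ["cyber liability", "data breach response"],
    "reputational":  ["crisis management", "media liability"],
    "regulatory":    ["regulatory defense", "fines & penalties (where insurable)"],
    "ethical":       ["professional liability", "discrimination liability"],
    "workforce":     ["employment practices liability"],
}

def coverages_for(risk_area: str) -> list[str]:
    """Look up candidate coverage lines for a given risk area.

    Returns an empty list for categories outside the taxonomy, making
    gaps in coverage alignment explicit rather than silent.
    """
    return AI_RISK_TAXONOMY.get(risk_area.lower(), [])

print(coverages_for("Cybersecurity"))  # → ['cyber liability', 'data breach response']
```

Even a minimal structure like this gives insurers, insureds and regulators a common vocabulary for discussing which exposures map to which products.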
Just as cyber insurance matured through iterative learning and cooperation, generative AI risk management is expected to follow a similar path. Early emphasis on education will accelerate this journey.
Conclusion
Insuring generative AI is becoming a real-world necessity. The technology is here, adoption is widespread and the risks—though still emerging—are tangible. Survey evidence confirms a strong business appetite for such insurance, even as current underwriting models for generative AI threats remain fragile.
Industry innovation in product design, a willingness to embrace change, embedded education, broader awareness and closer collaboration can all help overcome these barriers.
We believe that the future of AI risk management will not be defined by coverage limits alone, but also by how well insurers help businesses understand, govern and responsibly use AI.
FOR MORE INFORMATION
- The SOA’s AI Research landing page has the latest trends and reports.
- The SOA’s Actuarial Intelligence Bulletin informs readers about advancements in actuarial technology.
- Read The Actuary article, “AI the Ally: A look at modernizing actuarial systems with large language models.”
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
References:
Copyright © 2025 by the Society of Actuaries, Chicago, Illinois.

