Applying AI
Structure, uncertainty and human judgment
April 2026

Artificial intelligence (AI) is everywhere, promising faster results and smarter decisions. I have spent a lot of time experimenting with AI in my own work, from Excel and Power Query to editing and analysis. What I’ve learned is straightforward: Current, widely deployed AI systems perform exceptionally well when tasks are structured, predictable and computational—but their reliability declines as problems involve greater uncertainty, shifting conditions or human judgment. These tools can automate and optimize, but they cannot, for me, replace the reasoning, adaptability and collaboration that humans bring to complex challenges.
This article explores where AI truly adds value—and where, in my opinion, it can fail when applied outside its strengths. Through practical examples, historical context and research insights, I hope to show that AI is not a substitute for human work but a powerful complement to it. I believe that understanding this distinction is essential for anyone navigating the promises and pitfalls of AI today.
LEARNING POWER QUERY IN EXCEL
I have worked extensively with Excel for years, but until recently, I had never used Power Query. Fellow actuary Andrew Chan frequently shared insights about it on LinkedIn, which finally prompted me to give it a try. Power Query is a data transformation and data preparation engine that relies on a language called M, which I had no prior experience with. To learn it, I turned to both Microsoft Copilot and ChatGPT.
Initially, I tried to have the AI generate the entire solution at once. I crafted detailed prompts and included as much context as possible, but the results were consistently flawed: each attempt produced errors. I rephrased the prompts, pointed out the specific lines causing issues, shared error messages and asked for fixes. Often, the AI acknowledged that the prior solution would not work—yet the same problems remained, and in many cases the responses worsened. Because I lacked familiarity with the language’s syntax and behavior, I was unable to diagnose or fix the issues myself. More importantly, this approach left me passively involved in the process, and I learned very little as a result.
To make progress, I had to change my approach. With limited knowledge, I sketched out a rough solution and then asked the AI to help with small, highly focused tasks—one line or step at a time. This method worked far better. I became actively engaged in the problem, and over time, I began requesting similar tasks and noticing patterns in the language. Through patience and persistence, I eventually reached the point where I can now accomplish quite a bit in M without relying on AI.
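To make that one-step-at-a-time approach concrete, here is a minimal sketch of the kind of M query I would build up incrementally. The workbook table name ("Premiums") and its columns are hypothetical, invented for illustration; each step is the sort of narrow, testable request I would hand to the AI on its own.

    let
        // Load a sample table from the workbook (the table name "Premiums" is hypothetical)
        Source = Excel.CurrentWorkbook(){[Name = "Premiums"]}[Content],
        // Step 1: set column types (one small, verifiable step)
        Typed = Table.TransformColumnTypes(Source, {{"PolicyID", type text}, {"Premium", type number}}),
        // Step 2: keep only rows with a positive premium
        Filtered = Table.SelectRows(Typed, each [Premium] > 0),
        // Step 3: stamp each row with a load timestamp
        Stamped = Table.AddColumn(Filtered, "LoadedOn", each DateTime.LocalNow(), type datetime)
    in
        Stamped

Because each step references the one before it, I could inspect the output in the Power Query editor after every line and catch mistakes immediately, keeping the feedback loop short.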
From this experience, I learned that AI performs inconsistently on complex, open-ended problems—especially when success depends on iterative design choices rather than on discrete, testable steps. For me, it is far more effective when used for small, testable tasks with immediate feedback. When I work in languages I know well, incorrect AI solutions are less of an issue because I can quickly spot and fix errors. In those cases, AI serves as a helpful guide—offering structure, reminding me of syntax and supporting broader tasks—rather than attempting to solve everything at once.
DESIGNING ACTUARIAL PROCESSES AND MODELS
To date, I have not found current AI tools reliable for independently designing actuarial processes, particularly when trade-offs, professional judgment, and evolving regulatory or business contexts dominate. Writing code is not the same as designing and engineering a system. Programming addresses syntax and immediate tasks; design and engineering address the problem itself—how concepts translate into data, logic and durable processes. Strong programming is necessary but not sufficient; well-written code can still reflect poor design. AI can help generate code, but it struggles with design and engineering, where complexity, judgment and trade-offs are the real work.
WRITING ARTICLES
I avoid using AI to generate article content because, in my experience, it tends to reproduce familiar patterns rather than contribute anything new (in other words, predictable and dull to me). Where I believe AI excels is in editing. I rely on tools like Grammarly for grammar and clarity, which have significantly improved my writing (and likely made life easier for editors as well). While issues still arise, they are far fewer than before. Once an article is solid, I often run it through AI again to tighten the prose and cut length. In a 2,000-word article, AI can, in my experience, typically remove 300–500 words without losing meaning, creating room to add another topic and improve the overall piece.
WHAT IS THE CONNECTION?
The purpose of these examples is to clarify what AI does well—and what it does not. AI excels at tasks with heavy computation, structured rules, and predictable outcomes. It narrows large search spaces by aggregating common patterns, which is why it performs so well at grammar correction and programming.
As uncertainty, ambiguity and the number of decisions compound, AI’s effectiveness drops. This is why generating original content, designing and engineering systems, and managing large, open-ended projects are, in my opinion, poor uses of AI. As described in David Epstein’s 2019 book “Range: Why Generalists Triumph in a Specialized World,” games like chess and the board game Go offer vast possibilities within fixed rules, making them ideal for computation-intensive approaches: winning is a matter of speed and scale. With modern hardware, such brute-force victories are no longer remarkable.1
Now imagine those games with rules that change randomly and unpredictably mid-play. In such environments, purely computational approaches become ineffective because uncertainty overwhelms optimization. There, the best results come from pairing humans with AI—letting machines handle computation while humans adapt to change, apply judgment and provide context.
NOTES ON THE TURING TEST
Some of society’s confusion around AI can be traced to how the Turing Test, proposed by Alan Turing in 1950, has been interpreted ever since. The test asked whether a computer could be said to “think” if it could mimic a human in a text-based conversation. It rested on a simplifying assumption: that imitating human behavior implies modeling human thought. At the time, it seemed reasonable. Early researchers believed brains and computers processed information in similar, algorithmic ways—both taking inputs and producing outputs.2
That assumption, while useful in the early days of computing, has proven misleading. Our understanding of both computers and the brain was limited, and the Turing Test became less a thought experiment and more a flawed benchmark. We assumed that if machines could replicate the structure of the brain, they could reason like humans.
As Dale Purves explains in his 2021 book “Why Brains Don’t Compute,” this analogy breaks down. AI neural networks optimize global objectives based on input data, which requires a relatively stable and predictable environment. The real world is neither. Human brains evolved over millions of years to operate in noisy, uncertain and constantly changing conditions. They rely on adaptability and plasticity, not optimization. Although AI and brains may appear structurally similar, they function in fundamentally different ways.3
By emphasizing a computer’s ability to mimic human thought, the Turing Test implies that human cognition can be fully replicated, an implication that makes machines look like ideal replacements for human workers. After all, computers do not take time off, get sick or have family obligations. They work continuously, around the clock.4
This has contributed to a “hype cycle” around AI. As some articles note, the promise of replicating human cognition has been used to justify extraordinarily large investments in technology and research, with spending levels often compared to the largest public and private initiatives in history, eclipsing even the Manhattan Project and the Apollo Moon landing combined.
As my oldest daughter prepares to graduate from college and enter the workforce, I am increasingly aware of reports showing higher unemployment among people in their early twenties, a trend driven by many factors, as research from Stanford University suggests. I understand why corporations are eager to see returns on their AI investments. Still, from a long-term capability perspective, this focus misses the overall picture: It does not consider how individuals and organizations develop knowledge, skills and long-term capabilities, which in turn shape their culture.5
WHERE IS THE PROBLEM?
As noted in Kurt Andersen’s 2021 book “Evil Geniuses: The Unmaking of America: A Recent History,” after World War II, the United States shaped global security policies that enabled trade, stability and rapid economic growth. From the 1940s through the 1970s, the U.S. was an economic leader as the world rebuilt, and in this period it became a manufacturing powerhouse. Some analysts argue that the trajectory changed in the 1970s, when the Friedman Doctrine and the Chicago School held that a corporation’s sole purpose was to maximize shareholder value. As globalization lowered the cost of shipping goods across oceans, manufacturing relocated to cheaper labor markets.
This aligned with the neoclassical vision of “Homo Economicus”—independent, self-interested individuals pursuing profit through rational choice. In practice, this overlooks the reality that significant economic achievements don’t occur in isolation. Strong economies and resilient firms arise from collaboration, shared knowledge and institutional memory. A focus on short-term shareholder returns weakens long-term organizational capability. Profits follow deep organizational capability; they are not a substitute for it.6,7
In the context of automation and AI, the key oversight is that knowledge moves with the jobs, and that knowledge matters more than any single job. Manufacturing embeds ecosystems of skills, relationships and tacit expertise.8
Apple illustrates this dynamic. In the mid-1990s, Apple manufactured primarily in the U.S., a point of pride. But poor management left the company on the brink of bankruptcy. After Steve Jobs returned in 1997, Apple outsourced manufacturing to China to cut costs and survive. At the time, China lacked robust experience in producing advanced consumer electronics, so Apple sent American engineers to China to train local manufacturers in its processes. As Apple’s products succeeded, investment and expertise became further concentrated in China, creating a powerful positive feedback loop. By around 2010, Apple’s manufacturing knowledge and capability were firmly rooted there.
Today, it appears to me that this success handcuffs Apple. Diversifying away from China has proven difficult and expensive because the underlying skills and knowledge cannot be easily extracted or recreated. Attempts to bring manufacturing back to the U.S. have required Chinese and Taiwanese experts to train American workers. Some analysts argue that even then, the more skilled tasks remain offshore, leaving domestic facilities without the same capabilities.9
IN CLOSING
I began this article with practical examples of how I use AI. To me, the pattern is clear: AI works well when outcomes are computable, structured and predictable, but it struggles when problems involve uncertainty, shifting conditions and judgment. The research cited here supports this distinction. AI can assist with risk management, but it is not reliable for navigating genuine uncertainty without human oversight and adaptation.
Yet AI has attracted massive investment, and, as reports show, many worry that it will replace jobs. The concern is well founded: skills and knowledge atrophy when they are not maintained, and the ecosystems that support them collapse. If failures occur, organizations may have to scramble to rebuild capabilities they no longer possess, only to find that the talent and experience are gone.

The solution is to use AI as a complement, not a substitute. Let it handle computation and repetitive tasks, while humans provide judgment, creativity and collaborative problem-solving. AI can boost efficiency and decision-making—but the long-term strength of any organization comes from the collective ingenuity, knowledge and adaptability of its people.

FOR MORE
The SOA’s AI Research landing page has the latest trends and reports.
The SOA’s Actuarial Intelligence Bulletin informs readers about advancements in actuarial technology.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
References:
1. Epstein, David J. Range: Why Generalists Triumph in a Specialized World. Riverhead Books, 2021.
2. Hawkins, Jeff, and Sandra Blakeslee. On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. Times Books/Henry Holt, 2008.
3. Purves, Dale. Why Brains Don’t Compute. Springer International Publishing, 2021.
4. Bender, Emily M., and Alex Hanna. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper, 2025.
5. White, David G. Disrupting Corporate Culture: How Cognitive Science Alters Accepted Beliefs About Culture and Culture Change and Its Impact on Leaders and Change Agents. Routledge, 2021.
6. Andersen, Kurt. Evil Geniuses: The Unmaking of America: A Recent History. Random House, 2021.
7. Sloman, Steven A., and Philip Fernbach. The Knowledge Illusion: Why We Never Think Alone. Riverhead Books, 2018.
8. Hidalgo, César. Why Information Grows: The Evolution of Order, from Atoms to Economies. Basic Books, 2016.
9. McGee, Patrick. Apple in China: The Capture of the World’s Greatest Company. Scribner, 2025.
Copyright © 2026 by the Society of Actuaries, Chicago, Illinois.
