As actuaries, we hear a lot about big data, predictive analytics, machine learning and artificial intelligence (AI). Big data and predictive analytics have already created insights on risk, particularly for personal property and auto insurance. Advanced analytics and big data are fueling developments in machine learning and AI that we are only beginning to fully understand. At this point in time, we may be at the beginning of a revolutionary change.
Revolutionary changes are hard to understand. It’s easier to deal with incremental change, such as the move from a typewriter to a personal computer (PC), or from snail mail to email. While these changes may have felt transformative, they were simply new ways of achieving an end: creating and transmitting information. But AI and machine learning aren’t just better computer programs—they promise to change how knowledge is created and disseminated. As knowledge workers, actuaries could see tremendous disruption in our roles, including what we do and how we do it. If we don’t act to create a new role fit for the age of AI, we may find ourselves as obsolete as blacksmiths at the end of the Industrial Revolution.
Lessons From the Industrial Revolution
Momentous change has happened before, but none of us were alive to see it. Common knowledge of the Industrial Revolution (and parallel scientific innovations) centers on the machines invented to save labor and improve our quality of life: cars, airplanes, washing machines, light bulbs and so on. But the Industrial Revolution didn’t just create labor-saving devices—it changed how we work and how we organize our lives. It modernized agriculture, transforming farmhands into factory workers. Machine die-casting processes replaced blacksmiths in the creation of screws, nails and tools. Children stopped learning a trade at their parents’ sides, so we started schools to educate them.
Moving the means of production out of the home and into the factory created a more complicated economy that accelerated the development of professions. A steel factory employing hundreds of workers and serving multiple customers needed well-trained professionals to stay in business. The bookkeeper became a recognized accounting professional (the accountant) to keep the factory’s finances straight. An attorney was employed to handle supplier and customer contracts.
Insurance (and the actuarial profession) rose in prominence in parallel with these changes:
- Fire insurance was developed as cities became more crowded. What started as a fire brigade service eventually became property insurance.
- Some of the original accident insurance policies arose in conjunction with the railroads to insure against injuries and fatalities on the developing rail system.
- In the United States, the railroads also pioneered pensions, creating the first private pension system in 1874.
- Private insurance systems were supplemented with social insurance systems: Unemployment insurance was expanded, and Social Security was created in 1935 in the United States to ease economic displacement during the Great Depression.
Today’s Innovation: AI and Machine Learning
Digital transformation, driven by AI and machine learning, won’t unfold exactly like the Industrial Revolution, but it does have the potential to transform work and society in similar ways and to the same degree. The Industrial Revolution primarily changed the lives of physical laborers, who had to retrain and develop new skills, while a whole new class of knowledge work expanded. The AI revolution, by contrast, will primarily affect those of us who labor with our brains: actuaries, accountants, lawyers and other professionals are about to find their professional world transformed by AI, just like the farm laborers and blacksmiths of the 18th and 19th centuries.
Some definitions of AI and machine learning include:
“[AI] is typically defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning and problem solving. Examples of technologies that enable AI to solve business problems are robotics and autonomous vehicles, computer vision, language, virtual agents and machine learning …
Most recent advances in AI have been achieved by applying machine learning to very large data sets. Machine-learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve efficiency over time.”1
What’s the Difference? AI, Machine Learning, Big Data, Predictive Analytics and Data Science
Artificial intelligence (AI) and machine learning use predictive analytics to find relationships in data. Machine learning has led to many of the advances in AI, which may be better thought of as augmented intelligence.
Big data refers to the billions of bits of data that have been generated with the advent of the internet, smartphones and other electronic devices. Big data is necessary for machine learning (the computer learns by finding relationships in the data).
Data science is the term that covers all aspects of AI, machine learning, big data and predictive analytics. Data scientists are the individuals who are skilled in the related techniques.
What Makes AI and Machine Learning Different?
First, machine learning isn’t the same thing as traditional “logic-based” programming. Traditional logic-based programming gives a computer a very specific set of instructions, centered on mathematical operations and logic loops: for example, “do while” X is true; “if” X “then” Y, otherwise “do” Z. Logic loops create very efficient tools for complicated calculations. They’re limited, though, by the ability of the human programmer to write a set of logic rules for the machine to follow. Some tasks were simply too difficult, such as distinguishing a picture of a cat from a picture of a dog. There was no way to write code that could account for all the permutations of color, position, setting and breed, much less artistic renderings.
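To make the brittleness of logic-based rules concrete, here is a minimal sketch in Python. The classification rules and measurements are invented for illustration; the point is that any case the programmer didn’t anticipate falls through the cracks.

```python
# Traditional logic-based programming: explicit, hand-written rules.
# The thresholds and categories below are invented for illustration only.
def classify_animal(weight_kg: float, ear_shape: str) -> str:
    """Hand-coded rules; brittle because every case must be anticipated."""
    if ear_shape == "pointed" and weight_kg < 10:
        return "cat"
    elif weight_kg >= 10:
        return "dog"
    else:
        return "unknown"  # anything the programmer didn't foresee

animals = [(4.5, "pointed"), (20.0, "floppy"), (3.0, "round")]
results = [classify_animal(w, e) for w, e in animals]
# the third animal (perhaps a skunk) doesn't match any rule
```

A black cat with rounded ears, or a large breed of cat, would be misclassified; no finite set of hand-written rules covers every permutation, which is exactly the limitation machine learning addresses.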
Machine learning is a process more akin to human learning. We can make the argument that all knowledge is really a prediction. We teach children how to identify a cat by showing them cats or images of cats and saying “cat” until the child can identify the image. It’s a form of reinforcement learning. Most toddlers can quickly identify a cat (and distinguish a cat from other common animals, e.g., a dog), even if it’s a different color, in a different setting, in a different position or an artistic rendering. However, a young child who sees a skunk for the first time may naturally “predict” the small, four-legged, furred, tailed creature is a cat until corrected.
Machine learning focuses on teasing out relationships in data to make predictions—hence the term predictive analytics. There are three types of machine learning:
1. Supervised learning. An algorithm uses training data and feedback from programmers to learn the relationship between inputs and outputs. It is used when the programmer knows the type of behavior they want to predict.
2. Unsupervised learning. Algorithms explore input data without being given specific instructions or relationships to understand. It is used when you want the algorithm to find patterns and classify them (e.g., customer demographic data).
3. Reinforcement learning. Algorithms learn to perform a task by trying to maximize a reward. This is useful when there is no training data available and the only way to learn about the environment is to interact with it.
Adapted from An Executive’s Guide to AI.
Machine learning works the same way as the child predicts and learns: Based on available information, the algorithm makes predictions, gets input as to whether its prediction was correct or incorrect, then uses that new information to improve its predictions (the small black animal with short legs and a white stripe is most likely a skunk, not a cat). A machine that is very good at making predictions becomes, in a narrow way, as “intelligent” as a human.
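The predict-and-correct loop described above can be sketched with one of the oldest learning algorithms, a perceptron: it makes a prediction, compares it to the correct answer, and nudges its internal weights only when it was wrong. The features and labels below are invented for illustration (body length and “has a white stripe,” with label 1 meaning skunk); they are not real data.

```python
# A minimal predict-and-correct learner (a perceptron, shown as a sketch).
# The machine starts with no rules and derives its own from feedback.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # feedback: was the prediction right?
            w[0] += lr * err * x1    # adjust weights only when wrong
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# invented features: (body length in meters, has white stripe 0/1)
# invented labels: 1 = skunk, 0 = cat
samples = [(0.4, 0), (0.5, 0), (0.6, 1), (0.7, 1)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
```

After training, the learned weights put most of the decision on the “white stripe” feature, a rule no one wrote by hand: the algorithm discovered it from corrections, just as the child learns that the striped animal is a skunk.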
Second, machine learning is made possible by three developments:
- Increased amount of digital data
- Increased computing power (including storage)
- Better algorithms
The groundwork for machine learning goes back to the early 19th century, when Adrien-Marie Legendre published the least squares method for regression. That was the first framework that allowed patterns in data to be expressed mathematically. The first self-learning algorithms were developed in the 1950s and 1960s, but they didn’t take hold until more recently, when increased data and computing power came together.
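Legendre’s method is still the simplest example of finding a pattern in data. A minimal sketch, fitting a line y = a + b·x by minimizing squared error (the data points here are invented for illustration):

```python
# Least squares: the 19th-century ancestor of modern predictive analytics.
# Fit y = a + b*x by minimizing the sum of squared errors.
def least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

xs = [1, 2, 3, 4]                 # invented observations
ys = [2.1, 4.0, 6.2, 7.9]
a, b = least_squares(xs, ys)      # slope b is roughly 1.96
```

The fitted slope and intercept are themselves a prediction rule: given a new x, the model predicts a + b·x. Modern machine learning generalizes this same idea to millions of data points and far more flexible functions.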
Massive amounts of data are key: Machines have learned to identify pictures of cats, dogs and skunks by seeing millions of images of these animals and developing their own “rules” for how to distinguish them. An Executive’s Guide to AI, published in 2017, estimated at the time of publication that 90 percent of the world’s data had been produced during the previous two years. That same document noted the leaps of computing power have made machine learning practical for solving business problems by dropping training times for algorithms to minutes or hours compared to days or weeks just 10 years ago. None of this would be possible without cloud storage, which allows these giant data sets to be accessed and shared.
AI is really best thought of as “augmented” intelligence. Human intelligence is wide ranging, and humans don’t need millions of data points before they can distinguish a cat from a dog. Humans also are able to deal with the unexpected, or with changes in conditions. Machine learning, of which “deep learning” is a specialized subset, is an iterative process that allows the machine to improve its own algorithms so that it can improve its predictions with future data. AI is not a superintelligence that knows everything. It’s not even a general intelligence. AI is a narrow intelligence that, based on the data it is given and the algorithms at its disposal, can become better than humans at making specific predictions. However, AI is limited to what it knows. An AI that can distinguish pictures of animals won’t be good at finding routes to avoid traffic.
AI today can replicate basic human cognitive functions. In the past, what distinguished humans from machines was that a human had to feed data into a machine in a machine-readable format. New AI technologies now enable computer vision, natural language processing (reading and writing), cognitive agents (bots) that can interpret questions, and robots and autonomous vehicles. Computers can now do much more in a more natural way, and this will speed the adoption of machines into many other functions.
How Is AI Changing Our World?
Most people use AI in some form every day:
- Internet search results are driven by AI. Every time someone clicks on a particular result, the AI learns how to improve search results for the next query.
- Voice assistants such as Apple’s Siri, Amazon’s Alexa and the Google Assistant are natural language processing applications that can interpret and answer questions.
- Drones can take pictures and machines can automatically interpret the data to assess damage from storms.
A good example of the power of AI to transform knowledge creation and distribution is Google Translate. Prior to 2016, Google Translate used logic-based programming that hinged on grammar and syntax rules. It translated phrase by phrase, and the resulting sentences and paragraphs often were jumbled. Google then replaced it with a machine-learning translation system that produces sentences almost indistinguishable from natural language. Google Translate could change what it means to be a professional interpreter.
Insurance is rapidly changing based on the availability of big data. Insurers have recognized the predictive ability of credit scores in setting auto insurance rates. Today, many insurers offer drivers a discount if they use telematics devices, or if they are willing to track and record their driving habits, to encourage (and prove) safe driving. In 2018, John Hancock announced it would sell life insurance policies packaged with its Vitality Program, which allows consumers to accumulate points from doing healthy activities, which in turn can earn them life insurance discounts. In the meantime, the company gets valuable data it can use to better understand its customers’ experiences.
The Future Is Here: How Will We Adapt?
Computers and digitization have already driven many improvements in actuaries’ work. Data that was transmitted on paper in the mail is now sent “through the cloud.” Programs that took hours to run now take seconds. Actuaries have gotten used to adapting to change, including system improvements, automation and better ways to send information. In the face of continuous transformation, it’s hard to recognize that the next change could usher in a knowledge revolution that could change the nature of the actuarial role completely.
The Industrial Revolution was not an overnight transformation. Blacksmithing, as a trade, survived into the early 20th century. Blacksmiths moved from creating tools to repairing tools, and then finally they focused on one part of their historical trade: shoeing horses. Even the introduction of the automobile didn’t immediately make the horse-drawn wagon obsolete. It’s easy to find early 20th century pictures with the horse and wagon side by side with the automobile and streetcar, like in the featured image for this article.
Insurance is a regulated industry, and actuaries have regulated roles—and those regulated roles likely won’t change overnight. Insights driven by AI could give actuaries opportunities to revolutionize the world of insurance. As a profession, we seek to adapt in the face of change. We must innovate and take on these new opportunities.
Copyright © 2020 by the Society of Actuaries, Schaumburg, Illinois.