Artificial Intelligence and the Internet of Things

The architects of risk prevention in insurance

By Amarnath Suggu


With the advent of the Internet of Things (IoT), the fourth industrial revolution has ushered us into an era of hyper-connected physical and digital worlds. IoT combined with artificial intelligence (AI) has the potential to enable the insurance industry to better understand risks and, most importantly, provide a means to prevent them. As a result, the use of IoT-enabled devices could decrease insurance premiums and claim costs.1

This article delves into how AI models and IoT devices could be used to deliver benefits to carriers. It highlights their shortcomings and the impact of those shortcomings on insurers. It also stresses the need for regulation and details some of the regulations currently in place to ensure the ethical use of these technologies.

Role of AI in IoT-enabled Insurance Ecosystems

IoT devices generate vast quantities of data. The average U.S. household has about 25 IoT devices,2 which together generate nearly 130 GB of data through connected cars,3 smart homes4 and wearables.5 It is impossible for humans to crunch all of this data and generate insights, let alone do so in real time. Embedding AI capabilities in IoT devices allows them to operate with greater autonomy and make decisions on their own.

Take the example of a car with a distracted driver at the wheel. A crash can happen, or be prevented, within a second or two. The data captured by IoT sensors is useless if there is no intelligence to:

  • Process it
  • Determine the need for preventive action
  • Trigger a specific next-best action

This is precisely what AI does. It generates actionable insights in real time, which, in this case, would prevent a crash.

The raw data from IoT sensors is processed in situ if the computations are simple or if the required computing power is available at the source. Otherwise, data is streamed to an edge server or a cloud server where the AI has the computing power needed to process it. The outcomes and actions to be taken are then transmitted to control units.
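To make the split between in-situ and edge or cloud processing concrete, the following minimal Python sketch illustrates the routing decision. The names used here (score_reading, EDGE_URL and the reading format) are hypothetical placeholders, not any specific product's API.

```python
# A minimal sketch of the routing decision described above: run inference on the
# device when a lightweight model is available, otherwise stream the reading to an
# assumed edge/cloud endpoint for scoring. All names are illustrative placeholders.
import json
import urllib.request

EDGE_URL = "https://edge.example.com/score"  # assumed endpoint, not a real service

def score_reading(reading: dict, local_model=None) -> dict:
    """Return an action recommendation for one sensor reading."""
    if local_model is not None:
        # Computation is simple and on-device compute is available: process in situ.
        return local_model(reading)
    # Otherwise stream the raw reading to an edge or cloud server that hosts the model.
    request = urllib.request.Request(
        EDGE_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```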

Risk Prevention With AI and IoT

Raw sensor data contains values of parameters like speed, temperature, coordinates and so on. This raw data is preprocessed (cleansed, formatted and aggregated) before it is fed to a model. The processed data is used in conjunction with other environmental and contextual data to generate insights. The following are commonly used AI models and how they could be applied in the insurance industry:

  • Decision trees are used in parametric insurance. The model compares sensor data with predefined parametric thresholds to trigger coverages or settle claims. For example, based on the GPS location of a ship, a marine insurer can invoke specific coverages. Similarly, a property and casualty (P&C) insurer can settle flood claims in real time using the water levels reported by an IoT sensor (see the parametric-trigger sketch after this list). Vehicles with auto-braking safety features use regression models to calculate the braking distance based on speed; decision trees compare this with the distance to pedestrians or obstacles reported by IoT sensors and determine when the automatic brakes must be applied to prevent a crash.
  • Computer vision analyzes video footage using convolutional neural networks (CNNs) to classify objects. This can help autonomous cars navigate the road safely and identify distracted drivers. It also can ensure workplace safety by identifying employees working without personal protective equipment (PPE), in restricted areas or in the path of moving vehicles. Boiler machinery operates at certain combinations of temperature, pressure and flow rate, and IoT sensors monitor these parameters. Commercial insurers may use outlier detection models to spot abnormal equipment behavior and trigger preventive maintenance (see the anomaly-detection sketch after this list). Similarly, cyber insurers can monitor network parameters to detect intruders, and health insurers can monitor vital signs to prevent life-threatening ailments.
  • Clustering is another AI technique, based on the premise that objects in the same cluster exhibit similar tendencies and behavior. Once the root cause of an anomaly, breakdown or health incident has been identified for one member, that knowledge can be applied to all objects in the same cluster as a preventive measure (see the clustering sketch after this list). Insurers can recommend preventive maintenance for vehicles and machinery based on IoT data. Similarly, health insurers can alert customers with the same ailment, or within the same demographic cluster, to undergo checkups or visit their physicians based on IoT alerts.
Different models generate different insights, and insurers may use a single model or a combination of models to manage and mitigate risk. These insights usually take the form of warnings and alerts that, aggregated over time, provide the information required to underwrite risks.
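As a concrete illustration of the parametric-trigger idea from the first bullet, the following minimal Python sketch maps an IoT water-level reading to a flood payout using simple decision rules. The thresholds and payout amounts are illustrative assumptions, not real policy terms.

```python
# Parametric-trigger sketch: a simple decision rule compares an IoT water-level
# reading against contract thresholds and decides whether a flood claim can be
# settled automatically. All numbers are made up for illustration.
def flood_payout(water_level_cm: float) -> float:
    """Map a sensor's water-level reading to a parametric payout."""
    if water_level_cm >= 100:   # severe flooding
        return 50_000.0
    if water_level_cm >= 50:    # moderate flooding
        return 20_000.0
    if water_level_cm >= 20:    # minor flooding
        return 5_000.0
    return 0.0                  # below the trigger: no payout

print(flood_payout(65))  # -> 20000.0
```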
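The anomaly-detection sketch below illustrates the outlier-detection idea from the second bullet: it flags a boiler reading whose temperature, pressure or flow rate deviates from normal operation by more than a chosen number of standard deviations. The readings and threshold are assumptions chosen for illustration, not real equipment data.

```python
# Anomaly-detection sketch for boiler sensor data. A new reading is flagged if any
# parameter's z-score against historical "normal" readings exceeds a threshold.
import numpy as np

# Historical readings under normal operation: [temperature, pressure, flow_rate]
normal = np.array([
    [180.0, 10.1, 3.2],
    [182.0, 10.3, 3.1],
    [179.0, 10.0, 3.3],
    [181.0, 10.2, 3.2],
    [180.5, 10.1, 3.2],
])
mean, std = normal.mean(axis=0), normal.std(axis=0)

def is_abnormal(reading, threshold=3.0) -> bool:
    """True if any parameter deviates from normal by more than `threshold` sigmas."""
    z = np.abs((np.asarray(reading) - mean) / std)
    return bool((z > threshold).any())

print(is_abnormal([181.0, 10.2, 3.2]))  # False: consistent with normal operation
print(is_abnormal([183.0, 14.5, 2.1]))  # True: pressure spike, so schedule maintenance
```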
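Finally, the clustering sketch below illustrates the third bullet using k-means from scikit-learn: vehicles are grouped by their telematics readings, and when one vehicle in a cluster develops a fault, its cluster peers are flagged for preventive maintenance. The data and cluster count are illustrative assumptions.

```python
# Clustering sketch: group vehicles by usage profile, then apply the lesson learned
# from one vehicle's breakdown to its peers in the same cluster.
import numpy as np
from sklearn.cluster import KMeans

# Telematics summary per vehicle: [monthly_mileage, avg_engine_temp_C]
vehicles = np.array([
    [400, 88], [450, 90], [420, 89],        # light urban use
    [2500, 104], [2600, 106], [2400, 105],  # heavy long-haul use
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vehicles)

faulty_vehicle = 4  # index of the vehicle that broke down
peers = np.flatnonzero(labels == labels[faulty_vehicle])
print(peers)  # the heavy-use cluster (e.g., [3 4 5]): recommend preventive maintenance
```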

The Downside of AI and IoT: Impact on Insurers

Like any technology, AI and IoT have their pros and cons. While these technologies can play a key role in mitigating and preventing risks, carriers should be aware of their shortcomings and the increasing ethical concerns related to their usage. A potential downside, for example, is that criminals could compromise these technologies and use them for illegal and unethical purposes.

Shortcomings of IoT Devices

Studies have found that most IoT devices on the market communicate without encryption. Many of them use default passwords published in the operating manual and provide no mechanism for software updates. This makes it easy for cybercriminals to gain persistent access to, and control over, the device and its data. The lack of adequate security in IoT devices likely will lead to breaches of sensitive data and violations of privacy.

Compromised IoT devices could have a drastic impact on the insurance industry. The data transmitted from the sensors could be manipulated so that it is no longer accurate. As a result, many of the control mechanisms used to identify and prevent risks would no longer function as intended, which could lead to accidents and loss of life. Failure to detect abnormal behavior in boiler machinery because its IoT sensors have been compromised is a classic example. Similarly, compromising the IoT sensors that detect flood or fire in a smart home could lead to extensive damage to the property.

Manipulated data from compromised IoT devices also can result in incorrect risk assessments that understate the potential risk, leading to lost premium for insurers. A hacked boiler could report lower readings for critical parameters, falsely indicating that the equipment is safe. In the same manner, manipulated IoT devices could overstate damage, leading to larger claim payouts or false claims being honored. Parametric insurance products such as crop damage and flood insurance could end up paying false claims if they relied solely on IoT devices that can be hacked.

Shortcomings of AI Algorithms

Algorithmic bias is the most common shortcoming of present-day AI models. A model is said to be biased when its decision or outcome favors or discriminates against a specific class of individuals based on one or more protected attributes (such as race, color, religion or gender). An AI model does not have any inherent bias toward a specific class; the bias originates from the data on which the model is trained. Incomplete training data and uneven representation of a population in the training data are some of the key causes of AI bias.
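One simple, if partial, safeguard that follows from this observation is to audit how evenly each class of a protected attribute is represented in the training data before a model is trained. The minimal Python sketch below shows such a check on a hypothetical data set; the column names and values are assumptions for illustration only.

```python
# Representation audit sketch: check how evenly a protected attribute is represented
# in the training data. A heavily skewed split is one warning sign that the model's
# outputs may be less reliable for the under-represented group.
import pandas as pd

training_data = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "M", "M", "F", "M", "M"],
    "resting_heart_rate": [62, 70, 68, 72, 75, 69, 71, 64, 73, 70],
})

# Prints the share of each class (here 80% M vs. 20% F), flagging the imbalance
# before any model is fit on the data.
print(training_data["gender"].value_counts(normalize=True))
```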

In the future, health insurers may choose to rely on smartwatches and fitness bands to understand the health and well-being of individuals. AI models would determine risk exposure based on the stats from these IoT devices, which in turn would determine the premiums. If a model has not been trained on a specific parameter, such as heart rate, it will not detect any risks associated with that parameter during evaluation. Likewise, an AI model trained only on a specific class of people in specific regions will not be accurate for other classes and regions. This could lead to incorrect underwriting of risks and discrimination in premiums for specific classes.

Adversarial AI is another technique cybercriminals use to mislead computer vision driven by deep learning models. Images are slightly tweaked or injected with noise so that CNN models misclassify them or fail to detect the objects they contain. Studies have shown that computer vision used in autonomous vehicles can fail to detect a stop sign that has been tampered with in this way. Similarly, computer vision used to underwrite properties or assess claim damage based on satellite imagery could be compromised. Such attacks could result in financial losses and reputation damage for the insurer, as well as long and involved legal battles.
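The following minimal Python sketch shows one well-known form of this attack, the fast gradient sign method (FGSM), assuming a PyTorch image classifier. The model, image and label are hypothetical placeholders; the point is only that a pixel-level perturbation too small for a human to notice can be enough to change a CNN's prediction.

```python
# Fast gradient sign method (FGSM) sketch. `model`, `image` (a batched tensor with
# pixel values in [0, 1]) and `label` are assumed to exist; this is an illustration
# of the technique, not an attack recipe tied to any specific system.
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most `epsilon`, so the change is nearly invisible to a
    # human observer but can flip the classifier's output.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```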

The Need to Regulate AI and IoT

AI and IoT offer numerous potential benefits to the insurance industry. But when these technologies fall into the wrong hands, the potential harm can outweigh the benefits: consumers could lose trust and begin to question how the technologies are being used, or whether the promised benefits exist at all. Hence, regulation is required to prevent the misuse of technology for illegal and unethical purposes. Regulation brings accountability and ensures that technology complies with standards and guidelines, which in turn instills consumer confidence. Governments around the world have introduced regulations to improve the security of IoT devices and ensure the privacy of the data they create and transmit.

IoT and AI Regulation in the United States

The United States does not have a comprehensive national framework governing the security of IoT devices. The IoT Cybersecurity Improvement Act of 2020 empowers the National Institute of Standards and Technology (NIST) to set security standards for connected devices used by the federal government. California's 2018 IoT Security Law is the first IoT-specific regulation in the country. It forbids generic default passwords and mandates that each device ship with a unique password or require the consumer to create one during setup.

An “AI Bill of Rights” would ensure transparency and clear explanations of AI models' decision-making processes, making them more accountable. State-level efforts, such as California's proposed Automated Decision Systems Accountability Act and Illinois' Artificial Intelligence Video Interview Act, strive to eliminate bias from AI models. The National Association of Insurance Commissioners (NAIC) and the Federal Trade Commission (FTC) also have released guidelines aimed at transparency, fairness, equity, accountability and security in organizations' use of AI.

Conclusion

The true value of IoT doesn’t lie in obtaining the right data but in the ability to analyze that data at the right time to make the right decisions. The combination of AI and IoT could provide real-time actionable insights to help insurers underwrite new business and settle claims instantly. Most importantly, they could help insurers prevent accidents and save lives. At the same time, insurers should be cognizant of the damage these technologies could cause if misused. Regulation should mandate compliance with cybersecurity frameworks to prevent the technology from falling into the wrong hands or exhibiting bias. It should not be viewed as a deterrent to technology adoption but as a means to prevent harm to consumers and, more importantly, earn their trust so the technology can continue to serve humankind.

Amarnath Suggu is a senior consultant in the BFSI CTO unit at Tata Consultancy Services Ltd. and is based in Chennai, India.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2022 by the Society of Actuaries, Schaumburg, Illinois.