Actuarial Big Data Application and Practice: Part 1
How actuaries can be successful in the era of big data
November 2022
As new technologies continuously emerge, the concepts and systems surrounding big data keep improving, and its applications have become deeper and more extensive. Industries across many fields have pursued digital transformation and integrated big data into their operations, using it to improve existing business processes, develop new lines of business and more. With larger volumes of data that are richer in useful information, actuaries can perform the actuarial role at a heightened level, improving risk management capabilities and better serving consumers.
This article, Part 1 of a two-part series, analyzes the change in the actuarial mind in the era of big data. Part 2 focuses on the application and practice of actuarial big data based on preliminary exploration and current practice.
Adaptation of the Actuarial Mind in the Era of Big Data
Big data is necessary to meet business management needs. The customer requirements for life insurance in China show a trend toward diversification. Scientific and technological developments, changing customer needs and changes in interest rates have outsized effects on the operation and management of life insurance. As a result, the industry needs to adopt new technologies, such as big data, to improve the level of risk management. To meet the higher requirements for operation and management in this new era, actuaries need to enhance scenario-based, personalized, differentiated and refined product operation and service capabilities.
Data accumulation and technological progress provide objective possibilities. Years of rapid development have accumulated substantial data resources. Improvements in data platforms and technical capabilities have also enabled actuaries to handle large-scale, complex data. With good integration and application, actuaries can make these data play a more significant role and make management more effective.
Actuaries can take advantage of their expertise and wisely use big data resources and technology. The essence of the insurance business is the collection and dispersion of a large number of homogeneous risks, and the underlying actuarial principle is empirical analysis based on the law of large numbers. Actuaries have a natural sensitivity to data; strong skills in data-driven thinking, mathematical analysis and professional modeling; and multidisciplinary professional knowledge. By learning and mastering big data technology, actuaries can comprehensively use their knowledge and skills to better execute on the promise of big data and provide more refined and professional actuarial services.
Initial Exploration and Practice of Actuarial Big Data
Actuarial work has fostered a unique way of thinking that integrates both long- and short-term activity, balancing risk and value. This comprehensive mindset, combined with professional skills, can be applied at the front, middle and back ends of company operations if management expands actuaries’ professional scope. When actuaries gain insight into more scenarios and use their skill sets to carry out relevant work, they can make significant contributions to an operation. The specific process in the application of actuarial big data has four steps:
- Scenario requirement analysis and scheme formulation
- Data processing and feature engineering
- Modeling and evaluation of model performance
- Practical application
Scenario Requirement Analysis and Scheme Formulation
Actuaries need to analyze and evaluate the scenario requirements from a company’s operational and management functions, such as product design, investment management, marketing support, risk management, claims and operation. They mainly consider the following two points based on each part of the work process:
- Whether there is sufficient data to provide basic and/or refined support for the scenario
- Whether, given the observed scenario needs and guided by an asset-liability management concept, actuaries can offer corresponding solutions from the company’s global and long-term perspectives
For example, in empowering differentiated risk prevention and control, actuaries use actuarial big data to make a differentiated evaluation of the risk and control of sales. Then they optimize related policies to reduce potential risks and improve efficiency from the source.
Data Processing and Feature Engineering
After the scenario requirement analysis, it is necessary to collect, organize and explore the available data based on the modeling scheme, business logic and other factors, then process the modeling data and perform feature construction and derivation.
The first step is data preprocessing: cleaning the data; handling missing, abnormal and inconsistent values; and defining the basic analysis unit (for example, using individuals, families or insurance policies as the basic data processing unit).
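A minimal preprocessing sketch of this step is shown below. The column names (`policy_id`, `age`, `premium`) and the imputation choice (median fill) are illustrative assumptions, not the company's actual data dictionary or cleaning rules:

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Clean raw records using the policy as the basic analysis unit."""
    # Deduplicate so each policy appears once.
    df = df.drop_duplicates(subset="policy_id")
    # Treat negative ages as abnormal and mark them missing.
    df.loc[df["age"] < 0, "age"] = np.nan
    # Simple imputation choice: fill missing ages with the median.
    df["age"] = df["age"].fillna(df["age"].median())
    return df

raw = pd.DataFrame({
    "policy_id": ["P1", "P1", "P2", "P3"],
    "age": [35, 35, -1, np.nan],
    "premium": [1200.0, 1200.0, 800.0, 950.0],
})
clean = preprocess(raw)
print(len(clean))             # 3 (duplicate P1 row dropped)
print(clean["age"].tolist())  # [35.0, 35.0, 35.0]
```

In practice the abnormal-value rules and the choice of basic unit (individual, family or policy) would follow the scenario requirements identified in the previous step.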
The second step is feature engineering. Good feature engineering must combine sound business logic with expert experience, and its quality has a great impact on model performance. Actuaries are well versed in the principles and logic of the life insurance business, so they can play an important role here by constructing and deriving features targeted at the scenario requirements.
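The kind of business-logic-driven feature derivation described above might look like the following sketch. The fields, thresholds and the `review_flag` rule are hypothetical examples of encoding actuarial judgment as features, not features from the source:

```python
import pandas as pd

def derive_features(df: pd.DataFrame, as_of: str) -> pd.DataFrame:
    """Derive illustrative features from basic policy fields."""
    out = df.copy()
    out["issue_date"] = pd.to_datetime(out["issue_date"])
    # Policy duration in whole years at the valuation date.
    out["duration_years"] = (pd.Timestamp(as_of) - out["issue_date"]).dt.days // 365
    # Premium burden relative to the sum assured.
    out["premium_ratio"] = out["annual_premium"] / out["sum_assured"]
    # Hypothetical business rule: new, high-ratio policies may merit review.
    out["review_flag"] = (out["duration_years"] < 2) & (out["premium_ratio"] > 0.05)
    return out

policies = pd.DataFrame({
    "policy_id": ["P1", "P2"],
    "issue_date": ["2015-06-01", "2022-01-15"],
    "annual_premium": [1000.0, 6000.0],
    "sum_assured": [100000.0, 100000.0],
})
feats = derive_features(policies, as_of="2022-11-01")
print(feats[["policy_id", "duration_years", "review_flag"]])
```

The value an actuary adds in this step is choosing which derived quantities and flags actually reflect the risk drivers of the scenario, rather than generating features mechanically.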
The third step is modeling data preparation. Different modeling objectives and data characteristics call for different data structure processing methods. For example, a company may previously have used a “full-volume triangle generation method” to generate the data set according to the data characteristics, continuously observing and updating the sample information.
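The source does not detail the “full-volume triangle generation method,” but one common pattern it suggests is expanding each policy into one observation row per elapsed period, so later cohorts contribute fewer rows and the data set takes a triangular shape. The sketch below illustrates that generic pattern only, under that assumption:

```python
import pandas as pd

def expand_to_panel(policies: pd.DataFrame, as_of_year: int) -> pd.DataFrame:
    """Expand each policy into one row per complete observation year.

    A policy issued in year y contributes rows for durations
    1 .. (as_of_year - y); later cohorts contribute fewer rows,
    giving the data set its triangular shape.
    """
    rows = []
    for _, p in policies.iterrows():
        for duration in range(1, as_of_year - p["issue_year"] + 1):
            rows.append({
                "policy_id": p["policy_id"],
                "issue_year": p["issue_year"],
                "duration": duration,
                "observation_year": p["issue_year"] + duration,
            })
    return pd.DataFrame(rows)

policies = pd.DataFrame({
    "policy_id": ["P1", "P2", "P3"],
    "issue_year": [2019, 2020, 2021],
})
panel = expand_to_panel(policies, as_of_year=2022)
print(panel.groupby("policy_id")["duration"].max().tolist())  # [3, 2, 1]
```

Because each new observation year appends a fresh diagonal of rows for every in-force policy, sample information can be continuously observed and updated as the text describes.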
It should be noted that, due to the characteristics of life insurance policies and customers, actuaries need to watch for situations such as explosive growth of the data set and a gradual decline in information entropy. They also need to further study and improve how the model training, validation and test sets are divided, and to evaluate model performance and value reasonably, so that subsequent applications are more scientific and effective.
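One concrete reason the division methods need care: when a policy contributes many rows to the data set, a naive random row split leaks information between partitions. A group-aware split, sketched below with hypothetical fractions, keeps all rows of one policy in a single partition:

```python
import numpy as np
import pandas as pd

def group_split(df, group_col, frac=(0.6, 0.2, 0.2), seed=0):
    """Split so all rows of one group (e.g., one policy) land in a single
    partition, avoiding leakage between training, validation and test sets."""
    rng = np.random.default_rng(seed)
    groups = df[group_col].unique()
    rng.shuffle(groups)
    n_train = int(frac[0] * len(groups))
    n_val = int(frac[1] * len(groups))
    train_g = set(groups[:n_train])
    val_g = set(groups[n_train:n_train + n_val])
    part = df[group_col].map(
        lambda g: "train" if g in train_g else ("val" if g in val_g else "test")
    )
    return df[part == "train"], df[part == "val"], df[part == "test"]

# Ten hypothetical policies, three observation rows each.
panel = pd.DataFrame({
    "policy_id": np.repeat([f"P{i}" for i in range(10)], 3),
    "x": np.arange(30),
})
train, val, test = group_split(panel, "policy_id")
# No policy appears in both training and test data.
print(sorted(set(train["policy_id"]) & set(test["policy_id"])))  # []
```

Depending on the scenario, a time-based split (training on earlier observation years, testing on later ones) may be more appropriate than a random group split; the point is that the split must respect the data's structure.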
This article introduced the exploration of scenario requirements and data and feature processing in the application of actuarial big data. Part 2, to be published next month, will discuss the modeling, application and preliminary thoughts and prospects for the future applications of actuarial big data.
This article mainly summarizes the company’s working practice in actuarial big data. Special thanks to Li Mingguang and Peggy Hou for their guidance and support and Li Xiangye for participating in the discussion.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
Copyright © 2022 by the Society of Actuaries, Schaumburg, Illinois.