Technology is changing the face of competition in the insurance industry. Executives don’t have to look far to see digital at work. Today’s emerging insurtech companies are using technology and data to automate processes, operate more efficiently, and provide fast, innovative service to customers. Some of these digital natives focus on customer-facing processes: Friendsurance, for example, uses a peer-to-peer model to connect groups of customers and facilitate annual cashback payments. Others provide back-end solutions to established companies: TrueMotion offers a platform that collects smartphone data to assess driver behavior and risk levels, which is embedded in some mainstream insurers’ products.
Industry executives recognize this shift. In our recent research exploring how insurers are applying digital technology, more than 85% of respondents said they’re investing in their digital agendas. While the range of digital initiatives in the industry is impressive, it is also deceptive. Some insurers are moving ahead rapidly, but many are struggling to gain traction with digital transformation.
Our study found that virtually all insurers are exploring digital technology, but they vary widely in digital maturity. About half are taking a limited approach, using digital at the business-unit or departmental level. Some insurers are just beginning to move toward an enterprise-wide approach, and another quarter are “dabblers,” organizations essentially pursuing a wait-and-see strategy.
Insurers struggle to scale their digital initiatives for a variety of reasons, but one key reason is their data capabilities, which too often cannot keep up with growing data demands.
Large volumes of meaningful data are the raw material of the digital revolution. Sound data management has become a key factor in the insurance industry — especially in the use of analytics to develop insights into market trends and customer preferences. Insurers’ growing interest in AI is making sound data even more critical. AI has the potential to transform everything from product development to underwriting and claims processing, and it is opening the door to innovations such as instantly customizable life insurance and on-demand property coverage.
In this AI-enabled age, the role and management of data shift dramatically. Certainly, the insurance industry has a long history of managing large amounts of data. Traditionally, that has meant keeping records, administering contracts and analyzing what has happened in the business: data was used to track performance. Now, however, it is used to drive performance. This means that data must be treated as a perishable asset that is gathered, understood and made available to business processes dynamically and quickly, even in real time.
Data’s traditional flow has been fairly straightforward and linear, moving from source to data warehouse to reporting system. Now, data needs to move freely to a variety of people and systems across the company. The systems most insurers have used to manage data are not up to that task. Data is typically siloed and stored across different systems in different formats — a problem that has been aggravated by mergers and acquisitions that leave insurers with disparate collections of legacy technologies.
These problems are further complicated by ongoing growth in the volumes and types of structured and unstructured data that insurers have at hand. Today, they can draw on data flowing from online interactions, smartphones, wearable fitness trackers, connected homes, video images, inspection drones, telematics devices in vehicles, mapping and environmental satellites, and anthropological and psychological profiles of customers, to name a few sources. Not surprisingly, in our survey of insurance executives in the U.S. and Europe, the most commonly cited obstacle to the successful implementation of AI-driven capabilities in business functions was a lack of accurate and timely data. Insurers need to rethink the way they manage data, and create new data foundations.
Building a new data foundation should be approached as a strategic initiative sponsored by executives and business teams, rather than technology and data architecture teams. It should look beyond point solutions, fragmented programs and the idea that the company needs to create another monolithic platform.
Instead, companies should adopt a structured approach to transforming the ways they source, interpret and consume data to consolidate disparate data sources and support data modernization. This more flexible and loosely coupled architecture uses “fit for purpose” storage, compute and distribution strategies, while leveraging the power of machine learning (ML) to accelerate the process of drawing actionable insights from data.
Each business will have its own needs and goals when rethinking its approach to data. But we have found that seven key principles can guide the development of a new, more adaptive data foundation that is ready for AI. As they work on data architectures, insurers should consider these principles:
The data architecture should enable the on-demand performance of computations; allow the business to use data without needing to check with IT; and use cloud technology to enable the organization to scale up when additional computing horsepower is needed — and scale down when it isn’t.
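The scale-up/scale-down idea can be illustrated with a toy sketch. In practice this role is played by cloud autoscaling services rather than application code; the function below, with its invented parameter names and thresholds, simply shows how capacity can track demand rather than being fixed in advance.

```python
# Toy sketch of on-demand compute sizing: the worker count tracks the
# workload, scaling up when demand rises and back down when it falls.
# All parameter names and limits here are illustrative assumptions.
def target_workers(queued_jobs: int, jobs_per_worker: int = 10,
                   min_workers: int = 1, max_workers: int = 50) -> int:
    """Compute how many workers the current queue depth justifies."""
    needed = -(-queued_jobs // jobs_per_worker)  # ceiling division
    # Never fall below the floor or exceed the cloud budget ceiling.
    return max(min_workers, min(needed, max_workers))

# Quiet periods shrink the footprint; bursts expand it up to the cap.
idle, busy, surge = target_workers(0), target_workers(95), target_workers(10_000)
```

Here `target_workers(0)` falls back to the one-worker floor, a 95-job queue justifies ten workers, and a surge is capped at the fifty-worker ceiling.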
The architecture should address different shapes and granularities of data — such as transactions, logs, geospatial information, sensor readings and social media data — and handle data in real time as much as possible.
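One common way to handle such differently shaped sources is to map each of them onto a shared event envelope at ingestion. The sketch below assumes hypothetical telematics and claims-log formats; the field names and envelope are illustrative, not from any specific insurer's architecture.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class Event:
    source: str              # e.g. "telematics", "claims-log"
    timestamp: datetime      # when the event occurred
    payload: dict            # source-specific fields, kept as-is

def normalize_telematics(raw: dict) -> Event:
    """Map a raw telematics reading onto the shared envelope."""
    return Event(
        source="telematics",
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        payload={"speed_kmh": raw["speed"], "lat": raw["lat"], "lon": raw["lon"]},
    )

def normalize_log_line(raw: str) -> Event:
    """Map a pipe-delimited application log line onto the same envelope."""
    ts, level, message = raw.split("|", 2)
    return Event(
        source="claims-log",
        timestamp=datetime.fromisoformat(ts),
        payload={"level": level, "message": message},
    )

# Two very different inputs end up in one stream with one shape:
stream = [
    normalize_telematics({"ts": 1700000000, "speed": 92.5, "lat": 48.1, "lon": 11.6}),
    normalize_log_line("2023-11-14T22:13:20|INFO|claim 4711 opened"),
]
```

Because every source arrives in the same envelope, downstream analytics and AI consumers can subscribe to one stream instead of integrating each format separately.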
Most enterprises view metadata extraction as an afterthought, typically driven by compliance. However, metadata is much easier to manage early in the process than later, and it has value far beyond compliance. For example, by cataloging their metadata, companies can create a library of data sets that everyone in the organization can access, enabling wider use of insight generation and AI throughout the enterprise.
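Capturing metadata "early" can mean making it a side effect of ingestion itself, as in this minimal sketch. The catalog class and its data sets are hypothetical; the point is that schema and lineage facts are recorded in the same step that loads the data, and become searchable for everyone.

```python
from datetime import datetime, timezone

class DataCatalog:
    """Minimal illustrative catalog: metadata is extracted at ingestion,
    not reconstructed later for a compliance exercise."""

    def __init__(self):
        self._entries = {}

    def register(self, name: str, rows: list) -> dict:
        """Ingest a data set and extract its metadata in the same step."""
        columns = sorted({key for row in rows for key in row})
        entry = {
            "name": name,
            "columns": columns,
            "row_count": len(rows),
            "registered_at": datetime.now(timezone.utc),
        }
        self._entries[name] = entry
        return entry

    def search(self, column: str) -> list:
        """Let anyone in the organization find data sets by column name."""
        return [n for n, e in self._entries.items() if column in e["columns"]]

catalog = DataCatalog()
catalog.register("auto_claims", [
    {"claim_id": 1, "amount": 1200.0},
    {"claim_id": 2, "amount": 340.0, "fraud_flag": True},
])
catalog.register("policies", [{"policy_id": "P-9", "holder": "A. Kim"}])
```

With this in place, a data scientist looking for fraud-related inputs can search the catalog by column name instead of asking around about which silo holds what.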
Platforms have three layers of data: raw, curated and consumption. Traditional architectures typically grant access only to the consumption layer. However, data scientists often like to examine raw data for overlooked elements that may generate more information — so it’s important that all layers are exposed and open for access.
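The three layers, and why exposing all of them matters, can be shown in a small sketch. The records and cleaning rules below are invented; the key point is the last step, where a data scientist inspects raw records that the curation step discarded.

```python
# Raw layer: data exactly as it arrived, warts and all.
raw_layer = [
    {"claim_id": "C1", "amount": "1,200.00", "notes": "rear-end collision "},
    {"claim_id": "C2", "amount": "bad-value", "notes": ""},
]

def curate(record: dict):
    """Clean one raw record; drop it if the amount cannot be parsed."""
    try:
        amount = float(record["amount"].replace(",", ""))
    except ValueError:
        return None
    return {"claim_id": record["claim_id"], "amount": amount,
            "notes": record["notes"].strip()}

# Curated layer: validated, consistently typed records.
curated_layer = [r for r in (curate(rec) for rec in raw_layer) if r is not None]

# Consumption layer: an aggregate view shaped for reporting.
consumption_layer = {
    "total_amount": sum(r["amount"] for r in curated_layer),
    "claim_count": len(curated_layer),
}

# Because the raw layer stays accessible, a data scientist can still
# examine what curation threw away -- possibly a signal, not just noise.
rejected = [r for r in raw_layer if curate(r) is None]
```

An architecture that exposed only `consumption_layer` would hide the rejected record entirely; keeping all three layers open preserves that raw material for later analysis.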
Companies will need to integrate new data sources quickly in order to keep relevant data flowing to analytics and AI applications. However, mapping data to target usage environments is still a largely manual process. That can be addressed by using ML to automatically detect changes in incoming data and adjust integration patterns.
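As a simplified illustration of that idea, the sketch below detects an unexpected incoming field and adapts the integration mapping automatically. A production system might use a trained ML model to classify new fields; here a simple name-similarity heuristic from the standard library stands in for that model, and all field names are hypothetical.

```python
import difflib

# The fields the target analytics environment expects (illustrative).
EXPECTED_FIELDS = {"policy_id", "premium", "start_date"}

def detect_and_adapt(record: dict, mapping: dict) -> dict:
    """Pass known fields through; route unknown fields to the expected
    field they most resemble, recording the learned mapping."""
    adapted = {}
    for field, value in record.items():
        if field in EXPECTED_FIELDS:
            adapted[field] = value
        else:
            # An upstream rename (e.g. "premium_amt") is matched back to
            # its closest expected field instead of breaking the pipeline.
            match = difflib.get_close_matches(field, EXPECTED_FIELDS,
                                              n=1, cutoff=0.6)
            if match:
                mapping[field] = match[0]
                adapted[match[0]] = value
    return adapted

learned_mapping = {}
incoming = {"policy_id": "P-7", "premium_amt": 480.0, "start_date": "2024-01-01"}
clean = detect_and_adapt(incoming, learned_mapping)
```

When a source system renames `premium` to `premium_amt`, the mapping is adjusted on the fly rather than waiting for a manual remapping exercise.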
Feature engineering transforms data into consumable forms and shapes that ML models can use. Features describe data points and serve as inputs into the learning system, so they need to be as precise as possible. Careful feature engineering is key to making ML accessible broadly within the business.
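A concrete, if simplified, example: the sketch below derives model-ready features from a raw policy record. The input fields and the specific features are assumptions for illustration, but they show the transformation the paragraph describes, including why precision (normalizing by tenure, encoding categories) matters.

```python
from datetime import date

def engineer_features(policy: dict, today: date) -> dict:
    """Derive model-ready features from one raw policy record.
    Field names here are invented for illustration."""
    start = date.fromisoformat(policy["start_date"])
    tenure_years = (today - start).days / 365.25
    return {
        # Tenure in years is a more learnable signal than a raw date string.
        "tenure_years": tenure_years,
        # Claims per year normalizes claim counts across tenure lengths,
        # so a long-held policy isn't penalized for having more history.
        "claims_per_year": policy["claim_count"] / max(tenure_years, 1.0),
        # Encode a categorical coverage field as a 0/1 model input.
        "is_comprehensive": 1 if policy["coverage"] == "comprehensive" else 0,
    }

features = engineer_features(
    {"start_date": "2020-01-01", "claim_count": 3, "coverage": "comprehensive"},
    today=date(2024, 1, 1),
)
```

Each output is a precise numeric input a learning system can consume directly, which is the sense in which careful feature engineering makes ML broadly usable.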
Companies often rely on complex, hybrid environments that blend cloud-based and on-premises services, with data scattered in various locations and used by a variety of individuals and systems. A unified security approach lets companies consider security from the point that data is produced to all points of consumption and cycles of enrichment.
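One way to read "unified security" is a single policy decision point that every read path calls, regardless of where the data physically lives. The sketch below is a deliberately minimal stand-in for that idea; the roles, data sets and rules are all invented.

```python
# One access policy, consulted by every consumer of every store,
# whether the store is on-premises or in the cloud (illustrative rules).
POLICY = {
    "claims_raw": {"data-scientist", "claims-adjuster"},
    "customer_pii": {"compliance-officer"},
}

def can_access(role: str, dataset: str) -> bool:
    """Single policy decision point shared by all read paths."""
    return role in POLICY.get(dataset, set())

def read(dataset: str, role: str, stores: dict):
    """Every read, from any storage location, passes the same check
    before any data is returned."""
    if not can_access(role, dataset):
        raise PermissionError(f"{role} may not read {dataset}")
    return stores[dataset]

# The stores could be a cloud object bucket and an on-prem database;
# the enforcement logic does not care where the bytes live.
stores = {"claims_raw": ["claim records"], "customer_pii": ["customer records"]}
```

Centralizing the decision this way means a rule change (say, tightening access to personally identifiable information) takes effect at every point of consumption at once, rather than being re-implemented per system.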
Adherence to these principles will allow insurers to revamp their approach to data, positioning themselves for success in a world in which AI will only grow more important.