Data has always been an important asset in the insurance industry, which is largely built on algorithms and financial models. Today, that is truer than ever. Analyzed well, data provides deeper business insights and the ability to target the right customers through the right channels with the right offerings in the right sequence. Data has also opened the door to digital natives and “insurtechs” that bring innovative ideas to improve the customer journey.
The rapid ascent of artificial intelligence (AI) is increasing the importance of data across the industry. AI can bring greater efficiency and innovation to the entire insurance value chain, from customer acquisition to claims processing, and for all stakeholders, including customers, agents and employees.
However, effective AI depends on current, accurate and relevant data, and in our experience this is where mainstream insurers often fall short. They struggle with data governance and management, and are therefore constrained in their ability to leverage AI to inform better decision-making. Legacy systems, siloed and inconsistent data, and growing data volumes all make it difficult to manage and harness data effectively. The increased importance of nontraditional data from third-party sources creates an additional challenge, as insurers often find it difficult to align this information with their existing data stores.
To address these challenges, insurers must fundamentally rethink the technology foundations that underpin their data-management efforts. To that end, we have drawn on knowledge and experience from our work with major industry clients to develop a set of core concepts and key principles. For a deeper exploration, see our white paper, “Modernizing Insurance Data to Drive Intelligent Decisions.”
The following core concepts offer a valuable framework for insurers as they shape new approaches to their data:
Data architectures are often rigid and hardwired, built around inflexible data warehouses and fragile legacy source systems. They were not built for robust data curation or architected with today’s data needs in mind, which makes it difficult to bring in new and varied types of data and use them to develop insights, a critical ability in digitally enabled operations. Next-generation data architectures can simplify, augment and transform the data landscape, enabling insurers to harness existing data more efficiently; draw in different types of data; and quickly deliver data in a suitable form to applications and business processes. Ingestion is particularly challenging for nontraditional data originating from sources such as wearables, building sensors and drones.
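To make the idea concrete, the sketch below shows one way a schema-on-read ingestion layer might normalize heterogeneous feeds into a common event shape, so new sources can plug in without reworking downstream consumers. It is a minimal illustration in Python; the event fields, source names and raw payload formats are assumptions for the example, not part of any specific platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Dict

# Hypothetical common event shape; field names are illustrative, not a standard.
@dataclass
class NormalizedEvent:
    source: str          # e.g., "wearable", "building_sensor", "drone"
    entity_id: str       # policyholder, property, or asset identifier
    observed_at: datetime
    metrics: Dict[str, float]

# Registry of per-source normalizers: new feeds register here without
# requiring changes to applications that consume NormalizedEvent.
NORMALIZERS: Dict[str, Callable[[dict], NormalizedEvent]] = {}

def normalizer(source: str):
    def register(fn: Callable[[dict], NormalizedEvent]):
        NORMALIZERS[source] = fn
        return fn
    return register

@normalizer("building_sensor")
def from_building_sensor(raw: dict) -> NormalizedEvent:
    return NormalizedEvent(
        source="building_sensor",
        entity_id=raw["sensor_id"],
        observed_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        metrics={"temperature_c": float(raw["temp"]),
                 "humidity_pct": float(raw["rh"])},
    )

@normalizer("wearable")
def from_wearable(raw: dict) -> NormalizedEvent:
    return NormalizedEvent(
        source="wearable",
        entity_id=raw["user"],
        observed_at=datetime.fromisoformat(raw["time"]),
        metrics={"steps": float(raw["steps"])},
    )

def ingest(source: str, raw: dict) -> NormalizedEvent:
    return NORMALIZERS[source](raw)

if __name__ == "__main__":
    event = ingest("building_sensor",
                   {"sensor_id": "B-17", "ts": 1700000000, "temp": 21.5, "rh": 40.0})
    print(event)
```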
We have developed solutions and accelerators such as our Customer Journey AI platform, which sifts through unstructured data from local governments to help business teams understand household composition and risk characteristics and identify upselling opportunities for coverage. Because the platform works with such data sets directly, there is no need to involve the IT team to integrate them, eliminating a step that might otherwise take weeks or months.
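The platform’s internals are beyond the scope of this article, but a toy example conveys the flavor of turning unstructured records into upselling signals. The record text, extraction rules and coverage suggestions below are entirely hypothetical; a production system would rely on NLP models rather than simple pattern matching.

```python
import re

# Entirely hypothetical record text; real inputs would be far messier.
RECORD = ("Parcel 44-0211: single-family dwelling, 4 bedrooms, 2 occupants "
          "listed, detached garage, built 1987.")

def extract_household_signals(text: str) -> dict:
    """Pull simple household and risk signals out of free text."""
    signals = {"detached_garage": "detached garage" in text}
    if m := re.search(r"(\d+)\s+bedrooms?", text):
        signals["bedrooms"] = int(m.group(1))
    if m := re.search(r"(\d+)\s+occupants?", text):
        signals["occupants"] = int(m.group(1))
    return signals

def upsell_suggestions(signals: dict) -> list:
    """Map extracted signals to illustrative coverage conversations."""
    suggestions = []
    if signals.get("detached_garage"):
        suggestions.append("review other-structures coverage")
    if signals.get("bedrooms", 0) > signals.get("occupants", 0):
        suggestions.append("ask about rental or home-sharing use")
    return suggestions

print(upsell_suggestions(extract_household_signals(RECORD)))
```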
Traditional data-management processes are not designed to handle dynamic data and changing business demands. Managing metadata, data quality, security and regulatory compliance is labor-intensive, and these processes often can’t keep up with changing data sources and applications. With third-party data, insurers face the additional challenge of incorporating data in three layers: a raw layer (data landed as-is), a curated layer (data cleaned and organized for improved consumption) and a consumption layer (an interface that gives applications access to the data). These obstacles can be overcome by streamlining and automating many processes, especially time-consuming tasks such as reconciling entries across multiple data sources.
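The sketch below illustrates those three layers with pandas, along with an automated reconciliation check against an internal policy master. The records, column names and matching rule are invented for the example; the point is that the cleanup and cross-source comparison run as code rather than as manual effort.

```python
import pandas as pd

# Raw layer: third-party data landed as-is (illustrative records).
raw = pd.DataFrame([
    {"policy": "P-100 ", "loss_usd": "1200", "state": "ny"},
    {"policy": "P-101",  "loss_usd": None,   "state": "NY"},
    {"policy": "P-100 ", "loss_usd": "1200", "state": "ny"},  # duplicate feed entry
])

# Curated layer: cleaned, typed and deduplicated for reliable consumption.
curated = (
    raw.assign(policy=raw["policy"].str.strip(),
               state=raw["state"].str.upper(),
               loss_usd=pd.to_numeric(raw["loss_usd"], errors="coerce"))
       .drop_duplicates()
)

# Automated reconciliation: compare the feed against an internal policy
# master and surface mismatches instead of reconciling them by hand.
policy_master = pd.DataFrame([{"policy": "P-100", "state": "NY"},
                              {"policy": "P-102", "state": "CA"}])
recon = curated.merge(policy_master, on="policy", how="outer",
                      suffixes=("_feed", "_master"), indicator=True)
unmatched = recon[recon["_merge"] != "both"]

# Consumption layer: a simple access interface for downstream applications.
def losses_by_state() -> pd.Series:
    return curated.groupby("state")["loss_usd"].sum()

print(losses_by_state())
print(unmatched[["policy", "_merge"]])
```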
This approach helps insurers rapidly tap their data stores to deliver actionable information and insights, which in turn helps them respond to change. For example, as business conditions and data sets evolve over time, our Learning Evolutionary Algorithm Framework can help avoid “model drift” by dynamically identifying the relative importance of the most predictive variables and factors, enabling insurers to proactively adjust their models to maintain accuracy.
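The framework itself is proprietary, but the general idea of watching for drift in variable importance can be sketched with standard tools. The example below fits a random forest on an older and a newer window of synthetic data and flags features whose importance has shifted beyond a cutoff; the feature names, data-generating process and threshold are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["vehicle_age", "annual_mileage", "prior_claims", "credit_tier"]

def make_window(mileage_weight: float, n: int = 2000):
    X = rng.normal(size=(n, len(features)))
    # Synthetic loss signal; the driver of losses shifts between windows.
    y = 1.5 * X[:, 0] + mileage_weight * X[:, 1] + 0.2 * rng.normal(size=n)
    return X, y

def importances(X, y):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    return model.feature_importances_

old = importances(*make_window(mileage_weight=0.3))  # older window
new = importances(*make_window(mileage_weight=2.0))  # recent window

DRIFT_THRESHOLD = 0.15  # arbitrary illustrative cutoff
for name, before, after in zip(features, old, new):
    if abs(after - before) > DRIFT_THRESHOLD:
        print(f"drift alert: importance of {name} moved {before:.2f} -> {after:.2f}")
```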
The processes used to develop and modify data-management systems have not kept pace with the advances that have transformed application development, limiting insurers’ ability to change and improve. Insurers can take advantage of advanced delivery methods, such as Agile, DevOps, DataOps and MLOps, to optimize and simplify processes. Asset-based development models can enable standardization and the efficient reuse of solution components. And continuous integration/continuous delivery (CI/CD) techniques can ensure that new capabilities are quickly and reliably incorporated into systems.
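As a small illustration of what DataOps automation can look like inside a CI/CD pipeline, the pytest-style checks below encode data-quality rules that would run automatically on every change. The table, columns and rules are hypothetical; in practice the tests would read from the curated layer rather than an in-memory stand-in.

```python
# Illustrative DataOps checks: run by the CI/CD pipeline on every change,
# so data-quality rules are verified continuously rather than by hand.
import pandas as pd

def load_claims() -> pd.DataFrame:
    # Stand-in for reading from the curated layer (hypothetical source).
    return pd.DataFrame([
        {"claim_id": "C-1", "policy": "P-100", "paid_usd": 1200.0},
        {"claim_id": "C-2", "policy": "P-101", "paid_usd": 0.0},
    ])

def test_claim_ids_unique():
    assert load_claims()["claim_id"].is_unique

def test_paid_amounts_non_negative():
    assert (load_claims()["paid_usd"] >= 0).all()

def test_required_columns_present():
    expected = {"claim_id", "policy", "paid_usd"}
    assert expected <= set(load_claims().columns)
```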
These approaches can dramatically reduce time-to-market, in effect enabling the data organization to release new capabilities almost continuously. Uber, for example, can support millions of analytical queries each week.
Each company will have its own needs and goals when rethinking its approach to data. But we have found that seven key principles can guide the development of a new, more adaptive data foundation that is ready for AI, and insurers should weigh them as they work on their data architectures.
For more information, read our white paper, “Modernizing Insurance Data to Drive Intelligent Decisions,” visit the Insurance section of our website or contact us.