To address these challenges, insurers must fundamentally rethink the technology foundations that underpin their data-management efforts. Drawing on knowledge and experience gained working with major industry clients, we have developed a set of core concepts and key principles. For an even deeper exploration, see our white paper, “Modernizing Insurance Data to Drive Intelligent Decisions.”
Three core concepts
The following core concepts offer a valuable framework for insurers as they shape new approaches to their data:
1 Employ a responsive data architecture.
Data architectures are often rigid and hardwired, built around inflexible data warehouses and fragile legacy source systems. They are not designed for robust data curation or architected with today’s data needs in mind. This makes it difficult to bring in new and varied types of data and use them to develop insights, a critical ability in digitally enabled operations. Ingestion is particularly challenging for nontraditional data originating from sources such as wearables, building sensors and drones. Next-generation data architectures can simplify, augment and transform the data landscape, enabling insurers to harness existing data more efficiently; draw in different types of data; and quickly deliver data in a suitable form to applications and business processes.
We have developed solutions and accelerators such as our Customer Journey AI platform, which sifts through unstructured data from local governments to help business teams understand household composition and risk characteristics and identify upselling opportunities for coverage. Business teams can do this without asking IT to integrate the underlying data sets, eliminating a step that might otherwise take weeks or months.
2 Leverage intelligent data management.
Traditional data management processes are not designed to handle dynamic data and changing business demands. Managing metadata, data quality, security and regulatory compliance is labor-intensive, and these processes often can’t keep up with changing data sources and applications. With third-party data, insurers face the additional challenge of incorporating data in three layers: a raw layer, a curated layer (cleaned and organized for improved consumption) and a consumption layer (one with an interface for access to the data). These obstacles can be overcome by streamlining and automating many processes, especially such time-consuming tasks as reconciling entries across multiple data sources.
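To make the three-layer pattern concrete, here is a minimal sketch, with hypothetical field names and records, that moves a third-party feed from a raw layer through cleaning into a curated layer and exposes it through a simple consumption interface:

```python
from dataclasses import dataclass

# Raw layer: third-party records land exactly as received, quirks and all.
raw_layer = [
    {"POLICY_NO": " P-1001 ", "premium": "1,250.00", "state": "ny"},
    {"POLICY_NO": "P-1002", "premium": "980.50", "state": "NY"},
]

@dataclass
class CuratedPolicy:
    """Curated layer: cleaned, typed and organized for consumption."""
    policy_no: str
    premium: float
    state: str

def curate(record: dict) -> CuratedPolicy:
    # Standardize identifiers, parse numerics, normalize codes.
    return CuratedPolicy(
        policy_no=record["POLICY_NO"].strip(),
        premium=float(record["premium"].replace(",", "")),
        state=record["state"].upper(),
    )

curated_layer = [curate(r) for r in raw_layer]

def consumption_api(state: str) -> list[CuratedPolicy]:
    """Consumption layer: a stable interface for analysts and applications."""
    return [p for p in curated_layer if p.state == state]

print(consumption_api("NY"))  # both records, now consistently keyed and typed
```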
This approach helps insurers rapidly tap their data stores to deliver actionable information and insights, which in turn helps them respond to change. For example, as business conditions and data sets evolve, our Learning Evolutionary Algorithm Framework can help avoid “model drift” by dynamically identifying the relative importance of the most predictive variables and factors, enabling insurers to proactively adjust their models to maintain accuracy.
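The framework itself is proprietary, but a common building block for catching model drift is comparing a variable’s distribution at training time against its live distribution, for instance with a population stability index (PSI). Below is a minimal sketch of that generic check, not the framework’s method, using illustrative data:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of one model input; a larger PSI means more drift."""
    # Bin edges come from the training-time (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(50, 10, 10_000)  # e.g., policyholder age when the model was built
current = rng.normal(55, 12, 10_000)   # the live population has shifted

psi = population_stability_index(training, current)
# A common rule of thumb: PSI above 0.25 signals the model needs attention.
print(f"PSI = {psi:.3f}")
```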
3 Enable delivery at scale.
The processes used to develop and modify data management systems have not kept pace with the advances that have transformed application development, limiting the ability to change and improve. Insurers can take advantage of modern delivery methods, such as Agile, DevOps, DataOps and MLOps, to optimize and simplify these processes. Asset-based development models can enable standardization and the efficient reuse of solution components. And continuous integration/continuous delivery (CI/CD) techniques can ensure that new capabilities are quickly and reliably incorporated into systems.
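As one concrete DataOps example, a CI/CD pipeline can run automated data-quality gates before a new feed or transformation is promoted. A minimal sketch in plain Python, with a hypothetical feed and rules:

```python
def check_policy_feed(rows: list[dict]) -> list[str]:
    """Data-quality gate: return a list of failures; an empty list means promote."""
    failures = []
    ids = [r.get("policy_no") for r in rows]
    if len(ids) != len(set(ids)):
        failures.append("duplicate policy_no values")
    for i, r in enumerate(rows):
        if r.get("premium") is None or r["premium"] < 0:
            failures.append(f"row {i}: missing or negative premium")
    return failures

# In CI, a non-empty result fails the build and blocks the release.
sample = [{"policy_no": "P-1", "premium": 100.0},
          {"policy_no": "P-2", "premium": -5.0}]
assert check_policy_feed(sample) == ["row 1: missing or negative premium"]
```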
These approaches can dramatically reduce time to market for new capabilities and, in effect, enable the data organization to release them almost continuously. Uber, for example, supports millions of analytical queries every week.
Principles for moving forward
Each company will have its own needs and goals when rethinking its approach to data. But we have found that seven key principles can guide the development of a new, more adaptive data foundation that is ready for AI. As they work on data architectures, insurers should consider these principles:
- Plan for scale and elasticity. The data architecture should enable the on-demand performance of computations; allow the business to use data without checking with IT; and use cloud technology to enable the organization to scale computing power up or down.
- Build in the ability to ingest all types of data. In addition to internal company data, the architecture should provide the capability to incorporate, and draw insights from, a wide variety of third-party data: social media, Internet of Things devices, wearables, images/videos from drones and medical providers, and more.
- Be metadata-driven from the start. Insurers can often obtain richer analyses and additional context by leveraging their metadata (that is, information about the data they hold). Yet most enterprises treat metadata extraction as an afterthought, typically driven by compliance. Metadata is much easier to manage early in the process than later, and it has value far beyond compliance. For example, by cataloging metadata, companies can create a library of data sets that everyone in the organization can access (see the first sketch after this list).
- Provide open access across all layers. As noted above, platforms have three layers of data: a raw layer, a curated layer and a consumption layer. Traditional architectures typically grant access only to the consumption layer. However, data scientists often want to examine raw data for overlooked elements that could yield additional insight, so all layers should be open for access.
- Enable autonomous data integration. Companies will need to integrate new data sources quickly to keep relevant data flowing to analytics and AI applications. However, mapping data to target usage environments remains a largely manual process. Machine learning (ML) can address this by automatically detecting changes in incoming data and adjusting integration patterns (a simplified sketch follows this list). ML can also support plug-and-play architectures that leverage APIs and API gateways to provide flexibility as alternative data sources evolve.
- Get feature engineering right. Feature engineering transforms raw data into the consumable forms and shapes that ML models can use. Features describe data points and serve as inputs to the learning system, so they must be as precise as possible; the last sketch after this list shows the idea. Careful feature engineering is key to making ML broadly accessible within the business.
- Support a unified security model for data. Companies often rely on complex, hybrid environments that blend cloud-based and on-premises services, with data scattered across locations and used by myriad individuals and systems. A unified security approach lets companies consider security from the point where data is produced through every point of consumption and cycle of enrichment.
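To make the metadata principle concrete, here is a minimal sketch of a catalog that captures metadata at registration time so any team can discover data sets on its own; the fields and data-set names are hypothetical:

```python
from datetime import date

# A minimal data-set catalog: metadata captured at ingestion, not after the fact.
catalog: dict[str, dict] = {}

def register(name: str, owner: str, source: str, tags: list[str]) -> None:
    catalog[name] = {
        "owner": owner,
        "source": source,
        "tags": tags,
        "registered": date.today().isoformat(),
    }

register("claims_2024", owner="claims-ops", source="core_admin_system",
         tags=["claims", "pii"])
register("telematics_trips", owner="pricing", source="vendor_feed",
         tags=["iot", "driving-behavior"])

def search(tag: str) -> list[str]:
    """Let any team discover data sets by tag rather than asking IT."""
    return [name for name, meta in catalog.items() if tag in meta["tags"]]

print(search("iot"))  # ['telematics_trips']
```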
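For the autonomous-integration principle, a production system would use ML to learn and adjust mappings; the sketch below substitutes a simple rule-based comparison to show the shape of the idea: detect how an incoming batch deviates from the registered schema so the integration layer can adapt or alert. The schema and batch are illustrative.

```python
registered_schema = {"policy_no": str, "premium": float, "state": str}

def detect_schema_changes(batch: list[dict]) -> dict[str, list[str]]:
    """Flag added, missing and retyped fields in an incoming batch."""
    incoming = batch[0]  # assume a homogeneous batch for this sketch
    added = [k for k in incoming if k not in registered_schema]
    missing = [k for k in registered_schema if k not in incoming]
    retyped = [k for k, t in registered_schema.items()
               if k in incoming and not isinstance(incoming[k], t)]
    return {"added": added, "missing": missing, "retyped": retyped}

batch = [{"policy_no": "P-1", "premium": "1250.00", "vehicle_vin": "1HGCM82633A"}]
print(detect_schema_changes(batch))
# {'added': ['vehicle_vin'], 'missing': ['state'], 'retyped': ['premium']}
```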
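And to illustrate feature engineering, the last sketch turns raw claim events (hypothetical fields) into precise, model-ready features per policy:

```python
from collections import defaultdict
from datetime import date

raw_claims = [
    {"policy_no": "P-1", "opened": date(2024, 1, 5), "amount": 1200.0},
    {"policy_no": "P-1", "opened": date(2024, 6, 2), "amount": 300.0},
    {"policy_no": "P-2", "opened": date(2023, 11, 20), "amount": 8000.0},
]

def build_features(claims: list[dict], as_of: date) -> dict[str, dict]:
    """Derive per-policy features: claim count, total paid, recency in days."""
    grouped = defaultdict(list)
    for c in claims:
        grouped[c["policy_no"]].append(c)
    features = {}
    for policy, items in grouped.items():
        features[policy] = {
            "claim_count": len(items),
            "total_paid": sum(c["amount"] for c in items),
            "days_since_last_claim": (as_of - max(c["opened"] for c in items)).days,
        }
    return features

print(build_features(raw_claims, as_of=date(2025, 1, 1)))
```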
For more information, read our white paper, “Modernizing Insurance Data to Drive Intelligent Decisions,” visit the Insurance section of our website or contact us.