Four Ways That Financial Services Firms Can Age Their Data Like Fine Wine
By understanding how data and models mature and perish and focusing on the finest bits and bytes to harvest for improved decision-making, financial services firms can more effectively modernize and monetize the information they collect to stay viable in the hearts, minds and most prized cellars of customers — and grow share of wallet.
To maximize the benefits of your data, you must age it like fine wine, and that does not always mean that longer is better. According to Usual Wines, “Contrary to popular belief, not all wine benefits from aging. In fact, it’s actually a very small percentage that does — a meager 2% of wines produced will be suitable for aging.” Similarly, keeping data too long can spoil your data set: organizations that retain stale data slow down their analytics processes and incur higher storage costs. How long do you keep your data? By eliminating older, irrelevant data, financial services firms can maximize the value of what remains, lower costs and risk, and maintain a data-rich environment that gives them a distinct selling advantage.
Most of our financial services clients understand that their data is a corporate asset and that using this asset helps their organizations make more informed, data-driven decisions. In our experience, financial services firms that value their data are more likely to create new data-driven products that drive profit. For instance, for a merchant services client, we brought together multiple data sources into a “data lakehouse” to build new products and services around loyalty, payment mechanisms, competitive information and next-best actions.
Firms that recognize the importance of data use awards, recognition and financial incentives to encourage internal stakeholders to share data, experiment, collaborate and crowdsource, maturing it into richer information that can be sold and monetized.
These firms should have robust data archival and deletion strategies and procedures to prove that they retain the right data for regulations such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which regulators are more strictly enforcing. Data archiving and deletion also help to prevent data loss, protect from ransomware attacks on live or exposed data and keep data sets smaller and theoretically more manageable.
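A robust archival and deletion strategy can be expressed as explicit, testable rules. Here is a minimal sketch of a retention-policy check; the categories and retention periods are hypothetical, and real periods depend on the regulations that apply to each data set.

```python
from datetime import date

# Hypothetical retention rules, in days, keyed by record category.
# Actual periods depend on the regulations governing each data set.
RETENTION_DAYS = {
    "transaction": 7 * 365,   # long-lived audit data
    "marketing": 2 * 365,     # consent-based marketing data
    "web_analytics": 365,
}

def disposition(category: str, created: date, today: date) -> str:
    """Return 'retain', 'archive' or 'delete' for a record."""
    limit = RETENTION_DAYS.get(category)
    if limit is None:
        return "retain"  # unknown categories are escalated, not deleted
    age = (today - created).days
    if age > limit:
        return "delete"
    if age > limit * 0.8:  # nearing expiry: move to cheaper, colder storage
        return "archive"
    return "retain"

print(disposition("marketing", date(2020, 1, 1), date(2023, 1, 1)))  # delete
```

Codifying dispositions this way makes retention auditable: the firm can show a regulator exactly which rule fired for any record.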
Coates’ Law of Maturity: How wine and data get better with age
To gauge a wine’s aging ability, tasters typically apply Coates’ Law of Maturity. This principle states that a wine remains at its peak for the same length of time that it took to mature.
So the old saying that “wine gets better with age” has an expiration date: flavors, aromas and textures appear and fade over time, rather than in unison. The value of data likewise fades, and the use cases it supports come and go with a firm’s strategic focus, management changes and market conditions.
Behind the scenes of the wine industry, the vast majority of wine is not aged, and even wine that is aged is rarely aged for long. Typically, wine is not aged more than five years, since most people do not wait long to consume it. Similarly, in our experience, financial services firms tend to consume a large percentage of data during its first year and the vast majority of it within five years. Therefore, a robust data management strategy should consider the full data lifecycle: creation, integration, consumption, archiving and removal. Firms that do this early and often will reap the most benefits when data is most meaningful and relevant to internal and external customers.
Aging to maturation: A lost data management art and science
Financial firms have typically not accounted for dynamic data and changing business demands in their core systems. Coupled with the many changes caused by COVID-19, the issue becomes even more complicated. Finally, add the lack of fine-grained policies for data creation, integration, consumption, archiving and removal, and the exponential growth in data from applying advanced forms of artificial intelligence (AI) and machine learning (ML), and you get the “perfect storm.” Many firms lack a robust system for properly dealing with aging data and segregating higher-value assets from older, irrelevant data.
Fermenting the digital enterprise with modernized data varietals
Traditional processes used to develop and modify data management systems have not leveraged modern delivery methods such as Agile, DevOps, DataOps and MLOps to optimize and simplify workflows.
To modernize and achieve finely aged data, financial services firms must:
Be agile and utilitarian. Data architecture must consider on-demand, self-service, crowdsourcing and AI/ML-enabled capabilities. Using the cloud to modernize data with proper cleansing, normalization, quality control and consolidation into a data lakehouse will enable financial services firms to scale up and down quickly for emerging business needs. Additionally, this adaptability makes it possible to add on-the-spot computing power for additional use cases and to apply AI/ML to the data to create a more information-rich environment. This will help these firms incorporate additional third-party data sources and drive more insights, making data and subsequent analyses inherently more valuable.
Provide open access. Platforms have three layers of data — a raw data layer, a curated layer and a consumption layer. Traditional data architectures typically grant access only to the consumption layer. However, analysts and data scientists want access to raw data to find overlooked elements that may be useful to generate additional insights. Firms typically want to integrate new data sources into analytics, AI/ML and applications in an automated way. We see that most financial services firms are currently producing and consuming data with manual processes. These firms can use machine learning to detect changes in the schema and structure of the incoming data and auto-adjust the integration patterns.
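The auto-adjustment described above starts with detecting how an incoming batch’s schema differs from what the pipeline expects. Below is a minimal, rule-based sketch of that comparison step (the article envisions ML-driven detection; this simplified version just diffs column names and types, and all names are illustrative).

```python
def detect_schema_drift(known: dict, incoming: dict) -> dict:
    """Compare a known schema (column -> type) with an incoming batch's schema.

    Returns added, removed and retyped columns so an integration pipeline
    can decide whether to auto-adjust or flag the change for review.
    """
    added = {c: t for c, t in incoming.items() if c not in known}
    removed = {c: t for c, t in known.items() if c not in incoming}
    retyped = {c: (known[c], incoming[c])
               for c in known.keys() & incoming.keys()
               if known[c] != incoming[c]}
    return {"added": added, "removed": removed, "retyped": retyped}

known = {"account_id": "string", "amount": "decimal", "posted_at": "timestamp"}
incoming = {"account_id": "string", "amount": "float", "channel": "string"}

drift = detect_schema_drift(known, incoming)
print(drift["added"])    # {'channel': 'string'}
print(drift["retyped"])  # {'amount': ('decimal', 'float')}
```

A production pipeline would feed these diffs into policy: benign additions can be auto-integrated, while type changes on critical columns are routed to a human.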
Invest in a data-rich library to get the full impact of AI/ML. Data scientists develop features to transform data into more consumable forms for AI/ML algorithm training: for example, calculating the time in between transactions. A feature library is the collection of all the features into a standardized ontology or collection that data scientists can apply more readily to their AI/ML models. Since data scientists use features in AI/ML models as inputs into the learning systems, the more there are at the front-end, the better. AI/ML models can then select the best performing features and spend less time finding better models. The goal of an endless feature library is to create a limitless number of features from prior work and auto-calculations to capture every feature that could arise in a given data set.
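A feature library can be sketched as a registry of named transformations that models look up by name. This minimal example implements the time-between-transactions feature mentioned above; the registry design and all names are illustrative, not a specific product’s API.

```python
from datetime import datetime
from typing import Callable

# A minimal feature registry: named transformations over raw records.
FEATURES: dict[str, Callable] = {}

def feature(name: str):
    """Register a transformation so models can look features up by name."""
    def wrap(fn):
        FEATURES[name] = fn
        return fn
    return wrap

@feature("seconds_between_transactions")
def seconds_between(timestamps: list[datetime]) -> list[float]:
    """Elapsed time between consecutive transactions, in seconds."""
    ordered = sorted(timestamps)
    return [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]

ts = [datetime(2023, 1, 1, 12, 0), datetime(2023, 1, 1, 12, 5)]
print(FEATURES["seconds_between_transactions"](ts))  # [300.0]
```

Because features are registered once and retrieved by name, every model team draws on the same standardized collection rather than re-deriving the calculation.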
Enable a unified data security and classification model. Firms often rely on complex, hybrid environments that blend cloud-based and on-premises services, with data scattered across various locations and used by myriad individuals and systems. These firms should scan and separate redundant, outdated, trivial, confidential and classified data using AI/ML to protect data and information more closely. We recommend a unified data security and classification model, powered by AI/ML, that lets employees focus on using the data in new and interesting ways rather than finding workarounds and expending significant effort to complete the same analyses.
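To make the scan-and-separate step concrete, here is a deliberately simple, rule-based classifier standing in for the AI/ML-driven scanning described above. The labels and patterns are purely illustrative assumptions, not a real classification standard.

```python
import re

# Hypothetical rule-based classifier; patterns are illustrative only.
# A production system would use trained models alongside rules like these.
RULES = [
    ("confidential", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),   # SSN-like
    ("confidential", re.compile(r"\b(?:\d[ -]?){13,16}\b")),  # card-number-like
    ("internal",     re.compile(r"@example\.com\b")),         # staff email
]

def classify(text: str) -> str:
    """Return the most restrictive label whose pattern matches, else 'public'."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "public"

print(classify("Customer SSN: 123-45-6789"))   # confidential
print(classify("Quarterly newsletter draft"))  # public
```

Once records carry labels like these, downstream policy (encryption, access control, retention) can key off the label instead of being re-decided per data set.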
Lineage and governance to achieve full data maturity
We worked with a top-10 financial services firm to reduce manual intervention through smart capture of data lineage using AI/ML techniques. The firm faced huge and growing data volumes, the integration challenges of multiple niche technologies, inconsistent business descriptions of application data, and the involvement of multiple stakeholders in integrating additional data sources.
We built a centralized repository to harvest business, application and infrastructure metadata and track lineage. Additionally, we cataloged the results in the unified metadata store. Our work helped the firm to reduce the complexity and cost of providing accurate and up-to-date information to business users and auditors.
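The core of such a lineage repository is a graph that records which data sets produced which. Here is a minimal sketch of that idea; the data set names are hypothetical and the structure is far simpler than a production metadata store.

```python
# A minimal lineage store: each data set records the sources that produced
# it, so auditors can trace any data set back upstream. Names are invented.
lineage: dict[str, list[str]] = {
    "raw.card_transactions": [],
    "curated.daily_spend": ["raw.card_transactions"],
    "consumption.loyalty_scores": ["curated.daily_spend"],
}

def upstream(dataset: str) -> list[str]:
    """Walk the lineage graph and return every upstream ancestor."""
    seen, stack = [], list(lineage.get(dataset, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.append(parent)
            stack.extend(lineage.get(parent, []))
    return seen

print(upstream("consumption.loyalty_scores"))
# ['curated.daily_spend', 'raw.card_transactions']
```

Answering an auditor’s “where did this number come from?” then becomes a graph traversal rather than a manual investigation.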
Lastly, we reduced the total lifecycle cost of maintaining data by updating archival policies to retire data past its maturity. This also helped the firm avoid unnecessary data loss, and avoid paying for modernization or monetization efforts on data that had not yet reached its full potential and become finely aged.
This article was written by Nathan Greenhut, a Client Solutions Executive in Cognizant’s Banking & Financial Services Practice.