Case study

The challenge

Customer satisfaction is paramount in handling claims and renewing policies. An industry-leading P&C insurer experiences high call-handling times at its call center and lacks the ability to transcribe these calls to analyze their quality. Of approximately 8,000 calls per month, only 40 received review. But auditing calls isn't enough: it doesn't proactively address how best to serve an upset, stressed caller facing a loss. The goal is to equip customer service representatives (CSRs) with the tools to quickly answer customers' questions, provide key information and resolve their issues.

Our approach

We provided this insurer with an analytics platform informed by artificial intelligence (AI) to improve its customer service, help supervisors monitor call quality and help CSRs understand customer sentiment during insurance claim calls. We worked closely with our client's internal innovation team to improve its customer experience across several scenarios. Use cases included streamlining how insurance quotes are provided, automating and simplifying underwriting and improving the claims process.

We extended IBM's Watson analytics capability to analyze customer sentiment during calls and provide CSRs with appropriate information to respond with empathy as well as questions and information relevant to each caller’s situation.
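As a rough illustration of the idea, a minimal lexicon-based sentiment score over caller utterances might look like the sketch below. This is not the insurer's production system: Watson's analytics use trained models, and the word lists here are hypothetical.

```python
# Hypothetical word lists for illustration only; a trained model would
# replace this simple lexicon lookup in a real deployment.
NEGATIVE = {"upset", "frustrated", "angry", "loss", "stressed"}
POSITIVE = {"thanks", "great", "helpful", "resolved"}

def sentiment_score(utterance: str) -> int:
    """Return a rough score: negative values suggest an upset caller."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A CSR dashboard could threshold such a score to prompt an empathetic response when a caller's language turns negative.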

Language analytics provide insight into customer satisfaction

From our client's checklist of 40 individual steps that should be taken on each call, we taught Watson how to recognize 12 entries and created a dashboard that lets CSRs monitor call progress on their displays. By performing speech analytics on calls as they take place, the checklist is automatically updated to show which tasks have been performed and which remain outstanding. Language analytics, including diction, word choice and tone, give each CSR insight into the customer's attitude.
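The checklist update described above can be sketched as follows. The step names and trigger phrases are hypothetical, and a production system such as Watson would use trained speech and intent models rather than keyword matching:

```python
# Hypothetical checklist: each recognized step is defined by trigger
# phrases that might appear in the live call transcript.
CHECKLIST = {
    "verify_identity": ["date of birth", "policy number"],
    "confirm_loss_date": ["when did the loss occur", "date of loss"],
    "explain_next_steps": ["next steps", "an adjuster will"],
}

def update_checklist(transcript: str, completed: set) -> set:
    """Mark a checklist step complete when one of its phrases appears."""
    text = transcript.lower()
    done = set(completed)
    for step, phrases in CHECKLIST.items():
        if any(phrase in text for phrase in phrases):
            done.add(step)
    return done
```

Running this on each transcript segment as it arrives lets the dashboard show which tasks are done and which remain outstanding during the call.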

- 35%–40% monthly time savings for supervisors
- 8,000 monthly calls analyzed
- 80%–90% accuracy of dialogue auditing