

Effective Analytics | @BigDataExpo #BigData #IoT #M2M #DigitalTransformation

Measure the effectiveness of whatever decisions are ultimately made in order to continuously refine the analytic models

I was reading an interview with John Krafcik, CEO of Google’s Self-driving Car Project, in the August 8th issue of Bloomberg Businessweek. The article referenced an AlixPartners survey that found 73% of people wanted autonomous vehicles. But when people had the option of a steering wheel in the car, giving the driver optional full control, the acceptance rate jumped to 90%. This finding, that people are much more accepting of automation and new ideas when they have the option of control, is entirely consistent with what we have found about how to deliver big data analytics.

The big data engagements we run for EMC focus on applying predictive and prescriptive analytics to deliver recommendations that help key decision makers become more effective at their jobs. For example: recommendations to teachers on how best to group their students by subject area, to mechanics on which parts to replace when performing maintenance on a wind turbine, to physicians on which medications and treatments will likely deliver the best results given a patient’s overall wellness, to appraisers to help them more accurately determine the value of a property, or to underwriters to help them determine which loans to accept at a reasonable level of risk.

But how does one ensure that the business stakeholders, the humans in the process, are accepting of the analytics and recommendations that are being delivered to them? Being right doesn’t necessarily make you persuasive.

Effective Recommendations Put Humans in Control
We learned through several engagements that when delivering recommendations to business stakeholders, or directly to customers involved in the process or decision, we had to provide three options in order to ensure buy-in to the analytics. The three options we presented to the business stakeholders were:

  • They could accept the recommendation and we would measure how effective the outcome was versus the model, or
  • They could reject the recommendation and we would measure how effective the outcome was versus the model, or
  • They could change the recommendation and we would measure how effective the outcome was versus the model.

Note: in some situations, we also offered a [MORE] option to get more detail (usually presented as interactive charts or tables) in support of the recommendation. But after a while, we found that users seldom selected that option.
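A minimal sketch of how those three options might be captured for later measurement; the names and structure here are hypothetical, not EMC’s actual implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Decision(Enum):
    ACCEPT = "accept"  # act on the model's recommendation as-is
    REJECT = "reject"  # take no action on the recommendation
    CHANGE = "change"  # substitute the human's own recommendation

@dataclass
class RecommendationEvent:
    recommended_action: str          # what the model suggested
    decision: Decision               # what the human chose to do
    final_action: Optional[str]      # what was actually done (None if rejected)
    outcome: Optional[float] = None  # measured effectiveness, filled in later

def record_decision(recommended: str, decision: Decision,
                    override: Optional[str] = None) -> RecommendationEvent:
    """Log the human's choice so outcomes can later be compared to the model."""
    if decision is Decision.ACCEPT:
        final = recommended
    elif decision is Decision.CHANGE:
        final = override
    else:  # Decision.REJECT
        final = None
    return RecommendationEvent(recommended, decision, final)

event = record_decision("10% off cable for 3 months", Decision.CHANGE,
                        override="50% off internet for 6 months")
```

Every event carries both the model’s recommendation and the final action, so accepted, rejected, and changed decisions can all be scored against actual outcomes.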

For example, an organization may be executing a customer retention business initiative. The organization could apply big data analytics to deliver retention offers (new services, lower prices, more features, etc.) to its “high value, at risk” customers based upon the likelihood of the customer’s attrition (Customer Attrition Score) and the customer’s potential lifetime value (Maximum Customer LTV Score). So when Jane Smith calls the call center about a billing issue, the model would look up Jane’s “Customer Attrition Score” and “Maximum Customer LTV Score” to recommend a specific retention offer to the customer service representative.

Let’s say that the data indicates that Jane has a high likelihood of attrition (based upon a change in her usage behaviors and social media sentiment) and that she has a very high “Maximum Customer LTV Score” (based upon both the number of additional services that could be sold to Jane, plus her strong social media following).  The prescriptive model may recommend the following retention offer:

[Offer Jane 10% off of her current cable service over the next 3 months]
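The score-to-offer lookup described above might be sketched as follows; the thresholds and offers are invented for illustration, not taken from an actual model:

```python
from typing import Optional

def recommend_retention_offer(attrition_score: float,
                              max_ltv_score: float) -> Optional[str]:
    """Map the two customer scores (both scaled 0-1) to a retention offer."""
    if attrition_score < 0.5:
        return None  # low attrition risk: no offer needed
    if max_ltv_score >= 0.8:
        return "10% off current cable service for the next 3 months"
    if max_ltv_score >= 0.5:
        return "One free premium channel for the next month"
    return None  # discount not justified for low-LTV customers

# Jane: high attrition risk, very high potential lifetime value
offer = recommend_retention_offer(attrition_score=0.85, max_ltv_score=0.9)
```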

The call center representative has the options to:

  • Accept the recommendation and make that offer to Jane, or
  • Reject the recommendation and make no offer to Jane, or
  • Change the recommendation based upon the conversation that the Customer Service Rep is having with Jane.

Let’s say that the Customer Service Rep decides that the best offer for Jane (based upon the conversation the Customer Service Rep is having with Jane) is to:

[Offer Jane 50% off new high-speed Internet service over next 6 months]

The customer service rep may have learned from the conversation that Jane’s biggest usage problem was streaming her favorite shows during weekend binge watching. With this additional insight in hand, plus the knowledge from the scores about Jane’s likelihood to attrite and her potential lifetime value, the rep decided to change the recommendation to something more relevant to the problems Jane was having.

Test, Measure and Learn for Continuous Model Evolution
In all cases, we want to measure the effectiveness of whatever decisions are ultimately made in order to continuously refine the analytic models and scores.  By constantly measuring the effectiveness of the recommendations AND allowing the humans in the process the freedom to test different ideas, the models can learn from the humans.
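One simple way to close that test-measure-learn loop is to compare outcomes across accepted, rejected, and changed recommendations; this sketch uses hypothetical retention outcomes:

```python
from collections import defaultdict

# Hypothetical (decision, customer_retained) pairs from past interactions.
events = [
    ("accept", True), ("accept", False), ("accept", True),
    ("reject", False), ("change", True), ("change", True),
]

def retention_rate_by_decision(events):
    """Compute the retention rate for each decision type.

    If 'change' consistently beats 'accept', the human overrides carry
    signal the model is missing, and the model should be retrained on them.
    """
    totals = defaultdict(lambda: [0, 0])  # decision -> [retained, total]
    for decision, retained in events:
        totals[decision][0] += int(retained)
        totals[decision][1] += 1
    return {d: retained / total for d, (retained, total) in totals.items()}

rates = retention_rate_by_decision(events)
```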

More importantly, there are always going to be some humans who produce better results than the models due to their experience, training, and intuition (or in some cases, dumb luck). Humans may be able to react and adjust to new information (coming from the interactions they are having with customers) faster than the models can be updated and re-run. In the end, involving humans as a key factor in the analytics process ensures that the models don’t go stale and that they are constantly improving.

As the Businessweek article highlighted, humans will usually be more receptive to new ideas and new technologies if they feel they are still in control. If you want your decision makers to accept the recommendations of your analytics, then you had better give the humans an opportunity to provide feedback to the models. This is a clear win-win-win for everyone: the data scientists who build the analytic models, the business stakeholders who interact with the analytic results, and the customers to whom we are trying to provide a differentiated experience.

The post Effective Analytics Put the Humans In Control appeared first on InFocus.

More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business”, is responsible for setting the strategy and defining the Big Data service line offerings and capabilities for the EMC Global Services organization. As part of Bill’s CTO charter, he is responsible for working with organizations to help them identify where and how to start their big data journeys. He has written several white papers, is an avid blogger, and is a frequent speaker on the use of Big Data and advanced analytics to power organizations’ key business initiatives. He also teaches the “Big Data MBA” at the University of San Francisco School of Management.

Bill has nearly three decades of experience in data warehousing, BI and analytics. Bill authored EMC’s Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements, and co-authored with Ralph Kimball a series of articles on analytic applications. Bill has served on The Data Warehouse Institute’s faculty as the head of the analytic applications curriculum.

Previously, Bill was the Vice President of Advertiser Analytics at Yahoo and the Vice President of Analytic Applications at Business Objects.