
Why might the promise of Big Data and Analytics fail?



There is talk of big data and analytics all around, and organizations of all hues are excited about the promise they hold. Analytics is expected to make organizations more competitive, more cost-efficient and more customer-oriented. The most obvious sign of this excitement is the huge resource allocation to anything related to analytics – be it building a new organization of statisticians, assembling disparate data into one large data warehouse, linking, cleaning and parsing that data, buying business intelligence tools or hiring expensive consultants to build models.

As we survey the marketplace of offerings (both internal and external) that aim to meet the exploding demand for analytical services, we sense potential for disappointment because of missed opportunities or under-delivery from analytical initiatives. There are multiple issues driving this somber assessment. Today we will highlight one that is perhaps the most prevalent, in our view the most difficult to address and usually the least obvious.

Most organizations are run by managers who understand the business nuances, but few of them are familiar with analytical concepts and their implications for business decisions. On the other hand, most “analytics types” are statistical and mathematical wizards who have little understanding or experience of practical business issues. When the two come together, a lot is lost in translation. Not only do they speak different languages, they think in different languages. This loss in translation results in misunderstandings, frustrations and delayed projects at a minimum and, at worst, disappointments, lost investments and misdirected efforts.

This loss happens at two stages of a project – in the initial stages, when the business problem needs to be translated into an analytical one, and towards the end, when the analytical insights need to be translated into actionable decisions.

Let me illustrate this with an example that closely mirrors real life.

It relates to a direct-marketed consumer product (like credit cards or insurance). The business owner for the direct marketing channel had set a goal of acquiring more customers and asked his team for innovative ideas. The team came up with multiple suggestions, but the one most relevant to us was building a suite of response models for the various product flavors they offered. The manager, who believed in the power of analytics, asked his team to embark on the effort.

Once the models were ready, the team ran the prospect list through them and selected a subset to be marketed through direct mail. This campaign was repeated for two quarters, but the number of customers acquired didn’t pick up; it was actually a bit lower than before. The manager and his team spent time trying to uncover the reason for their apparent lack of success. They looked at various aspects, including changes in population behavior, product usage and seasonality of response rates.

So what actually went wrong? The models did what they were built to do – select the more likely responders. Those who were mailed were indeed more likely to respond, but the mailed population had gone down. Overall marketing efficiency did go up, as the proportion of responders in the mailed population was higher than in the overall population, but the absolute number of responses was slightly lower than before. The non-mailed prospects did include some responders, but they didn’t make the cut of the response models. The team did eventually realize this, but by then effort (building a suite of response models), time (two quarters) and opportunity (customers that could have been acquired but perhaps went to competitors) had been wasted.
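To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers – the list size, response rates and selection cut-off are illustrative assumptions, not figures from the actual campaign:

    # Hypothetical illustration: mailing a model-selected subset can raise
    # efficiency (response rate) while lowering the absolute number of responders.

    prospect_list = 1_000_000   # size of the full prospect list (assumed)
    baseline_rate = 0.010       # response rate when mailing everyone (assumed)

    # Before modeling: mail the whole list.
    mailed_before = prospect_list
    responders_before = mailed_before * baseline_rate        # 10,000 responders

    # After modeling: mail only the top-scoring 40% of the list, where the
    # model concentrates responders at, say, twice the baseline rate (assumed).
    mailed_after = int(prospect_list * 0.40)
    responders_after = mailed_after * (2 * baseline_rate)    # 8,000 responders

    print(f"Response rate before: {responders_before / mailed_before:.1%}")  # 1.0%
    print(f"Response rate after:  {responders_after / mailed_after:.1%}")    # 2.0%
    print(f"Responders before:    {responders_before:,.0f}")                 # 10,000
    print(f"Responders after:     {responders_after:,.0f}")                  # 8,000

Under these assumptions, efficiency doubles while the absolute number of acquired customers falls – the pattern the team observed.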

In this case the team got an opportunity to dig deeper and retry, but often (particularly where large investments have been made on the belief in the power of analytics) projects are abandoned after an initial failure, or there are so many missteps that the post-mortem analysis itself becomes too time-consuming.

Now that we know where the team went wrong, let’s go back and understand what they should have done. It is clear that response models weren’t the answer to the problem. As noted earlier, response models are a good way to increase marketing efficiency, i.e. to acquire customers at lower marketing cost, but they rarely increase the acquisition numbers themselves unless the prospect list has historically been under-exploited. That is rare in times when businesses are fighting for every customer.

To increase the absolute response, the organization needs to think about the reasons why customers aren’t responding to its offers. Possible reasons are that they don’t need the product, or that they need the product but don’t like the features or the price (in both cases, they will perhaps go to competitors). In this example it was hard to get someone to buy the product if they didn’t really need it (with some products that is possible, though). Those who did need the product but didn’t respond should have been the target of this exercise, and the objective should have been to identify them and reach them with the right product features and price.

Let’s assume that there are some such prospects in the list (if there aren’t, the organization needs new prospects). Since these prospects have historically not responded to earlier offers, the organization doesn’t actually know what they want, and therefore doesn’t know the reason for their lack of response. So building response models on the historical data is not going to help at all. These models will only pick prospects who have the need and like the features and price of the existing products. Those who need the product but don’t respond will be treated just like those who don’t need it; statistical analysis of the historical data alone cannot differentiate between the two categories.

Statisticians often call this the reject inference problem – essentially, “you can’t know what you don’t know”. It has profound implications for the application of analytics in business. I have seen it crop up unannounced in multiple instances, and it often leads to wrong conclusions and wasted effort and dollars.
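A minimal simulation makes the point concrete. The segment sizes, features and response rates below are made-up assumptions, but they show how a model trained only on historical mail outcomes scores the “needs the product but dislikes the current offer” prospects almost as low as the “no need” prospects, because both show up in the data as non-responders:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 30_000

    # Hypothetical segments: 0 = no need, 1 = needs product and likes the
    # current offer, 2 = needs product but dislikes the features/price.
    segment = rng.choice([0, 1, 2], size=n, p=[0.6, 0.1, 0.3])

    # Two observable features: a proxy for "need" (high for segments 1 and 2)
    # and a proxy for fit with the current offer (high only for segment 1).
    need_proxy = rng.normal(loc=np.where(segment == 0, 0.0, 1.5), scale=1.0)
    offer_fit = rng.normal(loc=np.where(segment == 1, 1.5, 0.0), scale=1.0)
    X = np.column_stack([need_proxy, offer_fit])

    # Historical response: only segment 1 ever responded to the old offers.
    response_prob = np.where(segment == 1, 0.25, 0.01)
    y = rng.random(n) < response_prob

    # Train a response model on the historical campaign data and score everyone.
    model = LogisticRegression().fit(X, y)
    scores = model.predict_proba(X)[:, 1]

    for s, name in [(0, "no need"), (1, "need, likes offer"), (2, "need, dislikes offer")]:
        print(f"{name:>22}: mean predicted response {scores[segment == s].mean():.3f}")

Segment 2 – the very prospects the business would want to win over with different features or pricing – typically ends up scored far below segment 1 and close to segment 0, so a mailing cut-off based on these scores keeps excluding them.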

In this case, the right approach would have been to figure out the reasons for non-response before embarking on response modeling. There are multiple ways to accomplish this, but no amount of statistical analysis of historical data alone can provide the answer.

So what is really at issue here? If the business manager had been aware of the reject inference issue, he might have stopped the team early on, and if the statisticians had been clear on the goal and the practical implications of reject inference, they wouldn’t have suggested response modeling in the first place.

What was needed was for someone to recognize the core issue and translate it into the right analytical framework and steps –

  • Analysis/methodology for understanding the cause of non-response
  • Look-alike modeling to pick similar non-responsive prospects for experimentation
  • Design of experiments so that the most learning can be gleaned and applied within the actual channel constraints (see the sketch after this list)
  • Analysis of experiments and development into policy – actual decisions
  • Tracking of future campaigns
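As an illustration of the design-of-experiments step, here is a minimal sketch. The cell definitions, pool size and assignment scheme are hypothetical, not taken from the actual case; the point is simply that look-alike non-responders are randomized across offer variants and a control so that differences in response can be attributed to the offer itself:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical pool of look-alike, historically non-responsive prospects.
    prospect_ids = np.arange(20_000)

    # Test cells: variations of features and price, plus a control (current offer).
    cells = [
        "control: current offer",
        "variant A: richer features, same price",
        "variant B: same features, lower price",
        "variant C: richer features, lower price",
    ]

    # Randomize each prospect into one cell; randomization (rather than
    # model-based selection) is what lets each variant's lift be read cleanly.
    assignment = rng.integers(low=0, high=len(cells), size=prospect_ids.size)

    for i, cell in enumerate(cells):
        print(f"{cell:<42} {np.sum(assignment == i):>6} prospects")

The responses from each cell would then feed the analysis and policy decisions, and the chosen policy would be tracked in future campaigns.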

In our experience, most organizations today lack this critical ability to translate between business speak and analytical speak. At QuaEra (www.quaerainsights.com), we call this translation “Crosswalk” and take pride in having done it successfully for a variety of organizations, in a variety of situations, across functional areas. We recognize how critical it is and have therefore built our value proposition and team around “Crosswalk”. It ensures success for our clients.

