
Why experimentation could guide you to failure



In the last entry we talked about potential pitfalls in the use of analytics and how some of those pitfalls could lead organizations to abandon the path of analytics. Today we will talk about yet another pitfall, this one associated with a tool that was in use long before the analytical dawn. It simply went by another name – the PILOT. More recently, business managers have extended the concept and started calling it experimentation or testing. A more limited version of this experimentation is called champion-challenger or A/B testing.

Experimentation is a very powerful tool in the hands of business managers – it allows them to test competing hypotheses under ceteris paribus conditions. In effect, they are able to “generate” data and insights that reduce the uncertainty associated with roll-outs. However, experiment design and analysis have to be conducted very carefully, or they can lead to wrong conclusions, wasted investment (often significant, as the following example will show) and, of course, unrealized opportunity.

Today’s example relates to a large financial institution (FI) which provided its customers with a family of borrowing products (other industries where this applies include insurance, wealth management, telecom, home media and cable). This FI was organized around products, and each unit pursued cross-selling opportunities independently. Despite its best efforts, its share of wallet was low, i.e. its customers’ wallets were distributed across competitors. This presented a large wallet consolidation opportunity.

One of the promising ideas for accomplishing this feat (I have used this word intentionally, as a lot of organizations are unsuccessfully grappling with this issue) was to integrate cross-selling, customer contact and servicing, and to organize around customers rather than the current model of product-based cross-selling and pooled customer service centers. Every customer would be assigned a “relationship manager” – a single point of contact able to comprehensively understand and serve the customer’s needs in an integrated manner (as against the dispersed view and servicing in the existing model), resulting in higher cross-sell rates and satisfaction.

This made intuitive sense and generated a fair bit of enthusiasm within the organization. However, such a large change required significant investment in infrastructure and training, as well as cultural and organizational alignment, to be successful. Most of the senior leadership was on board; the CEO, however, wanted to be sure that such a massive investment was worth the prize.

He decided to run a small experiment (pilot) before committing to such a massive investment and organizational change. A set of customers was isolated for the experiment, and five volunteer customer service associates were recruited and trained. They were also provided with some simple tools to help them. The experiment ran for six months. As expected, results were superlative. Customer attrition dropped, satisfaction rose, and cross-sell rates increased by a few multiples.

When the numbers were fed into the business case, the increase in cross-sell rates made it a slam-dunk investment. Benefits from lower attrition, though material, were small in the larger scheme of things. The FI decided to move ahead with the program – it invested a few tens of millions of dollars in building the new infrastructure, reorganizing customer service around customer segments, and training the existing customer service teams. Changes were made to policies around customer ownership and the allocation of customer profitability to different units. As can be imagined, it took close to a year to make these changes. Once the program was rolled out, results in the first few months were very positive and mirrored those from the experiment, but eventually cross-sell rates began to fall back toward (though remaining slightly above) the original levels. At these lower cross-sell rates the return on investment was actually negative, since the incremental benefit couldn’t even cover the higher running expense base.

The organization attempted various other efforts, including modifications to the recruiting and training of customer service associates, adding tools, and applying analytics to cross-sell, but the rates didn’t increase enough to justify the increased running expense base. Despite extensive analysis, there was no clarity on the reason, and the organization finally abandoned the relationship-based strategy and went back to the regular servicing and cross-selling approach.

So what went wrong here? Was the pilot accurate, or did the organization falter in the roll-out? And, more fundamentally, does the “relationship selling and servicing” approach not work, or were the expectations of it too high? Many factors could be at play, but I will highlight a few directly related to experiment design and analysis.

As you will recall, the pilot included customer service associates who volunteered – implying a selection bias. It is likely that they were more open to experimentation and faster learners than the average associate, so their performance in the experiment would have been better than what could be expected of an average associate in the roll-out. The experiment was therefore inadvertently designed to over-perform, establishing an inflated expectation of benefit from the “relationship selling and servicing” strategy.
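To see how much a volunteer-only pilot can flatter the results, here is a minimal, purely illustrative sketch in Python. All of the numbers – the skill distribution, the assumption that the top 10% of associates volunteer, the pilot size of five – are my own assumptions for illustration, not figures from the case described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed distribution of cross-sell conversion rates across all associates.
# All numbers here are illustrative assumptions, not figures from the case above.
skills = rng.normal(loc=0.05, scale=0.015, size=10_000).clip(min=0.0)

# Volunteers tend to self-select from the upper end of the skill distribution.
volunteers = np.sort(skills)[-int(0.10 * skills.size):]   # assume the top 10% volunteer
pilot_agents = rng.choice(volunteers, size=5, replace=False)

print(f"Average associate conversion rate : {skills.mean():.3f}")
print(f"Pilot (volunteer) conversion rate : {pilot_agents.mean():.3f}")
# The pilot figure exceeds the roll-out expectation purely through who volunteered.
```

Under these assumptions the pilot’s measured conversion rate overstates what the average associate would deliver, even before any other effect kicks in.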

The experiment also did not include newly acquired customers, who usually behave very differently from existing customers. This omission moves the experiment further from roll-out reality, and its effect could cut either way, depending on the difference in behavior between newly acquired and existing customers.

There was something more fundamental and significant that accounted for the drop in cross-sell performance: the “latent demand effect”. The experiment was performed on a set of existing customers who had never before been cross-sold to using the relationship selling approach. They would therefore have “accumulated” demand that the new approach could tap. For the sake of simplicity, let’s assume that the population has essentially two subsets – those who respond to the product-driven (older) approach to cross-selling and those who respond to the relationship (newer) approach. Since historically only the product-driven approach had been followed, the “accumulated demand” will be much higher in the second group than in the first. Because the new approach addresses this accumulated demand, observed adoption (response to cross-selling efforts) will be higher in the initial stages of the experiment until it settles down to business-as-usual (BAU) levels. This initial higher response can easily be misconstrued as a sustained incremental increase, leading to an overestimation of the benefits from the new approach.
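The accumulated demand effect can be sketched numerically. The toy simulation below uses entirely assumed rates: customers carry a backlog of pent-up demand that converts quickly in the first few months and then decays toward a steady state, so a short pilot window overstates the sustainable lift.

```python
import numpy as np

months = np.arange(1, 25)

# Illustrative assumptions only – none of these rates come from the case above.
baseline_rate = 0.020      # cross-sell rate under the old, product-driven approach
steady_rate = 0.024        # sustainable rate under the relationship approach
pent_up_boost = 0.030      # extra response from accumulated (latent) demand
decay = 0.65               # fraction of the remaining backlog carried into each month

# Observed rate = sustainable rate + a decaying pulse from the accumulated demand.
observed = steady_rate + pent_up_boost * decay ** (months - 1)

pilot_lift = observed[:6].mean() - baseline_rate   # what a 6-month pilot would report
true_lift = steady_rate - baseline_rate            # the sustainable improvement

print(f"Lift implied by the 6-month pilot : {pilot_lift:.4f}")
print(f"Sustainable (steady-state) lift   : {true_lift:.4f}")
# The pilot window captures the pent-up backlog, so it overstates the lasting benefit.
```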

Apart from changes to the design and population selection, the experiment should have been allowed to run long enough (with the initial period excluded from the benefit calculations), or the back-end analysis should have been done differently to eliminate the accumulated demand effect.
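As a rough sketch of that back-end fix, one option is to treat the early months as a burn-in period and estimate the lift only from the settled window. The snippet below reuses the same assumed parameters as the sketch above; the 12-month cut-off is also an assumption.

```python
import numpy as np

# Same illustrative parameters as in the sketch above.
months = np.arange(1, 25)
baseline_rate, steady_rate = 0.020, 0.024
pent_up_boost, decay = 0.030, 0.65
observed = steady_rate + pent_up_boost * decay ** (months - 1)

# Treat the first year as burn-in (an assumed cut-off) and measure lift afterwards,
# once the accumulated-demand pulse has largely been worked off.
burn_in = 12
settled_lift = observed[burn_in:].mean() - baseline_rate

print(f"Lift after excluding the burn-in period : {settled_lift:.4f}")
# This comes out close to the sustainable lift, unlike the 6-month pilot estimate.
```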

In examples such as the one above, merely ensuring ceteris paribus (and in this case even that wasn’t ensured completely) does not guarantee that the experiment will provide accurate insights. What was needed was someone who thoroughly understood all the possible effects and carefully designed the experiment and the analysis.

To make experimentation successful, it is critical for an organization to have people who understand the various business effects and realities (usually the business experts) and people who understand experiment design and analysis (usually statisticians). But this isn’t sufficient. The two capabilities must be integrated to help make the right design and analysis choices. More often than not, the business and statistical experts work as separate teams with limited understanding of each other’s areas of expertise. This can be a recipe for failure. In benign cases, the experiments provide inaccurate results leading to missed opportunities, while in the worst case (as illustrated by this example), flawed experimentation can lead to misdirected investments and efforts.

