Research Blog - Customer Intelligence

As is traditional with this near-defunct blog, I'll begin by remarking: Gee, it's been a year since I last posted! That must be a record! Not like my other blogs, which receive more frequent attention.

Now, with the formalities out of the way, I can proceed to today's topic: Information Quality Measurement. David Loshin has an article in DM Review about this very fraught topic. (Along with Tom Redman, David Loshin is an IQ guru-practitioner worth reading.)

The article lays out some aspects of creating IQ metrics, following the usual top-down Key Performance Indicator approach. That's fair enough; most companies will find this well inside their comfort zone. It goes on to:
  • list some generally desirable characteristics of business performance measures,
  • show what that means for IQ specifically,
  • link that - abracadabra! - to the bottom line.

As far as lists of desirable characteristics go, it's not bad. But then, there's no reason to think it's any better than any other list one might draw up. For me, a list like this is Good if you can show that each item is necessary (ie if it weren't there, the list would be deficient) and that no other items are required (ie it's exhaustive). I don't think that has been achieved in this case.

In any case, this approach makes sense but is hampered by not considering how these numbers are to be used by the organisation. Are the metrics diagnostic in intent (ie to help find out where the problems are)? Or perhaps to figure out how improvements in IQ would create value across the company?

My own research, based on a series of interviews - forthcoming when I kick this #$%! PhD thesis out the door - suggests that IQ managers are well aware of where the problems are and what will be required to fix them. Through sampling, pilots and benchmarking, they seem reasonably confident about what improvements are likely, too. I would question the usefulness of measures such as "% fields missing" as a diagnostic tool for all but the most rudimentary analysis. What these managers are crying out for is ammo: numbers they can take to their boss, the board or the investment panel to argue for funding. Which leads to the next point.

"The need for business relevance" could perhaps be better explained as pricing quality. That is, converting points on an arbitrary scale into their cash equivalents. This is a very tall order: promotions and job-retention must hinge on them if they are to have meaning. Even management bonuses and Service Level Agreements will be determined (in part) by these scales. In effect, these scales become a form of currency within the organisation.

Now, what manager (or indeed supplier) is going to be happy about a bright young analyst (or battle-hardened super-consultant) sitting down at a spreadsheet and defining their penalty/bonus structure? Management buy-in is essential; if they're sceptical or reluctant, it is unlikely to work. If you try to force an agreement, you risk getting the wrong (ie achievable!) metrics in place, which can be worse than having no KPIs at all. There is a huge literature on what economists call the principal-agent problem: how do owners write contracts with managers that avoid perverse incentives, without being hammered by monitoring costs?

But suppose these problems have been overcome for functional managers (in, eg, the credit and marketing units). These people own the processes that consume information (decision processes), and so should value high-quality information, right? Why not get them to price the quality of their information? For one thing, what's high quality for one is low quality for another.

Plus, they know that information is (usually) a shared resource. It's possible to imagine a credit manager, when asked to share the costs of improvements to the source systems, holding out with "no, we don't need that level of quality" - knowing full well that the marketing manager will still shell out for 100% of the expense, with the benefits flowing on to the sneaky credit manager. (This is where it gets into the realm of Game Theory.)
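
A toy payoff calculation - my own, with invented numbers - shows why holding out can be the credit manager's dominant strategy:

    # Toy free-rider game, invented numbers. The source-system fix costs $80k;
    # marketing values it at $100k, credit at $60k. If both chip in, they
    # split the cost; the fix goes ahead as long as somebody pays.
    FIX_COST = 80_000
    CREDIT_BENEFIT, MARKETING_BENEFIT = 60_000, 100_000

    def payoffs(credit_pays, marketing_pays):
        payers = credit_pays + marketing_pays
        if payers == 0:
            return (0, 0)                     # no fix, nobody benefits
        share = FIX_COST / payers
        return (CREDIT_BENEFIT - (share if credit_pays else 0),
                MARKETING_BENEFIT - (share if marketing_pays else 0))

    print(payoffs(True, True))     # (20000.0, 60000.0): both share the cost
    print(payoffs(False, True))    # (60000.0, 20000.0): credit free-rides

Since marketing still comes out ahead paying the lot, credit does strictly better by refusing to chip in - whatever marketing does.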

So, what would help here is an objective measure of relevance. That way, quality points could be converted into cash in a reasonably transparent way. But how do you objectively measure relevance? Well, another tidbit from my research: relevance is not a property of the data. It is a property of the decision process. If you want to understand (and quantify, and price) the relevance of some information, then staring at the database will tell you nothing. Even seeing how well it corresponds to the real world won't help. You need to see how it's used. And for non-discretionary users (eg hard-coded decision-makers like computers and call-centre staff), the relevance is constant regardless of any changes to the correctness.
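
A crude illustration of that last point (mine, not from the article): if a hard-coded decision rule never reads an attribute, correcting that attribute cannot change a single decision - its relevance is zero, however accurate it becomes.

    # Relevance lives in the decision process, not the data. This
    # hard-coded credit rule never reads "favourite_colour", so fixing
    # that field leaves every decision unchanged.
    def approve_mortgage(applicant):
        return applicant["income"] >= 50_000 and applicant["defaults"] == 0

    before = {"income": 72_000, "defaults": 0, "favourite_colour": "blu"}  # typo!
    after = dict(before, favourite_colour="blue")                          # corrected
    print(approve_mortgage(before) == approve_mortgage(after))             # True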

In light of this, doesn't it make sense to:
  1. Identify the valuable decision-making processes.
    This could include credit scoring (mortgage/no mortgage), marketing campaigns (offer/no offer), fraud detection (investigate/ignore) and so on.

  2. Price the possible mistakes arising from each process.
    Eg. giving a Platinum Card to a bad debtor. Don't forget the opportunity costs, such as missing an upsell candidate.

  3. Score the relevance of each dataset of interest (eg. an attribute) to that mistake.
    Some attributes will have no bearing at all; for others, the decision largely hinges on them.

  4. Measure the informativeness of the attribute with respect to the real-world value.
    What can I find out about the real-world value just by inspecting the data? This is a statistical question, best asked of a communications engineer ;-) (see the sketch after this list).
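
Here is a minimal sketch of what that communications engineer might offer: informativeness as (Shannon) mutual information between the recorded value and the audited real-world value. The function and the toy data are my own - it assumes you have a sample of records where both values are known.

    # Informativeness as mutual information, in bits, between the recorded
    # value and the audited real-world value. Assumes a sample of
    # (recorded, actual) pairs from a data audit.
    from collections import Counter
    from math import log2

    def mutual_information(pairs):
        n = len(pairs)
        joint = Counter(pairs)
        rec = Counter(x for x, _ in pairs)
        act = Counter(y for _, y in pairs)
        return sum((c / n) * log2((c / n) / ((rec[x] / n) * (act[y] / n)))
                   for (x, y), c in joint.items())

    # Toy audit of an "owns_home" flag: mostly right, sometimes wrong.
    sample = ([("Y", "Y")] * 40 + [("Y", "N")] * 10
              + [("N", "N")] * 45 + [("N", "Y")] * 5)
    print(round(mutual_information(sample), 3))   # -> 0.397 bits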

The first two tasks would be undertaken as part of the functional units' KPI process, and tell us how much money is at stake with the various processes. The last two could be undertaken by the IS unit (governed, perhaps, by the IS steering committee made up of stakeholders). The resulting scores - stake, relevance and informativeness - could be used as the basis of prioritising different quality initiatives. It could also help develop a charge-back model for the information producers to serve their (internal) customers.
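
One crude way to combine the three scores - my assumption about how it might work, not something from the article - is to weight the money at stake by relevance, then discount by how informative the recorded data already is. High stake, high relevance and low informativeness says: fix this attribute first.

    # Hypothetical prioritisation: stake in dollars; relevance and
    # informativeness both normalised to [0, 1]. The biggest scores go to
    # high-stakes, high-relevance attributes with uninformative data.
    def priority(stake, relevance, informativeness):
        return stake * relevance * (1 - informativeness)

    print(priority(2_000_000, 0.8, 0.3))   # -> 1120000.0: strong candidate
    print(priority(2_000_000, 0.1, 0.9))   # -> 20000.0: leave it alone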

Two questions remain. First: how do you actually score relevance and informativeness? My conference paper (abstract below) gives some hints. Second: will corporate managers (IT, finance, marketing) accept this approach? For that, I'll be running a focus group next month. Stay tuned.

