

Summary
Chapter 1 - Introduction
Chapter 2 - Research Method and Design
Chapter 3 - Literature Review
Chapter 4 - Context Interviews
4.1 Summary
4.2 Rationale
4.2.1 Alternatives
4.2.2 Selection
4.3 Subject Recruitment
4.3.1 Sampling
4.3.2 Demographics
4.3.3 Limitations
4.3.4 Summary of Recruitment
4.4 Data Collection Method
4.4.1 General Approach
4.4.2 Materials
4.4.3 Summary of Data Collection
4.5 Data Analysis Method
4.5.1 Approach and Philosophical Basis
4.5.2 Narrative Analysis
4.5.3 Topic Analysis
4.5.4 Proposition Induction
4.5.5 Summary of Data Analysis
4.6 Key Findings
4.6.1 Evaluation
4.6.2 Recognition
4.6.3 Capitalisation
4.6.4 Quantification
4.6.5 The Context-Mechanism-Outcome Configuration
4.6.6 Conclusion
Chapter 5 - Conceptual Study
Chapter 6 - Simulations
Chapter 7 - Research Evaluation
Chapter 8 - Conclusion
References
Appendix 1


Chapter 4 - Context Interviews

4.1       Summary

This chapter presents the rationale, process and key findings of a series of field interviews. These semi-structured interviews were undertaken with Information Systems (IS) practitioners to understand current practice in, and practitioners' understanding of, Information Quality (IQ) measurement and valuation in large-scale, customer-focused environments.

It was found that while IQ is regarded as important, there is no standard framework for measuring or valuing it. The absence of such a framework hampers the ability of IS practitioners to argue the case for investing in improvements, since access to organisational resources depends on such a case being made.

4.2       Rationale

Before developing a framework for customer IQ valuation, it is important to determine the existing “state of the art”. This is to avoid wasted effort and to ensure that any contribution is cumulative, in the sense that it builds on existing knowledge established during the Literature Review. Further, for such a framework to be acceptable to practitioners, it is necessary to understand their expectations: which assumptions are valid, which elements are present, what prior skills or knowledge are required, who is intended to use it and how the results are to be communicated.

4.2.1         Alternatives

Two other data collection methods were considered before settling on the use of practitioner interviews. The first was an analysis of industry texts (“white papers”); the second was a practitioner survey. The merits and drawbacks of these approaches, along with the rationale for their rejection, are discussed below.

White papers are an important part of information systems vendor marketing. While they may include pricing and other sales-specific information, more generally they seek to show that the vendor understands (or even anticipates) the needs of the prospective buyer. Taken at face value, these texts could provide an accurate and current picture of the availability of information quality products and services in the market. Further analysis could draw out the requirements, norms and practices of IS buyers, at least as seen by the vendors. What makes this approach appealing is the ready availability of voluminous sources, published online by a variety of market participants ranging from hardware vendors to strategic consultancies to market analysis firms.

The principal drawback to using white papers to assess the current situation in industry is that, as marketing and pre-sales documents, they are unlikely to be sufficiently frank in their assessments. It is expected that problems a particular vendor claims to fix will be overstated while unaddressed problems will be glossed over. Similarly, undue weight may be given to a particular vendor's strengths while their weaknesses are downplayed. Further, as sales documents, they are usually generic and lack the richness that comes with examining the specific contexts in which IQ is measured and valued. Contrived case studies, even those purporting to relate to creating and arguing a business case, may say more about how the vendor hopes organisations invest in IQ than about the actual experiences of practitioners.

The second data collection method considered was a practitioner survey. By seeking the opinions of practitioners directly, a survey sidesteps the problem of marketing agendas. It would potentially allow the opinions of a large number of practitioners to be gathered from across industry. Further, such a dataset would be amenable to statistical analysis, allowing for rigorous hypothesis-testing, trending and latent-variable discovery.

Gathering data about respondent qualifications, experience and role would be straightforward and is common enough in this type of research. However, designing a meaningful set of questions about IQ measurement and investment would be fraught, even with piloting, because the terminology and even the concepts are not standardised, while accounts of organisational structures and processes do not lend themselves to the simple measurement instruments used in surveys, such as Likert scales. A compounding problem lies in the recruitment of respondents: very few organisations have a single “point person” responsible for IQ, and a low response rate and selection bias may undermine any claims to statistical significance.

4.2.2        Selection

Practitioner interviews offered a number of benefits over text analysis and surveys (Neuman 2000). The face-to-face communication means that terms relating to both IQ and organisational issues can be clarified very quickly. For example, position descriptions, internal funding processes and project roles are broadly similar across the industry but variations do exist. The flexible nature of interviews means that a subject – or interviewer – may guide the discussion based on the particular experiences or understandings of the subject. Generally, this tailoring cannot be planned in advance as it is only at the time of the interview that these differences come to light. Lastly, there is a richness of detail and frankness that comes only through people speaking relatively freely about specific experiences (Myers and Newman 2007).

At a practical level, practitioner interviews are cheap, quick and comparatively low risk. It is reasonable to expect that workers in the IS industry have exposure to a range of technologies and work practices, owing to their high mobility and the industry's immaturity. Thus, interviewing a fairly small number of practitioners can yield insights across a large number of organisations.

Such an approach is not without its limitations. Primarily, there is a risk of obtaining a biased or unrepresentative view of the wider industry through problems with subject recruitment (sampling). A secondary problem lies with the testimony of the subjects: faulty recollection, self-censorship and irrelevant material were identified as concerns.

4.3       Subject Recruitment

To conduct this phase of research, IS practitioners had to be contacted with an offer and agree to participate. This section outlines the goals and methods used. The terminology used here comes from the sampling chapter in Neuman’s text on social research methods (Neuman 2000).

4.3.1         Sampling

The purpose of the field interviews was to understand the “state of the art” of IQ measurement and valuation by decision-makers within the IS industry, particularly focusing on those dealing with mass-market, multi-channel, retail customer management. The idea is to find a group that – collectively – spans a suitable cross-section of that industry sector. This is not achieved via statistical (random) sampling, but through a process called stratification. In short, a number of criteria are identified and at least one subject must meet each criterion. These criteria are outlined and discussed subsequently.

The question of sample size is understood in terms of saturation[3]: the point at which the incremental insight gained from further interviews becomes negligible. Of course, this raises the problem of defining negligibility in this context. The approach taken was to begin the analysis process in tandem with the data collection. This meant that the incoming “new” data from each interview could be compared with the entirety of the existing data, so that a view on the novelty of insights for each interview could be formed.

This sequential approach makes a very good procedural fit with the mechanism of recruitment: snowballing. This refers to asking subjects to suggest or nominate new subjects at the end of the interview. The reason for this is that each subject, through their professional network, may know dozens or scores of possible subjects. However, at the end of the interview, they have a much better idea about the research project and can quickly nominate other practitioners who are both likely to participate and have something to offer the study.

The key to making snowballing work is trust and rapport. After spending time in a face-to-face context with the interviewer, subjects may be more willing to trust the interviewer and so make a recommendation to their contacts to join the study. By the same token, an approach to a new subject with a recommendation from a trusted contact will be more likely to succeed than “cold-calling” from an unknown person.

The snowball was “seeded” (i.e. the initial recruitment) from two sources: subjects drawn from the professional network of the researcher and those from the industry partner. By coincidence, both were centred on the one company, Telstra Corporation, the incumbent and (at the time) partially-privatised Australian telecommunications carrier. However, owing to its vast size and fragmented nature, only one subject appeared in both “seed lists”.

All stages of the field study, including subject approach, obtaining of permission and consent, question and prompt design and the collection, analysis and storage of data were governed by an appropriate university Human Research Ethics Committee. Given the intended subjects and the types of data collected, the project was rated as being low-risk.

The following strata (or dimensions and criteria) were identified for ensuring the sample is representative of the target decision-makers in the IS industry. (That is, those who operate within large-scale customer management environments involving significant amounts of complex customer data deployed across multiple channels.)

·         Industry Sector. The two sectors targeted by this research are (retail) telecommunications and financial services. Since most households in the developed world have an ongoing commercial relationship with a phone company and a bank, organisations operating in these two sectors have very large customer bases. They also operate call centres, shop fronts and web presences in highly competitive markets, and are sophisticated users of customer information.

·         Organisational Role. There are three types of roles identified: executive, managerial and analytical. By ensuring that executives, managers and analysts are represented in the sample, the study will be able to draw conclusions about decision-making at all levels of the organisation.

·         Organisational Function. The sample should include representatives from both business and technology functional groups. This includes marketing, finance or sales on the business side, and research, infrastructure and operations on the technology side. These groups may have different terminology, priorities and understandings of IQ and to omit either would leave the sample deficient.

·         Engagement Mode. This refers to the nature of the relationship with organisation: full-time employee, contractor/consultant and vendor. People working in these different ways may offer different perspectives (or levels of frankness) about the organisational processes or projects.

A sample composed of representatives across these four strata (meeting all ten criteria) would maximise the collection of disparate views. It is worth stressing that the sample is not intended to be calibrated, that is, with the respective proportions in the sample matching those in the wider population. Instead, it should achieve sufficient coverage of the population to allow inferences to be drawn about current practice.

Also, it is not necessary to find representatives in each of the possible combinations of strata (ie 2 x 3 x 2 x 3 = 36). For example, the absence of a financial services sector technology analyst from a vendor firm should not be taken as invalidating the sample. As long as each criterion is met, the sample will be considered to capture the viewpoints in the target population.
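To make the distinction between covering each criterion and covering every combination concrete, a minimal sketch in Python follows. It is illustrative only and was not part of the study; the strata reflect the list above, while the subject records and field names are hypothetical.

```python
# A minimal sketch (illustrative only) of checking that each criterion in each
# stratum is met by at least one subject. Records and field names are hypothetical.
STRATA = {
    "sector": {"Telecom", "Finance"},
    "role": {"Exec", "Mgmt", "Analyst"},
    "function": {"Business", "Tech"},
    "mode": {"FTE", "Consult", "Vendor"},
}

def uncovered_criteria(subjects):
    """Return the (stratum, criterion) pairs not yet met by any subject."""
    covered = {(stratum, subject[stratum]) for subject in subjects for stratum in STRATA}
    required = {(stratum, criterion) for stratum, values in STRATA.items() for criterion in values}
    return required - covered

sample = [
    {"sector": "Telecom", "role": "Exec", "function": "Business", "mode": "FTE"},
    {"sector": "Finance", "role": "Analyst", "function": "Tech", "mode": "Consult"},
]
print(uncovered_criteria(sample))  # e.g. ('role', 'Mgmt') and ('mode', 'Vendor') remain uncovered
```

Only the ten individual criteria need to be covered in this way; the 36 possible combinations do not.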

Finally, the interviews asked subjects to reflect on their experiences across their careers, spanning many roles and employers. Given the high mobility of the IS workforce, many of the more experienced subjects have worked for different employers, in a range of sectors and in different roles. The explanations, insights and anecdotes gathered represent their views from across these disparate roles and organisations.

4.3.2         Demographics

The final sample consisted of fifteen subjects, interviewed for an average of 90 minutes each. In accordance with the ethical guidelines for this research project, the subjects’ names are suppressed as are their current and past employers. Pseudonyms have been used for employers, except for this project’s industry partner, Telstra Corp, where approval was obtained.

ID    Organisation   Sector    Role      Function   Mode      Experience (years)   Qualifications (highest)
S1    ISP            Telecom   Exec      Business   FTE       30+                  PhD
S2    Telstra        Telecom   Exec      Business   FTE       35+                  BA
S3    Telstra        Telecom   Analyst   Business   FTE       5+                   BE, BSc
S4    DW             Telecom   Analyst   Tech       Vendor    5+                   MBA
S5    DW             Telecom   Mgmt      Tech       Vendor    25+                  MBA, MEng
S6    Telstra        Telecom   Mgmt      Business   FTE       15+                  MIS
S7    Telstra        Telecom   Analyst   Tech       FTE       15+                  PhD
S8    Telstra        Telecom   Exec      Business   FTE       35+                  Trade Cert.
S9    Telstra        Telecom   Exec      Business   FTE       15+                  Grad. Cert.
S10   Telstra        Telecom   Mgmt      Business   Consult   20+                  MBA
S11   Telstra        Telecom   Mgmt      Business   FTE       15+                  BSc
S12   OzBank         Finance   Exec      Business   FTE       20+                  High School
S13   OzBank         Finance   Analyst   Tech       Consult   20+                  Dip. (Mktg)
S14   Telstra        Telecom   Mgmt      Tech       FTE       30+                  Unknown
S15   Data           Finance   Exec      Tech       FTE       20+                  Unknown

Table 6 Subjects in Study by Strata

Note that, for the purposes of these classifications, a subject's role was assigned by the researcher rather than taken as self-reported, owing to differences in terminology. Subjects were deemed “executive” if they had board-level visibility (or, in one instance, a large equity stake in the business), while “management” meant they were accountable for a number of staff or for significant programs of work.

All 15 subjects reported that IQ was an important factor in their work and of interest to them; two had “Information Quality” (or similar) in their job title and a further two claimed significant expertise in the area. All subjects had experience with preparing, analysing or evaluating business cases for IS projects.

The subjects are unusually well educated compared with the general population, with most holding university qualifications and half holding postgraduate degrees. Their combined IS industry experience exceeds 300 years, with four reporting more than 30 years in their careers. Only four subjects have had just one employer, while another four indicated significant work experience outside Australia. Eight subjects have staff reporting to them, and four have more than 30.

While there were still “leads” available to pursue, after the fifteenth interview each of the designated strata (sector, role, function and mode) was adequately represented. After some 25 hours of interview data, no new IQ measures or investment processes were identified. Novel anecdotes around poor IQ were still emerging; however, gathering these was not the intent of the study. As such, it was deemed that “saturation” had been reached and additional interview subjects were not required.

4.3.3         Limitations

Owing to practical constraints around time, travel and access, the final sample has some limitations. Here, the most significant are addressed.

·         Geography. The sample consists of subjects from metropolitan Australian cities. There may be reason to think that viewpoints or practices vary from country to country, or even city to city, and that subjects should therefore be selected from different locations. However, it is argued that the high mobility of the IS workforce, along with the standardising effect of global employers, vendors and technologies, means that geographical differences within the target population are minimal.

·         Gender. While the sample consists only of men, it is argued that the absence of female subjects in the study is a limitation rather than a serious flaw. The reason is that subjects were asked to provide responses to how they have seen IQ measurement and valuation employed in practice. Given that the sampling processes targeted managers and analysts in larger organisations, it is considered unlikely that women will have worked on significantly different IS projects than men. While there may be instances of women being “streamed” into certain projects or management processes (thus affording very different experiences), it is unlikely that such practices could remain widespread and persistent in the face of labour market changes and regulatory frameworks. If there are separate “male” and “female” ways of understanding IQ, it is not in the scope of this study to determine these.

·         Culture. While their ethnic backgrounds varied, the subjects were all drawn from Australian workplaces. As with geography and gender, sampling from a range of cultures was not a goal. It is argued that the global nature of the industry (reflected in the sample) tends towards a standardisation of norms and values around the dominant culture, in this case a Western business perspective. Again, this is not to rule out the possibility of cultural (or linguistic or ethnic) differences in understandings of IQ; rather, it is outside the scope of this study to ascertain these differences.

·         Organisation. All subjects were employed in the corporate sector at the time of interview, with none from the government or small-business sectors. The absence of public-sector experience is ameliorated somewhat by the fact that Telstra was, until the mid-1990s, a government organisation; over one third of the collective experience in the sample (i.e. more than 100 years) dates from that time. The perspective of small-business practitioners on IQ was not sought, given the study's focus on large-scale, customer-focused IS environments.

These limitations in the final study sample suggest areas of possible further research but do not substantially undermine the purposes of sampling for this study.

4.3.4         Summary of Recruitment

Subject sampling consisted of using “snowballing” from within the industry partner organisation to ensure representation across four identified strata. As the interview series progressed, demographic information and further “leads” were examined with a view to obtaining sufficient coverage within each stratum. Response data were examined after each interview to ascertain when “saturation” was achieved.

4.4       Data Collection Method

In this section, the method for collecting and organising the qualitative data from the interviews is discussed in further detail. It covers the general approach, including a description of the process and settings, and the materials, namely the specific questions and prompts used in the interviews.

The objective was to gather evidence of existing practices and norms for justifying investments in improving the quality of customer information. Of particular interest was the collection or use of measures (of any kind) to support, evaluate or test initiatives that have an impact on customer IQ.

4.4.1         General Approach

Subjects were recruited and guided through a “semi-structured” interview at their workplaces using some pre-written questions and prompts (lists of words). All interviews were tape recorded (audio-only) and notes were taken by the investigator during and after the interview.

The interview was conducted in three phases. The first phase gathered demographic information and a brief history of the subject (including qualifications, work experience and current position). The second phase looked more closely at the subject's awareness, use and selection of IQ-related measures. The final phase was more open-ended and provided an opportunity for subjects to share information and perspectives that they thought relevant, in light of the preceding two phases.

Interview subjects were recruited using “snowballing” and approached via email. In some instances, telephone calls were used to organise locations and times for the interview. The subjects determined their own workplace approval (where needed). Ethics consent for the study was obtained in accordance with University processes.

All interviews took place at the subjects' workplaces, either in their offices or in designated meeting rooms. All but one were located in the central business district (or adjacent business areas) of Melbourne, Australia; the remaining interview took place in the Sydney central business district.

The interviews took place between 10am and 7pm on working weekdays, with the majority occurring after lunch (between 1pm and 5pm). Subjects were advised that the interviews were open-ended but that they should allow at least one hour. Most gave considerably more than that, averaging one and a half hours. The shortest was one hour and ten minutes, while the longest was just over two hours.

In all cases, there was one researcher and one subject present and the setting was “corporate”: dress was formal (suit and tie), the language was business English and the prevailing mood could be described as relaxed. The subjects' attitude to the research project was positive.

These observations are important because they indicate the subjects were willing participants, in their usual work environment, speaking to familiar topics in a tone and manner which was comfortable. By allowing them to control the time of the interview, the subjects were not rushed or interrupted. Comments made before and after the tape recording (that is, “off the record”) were not markedly different in either tone or substance. This suggests that subjects were speaking reasonably freely and that the presence of recording equipment and note-taking did not make them more guarded in their remarks.

4.4.2        Materials

During the interviews, two sheets of A4 paper were used as prompts. The first contained a series of questions, including demographic and context questions, to lead the discussion. This information would help assess whether the study met the sampling criteria, that is, obtaining reasonable coverage of industry practice. It would also allow analysis of interactions between an individual's education, role or experience and their views about IQ justification.

The second sheet comprised a list of measures that may relate to investment in IQ. These measures were grouped into three lists (system, relationship and investment), drawn respectively from the academic literature on information quality, customer relationship management and IS investment. Further, these lists were not static but evolved over the course of the interviews.

At the commencement of each interview, the investigator pointed out these prompts and explained that the contents of the second one would be progressively revealed during that phase of the interview.

The context questions used are as follows, with further prompts in parentheses:

·         What is your professional background? (Qualifications and work experience.)

·         What is your current role within your organisation? (Title, position, current projects/responsibilities.)

·         What experiences have you had with Customer Information Quality? (Projects, systems, methods, tools, roles.)

·         How would you describe your perspective or view on Customer Information Quality? (Operational, analytical, managerial, strategic.)

·         How does your organisation generally justify investments in your area? (Business case, investment committee, ad hoc.)

Generally, this last question (investments) prompted the lengthiest exposition, as it involved explaining a number of corporate processes and required frequent clarification by the subject of their terminology around roles and titles. This discussion frequently involved reference to measurements, which were drawn out more fully in subsequent stages. Further, this question also afforded an opportunity for subjects to describe some non-quantitative approaches to IQ investment justification.

The second-last question (perspective) was the most confusing for subjects, frequently requiring prompting and clarification by the interviewer. In many cases, the answer to the second question (current role) had already covered this ground, making the question largely redundant.

In the course of conducting these interviews, explanations and clarifications were streamlined and, given the bulk of subjects were drawn from one large company (albeit in quite different areas), the time taken to convey descriptions of corporate hierarchies and processes reduced.

Sources of potential confusion for the subjects were anticipated as the investigator gained experience with the question set. For example, the third question (experiences) was historical in nature, which on occasion caused confusion about whether subsequent questions were asking about current or historical perspectives and processes. Explicitly stating the intended timeframe when asking the question saved the subject from either seeking clarification or answering an unintended question and being asked again.

The second sheet comprised three columns: system measures, relationship measures and investment measures. The system measures related to “the technical quality of the repository” and initial instances were drawn from the IQ literature. The relationship measures described “outcomes of customer processes” and came from the Customer Relationship Management literature. The investment measures, characterised as describing “the performance of investments”, were selected from the IS investment literature.

During this phase of the interview, subjects were told of these three broad groups (with explanation) and the table was covered with a blank sheet in such a way as to reveal only the three headings. As each group was discussed, the blank sheet was moved to reveal the entire list in question.

For each of the three groups, subjects were asked questions to ascertain their awareness, use and selection of metrics as part of IQ evaluation, assessment and justification. First, for awareness, subjects were asked to nominate some measures they had heard of (unprompted recall). These were noted. Next, subjects were shown the list and asked to point out any that they had not heard of (prompted recall). Finally, they were asked to nominate additional, related measures that they thought should be on the list. In this way, their awareness of a variety of measures was established.

The next series of questions related to their use of measures in IQ evaluation, assessment and justification. Subjects were asked which of the measures they had a) heard of other people actually using and b) they had used directly themselves. Follow-up questions related to the nature of the usage; for example, whether it was retrospective or forward-looking, formal or informal, ongoing or ad hoc and the scope (in terms of systems and organisation).

Lastly, for the measures they had used, subjects were asked to explain why those particular measures were selected. Was it mandatory or discretionary? Who made the decision? What kinds of criteria were employed? What are the strengths and weaknesses of this approach? These questions helped establish an understanding of what drives the selection of measures for IQ evaluation, assessment and justification.

As subjects moved through the awareness, use and selection phases, the set of measures under discussion rapidly diminished. In most instances, the discussion did not progress to the selection phase, since the subject had not directly used any of the measures. No subject reported first-hand experience of selecting measures across all three domains of system, relationship and investment.

During the awareness phase, subjects were asked to nominate additional measures that they thought should be included. As a result, the list of measures grew over the course of the study. Further changes came from renaming some measures to reduce confusion, based on subject feedback and clarification. The three sets of measures at the start and the end of the study are reproduced here.

System Measures   Relationship Measures     Investment Measures
Validity          Response Rate             Payback Period
Currency          Churn                     Internal Rate of Return
Completeness      Cross-Sell / Up-Sell      Share of Budget
Latency           Credit Risk               Economic Value Added
Accuracy          Lift / Gain               Net Present Value
Consistency       Customer Lifetime Value
Availability

Table 7 Initial Measure Sets

At the end of the study, 14 of the initial 18 measures were unchanged and a further six measures had been added. “Accuracy” was renamed “correctness”, while “latency”, “availability” and “churn” were augmented with near-synonyms. The remaining changes were additions.

System Measures          Relationship Measures            Investment Measures
Validity                 Response Rate                    Payback Period
Currency                 Churn / Attrition / Defection    Internal Rate of Return
Completeness             Cross-Sell / Up-Sell             Share of Budget
Latency / Response       Credit Risk                      Economic Value Added
Correctness              Lift / Gain                      Net Present Value
Consistency              Customer Lifetime Value          Accounting Rate of Return *
Availability / Up-Time   Share of Wallet *                Profitability Index *
                         Time / Cost to Serve *           Cost / Risk Displacement *
                         Satisfaction / Perception *

Table 8 Final Measure Sets (new measures marked with *)

This list was stable for the last five interviews, providing a strong indication that saturation had been reached as far as the awareness, use and selection of measures were concerned.
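As a check on the arithmetic behind these two tables, the following sketch (not part of the original analysis) compares the initial and final measure sets, treating the renamed or augmented measures described above as carried over.

```python
# Illustrative comparison of the initial and final measure sets (Tables 7 and 8).
# The rename map reflects the augmentations described in the text.
initial = {
    "Validity", "Currency", "Completeness", "Latency", "Accuracy",
    "Consistency", "Availability", "Response Rate", "Churn",
    "Cross-Sell / Up-Sell", "Credit Risk", "Lift / Gain",
    "Customer Lifetime Value", "Payback Period", "Internal Rate of Return",
    "Share of Budget", "Economic Value Added", "Net Present Value",
}
final = {
    "Validity", "Currency", "Completeness", "Latency / Response",
    "Correctness", "Consistency", "Availability / Up-Time", "Response Rate",
    "Churn / Attrition / Defection", "Cross-Sell / Up-Sell", "Credit Risk",
    "Lift / Gain", "Customer Lifetime Value", "Share of Wallet",
    "Time / Cost to Serve", "Satisfaction / Perception", "Payback Period",
    "Internal Rate of Return", "Share of Budget", "Economic Value Added",
    "Net Present Value", "Accounting Rate of Return", "Profitability Index",
    "Cost / Risk Displacement",
}
renamed = {
    "Accuracy": "Correctness",
    "Latency": "Latency / Response",
    "Availability": "Availability / Up-Time",
    "Churn": "Churn / Attrition / Defection",
}
carried_over = {renamed.get(m, m) for m in initial}
added = final - carried_over
print(len(initial), len(final), len(initial) - len(renamed), sorted(added))
# -> 18 24 14, plus the six added measures
```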

The final phase of the interview involved asking more open questions of the subjects, in order to elicit their perspectives in a more free-flowing dialogue. It also provided a means to garner further references and participants. Many subjects availed themselves of this opportunity to relate anecdotes, grievances and “war-stories from the trenches”. The specific questions were:

 

·         Do you have any other views you’d like to express about criteria, measures or models justifying IQ initiatives?

·         Do you have any anecdotes or maxims you’d like to share?

·         Do you have any references to authors or publications you think would be of benefit to this research?

·         Can you recommend any colleagues you think may be interested in participating in this research?

·         Would you like to receive a practitioner-oriented paper summarising this stage of the research?

It was important to ask these questions after the main body of the interview, since by then subjects had a much better idea of the kinds of information (anecdotes, references, colleagues) the study was seeking. They were also more comfortable with the investigator and more likely to make a recommendation or endorsement to their colleagues.

Over the course of the study, the views and anecdotes continued at a constant pace, while the (new) nominated colleagues tapered off. In particular, subjects from the industry partner suggested the same people repeatedly, so once these subjects were either interviewed (or confirmed their unavailability), the pool of leads diminished.

This indicates that the study was speaking to the “right” people, in terms of seniority and organisational group, at least as far as the industry partner was concerned. It also provides more evidence that saturation had been achieved, in that few new candidates with specific knowledge were being nominated.

4.4.3         Summary of Data Collection

Data were collected by interviewing subjects at their workplaces and making audio recordings and taking field notes. The interviews were semi-structured and had three phases. First, subjects were asked about their experience and current roles. Second, their awareness, use and selection of measures pertaining to IQ were ascertained (across system, relationship and investment domains). Finally, more open-ended discussion and leads to further resources were sought.

As the interview study progressed, the sets of measures used in the second phase were updated to reflect either better definitions or additions to the list. Also, subjects were asked to nominate colleagues to participate in the study, as part of the “snowballing” recruitment process. The latter interviews generated no new measures and very few new “leads”, indicating that saturation had been reached.

4.5       Data Analysis Method

This section describes the analytical method applied to the data collected in the interviews. The approach taken, and its rationale, is outlined first, followed by a description of the three analytical phases undertaken.

The first phase involves immersion in the data and distillation of key points as individual narratives (narrative analysis) using “open coding” (Neuman 2000). The second phase is the grouping and re-aggregation of key points by topic and theme (topic analysis) using “axial coding” (Neuman 2000). The third phase is the specification and evaluation of the emerging propositions (induction) using “selective coding” (Neuman 2000).

4.5.1         Approach and Philosophical Basis

The primary goal of analysing the interview data collected in the study is to produce a summary of the IS industry's “state of the art” in IQ assessment, evaluation and justification within large-scale customer processes. By collecting data about subjects' experiences and roles, the intent is to establish the scope over which such summarisations may hold.

In keeping with the over-arching Design Science research method, the secondary goal is to uncover the unstated or implicit requirements (or constraints) of analysts and decision-makers working in this field. In particular, the organisational importance and acceptability of methods and measures used for justifying investments in IQ is sought.

When constructing a framework for investing in IQ improvements (involving a method and measures), it is necessary to understand both current practice in this regard and the likely suitability of tentative new proposals to practitioners. An understanding of currently used measures provides great insights into how measures could be used in a new framework and how they could inform existing practice.

The form of the output of such an analysis is a set of propositions induced from the data. To be explicit, it is not the goal of the study to build a theory – or comprehensive theoretical framework - of how organisations currently justify their IQ investments. The set of propositions instead constitute a distillation or summary of events and processes to be used in subsequent stages of this research.

Given the use of inductive reasoning to produce propositions, it is worth establishing what is meant by “proposition” here by re-visiting the philosophical basis for this research: Critical Realism. While the deeper discussion of the ontological, epistemological and axiological position adopted in this research project is in Chapter 2 (Research Design), it is appropriate to re-cap and apply those ideas here, in light of this study.

An analysis of how organisations justify investments in IQ is not amenable to the kind of rigorous, controlled laboratory experimentation popular in investigations of natural phenomena. The objects of interest are socially constructed: organisational processes, business cases and corporate hierarchies. Hence, lifting wholesale Humean notions of scientific rigour (naturalist positivism, for want of a better description) would be inappropriate for this task (Bhaskar 1975).

For example, the interview subjects are conceptualising and describing these objects (the transitive dimension) as well as navigating and negotiating their way through them (the intransitive dimension). The two are inextricably linked: the way an individual manager conceives of a corporate funding process will also reinforce and perpetuate that structure. In Bhaskar's terms, these subjects are operating in an open system.

This mixing of object and subject leads to a mixing of facts and values, in that it is not possible for participants to state “value-free” facts about their social world. Even seemingly factual content, like descriptions of professional qualifications or current organisational role, necessarily contain value judgements about what is included or excluded.

Critical Realism (CR) acknowledges these complexities while recognising that there are still “patterns of events” that persist and can be described. Rather than insisting on the positivist purists’ “constant conjunction” of causes and their effects (unachievable in a non-experimental or open system), CR offers “CMO configurations”, or Context-Mechanism-Outcome propositions. The analyst seeks to determine regularities (loosely, causality) in a particular context (Carlsson 2003a). Additionally, certain extraneous factors may “disable” or inhibit this mechanism from “firing” and the analyst’s role is to determine these.

The resulting descriptions may be referred to as propositions, but they differ from the classical positivist meaning of the term in two important ways. First, they are not “facts” as commonly understood in the natural sciences: the entities to which they refer are highly contingent and situated; they are not elemental, atomic, universal or eternal. Second, it is not essential to describe the causal relationships between entities in terms of necessity or sufficiency. Instead, there is a pattern or regularity at the actual level that is observable in the empirical. This regularity may be grounded in the real, but its action may not be apparent to us.

Consider an example proposition like “Approval must be obtained from the Investment Panel before vendors can be engaged”. The entities – approval, panel, vendor – are not real world phenomena in the way that atoms or wildebeest are. The processes – approval and engagement – are similarly contrived and cannot be tested in a closed system, that is, experimentally. While the relationships between them can be characterised in terms of contingency (“if …, then …”), this would be to misstate the nature of the causality here.

For example, a sufficiently senior executive may be able to override this configuration (perhaps by bypassing the approval or blocking the engagement). To a naïve positivist, just one instance of this happening would invalidate the proposition and require it to be restated in light of the executive's capacity.

But what if no executive has actually done this? From a CR perspective, if such a possibility exists in the minds of the participants, then whether or not anyone has observed it happening (or has been able to produce it under experimental conditions) does not undermine the validity of the proposition. To the positivist, the absence of such an observation would render this extension of the proposition invalid, since there is no empirical basis to support it.

From this point of view, we can consider CR to be more robust than positivism and more accommodating of situational contingencies. Alternatively, we could characterise CR as being upfront about the kinds of assumptions that are needed to make positivist inquiry sound and practicable in the social realm.

Following Layder's stratification of human action and social organisation (Layder 1993), this investigation into organisational IQ justification is primarily concerned with the situated activity level: how individuals navigate social processes such as shared understanding, evaluation and collective decision-making. During the interviews, phenomena at this level are described by these same individuals, so the analysis also involves the lowest level, the self: the tactics and “mental models” employed by individuals as they engage in this situated activity.

This situated activity takes place at the setting level of large-scale corporate environments, with all the norms and values embedded therein. The top level, context (which encompasses macro-level phenomena like political discourse, cultural participation and economic production and consumption), is outside of the scope of this study.

The use of Critical Realism to underpin this investigation means that the “state of the art” of the IS industry in IQ justification can be couched as propositions, or CMO configurations in CR terms. Knowledge summarised in this way is not intended to be treated as positivist propositions, with the associated operationalisation into hypotheses and resulting empirical testing. Nor is the set of CMO configurations intended to form a comprehensive theoretical framework.

Instead, these propositions, rigorously established and empirically grounded, can be used to provide guidance in the systematic construction of a framework for investing in IQ.

4.5.2        Narrative Analysis

The data were considered as a succession of narratives, taken one participant at a time, and ranging across a wide variety of topics. Some topics were pre-planned (as part of the interview question design) and others were spontaneous and suggested by the participant.

The first analysis of the data took place during the actual interview. Empirically, this represented the richest level of exposure since the face-to-face meeting facilitated communication of facial expressions and hand gestures which could not be recorded. The audio from the entire interview was recorded, while hand-written notes were taken about the setting (including time of day, layout of the meeting room or office and so on). Notes were also taken of answers to closed questions, unfamiliar (to the interviewer) terminology and some key phrases.

After each interview, a “contacts” spreadsheet was updated with the key demographic information (organisational role, education and so on). Additional “leads” (suggested subjects for subsequent interviews) were also recorded. This information was used to keep track of the “snowballing” recruitment process and to ensure that the sampling criteria were met. This spreadsheet also provided a trail of who suggested each subject, times, dates and locations of the interview, contact details (including email addresses and phone numbers) and notes about whether they’d been contacted, had agreed to the interview and signed the release form.
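The fields tracked in that spreadsheet can be pictured as a simple record. The sketch below is only an illustration of the structure described above; the actual tracking was done in a spreadsheet, and the field names paraphrase that description.

```python
# Illustrative record of the fields tracked per lead in the "contacts" spreadsheet.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    subject_id: str                        # e.g. "S4"
    suggested_by: Optional[str]            # who nominated this lead (the "snowballing" trail)
    role: str                              # Exec / Mgmt / Analyst
    sector: str                            # Telecom / Finance
    function: str                          # Business / Tech
    mode: str                              # FTE / Consult / Vendor
    email: str
    phone: str
    contacted: bool = False
    agreed: bool = False
    consent_signed: bool = False
    interview_when: Optional[str] = None   # date and time of interview
    interview_where: Optional[str] = None  # location
```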

The second pass was the paraphrasing and transcribing of each interview. Typically, this took place some weeks after the interview itself, by playing back the audio recording and referring to the hand-written notes. Rather than a word-for-word transcription of the entirety of the interview, a document (linked to the contacts spreadsheet) was prepared containing a dot-point summary and quotes. Approximately half of this material was direct quotations with the remainder paraphrased. The direct quotes were not corrected for grammar, punctuation was inferred, meaningless repetition was dropped and “filler words” (eg “um”, “ah” and “er”) were not transcribed.

For example

“So um you see you see the way we ah tackled this was …”

becomes

“So, you see, the way we tackled this was …”

During the paraphrasing and transcription process, most of the audio was listened to three or four times, owing to the listen/pause/write/rewind/listen/check cycle associated with transcription from audio. For direct transcription, the length of audio that could be retained in short-term memory long enough to type reliably was between five and ten seconds; for paraphrasing, it was up to 20 seconds. In some cases, the cycle itself had to be repeated because the audio was poor (with hissing and clicks) or because subjects spoke very quickly, with a non-Australian accent, or both. The variable playback feature of the audio device used was extremely helpful here, allowing the audio to be sped up or slowed down during replay.
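The kind of light clean-up applied to direct quotes can be illustrated with a small sketch. The actual editing was done by hand during transcription; the patterns below are illustrative only and do not attempt the punctuation inference described above.

```python
import re

# Illustrative only: the actual quote clean-up was performed manually.
FILLERS = re.compile(r"\b(um|ah|er)\b[ ,]*", flags=re.IGNORECASE)          # drop filler words
REPEATS = re.compile(r"\b(\w+(?: \w+)?) \1\b", flags=re.IGNORECASE)        # "you see you see" -> "you see"

def tidy_quote(raw: str) -> str:
    text = FILLERS.sub("", raw)
    text = REPEATS.sub(r"\1", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(tidy_quote("So um you see you see the way we ah tackled this was"))
# -> "So you see the way we tackled this was"
```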

Preparatory remarks and explanations from the researcher that differed little from subject to subject were copied from prior interviews and modified where needed. Some sections of discussion – often lasting several minutes – concerned the researcher’s prior work experience, common acquaintances, rationale for undertaking doctoral studies and future plans. While such “small talk” is important for establishing rapport and helping the subject understand the context and purpose of the interview (in particular, the understanding of organisational processes and terminology), it sheds very little light on the subjects’ view of the matters at hand. As such, much of this discussion was summarised to a high level.

The third pass of the data consisted of analysing the textual summaries without the audio recordings. This involved sequentially editing the text for spelling and consistency (particularly of acronyms and people’s names) to facilitate text searches. The layout of the document was also standardised. For example, interviewer questions were placed in bold, direct quotes from the subjects were inset and put into italics and page breaks were introduced to separate out discussion by question (as per the interview materials).

The result of this sequential analysis was that some 25 hours of interview data plus associated hand-written notes were put into an edited textual form with a standardised format, linked to a spreadsheet that tracked the subjects’ demographic details and recruitment history.

4.5.3         Topic Analysis

Here, the analysis was undertaken on a topic-by-topic basis, rather than considering each subject’s entire interview. The units of analysis (“topics”) were created by a process of dividing the text data into smaller units and then re-linking related but non-contiguous themes. That is, discussion on one topic – such as the role of vendors – would typically occur at several points during the interview.

This process cannot be characterised as true open coding (Neuman 2000) since there was a pre-existing grouping of concepts - the groupings for the set of semi-structured questions initially prepared:

·         Context

·         System Measures

·         Relationship Measures

·         Investment Measures

·         Conclusion

Within each of these categories, three to five questions were asked (as outlined above in the Materials section at 4.4.2). Along with the “contacts” spreadsheet, these questions formed the basis of a new spreadsheet template (“topics spreadsheet”) for recording the subjects’ responses, with 21 fields.

The next phase was to consider each of these fields as candidate topics by reviewing each in turn, corresponding to axial coding (Neuman 2000). This involved manually copying the relevant text from each subject on that topic and highlighting (in a standout colour) keywords or phrases.

Phrases were highlighted as being significant if they seemed “typical” or “exemplary” of what a large number of subjects reported. Other times, they were selected because they stood out for being unusual, unique or contrarian. These keywords/phrases were then isolated and put into the topics spreadsheet.

In light of this, the topics spreadsheet became a table: each column related to a question while each row described a subject. Each cell contained a list of these topics that arose in the course of the discussion by each subject. The summarisation of the topics in the columns was very straightforward as there was a high degree of similarity between subjects. For example, discussion of Service Level Agreements (SLAs) by the subjects occurred in response to the same questions in many cases.
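The structure just described (a row per subject, a column per question, each cell holding a list of topic codes) might be sketched as follows. The subject IDs, question labels and codes are illustrative, and the actual analysis was performed in a spreadsheet rather than in code.

```python
# Illustrative structure of the "topics spreadsheet": one row per subject,
# one column per interview question, each cell a list of topic codes.
from collections import defaultdict

topics = defaultdict(lambda: defaultdict(list))

# Hypothetical codings for two subjects.
topics["S2"]["Context: justification"] += ["business case", "approval", "funding panel"]
topics["S3"]["Context: justification"] += ["business case", "marketing-led"]
topics["S3"]["Investment: NPV/IRR"] += ["business case", "discounted cash flow"]

# Summarising a column: which subjects raised "business case" for each question.
for question in ("Context: justification", "Investment: NPV/IRR"):
    raised = [s for s, cols in topics.items() if "business case" in cols.get(question, [])]
    print(question, "->", sorted(raised))
```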

Thematic consolidation of keywords/phrases (codes) between topics was more arbitrary and required more interpretation. An example illustrates this. Discussion of the “business case” came up towards the start (Context question 5: “How does your organisation generally justify investments in your area?”) and the end (Investment Measures questions relating to Net Present Value and Internal Rate of Return). Quotes (as assertions or explanations) about business cases and their role in organisational decision-making were found in response to both questions. As a topic, it is independent of either question and so can reasonably be separated from them. However, the site of its emergence determines its links to other topics. For example, its relationship to process-oriented topics like “approval” and “accountability” lies in the Context question, whereas discussion of it in the abstract (e.g. “discounted cash flow”) is found in the Investment Measures questions.

The outcome of this topic analysis was a set of recurrent themes or topics (codes) that are interlinked and span a number of questions asked of the subjects. These topics relate to information quality and valuation in the abstract as well as particular steps or artefacts used by particular organisations. They form the building blocks of the proposition induction that follows.

4.5.4        Proposition Induction

The propositions – or CMO configurations, in Critical Realist terms – are induced from data using the topics or themes identified during the topic analysis phase (selective coding). The elements of context, mechanism and outcome are proposed and evaluated (Pawson and Tilley 1997). In particular, “blockers” are identified (that is, when the CMO was “triggered” but the regularity did not emerge). Supporting evidence in the form of direct quotes is sought as well as any counter-evidence, to enable the balancing of their respective weights.

When specifying the context in which the configuration occurs, it is important to describe the scope, or level. Layder’s stratification of human action and social organisation (Layder 1993) is used for this. It is not possible to completely describe the norms and practices from first principles, so as a summary, a large amount of “background information” about corporate processes and IT must be assumed.

The mechanism identified is a regularity or pattern, frequently observed in the specific context and associated with certain outcomes. It should not be understood as a formal causal relationship (that is, as a necessary or sufficient condition), since the events and entities under description have not been formally defined. From a Critical Realist perspective, what we observe in the workplace (through the explanations given by observers and participants) may have underlying causes that are “out of phase” with these observations. The approach of systematic observation through controlled experimentation can reveal these underlying causes in a closed system (eg laboratory), but it is simply not possible in an open system.

It can be difficult to isolate the outcome of interest from the range of possible consequences described in a CMO configuration. This is made all the more challenging when analysing verbal descriptions of events by subjects who themselves were participants and who will undoubtedly apply their own criteria of interest through imperfect recollections. This suggests that to more objectively determine the outcomes, the study could pursue techniques from case study research, such as having multiple participants describing the same events or incorporating supporting documents (such as financial or project reports).

However, the subjects’ subjective selection of certain outcomes as worth reporting in the interview (and, implicitly, leaving out others) has significance in itself. The subjects (typically with many years or even decades of experience), when asked to share their views on their experience, are implicitly drawing on their own mental models of “how the world works”. Drawing out these patterns is more useful to the task of understanding existing practice than running an objective (yet arbitrary) ruler over past projects.

The expression of the CMO configurations follows a simple format – a “headline” proposition followed by an explanation of the context, mechanism and outcome identified. Supporting evidence – with disconfirming or balancing evidence – is provided in the form of direct quotes, where suitable.
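This reporting format can be pictured as a simple record structure. The sketch below is purely illustrative (the propositions themselves are presented in prose in Section 4.6), with P1 from the findings used as example content.

```python
# Illustrative record for a Context-Mechanism-Outcome (CMO) configuration,
# following the reporting format described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOConfiguration:
    headline: str                    # the "headline" proposition
    context: str                     # setting or situated activity in which it holds
    mechanism: str                   # the regularity or pattern observed
    outcome: str                     # the outcome of interest
    supporting_quotes: List[str] = field(default_factory=list)
    counter_evidence: List[str] = field(default_factory=list)
    blockers: List[str] = field(default_factory=list)  # factors stopping the mechanism "firing"

p1 = CMOConfiguration(
    headline="Organisations evaluate significant investments with a business case.",
    context="Organisational decision-making at the situated activity level.",
    mechanism="A committee evaluates competing business cases against pre-set criteria.",
    outcome="A subset of proposals is approved and allocated resources.",
)
```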

4.5.5        Summary of Data Analysis

The study uses Critical Realism to identify regularities in the data, in the form of propositions (CMO configurations, in CR terms). These propositions are summarisations of the norms and practices of IS practitioners in IQ justification; as such they capture the “state of the art” in industry. They do not constitute a comprehensive descriptive theoretical framework, but can be seen as an expression of the implicit requirements for the construction of a normative framework for IQ justification.

The propositions were induced from the data in three phases: the first considered each subject’s interview data in its entirety, distilling key themes (narrative analysis with open coding). The second examined each theme in turn, grouping and re-aggregating the summarisations (topic analysis with axial coding). The last pass involved constructing and evaluating the propositions with reference to the original data (selective coding).

4.6       Key Findings

This section outlines the key findings from the interview study expressed as Context/Mechanism/Outcome configurations, a technique from Critical Realism. Each section addresses a high-level mechanism pertaining to customer information quality investments (evaluation, recognition, capitalisation and quantification). The configurations are supported by direct quotes from subjects and analysis and interpretation.

4.6.1         Evaluation

 

P1: Organisations evaluate significant investments with a business case.

Context: Organisational decision-making about planning and resource allocation takes place at the situated activity level of stratification. Individual managers and executives, supported by analysts, prepare a case (or argument) for expenditure of resources, typically in the form of initiating a project.

Mechanism: A separate entity (typically a committee) evaluates a number of business cases either periodically or upon request. The criteria for evaluation are specified in advance and are the same across all cases. The process is competitive and designed to align management decisions with investors’ interests.

Outcome: A subset of proposals is approved (perhaps with modifications) and each is allocated resources and performance criteria.

All subjects described in some detail the evaluation process for initiatives to be developed and approved. Regardless of their role, experience, organisation or sector, there was an extensive shared understanding about the concept of a “business case” and how it justifies expenditure.

Whether a junior IT analyst or a senior marketing executive, this shared understanding consisted of a number of common features. Firstly, business cases are initiated by a group of employees who see the need and then pass the request for funding to their superiors. In this, business cases are “bottom-up”.

Almost equally widespread was the view that the initiative needs to be driven by the “business side” of the organisation, that is, the marketers, product managers and sales units rather than the technology side. One Telstra executive explained:

In terms of where we spend in data integrity and quality, it predominantly is business initiative driven as well as the ones that we ourselves identify and therefore ask for money – funding – to improve the quality of data. The ones that are business initiatives, it’s predominantly an individual business or group – let’s say marketing decided to industry code all of their customer base – they would put a business case. So these are the logical steps: the initiator of a business group identifies the business requirements for their own functional group […] Once they put that business idea together then it’s presented to a business unit panel, who then determines whether the idea itself has any merit and if it has then it’s approved […] the resources are approved to go ahead with the idea. So all projects that relate to data integrity or data conformity or data quality go through this process. (S2, Telstra)

This view was confirmed by a marketing analyst:

In a marketing driven organisation, marketing takes the lead on this [investing in IQ initiatives] and it comes back to what benefits we can deliver to customers. (S3, Telstra)

Secondly, the initiators already believe that the proposal should be undertaken before they prepare the business case. They approach its preparation as a means of persuading funding authorities to prioritise their (worthy) initiative rather than as a way of determining for themselves whether it should proceed. From the proponents’ perspective, the business case is a tool for communicating with senior management rather than a decision-making aid.

Thirdly, there is broad consensus on how a business case should be appraised, regardless of its subject matter: by financial measures of future cash flows under different scenarios.

The way we structure it is we have a bucket to spend on IT and each group will put up a case of why we need to do this. Projects are driven from a marketing side to deliver improvement but also IT need to be involved […] There’s a bucket of money and people just have to put up their hands to bid for it and senior management will make an assessment on what’s the costs and benefits and which ones get priority. […] In terms of financial measures, it’s pretty standardised across the company because everybody has to go through the same investment panels (S3, Telstra)

The role of financial measures in the formulation of the business case – and some alternatives – is discussed in more detail below. No evidence was found that contradicted this configuration; that is, no instances where significant investments were undertaken without even a cursory (or implicit) business case.

Given the sampling strategy for selecting organisations, it is not surprising that there is such convergence of views on funding initiatives. After all, financial discipline in public companies stems from the legal obligation of board members to seek to maximise value for shareholders. Approving funding (or appointing executives to investment panels) is perceived as an effective way to discharge this obligation and ensure discipline:

The days of just being able to say ‘well, if you give me $5 million bucks and we’ll increase the take-up rate on x by 5%’ and that’s what gives us the money … it really doesn’t work like that. You’ve got to get people to really hone the number that they’re claiming. (S10, Telstra)

Interestingly, these funding methods seem to have been replicated in the two organisations that are privately held (ISP and Data). Presumably, this is because the private investors regard these methods as best practice. An alternative explanation is that they may wish to retain the option to take their firms public later, so adopting the methods of publicly-listed companies would increase the value of their shares in the eyes of public investors.

4.6.2        Recognition

 

P2: Organisations recognise Customer Information Quality as important.

Context: Organisations structure their organisational units, processes and technologies at the setting level (organisation-wide values, norms and practices). These values, in part, drive resource allocation and prioritisation.

Mechanism: The importance or value of Customer Information Quality is recognised by the organisation through the deployment of resources: appointing managers and creating organisational units, undertaking projects, engaging with service and technology vendors and training employees.

Outcome: Customer Information Quality is conceived as a capital good expected to justify its use of resources in terms of its costs and benefits to the organisation through its flow-on impact on other initiatives.

In order to get a sense of how “Customer Information Quality” (CIQ) is conceived by industry practitioners, it is worth considering how the phrase is used. For instance, three executives (one with board visibility) from two organisations had that phrase (or a near-synonym, such as “Customer Data Quality”) as part of their job title. This suggests that CIQ – in some form or other – must be a principal activity and responsibility for these senior people.

One such interviewee led a team of over 30 analysts (S2, Telstra) while another had in excess of 12 (S8, Telstra). Both had been in their current role for over five years. These staffing levels alone suggest a multi-million dollar commitment by Telstra to Customer Information Quality. Add to this the cost of software, hardware, services and other infrastructure related to CIQ operations across the business, and it is clearly an area of significant expenditure.
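As a rough illustration only (actual salary and overhead figures were not disclosed by subjects), assuming a fully-loaded cost in the order of $100,000 per analyst per annum, the staffing commitment alone is of the order of:

\[
(30 + 12)\ \text{analysts} \times \$100{,}000\ \text{p.a.} \approx \$4.2\ \text{million p.a.}
\]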

This is only the starting point when you consider the groups’ respective work in devising training programs for call centre staff to properly collect information, for technologists in processing the information and for managers in credit, finance and marketing in using the information.

Devoting substantial resources to CIQ, or vesting accountability for it in senior staff, is not the only way that organisations recognise its importance; awareness among non-specialists throughout the organisation is also an indicator of pervasive recognition.

Subjects and prospective subjects invariably responded positively to a request to participate in “research about Customer Information Quality”. There were no questions about what that is, no uncertainty about whether their organisation “did” it, and no queries as to why the topic should be the subject of academic research. It is fair to say that recognition of this abstract concept as a topic in its own right was universal.

Analysts, vendors and researchers all reported some experience with CIQ, indicating when prompted that even if they do not have responsibility for it they regard it as a defined concept that is important to their job. This is not to say that the subjects agreed about the specifics of the definition of CIQ.

Of course, it may be that subjects who were confused about the topic, had never heard of it, or regarded it as a waste of time or a “buzz-word driven fad” would have self-selected out of the study. Certainly, if anyone regarded the topic in such a light, they did not reveal it during the interviews.

In the 25 hours of interviews, the strongest “negative” statement about the importance of CIQ came from a vendor working on data warehousing:

“I find data quality is an area that no one really cares about because a) it’s too hard, it’s too broad a field and involves a lot of things. Quality itself is one of those nebulous concepts […] quality is a dimension of everything you do […] It’s almost, I reckon, impossible to justify as an exercise in its own right. ” (S5, Telstra)

This remark suggests that the subject’s difficulty is with how to analyse the concept rather than either complete ignorance of the topic or an outright rejection of it.

4.6.3         Capitalisation[4]

 

P3: Organisations regard Customer Information Quality as a capital good.

Context: In order to justify use of rivalrous organisational resources like capital and employees, investments are expected to create value for stakeholders (the setting level). Customer Information Quality is not a valuable good in itself, but does create value when used in organisational processes.

Mechanism: This value creation is unspecified, though its effects are observed through increasing future revenues or decreasing future costs associated with servicing customers.

Outcome: The financial performance of the organisation improves through customer process performance.

A strong theme to emerge from the study was the capacity of Customer Information Quality to create business value. Many subjects were at pains to explain that CIQ is not valuable in itself, but that it plays a supporting role, contributing to value creation in other initiatives, particularly organisational processes that focus on customers:

It’s difficult to envisage seeing a business proposal for a project that improves the quality of information by itself – it would most likely be wrapped up in another initiative. (S6, Telstra)

A vendor, with substantial experience selling and implementing data warehouses in corporate environments, makes a similar observation:

It’s a means to an end and a lot of people carry on like it’s an end in itself. Which is why, I think, it’s hard for quality projects to get off the ground because there is no end when presented that way. (S5, Telstra)

Even managers primarily focused on systems and systems development regard CIQ as a value-creating element.

One of the reasons I was interested in the role [of data quality manager] was that time and time again the success of projects depended to a large degree on the quality of the data in the application being delivered. And so often we found that we either had poor quality data or poor interfaces to existing systems. And basically it comes down to: no matter how good the CRM system is it’s only as good as the data you present to the operator, user or customer. (S12, OzBank)

A more concrete example of linking CIQ with processes comes from another executive who explains it in terms of the “value chain” concept.

The most successful way of engaging with our business partners is to translate factual data statistics into business impacts and business opportunities. So what we’ve had to do is engage our customers on the basis that they don’t have 50,000 data errors in something - it’s that you have data errors and this means you’ve been less than successful in interacting with that customer and that in turn translates into lost sales or marketing opportunities which translates into reduced revenue which is a key driver for the company. […] The other outcome is reduced cost.

The quality of the information itself does not get much traction in the company it’s that value chain that leads to how the company can be more successful in its profitability. […] When we make changes at the data level we can now track that to a change in the business impact. (S8, Telstra)

Couching CIQ initiatives in terms of contributing to, or enabling, value creation in other initiatives is a necessary consequence of the decision-making process (the business case) and of the treatment of customer information as a capital good (i.e. a factor of production for other goods rather than a good in its own right).

Within process improvement, I’m not sure you’d ever actually go forward with a proposal to improve information quality by itself. It would obviously be part of a larger initiative. The end game is to make a significant process improvement or a breakthrough process improvement and maybe one of things you have to do to do that is to improve the quality of the information you are collecting including there being a complete absence of information in the first place. (S6, Telstra)

It is through this process improvement that business owners (and project sponsors) expect to see value created, and it is this value in which they are ultimately interested. This is summarised by an analyst working at the same company:

By doing this what do we get? In practice it’s hard to do always, but it’s what the business is after. (S3, Telstra)

As he notes, the difficulty lies in translating information systems events into business outcomes to facilitate rational investment. A consultant flags this as a gap in research.

There’s a lot of value in looking not just at the data itself in isolation, but looking at how the data is used at the end of the day by people to make decisions and that’s an area of […] research that is lacking. (S5, Telstra)

 

 

4.6.4        Quantification

 

P4: Organisations expect to quantify their Customer Information Quality investments.

Context: Investments are prioritised and tracked and managers are made accountable for their performance. Numbers that quantify business events and outcomes are collected to support the appraisal and scoring of individual initiatives and the management processes that govern them.

Mechanism: Investments are evaluated (beforehand and afterwards) against objective, quantitative, value-linked criteria. These criteria, expressed as financial metrics, are driven by underlying organisational processes.

Outcome: Financial models of Customer Information Quality investment are created, including assumptions and predictions, to allow comparison between investments and to guide decision-making.

(Blocker): There is no accepted framework for quantifying the value of Customer Information Quality in customer processes.

(Outcome, when blocked): Customer Information Quality investments are approved on an ad hoc basis by intuition and personal judgement.

Where a business case exists and is accepted, the usual organisational decision-making applies. One executive with responsibility for information quality (S2) was asked to nominate a project that went “by the book” (that is, with a business case):

That was a program of work which had a business case, it had a driver, it had the sponsors from all the most senior sales people in the company and the resources were allocated. At the end of the day they actually saw the benefits. (S2, Telstra)

This was the exception. Others struggled to recall an instance that they were prepared to say had followed the normal organisational processes and had, in hindsight, been successful.

Sometimes, the rationale offered for an initiative is not explicitly couched in financial terms. Regulatory compliance is an oft-cited example, where there seems to be little discretion for firms. However, the threat of fines or litigation provides a means for cash flows to be considered:

When they’re looking at an initiative, if it reduces costs to the business by improving the quality of data you’re reducing the cost to the business ongoing. If a company is exposed by the quality of data is not right – meaning, we have so many regulators in our market place these days – that in some cases, if the data is not accurate and the customer complains, then we can get fined from $10K to $10M. So it’s removing that risk by making sure that you have taken care of the quality of data and it’s at the highest level of integrity. (S2, Telstra)

Another possibility is the loss of reputation (or brand damage) associated with adverse events:

I have heard of one example, not in my direct experience, but there was one at a bank where letters went out to deceased estates, offering them an increase in their credit limits. It’s kind of offensive to the people who got those letters that start out by saying ‘we notice you haven’t been making much use of your credit cards the last three months’. (S1, ISP)

In a similar vein, anecdotes about particular difficulties in conducting normal business operations can percolate upwards and be lodged in the minds of executives:

The local branch managers get a list of their top 200 customers. What they’re supposed to do is contact them and 25% of those people don’t have a phone number. So how are they supposed to contact and develop a relationship with those customers if you don’t even have the bloody phone number? (S12, OzBank)

While there’s an awareness of these issues, a reluctance or inability to quantify the damage wrought stifles the response. One senior executive posits a disconnection between operational and strategic management that he attributes to the difficulty in quantifying and reporting on CIQ:

From a managerial point of view it would be great to have those management systems that help you manage your data quality … that you had reports that actually helped you write your business case […] From a strategic point of view, it’s making sure that all those things are understood and worked. The strategic point of view has suffered a lot I think because the managerial stuff hasn’t happened. […] From a strategic point of view you can say ‘well, we need to make sure that we’ve got the right measures, that we’ve got our managers looking at data quality, that they’re sending through the right feedback to the places that matter so they can quantify it.’ Once they’ve done that, all of a sudden you’ve got a much bigger strategic issue since all of a sudden it’s quantified. (S9, Telstra)

Quantification of the financial impact of CIQ is essential for efficient investment. Its absence makes justification difficult as participants expect comparative performance of alternatives to be expressed numerically. The difficulty in providing such numbers frustrates the ability to get CIQ initiatives approved. In the wider organisational context of projects-as-investments, there is an expectation that CIQ initiatives can prove their worth in a financial sense:

One of the most difficult things from a data quality standard point is when people say to me: Can you prove that this data quality improvement ... show me what the business benefit is. And that’s difficult because there’s no model where you can put the figures in and pop out the answer. There isn’t a model that … it depends on the data attributes. (S2, Telstra)

This frustration was expressed by senior executives in particular, since justifying investments through business cases is a principal activity at their level. For example, a senior manager in another part of the organisation echoed it:

I often find it really frustrating that information quality things are made to show a financial benefit … my view is that if you’re building quality information that you should take that as a necessary cost and say ‘If I don’t do it, okay, it’s going to be really bad for us in terms of maintaining our market position’. We’re prepared to wear a cost here, and then we’ll see what benefits are derived in the follow up work. (S11, Telstra)

In terms of the cost and benefit sides of the ledger, there was a consensus that the benefits – particularly increasing revenue – are the most difficult part to anticipate. One very experienced data warehousing manager was explicit about this:

When you get into the space of increasing revenue, that’s much harder to justify. That’s when you start to get into “guess”. You can fairly easily assume that if you knock off ten people there’s a cost saving. That’s where the problem is with this sort of expenditure [information quality] and justifying it and trying to work out how do you do it from an investment point of view. And that’s hard. (S5, Telstra)

With CIQ conceived as a capital good whose value comes from its ability to impact upon a diverse range of business activities, it is to be expected that the benefits in particular will be diffused throughout the organisation. Business case discipline ensures that costs are concentrated and visible, whereas the benefits remain diffuse and intangible. An analyst on the marketing side made this point about “flimsy benefits”:

We have got so much data that we need to nail data how we’re going to segment customers. What usually happens is that it’s all very costly. And sometimes with this type of project it’s harder to justify all the costs and returns. This makes senior management a bit hesitant to commit the money to actually do it. As a result, what usually happens is that a lot of the scope went into place and the project is trimmed down to really the bare minimum. It’s not ideal but that’s the reality, because it’s very difficult to quantify the projects. […] Is the project really worth $4M, or is it $2M? That’s really hard to justify on the benefits side. It’s a very flimsy benefit. (S3, Telstra)

The point about the difficulty of quantifying benefits is also made by senior people on the technology side. This vendor argues that such quantification is simply not possible and a “strategic” approach needs to be taken:

To justify data quality is a very hard exercise. Mainly because the benefits are invariably intangible, so therefore not very concrete, and tend to be a bit fluffy as well. For instance, if you decide to make date of birth more accurate, you could spend … for the sake of the argument, say a million dollars doing that. But how do you justify the benefit? To me, you can’t. It very much needs to be done I think perhaps at the portfolio scale where you say ‘this is a strategic investment, we are doing marketing around xyz space therefore as part of our marketing program we need to have a high level of accuracy with regard to the date of birth field’ and drive it that way. (S5, Telstra)

Throughout the discussions with senior executives, it was repeatedly asserted that projects or initiatives need to happen but that the lack of financial metrics for CIQ hampered or frustrated this. The assumption underpinning these views is that the value must be there; it simply cannot be articulated. This suggests a problem with the valuation mechanism.

All respondents reported that they were familiar with at least some of the “system-level” Information Quality metrics (validity, currency, completeness and so on). All reported that they were familiar with “investment-level” metrics (Net Present Value, Internal Rate of Return, Return on Investment and so on). There was a high degree of familiarity with the “customer-level” metrics (cross-sell rates, retention rates, customer lifetime value and so on). Many respondents, including all executives on the business side, reported using these in business cases they had either seen or prepared themselves.
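For reference, the investment-level metrics cited by respondents are conventionally defined along the following lines (a standard textbook formulation; subjects did not disclose the specific discount rates or cash-flow estimates used in their business cases):

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}, \qquad
\mathrm{ROI} = \frac{\text{total benefits} - \text{total costs}}{\text{total costs}}, \qquad
\mathrm{IRR} = r^{*} \ \text{such that} \ \sum_{t=0}^{T} \frac{CF_t}{(1+r^{*})^t} = 0
\]

where $CF_t$ is the net cash flow attributed to the initiative in period $t$ and $r$ is the organisation’s discount (hurdle) rate. The difficulty reported below is not with these formulas themselves, but with producing credible values of $CF_t$ from system-level CIQ measures.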

For example, a senior analyst working for a data warehousing vendor indicated that:

Business cases I’ve dealt with tended to deal with this category of [customer] relationship measures rather than system measures as justification for the work. (S4, Telstra)

However, in not one instance did anyone report seeing a business case that explicitly linked “system-level” CIQ measures with investment measures. These system-level CIQ measures were cited as being involved in contract management (S4, S5) or intra-organisational agreements (S11). Two respondents working in direct marketing indicated that some specialised metrics impacted on the commercial rates charged by information-broking businesses (S13, S15).

The only subject to even mention system-level measures in this context was a marketing analyst. He offered only this vague passing remark and could not provide further detail:

… in terms of actual [customer] relationship measures there’s a lot of room for individual projects or people as to how they link that back to financial measures. And systems measures usually get popped in there. (S3, Telstra)

This disconnection between CIQ quantification and investment quantification means alternative approaches must be found if initiatives are to proceed. Some respondents (e.g. S5 above) advocate taking a “strategic” approach. This seems to mean bypassing the discipline of the formal business case and relying on the intuition or judgement of senior people:

A lot of our data quality business cases have been approved because they’ve been seen to be strategically important and they haven’t had value written around them. The strategic side is going to be pretty hit and miss unless you get the managerial side happening. (S9, Telstra)

The impact of the “hit and miss” nature of this approach on resource allocation was discussed by another senior executive at a retail bank:

When I say the bank’s not very mature in allocating scarce resources, the prioritisation is a good example of how that works because a lot of it is to do with who yells loudest … it’s the ‘squeaky wheel’ and how much effort’s gone into preparing what needs to be done … I haven’t seen much science go into ‘let’s evaluate this proposal’ … So that’s the whole bunch of competing proposals going forward and data quality being one of those and then ‘saying okay, where does all this fit? Who’s going to benefit from this?’ … Data quality is not sexy, it is just not sexy. (S12, OzBank)

Note that the subject is indicating here that the shortcomings of this approach are two-fold: an inability to compare between CIQ initiatives and an inability to compare CIQ initiatives with other proposals.

Unsurprisingly, one executive reported that the “strategic” or intuitive approach finds less acceptance amongst finance professionals. The explanation offered is that people working directly with the data are better placed to understand the costs:

It’s interesting that it’s the marketing people now who get the message who have the best understanding of what it [data quality] is costing in terms of bad data and what is achievable if they improve the data. So it’s marketing people who are driving this [data quality improvement]. The [credit] risk people are not far behind. The finance people [shakes head] … very hard, very hard. (S1, ISP)

When asked about methodologies, standards or generally accepted principles for capturing or articulating the benefits of CIQ initiatives, none was nominated by any of the subjects. While this is not proof that none exists, it does suggest that, if one does, it is not widely known.

This was confirmed by one manager, who reported the view that his organisation is not suffering a competitive disadvantage because the problem is widespread throughout the industry:

No company’s doing this [IQ investments] really well. It’s a problem that’s always been there. Some are just devoting more time and money – and probably intellectual capacity – to fixing up these issues because of down-stream dependencies. (S11, Telstra)

Again, the suggestion is that differences in investment (time, money and “intellectual capacity”) are due to the visibility of problems “down-stream”.

4.6.5        The Context-Mechanism-Outcome Configuration

Based on the above analyses, a tentative explanation for how practitioners understand and approve Customer Information Quality improvement initiatives is presented.

The account is presented in two parts. Firstly, there is the normative model of how decisions should generally be made in larger private-sector organisations (the context). Essentially, this is the view that decisions about initiatives are made by evaluating them as investments (P1) using a business case with financial metrics (the mechanism). The outcome is an optimal set of proposals going forward, with uneconomic ones rejected.

Figure 5 Normative CMO Configuration

C1 (Context): At the setting level, this context includes shared values about the organisation’s goals, responsibilities to stakeholders and what constitutes good organisational practice. Also present is the recognition that CIQ (like many other goals) is important (P2).

M1 (Mechanism): The funding approval process that determines which initiatives proceed to implementation. This mechanism relies on a comprehensive quantitative assessment of the financial costs and benefits of each proposal in a way that allows direct comparison (P1).

O1 (Outcome): The set of initiatives or proposals which will optimally meet the organisation’s goals (that is, their benefits exceed their costs). Initiatives which would detract from this optimality are excluded.

Table 9 Normative CMO elements

The second part is a descriptive model of what happens in practice for CIQ improvement initiatives in particular. Participants have an expectation that the general normative model can be used as a template. (This expectation forms part of the context.) However, the mechanism (assessment of costs and benefits) cannot always operate in this case. As a result, an alternative mechanism may be employed resulting in a less-than-optimal outcome.

Figure 6 Descriptive CMO Configuration

C2 (Context): At the situated activity level, this is the context of individuals navigating a complex social realm of organisational processes and hierarchies to advance their agendas. It encompasses the normative configuration (C1, M1, O1) outlined above as well as beliefs about the best course of action for managing the organisation’s systems and processes.

M2 (Mechanism): To conceive of CIQ as a capital good (P3) and then apply metrics to systems and processes to quantitatively inform the business case of how candidate CIQ initiatives’ costs (direct and indirect) impact on the organisation’s financial position, including the benefits of increasing revenue and reducing costs (P4). As shown in Figure 6, this mechanism cannot operate (“blocked”) because the quantification of the financial impact is not possible, not undertaken or not accepted by decision-makers.

O2 (Outcome): The set of initiatives that proceed (and others that are declined) where the formally estimated portion of total benefits that can be articulated exceeds the cost.

M2' (Alternative mechanism): An alternative to the financial-metric-led business case described in M2. Here, the commercial judgement (based on experience and intuition) of a sufficiently senior manager is used to approve (or deny) CIQ proposals without reference to quantitative assessments of costs and benefits.

O2' (Alternative outcome): The outcome resulting from the use of the alternate “strategic” approval mechanism, M2'. It is the set of initiatives which are perceived by the manager as having sufficient worth to proceed. This includes the set of initiatives where the (unspecified) benefits are judged to exceed the (unspecified) costs, as well as others.

Table 10 Descriptive CMO elements

The outcomes in the descriptive model are likely to be sub-optimal. That is, the set of initiatives approved (and declined) will differ from the set that would result if full knowledge of the costs and benefits were available. (Perfect knowledge of such things will always be unavailable; however, some estimates are better than others.)

Under the first outcome (O2), an organisation is likely to experience under-investment in CIQ initiatives, because initiatives that would have created value are declined on account of their “soft” benefits. Under this circumstance, actors may proceed to use the alternative mechanism (M2') if it is available.

The second outcome (O2') relies on a fiat decree that an initiative should proceed. This introduces three potential problems. Firstly, the wrong initiatives may be approved by virtue of their visibility (characterised by one executive as “the squeaky wheel”) rather than through a cool assessment of the pros and cons of competing alternatives; this could lead to either over- or under-investment.

Secondly, the absence of a quantitative financial basis hampers industry-standard project governance practices (such as gating) and management performance evaluation (e.g. key performance indicators).

Finally, bypassing the established organisational norms about how important decisions should be made (C1) undermines confidence in the approval mechanism (M1). Individual workers may feel resentment towards, or distrust of, more senior figures if they are seen to exercise power arbitrarily, opaquely or capriciously. Further, in the private sector, this will impact upon shareholders, who rely on the discipline of business cases to ensure their interests are aligned with management’s.

4.6.6        Conclusion

The application of standard approval mechanisms to Customer Information Quality improvement initiatives requires a comprehensive quantitative assessment of the financial costs and benefits associated with undertaking them.

CIQ, as a capital good, is not a valuable end in itself. It creates value for an organisation through its capacity to improve customer processes. The causal relationship between improvements in customer information quality (as measured in the organisation’s repositories) and creation of organisational value (as measured by financial metrics) is not well understood. There is no widely-accepted method or framework for undertaking this analysis.

The potential value of CIQ improvement initiatives is understood by the people “on the ground” who deal with the systems, processes and customers in question, and it is they who propose such initiatives. The lack of clear measures of organisational benefits is of particular import: the inability to articulate this perceived value to senior decision-makers means that valuable initiatives are declined. Alternatively, an initiative without a financially supported business case may still proceed by fiat.

The outcome is a significant risk of resource misallocation. This can arise from under-investment (when benefits are dismissed as “flimsy” or “guesses”) or over-investment (when the “squeaky wheel” is funded). Additionally, these mechanism failures can undermine confidence in the organisation’s collaborative decision-making processes.

[3] “Saturation” is sometimes referred to as “adequacy” in the social sciences.

[4] Here, we refer to “capitalisation” as the process of conceiving of a good as a means of further production rather than a directly consumable good. This is not to be confused with the notion of “market capitalisation” as a measure of the market value of a firm.



 
