

Summary

Chapter 1 - Introduction

Chapter 2 - Research Method and Design

2.1 Summary

2.2 Introduction to Design Science

2.3 Motivation

2.4 Goals of the Research Design

2.5 Employing Design Science in Research

2.5.1 Business Needs

2.5.2 Processes

2.5.3 Infrastructure and Applications

2.5.4 Applicable Knowledge

2.5.5 Develop/Build

2.5.6 Justify/Evaluate

2.6 Overall Research Design

2.6.1 Philosophical Position

2.6.2 Build/Develop Framework

2.6.3 Justify/Evaluate Framework

2.7 Assessment of Research Design

2.8 Conclusion

Chapter 3 - Literature Review

Chapter 4 - Context Interviews

Chapter 5 - Conceptual Study

Chapter 6 - Simulations

Chapter 7 - Research Evaluation

Chapter 8 - Conclusion

References

Appendix 1

Chapter 2 - Research Method and Design

2.1       Summary

This research project employs a research approach known as Design Science to address the research problem. While related work predates the use of the term, it is often presented as a relatively new approach within the Information Systems discipline (Hevner et al. 2004). Hence, this chapter explains the historical development of the approach and its philosophical basis, and presents an argument for its appropriateness to this particular project. Subsequent sections deal with the selection and justification of the data collection (empirical) and analysis phases of the research:

1. Review of Relevant Literature

2. Semi-Structured Interview Series

3. Conceptual Study and Mathematical Modelling

4. Model Simulation Experiments

5. Research Evaluation


This project undertakes both qualitative (textual) and quantitative (numerical) data collection and analysis. A hybrid approach that encompasses both domains is a necessary consequence of building and evaluating a framework that entails the use of measurements by people in a business context.

2.2       Introduction to Design Science

While humans have been undertaking design-related activities for millennia, many authors – for example, Hevner et al. (2004) and March and Storey (2008) – trace the intellectual origins of Design Science to Herbert Simon’s ongoing study of the Sciences of the Artificial (Simon 1996). Simon argues that, in contrast to the natural sciences such as physics and biology, an important source of knowledge can be found in the human-constructed world of the “artificial”. The kinds of disciplines that grapple with questions of design include all forms of engineering, medicine, aspects of law, architecture and business (Simon 1996). In contrast to the natural sciences (which are concerned with truth and necessity), these artificial sciences are focused on usefulness and contingency (possibility). The common thread throughout these disparate fields is the notion of an artefact: the object of design could be an exchange-traded financial contract or a public transport system.

However, Simon argues that since the Second World War the validity of such approaches has succumbed to the primacy of the natural sciences. As a consequence, the artefact has been pushed into the background. Simon’s work is in essence a call-to-arms for academics to embrace these artificial sciences and in particular, design as a means for undertaking research.

Since then, Design Science has been examined within Information Systems as a research method (Gregor 2006; Gregor and Jones 2007; Hevner et al. 2004; Jörg et al. 2007; Peffers et al. 2007) as well as used for conducting research on IS topics (Arnott 2006).



2.3       Motivation

Firstly, I provide background and context for the project. The five steps outlined in the methodology from Takeda et al. (1990) form a natural way of presenting the history of the development of the project.










Figure 1 Design Science Research Process (Adapted from Takeda et al. 1990)

Firstly, awareness of the problem came about through discussions with the industry partner and the academic supervisor. I identified that, while there are a number of theories and frameworks around Information Quality, none specifically addressed the question of valuing improvements to information quality, i.e. quantifying the “value-adding” nature of information quality to organisational processes. The industry partner was particularly keen to understand how to formulate a business case to identify, communicate and advocate for these improvements. The outcome of this step was an agreement between the University, supervisor, candidate and industry partner for an industry-sponsored doctoral research project.

The suggestion step was the insight that ideas (theories, constructs and measures) from the disciplines of Information Theory and Information Economics could prove beneficial in tackling this problem. These ideas are not readily transferable: applying them requires an understanding of the Information Quality literature and the IS practice context, formalisation into an artefact and evaluation against some criteria. The output from this step was a doctoral proposal, accepted by the industry partner and academic institution as likely to meet the terms of the agreement.

The development and evaluation steps comprise the body of the empirical work in the project, and their rationale is outlined in this chapter. The output from the development step is the artefact for valuing information quality improvements. The output from the evaluation steps is the assessment of the artefact against recommended criteria.

Finally, the analyses and conclusions (including descriptions of the research process, empirical phases, the artefact itself and results of the evaluation) are embodied in the academic publications, including final thesis.


2.4       Goals of the Research Design

In order to tackle the customer information quality investment problem, it is important to understand what form a suitable response might take and how it might be used in practice. The over-riding consideration here is utility rather than truth. That is, I am primarily concerned with producing a framework that is useful to practitioners and researchers, as opposed to discovering an underlying truth about the world. The knowledge acquired is hence of an applied nature.

In this case, there must be a structured approach to building and evaluating the framework to ensure it has rigour and relevance. As Hevner et al. (2004) argue, IS research needs to be rigorous to provide an “addition to the knowledge base”, while relevance allows for “application in the appropriate environment”.

The question of whether IS research has favoured rigour at the expense of relevance has been debated widely throughout the IS research community. This debate was most recently re-ignited by Benbasat and Zmud’s (1999) commentary in MISQ, arguing for increased relevance in IS research. Their central thesis – that IS was too focused on gaining academic legitimacy through rigour, at the expense of practitioner legitimacy through relevance – was seized upon, and other noted scholars joined the fray (Applegate 1999; Davenport and Markus 1999). Lee (1999), for example, argued for the inclusion (and hence acceptance) of non-positivist approaches in IS research. Robert Glass (2001), writing an opinion piece in CAIS, reflected on his experiences to highlight the gulf between practitioners and academicians in the information systems world.

Interestingly, Davenport and Markus (1999) argue that IS should model itself on disciplines like medicine and law to successfully integrate rigour and relevance. These are two examples of disciplines identified by Simon (1996) as employing the Design Science methodology. In medicine and law (and related disciplines like engineering, architecture and planning), relevance and rigour are not seen as necessarily antagonistic, and both goals may be pursued simultaneously through two distinct “modes”: develop/build and justify/evaluate. In this regard, Design Science picks up on an earlier IS-specific approach known as systems development methodology (Burstein and Gregor 1999). Here, the research effort is centred on developing and evaluating a novel and useful information system, making a contribution to theory by providing a “proof-by-construction”.

The main differences between the broader approach of Design Science and Information Systems Development are:

·         Scope. Design Science is applicable to a much wider range of disciplines than IS development. Indeed, Simon’s conception of the Sciences of the Artificial spans medicine, architecture, industrial design and law (Simon 1996), in addition to technology-based fields.

·         Artefact. Design Science takes a broader view of what constitutes an “artefact” for the purposes of research evaluation. Rather than just working instantiations, it also includes constructs, models, methods and frameworks.

In this case, the artefact is a framework for evaluating Information Quality improvements, in the context of Customer Relationship Management. So, where a Systems Development approach may be to build and test a novel system that identifies or corrects defects in customer information, a Design Science approach allows for focus on a more abstract artefact, such as a process or set of measures for evaluating such a system.

Some authors, such as Burstein and Gregor (1999), suggest that the System Development approach is a form of Action Research. It is reasonable to ask whether Design Science is also a form of Action Research. Here it is argued that this is not the case. Kock et al. propose a test for Action Research as being that where “intervention [is] carried out in a way that may be beneficial to the organisation participating in the research study” (Hevner et al. 2004; Kock et al. 1997).

Since I am not concerned with actually intervening in a particular organisation during this research, it should not be considered Action Research. Further, since there is no objective of implementing the method within the organisation, there is no imperative to trace the impact of the changes throughout the organisation – another aspect of Action Research (Burstein and Gregor 1999).

2.5       Employing Design Science in Research

The specific model of Design Science selected for use here is that presented by Hevner et al. (2004). This model was selected as it is well-developed, recent and published in the top journal for Information Systems. This suggests it is of high quality, accepted by researchers in this field and likely to be a reference source for a number of future projects. It also presents a number of criteria and guidelines for critically appraising Design Science research, which govern the research project.

This model makes explicit the two modes (develop/build and justify/evaluate) and links these to business needs (relevance) and applicable knowledge (rigour). This sits squarely with the applied nature of this project. I proceed by identifying the key elements from this generic model and map them to this specific project.

At this point it is useful to clarify the levels of abstraction. This project is not concerned with the information quality of any particular Information System (level 0). Neither is it concerned with methods, techniques or algorithms for improving information quality, such as data cleansing, data matching, data validation, data auditing or data integration (level 1). It is instead focussed on the description (or modelling) of such systems, techniques or algorithms in a general way that allows for comparison, appraisal, justification and selection (level 2). Lastly, in order to assess or evaluate this research itself, its quality and the degree to which it meets its goals, I employ Design Science. So, the prescriptions for evaluation within Hevner et al. pertain to this research project (level 3), not to the management of information quality (level 2). To recap the different levels of abstraction:

·         Level 0. A particular Information System.

·         Level 1. A specific method (or technique etc) for improving Information Quality within an Information System.

·         Level 2. A framework for describing (and justifying etc) improvements to Information Quality within Information Systems.

·         Level 3. A model for conducting (and evaluating) Design Science research.

With this in mind, I can proceed to map the elements in the model (level 3) to this research (level 2).

Figure 2 Design Science Research Model (Adapted from Hevner et al. 2004, p9).

2.5.1         Business Needs

I begin with the business need, which ensures the research meets the goal of relevance. Hevner et al. argue that the business need is “assessed within the context of organisational strategies, structures, culture and existing business processes”. Hence, to understand the business need for an IQ evaluation framework I must examine these elements. If such a framework is developed but its assumptions or requirements are anathema to the target organisations then the framework will not be relevant. This also requires a careful definition of the “target organisations” to ensure that the scope is not so large that any commonalities in these elements are lost, nor so small that the research is too specific to be of wide use.

2.5.2        Processes

From the research problem, it is clear that the target organisations must employ customer-level decision-making processes driven by extensive customer information. Examples of customer information include:

·         information about the customer, such as date of birth, marital status, gender, contact details, residential and work locations and employment status,

·         information about the customer’s relationship with the organisation, such as histories of product purchases or service subscriptions, prior contacts (inquiries, complaints, support, marketing or sales), billing transactions, usage patterns and product/service preferences.

This information is sourced either directly from the customer, from the organisation’s internal systems or from external information providers, such as public databases, partners or information service providers (“data brokers”). Of course, sourcing, storing and acting on this information is governed by the legal system (international treaties, national statutes and case law and local regulations), industry codes of practice, internal organisational policies and customer expectations.

Here, “customer-level decision-making” means that the organisation makes a decision about each customer, rather than treating all customers en masse. Examples of this include credit scoring and loan approval, fraud detection, direct marketing and segmentation activities. In each case, a business process is in place that produces a decision about each customer by applying business rules to that customer’s information.
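The idea of a business process applying rules to each customer’s information can be sketched in a few lines of code. This is an illustrative sketch only: the field names, values and approval rule below are hypothetical, not drawn from any particular organisation.

```python
# Sketch of customer-level decision-making: a business rule is applied to
# each customer's record, producing one decision per customer rather than
# a single decision for all customers en masse.
# All field names and thresholds are hypothetical.

def approve_loan(customer: dict) -> bool:
    """Apply a simple, hypothetical credit rule to one customer record."""
    return (customer["employment_status"] == "employed"
            and customer["missed_payments"] <= 1)

customers = [
    {"id": 1, "employment_status": "employed",   "missed_payments": 0},
    {"id": 2, "employment_status": "unemployed", "missed_payments": 3},
]

# One decision per customer.
decisions = {c["id"]: approve_loan(c) for c in customers}  # {1: True, 2: False}
```

Note that the quality of each decision depends directly on the quality of the fields the rule consumes, which is precisely where the IQ evaluation framework is targeted.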

2.5.3         Infrastructure and Applications

The customer information is encoded and stored in large databases (data warehouses, data marts, operational data stores or other technologies), supported by computer infrastructure such as data storage, communication networks and operating environments. This infrastructure may be outsourced or provided in-house or shared between partners and suppliers.

The information is accessed (either stored or retrieved) by applications for Enterprise Resource Planning, Customer Relationship Management or Business Intelligence. These applications could be purchased “off-the-shelf” and customised or developed internally. People using these applications (and accessing the information) may be internal organisational staff, suppliers, partners, regulators or even the customers themselves.

Based on these key organisational and technological considerations, the IQ evaluation framework is targeted on IS-intensive, customer-facing service organisations. Examples of relevant service sectors include:

·         financial services (personal banking, insurance, retail investment),

·         telecommunications (fixed, mobile, internet),

·         utilities (electricity, gas, water),

·         government services (taxation, health and welfare).

Other areas could include charitable and community sector organisations, catalogue or subscription-based retailers and various customer-facing online businesses.

To ensure the IQ evaluation framework is relevant, the research design must include an empirical phase that seeks to understand the drivers of the business need (organisational and technological) in these target organisations.

2.5.4        Applicable Knowledge

In order for Design Science to achieve the objective of being rigorous, the research must draw on existing knowledge from a number of domains. “The knowledge base provides the raw materials from and through which IS research is accomplished ... Prior IS research and results from reference disciplines provide [constructs] in the develop/build phase. Methodologies provide guidelines used in the justify/evaluate phase.” (Hevner et al. 2004, p. 80)

Note that knowledge is drawn upon (in both phases) from prior IS research and reference disciplines. Design Science must also make “a contribution to the archival knowledge base of foundations and methodologies” (Hevner et al. 2004, p. 81). While this could conceivably include the reference disciplines, this is not required. There must, however, be a contribution to the IS knowledge base.

The point of access for this knowledge base varies with topic. In general, the IS research will be found in journal articles and conference papers as it is still emerging and being actively pursued by scholars. In addition, practitioner-oriented outlets may offer even more specific and current knowledge. The reference discipline knowledge for this project, in contrast, is more likely to be in (older) textbooks as it is well-established, standardised and “bedded-in”.

I begin mapping key elements of this model to the IQ evaluation framework by examining the specific IS research areas that form the knowledge base. From the research problem, it is clear that I am dealing with two sub-fields of Information Systems: Information Quality and Customer Relationship Management.

A number of Information Quality (IQ) models, frameworks, methods and theories have been proposed, analysed and evaluated in the IS literature (Ballou et al. 1998; Lee et al. 2002; Paradice and Fuerst 1991; Price and Shanks 2005a; Wang and Strong 1996). A solid understanding of existing IQ research, particularly for IQ evaluation, is required to avoid redundancy and misunderstanding. Fortunately, a large body of academic scholarship and practice-oriented knowledge has been built up over the past two decades or so. Importantly, the prospects of contributing back to this knowledge base are very good, as evaluation of information quality in the context of CRM processes is still an emerging area.

Customer Relationship Management (CRM) is a maturing sub-field of Information Systems, at the interface of technology and marketing. It has witnessed an explosion in research activity over the past ten years in both the academic and practitioner worlds (Fjermestad and Romano 2002; Romano and Fjermestad 2001; Romano and Fjermestad 2003). As a result, a significant amount of knowledge pertaining to theories, models and frameworks has accrued that can be drawn upon for this research project. Since customer information quality is flagged as a key determinant for CRM success (Freeman and Seddon 2005; Gartner 2003), it is likely that this research project will make a contribution to the knowledge base.

The next area to consider is the reference disciplines. This is the part of the knowledge base that provides a new perspective or insight to the problem that leads to ‘building a better mouse trap’. Examples of Information Quality research employing reference disciplines include ontology (Wand and Wang 1996) and semiotics (Price and Shanks 2005a). In this research project, it is proposed that the reference disciplines include Information Theory (Shannon 1948) and Information Economics (Arrow 1984; Marschak 1974; Marschak et al. 1972; Theil 1967). These disciplines provide the foundational ideas for the “build phase”, through their theories, models, formalisms (including notation) and measures.

Specifically, these reference disciplines provide very clear definitions of concepts such as entropy and utility. Additionally, these concepts can be communicated effectively to others through tried-and-tested explanations, representation and examples.
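As an illustration of how computable these definitions are, the sketch below evaluates the Shannon entropy of a discrete distribution (Shannon 1948) alongside a probability-weighted payoff sum, a deliberately simplified stand-in for the utility calculations of Information Economics.

```python
import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits (Shannon 1948)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain; a biased coin carries less uncertainty.
h_fair = entropy([0.5, 0.5])   # 1.0 bit
h_bias = entropy([0.9, 0.1])   # ~0.47 bits

def expected_utility(lottery):
    """Probability-weighted payoff: a simplified expected-utility calculation."""
    return sum(p * payoff for p, payoff in lottery)

eu = expected_utility([(0.5, 100.0), (0.5, 0.0)])  # 50.0
```

The point is that both concepts yield numerical results from explicit inputs, which is exactly the property the valuation framework requires of its foundations.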

In light of the knowledge base, the research design must include a thorough review of existing knowledge in the IS research sub-fields (Information Quality and Customer Relationship Management) and the presentation of relevant material from the reference disciplines (Information Theory and Information Economics).

2.5.5        Develop/Build

For a body of work to count as Design Science, it must produce and evaluate a novel artefact (Hevner et al. 2004). This has to be balanced by a need for IS research to be cumulative, that is, built on existing research where possible (Kuechler and Vaishnavi 2008). This project seeks to achieve this by taking the existing ontological IQ framework (Wand and Wang 1996) and extending it and re-interpreting it through the lens of Information Theory. In this way, it satisfies the requirement to be both cumulative and novel.

Also, I note that the artefact in Design Science does not have to be a particular system (Level 0, in the abstractions mapped out earlier) or technique (Level 1) but can be something more abstract (Level 2): in this case a framework for IQ valuation.

While March and Smith (1995) argue that constructs, models and methods are valid artefacts, I need to be able to describe the proposed framework. To that end, I employ a modified form of the “Framework for Comparing Methodologies” developed by Avison and Fitzgerald (2002). While originally intended as a means for describing (and comparing) systems development methodologies, I argue that it is useful here for organising the ideas embodied in the valuation framework: it can act as a “container” to describe the framework proposed here.

They outlined the following seven components:

1. Philosophy

   a. Paradigm

   b. Objectives

   c. Domain

   d. Target

2. Model

3. Techniques and Tools

4. Scope

5. Outputs

6. Practice

   a. Background

   b. Userbase

   c. Players

7. Product

Here, I will not use components six and seven, since there is no practitioner group or instantiated product (the framework is still under development and evaluation). With this end in mind, the develop/build phase involves:

·         synthesising a large body of knowledge (drawn from the IS research literature as well as the foundation or reference disciplines),

·         acquiring a thorough understanding of the problem domain, organisational context and intended usage,

·         assessing, analysing and extending the synthesised knowledge in light of this acquired understanding of the domain.

The next step is to subject the resulting artefact to the justify/evaluate phase.

2.5.6        Justify/Evaluate

In order to ensure the artefact is both useful to practitioners (relevant) and contributing back to the IS knowledge base (rigorous), it must undergo stringent evaluation and justification.

Note that here I am not assessing the valuation of Information Quality improvements (Level 2), but rather assessing the artefact (framework) for doing this (Level 3).

Before I can justify/evaluate the framework, I need to clarify the nature of the claims made about it. For example, I could be stating that it is:

·         necessarily the only way to correctly value IQ improvements,

·         better – in some way – than existing approaches,

·         likely to be preferred by practitioners over other approaches,

·         potentially useful to practitioners in some circumstances,

·         of interest to academics for related research.

These claims must be addressed in the formulation of the artefact during the develop/build phase, in light of the existing approaches and framework scope, and clearly stated.

While the precise claims cannot be stated in advance of the develop/build phase, the research problem makes clear that the framework must satisfy two goals:

·         Internal validity. It must allow for the modelling of a wide range of organisational processes of interest. These models must conform to the foundational theoretical requirements, including representation, rationality assumptions and mathematical conventions.

·         External validity. In order to be useful, the framework must be acceptable to the intended users in terms of its components (eg scope, outputs) but also explicable in its calculations, arguments and conclusions.

In other words, an artefact to help people quantify benefits must not only produce numerical results, but the users must have some confidence in those outputs and where they came from. Both of these goals must be met for this framework to be rigorous and thus likely to contribute to IS research.

With this in mind, I consider each of the evaluation methods prescribed by Hevner et al.

The evaluation methods, and their applicability to this project, are as follows:

·         Case Study: Study artefact in depth in a business environment. Not possible, since the IQ valuation framework has not been employed in an organisational setting.

·         Field Study: Monitor use of artefact in multiple projects. Would require deep access to IQ improvement projects, including to sensitive financial information (during business case construction) and customer information (during implementation). Not likely for an untested framework.

·         Static Analysis: Examine structure of artefact for static qualities (e.g. complexity). This approach would not meet the goal of external validity.

·         Architecture Analysis: Study fit of artefact into a technical IS architecture. This method is not appropriate for an abstract framework.

·         Optimisation: Demonstrate inherent optimal properties of artefact or provide optimality bounds on artefact behaviour. This method relies on a clear optimality criterion or objective and an accepted “figure-of-merit”, which do not exist in this case.

·         Dynamic Analysis: Study artefact in use for dynamic qualities (e.g. performance). Again, performance criteria would need to be established, as for Optimisation.

·         Controlled Experiment: Study artefact in a controlled environment for qualities (e.g. usability). A promising candidate: I can generate evidence to support (or not) the artefact’s utility, and the results would also provide feedback to further refine the framework.

·         Simulation: Execute artefact with artificial data. Employing simulations (with artificial data) gets around the problem of access to real-world projects while still providing plausible evidence. Even better – for external validity – would be to use real-world data.

·         Functional (Black Box) Testing: Execute artefact interfaces to discover failures and identify defects. The interfaces to the framework are not clearly defined, so this testing approach will not be sufficiently general.

·         Structural (White Box) Testing: Perform coverage testing of some metric (e.g. execution paths) in the artefact implementation. Similarly, this approach suffers from the lack of a suitable metric for evaluating something as abstract as a framework.

·         Informed Argument: Use information from the knowledge base (e.g. relevant research) to build a convincing argument for the artefact’s utility. There is unlikely to be sufficient information in the knowledge base to convince practitioners and academics of the internal and external validity of the framework; practitioners are more likely to expect empirical evidence to be weighed against the claims.

·         Scenarios: Construct detailed scenarios around the artefact to demonstrate its utility. Another promising avenue, since a contrived scenario grounds the artefact in a specific context without relying on an indefensible generalisation.

Table 1 Possible Evaluation Methods in Design Science Research (Adapted from Hevner et al. 2004)
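The shape of the Simulation method listed above can be sketched briefly: execute a decision process over artificial customer data, with and without injected quality defects, and compare outcomes. Every name, rate and rule below is a hypothetical illustration under stated assumptions, not part of the framework itself.

```python
import random

random.seed(42)  # reproducible artificial data

def make_customers(n):
    """Generate n artificial customer records (hypothetical schema)."""
    return [{"id": i, "income": random.randint(20_000, 120_000)}
            for i in range(n)]

def corrupt(customers, error_rate):
    """Inject missing-value defects into copies of the records."""
    return [dict(c, income=None) if random.random() < error_rate else c
            for c in customers]

def decide(customer):
    """A hypothetical business rule; defective records default to rejection."""
    return customer["income"] is not None and customer["income"] >= 50_000

clean = make_customers(1000)
dirty = corrupt(clean, error_rate=0.10)

# The gap in approvals is one simulated measure of the value lost
# to poor information quality.
approvals_lost = sum(map(decide, clean)) - sum(map(decide, dirty))
```

Fixing the random seed makes each simulation run repeatable, which is essential if the framework’s outputs are to be audited and explained to practitioners.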

2.6       Overall Research Design

With an understanding of the general Design Science approach and the particular needs of this research, I can now present the overall research design. I begin by outlining the philosophical stance I have taken (the nature of the world, how we acquire knowledge and our values in conducting research). Then, I show how each of the five empirical phases of the research project meets the requirements for doing Design Science. Lastly, I discuss the proposed research design in light of the research guidelines advocated by Hevner et al. to argue that this design is well-justified.

2.6.1         Philosophical Position

For this study, I have adopted “Critical Realism” (Bhaskar 1975; Bhaskar 1979; Bhaskar 1989). Its use in IS research has been advocated by a number of authors, including Mingers (2000; 2004a; 2004b), Dobson (2001), Smith (2006) and Carlsson (2003b; 2005a; 2005b), who has identified it as having a particularly good fit with Design Science. Similarly, Bunge posits that Design Science works best “when its practitioners shift between pragmatic and critical realist perspectives, guided by a pragmatic assessment of progress in the design cycle.” (Vaishnavi and Kuechler 2004).

Broadly speaking, Critical Realism argues that there is a real world; that is, objects exist independently of our perception of them. However, it differs from so-called scientific realism (or naïve empiricism) in that it seeks “to recognise the reality of the natural order and the events and discourses of the social world.” (Carlsson 2005a, p80). This is a very useful perspective in the context of this research, as I outline below.

Objects like Customer Relationship Management systems are complex socio-technical phenomena. At one level, they are manifestly real objects (composed of silicon, plastic and metal), whose behaviours are governed by well-understood physical laws (such as Maxwell’s electromagnetic theory). At another level, they have been explicitly designed to implement abstractions such as microprocessors, operating systems, databases, applications and work flows. Lastly, CRM systems also instantiate categories, definitions, rules and norms – at the organisational and societal level. Examples include the provision of credit to customers, or the targeting of marketing messages.

It is not sensible to adopt a purely empiricist view to analyse such concepts as “customer”, “credit” and “offer”. Further, (social) positivism – with its emphasis on the discovery of causal relationships between dependent and independent variables through hypothesis testing – is not appropriate given the design-flavoured objectives of the research. In broad terms, the objective of positivism is prediction, whereas design science is concerned with progress (Kuechler and Vaishnavi 2008).

By the same token, it is important that the knowledge produced by the research is of a form acceptable to target users in the practitioner and academic communities. This means the IQ valuation framework will require a quantitative component, grounded in the norms of the mathematical and business communities. As such, philosophical positions that produce only qualitative models (such as hermeneutics, phenomenology and interpretivism in general) are unsuitable for this task. Critical Realism allows for the study of abstract phenomena and their interrelationships with both qualitative and quantitative modes of analysis:

"Put very simply, a central feature of realism is its attempt to preserve a 'scientific' attitude towards social analysis at the same time as recognising the importance of actors' meanings and in some way incorporating them in research. As such, a key aspect of the realist project is a concern with causality and the identification of causal mechanisms in social phenomena in a manner quite unlike the traditional positivist search for causal generalisations." (Layder 1993).

I can now present the philosophical underpinning of the research project.

Ontology

The Critical Realist ontological position is that the real world (the domain of the real) is composed of a number of structures (called “generative mechanisms”) that produce (or inhibit) events (the domain of the actual). These events are known to us through our experiences (the domain of the empirical). Thus, the real world is ontologically stratified, as summarised here:


                 Domain of Real    Domain of Actual    Domain of Empirical

Mechanisms             X
Events                 X                  X
Experiences            X                  X                    X
Table 2 Ontological stratification in Critical Realism (adapted from Bhaskar 1979)

Ontological assumptions of the critical realistic view of science (Bhaskar 1979). Xs indicate the domain of reality in which mechanisms, events, and experiences, respectively reside, as well as the domains involved for such a residence to be possible. (Carlsson 2003b, p329).

This stratification can be illustrated by way of example. Suppose that an experimenter places litmus paper in a solution of sulphuric acid. In this case, the event (in the domain of the actual) is the litmus paper turning red. We experience the colour red through our senses (domain of the empirical), but the generative mechanisms (ie the protonation of the dye molecules and the resulting change in the light they absorb and reflect) operate in the domain of the real. Bhaskar argues that:

[R]eal structures exist independently of and are often out of phase with the actual patterns of events. Indeed it is only because of the latter that we need to perform experiments and only because of the former that we can make sense of our performances of them (Bhaskar 1975, p13)

Here, the underlying mechanisms of chemistry would exist as they are without the litmus test being conducted. Since we cannot perceive directly the wavelengths of photons, we can only identify events in the domain of the actual. However, without the persistence of regularities within the domain of the real, it would not be possible to make sense of the experiments, that is, to theorise about these generative mechanisms. The relationship between the domains of the actual and empirical is further expounded:

Similarly it can be shown to be a condition of the intelligibility of perception that events occur independently of experiences. And experiences are often (epistemically speaking) 'out of phase' with events - e.g. when they are misidentified. It is partly because of this possibility that the scientist needs a scientific education or training. (Bhaskar 1975, p13)

So, in this example, the experimenter must take into account that other events may interfere with the perception of the red colour on the litmus paper. Perhaps the experiment is conducted under (artificial) light, lacking a red component. Or maybe the red receptors in the experimenter’s retina are damaged or defective.

It is the consideration of these kinds of possibilities that gives Critical Realism its "scientific" feel, while its rejection of the collapse of the empirical into the actual and the real (what Bhaskar calls the "epistemic fallacy") stops it being simply (naïve) empiricism. Similarly, Critical Realism differs from positivism in that it denies the possibility of the discovery of universal causal laws (invisible and embedded in the natural structure, ie in the domain of the real) but instead focuses on the discernment of patterns of events (in the domain of the actual).

Epistemology

The epistemological perspective taken in this research could best be described as coherentist during the build/develop phase and pragmatist during the evaluation stage. This is not unexpected, as

[C]ritical realists tend to opt for a pragmatic theory of truth even though some critical realists still think that their epistemology ought to be correspondence theory of truth. Other critical realists prefer to be more eclectic and argue for a three-stage epistemology using correspondence, coherence and pragmatic theory of truth. (Kaboub 2002, p1)

The coherence theory of truth posits that statements are deemed to be knowledge (that is, "justified true beliefs") if they are in accordance with ("cohere with") a broader set of knowledge, in this case drawn from the reference disciplines of Information Theory and Information Economics. This fits well with the build/develop phase of Design Science, as applicable knowledge is drawn in from the knowledge base to construct the framework.

Later, during the justify/evaluate phase, the nature of the knowledge claim shifts to a pragmatic theory of truth: in a nutshell, what's true is what works. Pragmatism, in epistemology, is primarily concerned with the consequences and utility (ie the impact upon human well-being) of knowledge.

Pragmatism asks its usual question. "Grant an idea or belief to be true," it says, "what concrete difference will its being true make in anyone's actual life? How will the truth be realised? What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth's cash-value in experiential terms?" The moment pragmatism asks this question, it sees the answer: true ideas are those that we can assimilate, validate, corroborate, and verify. False ideas are those that we cannot. (James 1907, p201).

The emphasis here on utility, rather than truth, is appropriate given the goals of the evaluate/justify phase of Design Science: I seek to contribute back to the knowledge base a form of knowledge that is validated and useful (to practitioner and academic communities). From this perspective, justified true beliefs are knowledge that will work.

Axiology

The practice of research reflects the underlying values of the various participants and stakeholders. In this case, the project is committed to conducting research ethically and in compliance with University statutes and regulations and the terms of the industry partner agreement. This means I must act ethically in all my dealings with research subjects, industry partners, academics and other stakeholders.

Further, I uphold the value of contributing to the knowledge base of the research community in an area of demonstrable need to practitioners, without consideration of potential commercial or other advantage to individuals or organisations. As such, the knowledge acquired must be placed into the public domain immediately, totally and without reservation.

2.6.2        Build/Develop Framework

In this section I present an outline of the research phases, why each phase is necessary, and a rationale for each particular method's selection over alternatives. The goal is to show the overall coherence of the research design and how it fits the requirements for Design Science set out by Hevner et al., as discussed in the preceding sections.

Literature Review

This first phase consists of gathering, assessing and synthesising knowledge through a review of literature. As discussed, rigour demands that Design Science research draw upon an existing knowledge base comprising the reference disciplines and accumulated knowledge in the IS domain. Further, the research project must be guided by the contemporary needs of IS practitioners in order to be relevant.

These requirements must be met by reviewing relevant literature from three broad sources:

·         Current Information Systems research, comprising the top-rated scholarly journals, conference proceedings, technical reports and related publications. The authors are typically academics writing for an audience of academics, postgraduate students and “reflective practitioners”. This constitutes an important source of knowledge around methodology (Design Science for IS), Information Quality models and theories, and Customer Relationship Management systems and practices. This is also the knowledge base to which this project seeks to add.

·         IS practitioner literature as found in practice-oriented journals, white papers, web sites and industry seminars. These authors are usually senior practitioners and consultants writing for others in their field. Knowledge from this source is useful for understanding the issues which concern practitioners “at the coal face”, and how they think about them. It is important to understand their needs, as these people form one of the key audiences for the outcomes from this research project.

·         Literature from the reference disciplines, in the form of textbooks and "seminal papers", is needed to incorporate that specific knowledge. While the authors and audience of these sources are also academics, it is not necessary to delve as deeply into this literature as into the IS research. This is because the reference discipline knowledge is usually much older (decades rather than years), has been distilled and codified, and is now relatively static.

Interviews

The second phase is the development of a deep understanding of the business needs for IQ valuation. I argue that this is best achieved through a series of semi-structured interviews with analysts, consultants and managers in target organisations. This is because these people are best placed to explain the business needs around IQ that they have dealt with in the past, and how those needs have been met to date. They are also able to articulate the organisational strategies, cultural norms and business processes that will dictate the usefulness of any IQ valuation framework.

I considered and rejected two alternative approaches. Firstly, case studies would not be suitable owing to the “thin spread” of cases to which I would have access, combined with commercial and legal sensitivities involved in a very detailed examination of particular IQ valuation projects. I also wanted to maximise the exposure to different stakeholders (both by role and industry) given the time, resource and access constraints.

Secondly, surveys were deemed unsuitable for acquiring the kind of deeper understanding of business needs required for this research. A face-to-face conversation can elicit greater detail, nuance and context than a simple form or even a short written response. For example, interviews allow the tailoring of questions to individual subjects to draw out their particular experiences, knowledge and perspectives; something that cannot be done readily with survey instruments.

Conceptual Study and Mathematical Modelling

The third phase is where the knowledge from the reference disciplines and IS domain (Literature Review) is brought to bear on the business needs elicited from the second phase (Context Interviews). The outcome is a conceptual model of Information Quality in organisational processes, amenable to mathematical analysis and simulation.

I argue that this can be characterised as a Conceptual Study since it involves the synthesis of disparate knowledge and key insights to argue for a re-conceptualisation of a familiar problem situation. Shanks et al. posit that:

Conceptual studies can be effective in building new frameworks and insights … [and] can be used in current situations or to review existing bodies of knowledge. Its strengths are that it provides a critical analysis of the situation which can lead to new insights, the development of theories and deeper understanding. (Shanks et al. 1993, p7)

This step is essential to the overall research design, in that it is where the framework is conceived and developed. The resulting artefact (a framework for IQ valuation) comprises a model (a set of constructs and the mathematical formulae defining and relating them) and some guidelines for practitioners to use in analysing their particular systems. This artefact must then be evaluated to understand its likely impact in practice and contribution to the knowledge base.
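The mathematical component of such a framework draws on Information Economics. As a purely illustrative sketch (a standard textbook quantity, not the framework developed in this thesis), the expected value of perfect information (EVPI) expresses the most a rational decision-maker should pay to eliminate uncertainty before acting:

```latex
% Illustrative only: a standard Information Economics quantity,
% not the thesis framework.
\[
\mathrm{EVPI}
  \;=\; \mathbb{E}_{\theta}\!\left[\max_{a \in A} u(a,\theta)\right]
  \;-\; \max_{a \in A}\, \mathbb{E}_{\theta}\!\left[u(a,\theta)\right]
\]
% where A is the set of available actions, \theta the uncertain state
% of the world, and u(a,\theta) the payoff of action a in state \theta.
```

A valuation framework in this spirit asks the complementary question: how much of this value is eroded when the information informing the decision carries quality defects.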

2.6.3         Justify/Evaluate Framework

Simulation Study

In order to evaluate the framework, I must put it to use to generate outputs that can be analysed. It is necessary to demonstrate that the framework can be employed and the results are intelligible.

I propose that computer simulations using synthetic data provide the best way of producing these results. By “synthetic data” I mean data from real-world scenarios made publicly available for evaluation purposes, which have had various kinds of information quality defects artificially introduced. The behaviour of the mathematical model (including the impact on outputs and relationships between constructs) can then be assessed in light of these changes.
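The mechanics of "artificially introducing" defects can be sketched as follows. This is an illustration only: the field names, defect types and defect rates are assumptions for demonstration, not drawn from the thesis. The sketch degrades a synthetic customer dataset with completeness and accuracy defects, yielding paired clean and dirty datasets against which a model's behaviour can be compared.

```python
import random

random.seed(42)  # fixed seed so simulation runs are reproducible

# A small synthetic "clean" customer dataset (hypothetical fields).
clean = [
    {"customer_id": i, "email": f"user{i}@example.com", "age": 20 + i % 50}
    for i in range(1000)
]

def inject_defects(records, missing_rate=0.05, error_rate=0.02):
    """Return a degraded copy: some emails missing (completeness defect),
    some ages implausibly shifted (accuracy defect)."""
    degraded = []
    for rec in records:
        rec = dict(rec)  # copy, so the clean baseline is preserved
        if random.random() < missing_rate:
            rec["email"] = None            # completeness defect
        if random.random() < error_rate:
            rec["age"] = rec["age"] + 100  # accuracy defect
        degraded.append(rec)
    return degraded

dirty = inject_defects(clean)
missing = sum(1 for r in dirty if r["email"] is None)
errors = sum(1 for r in dirty if r["age"] > 69)  # clean ages span 20..69
print(f"injected {missing} missing emails and {errors} age errors "
      f"out of {len(dirty)} records")
```

Running a valuation model over both `clean` and `dirty` then exposes how its outputs respond to each defect type and rate, which is precisely the kind of sensitivity this phase assesses.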

Other methods considered included a field trial and a mathematical proof of optimality. Problems with the former included the difficulty of gaining access to a real-world IQ project (given the commercial and legal hurdles) and its scope (ie time and resource constraints would not allow examination of multiple scenarios). The second approach, a formal proof, was considered too risky: it might not be tractable, and such a proof might not be acceptable to the intended audience of practitioners and academics.

Evaluation by Argumentation

Lastly, the research process and resulting artefact must be evaluated against some criteria. It is not sufficient to rely on the statistical analysis of the simulation study, as this does not take in a sufficiently broad view of the performance or suitability of the framework, nor does it ensure that the research is indeed "Design Science" and not just "design". Of course, this stage hinges crucially on the selection of an appropriate set of criteria. Here, I have opted to use the guidelines published in MIS Quarterly (Hevner et al. 2004), perhaps the leading IS research publication, which are heavily cited by other researchers in this field. Below is a preliminary discussion of how the proposed research design meets these criteria.

An alternative evaluation method here would be a focus group of practitioners, as intended users. However, to seek practitioner opinions on likely use or adoption of the framework in the space of a few short hours would not be feasible. Providing sufficient knowledge of the proposed framework to elicit meaningful and thoughtful comments would require a large investment of time, not something that practitioners generally have in large amounts.

2.7        Assessment of Research Design

With an emerging Information Systems research approach, there is often some consternation about how the quality of the work should be assessed. To go some way towards meeting this need, Hevner et al. were invited to develop some general guidelines for the assessment of Design Science. These guidelines are intended to be used by research leaders and journal editors.

This section describes their guidelines, and discusses how the research design presented here meets them.




Design as an Artefact
Guideline: Design Science research must produce a viable artefact in the form of a construct, a model, a method, or an instantiation.
Response: The IQ valuation framework produced during the development phase meets the criteria for an artefact, as it embodies a construct (a conceptualisation of the problem), a model (a description of IS behaviour) and a method (in this case, a socio-technical method for organisational practice).

Problem Relevance
Guideline: The objective of Design Science research is to develop technology-based solutions to important and relevant business problems.
Response: That the industry partner, and other practitioners, have committed time and resources to tackling this problem signals the extent to which they perceive it as important and relevant.

Design Evaluation
Guideline: The utility, quality, and efficacy of a design artefact must be rigorously demonstrated via well-executed evaluation methods.
Response: The artefact is evaluated by contriving scenarios based on real data and decision processes, with rigorous statistical analysis of the results.

Research Contributions
Guideline: Effective Design Science research must provide clear and verifiable contributions in the areas of the design artefact, design foundations, and/or design methodologies.
Response: This research project identifies a clear gap in the existing IS knowledge base and seeks to fill it through the careful application of an appropriate research method (Design Science).

Research Rigour
Guideline: Design Science research relies upon the application of rigorous methods in both the construction and evaluation of the design artefact.
Response: While the construction process for Design Science artefacts is not widely understood (March and Smith 1995), this research design follows well-founded prescriptions from the IS literature (Hevner et al. 2004) for understanding business need (interviews) and the existing knowledge base (literature review).

Design as Search
Guideline: The search for an effective artefact requires utilising available means to reach desired ends while satisfying laws in the problem environment.
Response: Here, the artefact is bounded by organisational norms, assumptions and cultures and, to the extent practicable, seeks to understand these and operate within them.

Communication of Research
Guideline: Design Science research must be presented effectively both to technology-oriented and management-oriented audiences.
Response: Owing to the industry partnership and involvement with the wider IS practitioner community, the research outcomes are to be communicated to IS managers. Indeed, as information quality has visibility in the broader management world, these findings will be communicated more widely.


Table 3 Guidelines for assessment of Design Science Research (adapted from Hevner et al. 2004)

2.8       Conclusion

This research project is concerned with developing and evaluating a novel instrument for valuing Information Quality in Customer Relationship Management processes. With this emphasis on producing an artefact that is useful to practitioners, I argue that the most suitable research design is one employing Design Science. Critical Realism offers the best fit as a philosophical basis for this kind of research, as it is "scientifically-flavoured" without being unduly naïve about social phenomena. The model of Design Science outlined by Hevner et al. is appropriate for my purposes, and so I adopt their terminology, guidelines and assessment criteria.

Specifically, the build/develop phase employs a review of relevant literature (from academic and practitioner knowledge sources) and a series of semi-structured interviews with key practitioners in target organisations. The framework itself is produced by a conceptual study synthesising this understanding of business need with applicable knowledge.

The justify/evaluate phase proceeds with a simulation study of the valuation framework using synthetic data, followed by a reflective evaluation examining the framework and simulation results.


