
Chapter 7 - Research Evaluation

7.1        Summary

Design Science (or Design Research) has long been an important paradigm within Information Systems research. Its primary distinction from other approaches to research in the field is the pursuit of the goal of utility, as opposed to truth (Simon 1996). As outlined in Chapter 2, the framework for the valuation of customer information quality (IQ) falls squarely within the remit of DS. This chapter explains how both the research process (activities) and product (output) constitute Design Science and draws upon published guidelines to evaluate the research.

Specifically, following best practice guidelines for research (Hevner et al. 2004), the framework is presented as an artefact, in this case an abstract one, and is assessed against the seven guidelines laid out in their MISQ paper. The case is made that the framework satisfies the criteria and is both rigorous and relevant, with significance for practitioners and researchers.

7.2        Evaluation in Design Science

When evaluating Design Science research, it is necessary to establish an appropriate set of definitions, guidelines or assessment criteria. Firstly, this is used to ensure that DS is the appropriate way to conceive of and evaluate the research effort. Secondly, this set forms the basis of the evaluation proper.

Note the distinction between evaluation of the DS research – the subject of this discussion – and the evaluation of the artefact itself. Chapter 6 describes the evaluation of the artefact per se (Section 7.4.3 below) whereas this chapter addresses the overall research, including the process, its likely impact and contribution.

The guidelines chosen for this evaluation are those published in MISQ (Hevner et al. 2004). Their paper, Design Science in Information Systems Research, has the goal of offering “clear guidelines for understanding, executing, and evaluating the research”. It was selected for the following reasons:

·         it specifically addresses DS in an Information Systems research context,

·         MISQ is the leading journal in Information Systems and this paper is widely read and cited,

·         the authors have experience in conducting DS research projects and prior publications on the topic,

·         the paper is contemporary and reflects current thinking,

·         it offers seven clear dimensions for evaluation, with a number of examples.

This is not to say that the paper represents an absolute consensus within the IS academic community about how to define and evaluate artefacts as part of research. However, it is a credible, familiar and useful basis for discussion.

The framework was developed in Chapter 5 (with a conceptual study) and tested and refined in Chapter 6 (with simulations) with the expressed intention of solving an organisational problem. Specifically, that problem is “How can organisations efficiently and objectively value the economic contribution of IQ interventions in customer processes?” This problem statement arose from a qualitative analysis of context interviews with practitioners and senior managers (Chapter 4), who indicated that this is an existing, important and persistent problem. In conjunction with a review of academic literature (Chapter 3), this is identified as an unsolved problem. Furthermore, it is an Information Systems problem, as it relates to the planning and use of IS artefacts within organisations.

[Design Science] creates and evaluates IT artefacts intended to solve identified organizational problems. Such artefacts are represented in a structured form that may vary from software, formal logic, and rigorous mathematics to informal natural language descriptions. A mathematical basis for design allows many types of quantitative evaluations of an IT artefact, including optimization proofs, analytical simulation, and quantitative comparisons with alternative designs. (Hevner et al. 2004, p77)

The approach to solving the problem consisted of asking prospective users (in this case, managers and executives) about the form a solution to such a problem would take; investigating a wide range of “kernel theories” (or reference theories) and applying skill, knowledge and judgement in selecting and combining them; and undertaking a rigorous testing/refining process of the initial conceptualisation.

This research is prescriptive, rather than descriptive. The intent is to provide practitioners and researchers with a set of (intellectual) tools for analysing and intervening in existing (or proposed) information systems. So, importantly, the goal of the research project is to increase utility. In this context, that means the framework is likely to be valuable for organisations because it allows for the objective valuation of (possible) IQ interventions to be undertaken in an efficient manner. A design for an IQ valuation framework that requires infeasible pre-conditions (in terms of time, knowledge, staff or other resources) or produces opaque, dubious or implausible valuations will not have this utility.

This project is research, as opposed to a design project, because the resulting artefact – the framework – is sufficiently generic and abstract that it can be applied to a wide range of organisational settings and situations. It also has a degree of evaluative rigour and reflection that exceeds what is required for a one-off design effort.

The artefact is a framework comprising a number of elements, including constructs, models and a method, grounded in theory.

IT artefacts are broadly defined as constructs (vocabulary and symbols), models (abstractions and representations), methods (algorithms and practices), and instantiations (implemented and prototype systems). (Hevner et al. 2004, p336)

It is worth noting that other authors, such as Walls et al. (1992) and Gregor and Jones (2007), regard the abstract artefacts (constructs, models and methods) as a special kind of artefact, dubbed an Information System Design Theory (ISDT):

The ISDT allows the prescription of guidelines for further artefacts of the same type. Design theories can be about artefacts that are either products (for example, a database) or methods (for example, a prototyping methodology or an IS management strategy). As the word “design” is both a noun and a verb, a theory can be about both the principles underlying the form of the design and also about the act of implementing the design in the real world (an intervention). (Gregor and Jones 2007, p322)

However, in keeping with the prescriptions of Hevner et al. (2004), their broader sense of artefact, which encompasses “IS design theory”, will be used here:

Purposeful artefacts are built to address heretofore unsolved problems. They are evaluated with respect to the utility provided in solving those problems. Constructs provide the language in which problems and solutions are defined and communicated (Schön 1983). Models use constructs to represent a real world situation – the design problem and its solution space (Simon 1996). Models aid problem and solution understanding and frequently represent the connection between problem and solution components enabling exploration of the effects of design decisions and changes in the real world. Methods define processes. They provide guidance on how to solve problems, that is, how to search the solution space. These can range from formal, mathematical algorithms that explicitly define the search process to informal, textual descriptions of “best practice” approaches, or some combination. Instantiations show that constructs, models, or methods can be implemented in a working system. They demonstrate feasibility, enabling concrete assessment of an artefact’s suitability to its intended purpose. They also enable researchers to learn about the real world, how the artefact affects it, and how users appropriate it. (Hevner et al. 2004, p. 341)

When conducting DS research, it is not necessary to produce a working IT system, such as a software package or spreadsheet as proof of concept or even a complete instantiation:

[A]rtefacts constructed in design science research are rarely full-grown information systems that are used in practice. Instead, artefacts are innovations that define the ideas, practices, technical capabilities, and products through which the analysis, design, implementation, and use of information systems can be effectively and efficiently accomplished. (Hevner et al. 2004, p349)

The primary purpose of a proof of concept or artefact instantiation is to demonstrate the feasibility of the research process and the product (framework). In this case, the feasibility of the research process is argued for by the existence of the framework itself (ie the process did produce a product). Further, feasibility of the framework is demonstrated by noting that the input measurements are either common organisational parameters (eg the discounting rate) or have been derived from the real datasets sourced for the simulations (eg information gain ratio), while the model formulae are entirely amenable to computation. In this sense, appraisals for proposed interventions can always be produced, ie the framework is feasible. Whether these are likely to be useful or not is discussed in Section 7.4.

This research project has all the elements required to constitute Design Science research: It identifies an existing, important, persistent, unsolved Information Systems problem. The proposed solution is a novel artefact informed by reference theories, intended to be used by practitioners in solving their problems. The steps of requirements-gathering, solution design and testing/refinement constitute the construction and evaluation phases identified in DS research. It is of sufficient abstraction and rigour that its product (the framework) can be applied to a wide range of organisational settings and situations.

7.3        Presentation of Framework as Artefact

The framework is conceptualised in Chapter 5, which involves elucidating and applying the relevant “kernel theories” to the broad organisational situation mapped out during the context interviews (Chapter 4). This results in the broad constructs, candidate measures and boundaries for the framework. In Chapter 6 (Simulations), the statistical and financial models are “fleshed out”, new measures are derived, tested and refined, the sequence of steps clearly articulated and a simple “tool” (Actionability Matrix) is provided for analysts’ use. The resulting framework is articulated below.

The framework takes an organisational-wide view of customers, systems and customer processes. It includes the creation of value over time when information about those customers is used in organisational decision-making within those processes:

Figure 27 High-Level Constructs in the Framework

As shown, the framework assumes there is one system representing the customers (perhaps a data warehouse) shared by a number of customer processes. This “shared customer data” pattern fits many organisations.

The base conceptual model of how each process uses information is dubbed the Augmented Ontological Model, as it extends the Ontological Model of Wand and Wang (1996) to include decision-making:

Figure 28 The Augmented Ontological Model

This diagram introduces a number of the key constructs used: a set of customers, each of whom exists in the external world in precisely one of a set of possible states, W. In keeping with the realist ontology throughout, these customers (and their state value) exist independently of any observation by the organisation. The organisational information system maps these customers (and their state values) onto a system representation drawn from the set of possible system states, X. In this way, each customer state is said to be communicated to the system.

For each customer undertaking the process, the system state is used by a decision function to select one action from a set of alternatives, Y. This action is realised against the optimal action, z ∊ Z. (z is not known at the time of the decision and y ∊ Y is the system’s best guess.)

The impact of the realisation for each customer is expressed via a penalty matrix, Π, which describes the cost, πy,z, associated with choosing action y ∊ Y when z ∊ Z is the optimal choice. When y = z the best choice is made, so πy,z = 0. Each organisational process is run periodically on a portion of the customer base, generating future cash flows.

As a matter of practicality, both the customer statespace (W) and the system representation statespace (X) are decomposed into a set of a attributes A1, A2, …, Aa. Each attribute, Ai, has a number of possible attribute-values, so that Ai = {a1, a2, …, aN}. The statespace W (and X) is the Cartesian product of these attribute sets, so that W = A1 × A2 × … × Aa. In practice, these attributes are generic properties of customers like gender, income bracket or post code, or organisation-specific identifiers like flags and group memberships.

The decision-function is conceived as any device, function or method for mapping a customer instance to a decision. The only formal requirement is that it is deterministic, so that the same decision is made each time identical input is presented. While this could be implemented by a person exercising no discretion, this research examines computer implementations. Examples include different kinds of decision trees, Bayesian networks and logistic model trees.
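To make these constructs concrete, the following sketch expresses a customer state, a deterministic decision function and a penalty matrix in Python. It is a minimal illustration: the attribute names, actions and penalty values are assumptions invented for the example, not values drawn from this research.

```python
import numpy as np

# Illustrative attribute domains; the statespace is their Cartesian product.
ATTRIBUTES = {
    "gender": ["F", "M"],
    "income_bracket": ["low", "mid", "high"],
}

# Possible actions Y (here taken to coincide with the optimal-action set Z).
ACTIONS = ["no_offer", "standard_offer", "premium_offer"]

def decide(customer_state: dict) -> str:
    """A deterministic decision function D(.): identical input, identical action."""
    if customer_state["income_bracket"] == "high":
        return "premium_offer"
    if customer_state["income_bracket"] == "mid":
        return "standard_offer"
    return "no_offer"

# Penalty matrix Pi: PENALTY[y, z] is the cost of choosing action y when z is optimal.
# Diagonal entries are zero: the best choice carries no penalty.
PENALTY = np.array([
    [0.0, 5.0, 20.0],
    [2.0, 0.0, 10.0],
    [8.0, 3.0, 0.0],
])

y = decide({"gender": "F", "income_bracket": "mid"})   # system's chosen action
z = "premium_offer"                                     # optimal action (unknown at decision time)
print(y, PENALTY[ACTIONS.index(y), ACTIONS.index(z)])   # standard_offer 10.0
```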

The next construct to consider is the intervention – either a one-off or an ongoing change to the way the external world is communicated to the internal representation system. This may also involve changes to the representation system itself:

Figure 29 Model of IQ Interventions

In this model, the optimal decision z ∊ Z has been replaced with y* ∊ Y*. This decision is not necessarily the “true optimal” decision z; it is the decision that the particular decision-function, D(•), would make if presented with the perfect external-world state, w ∊ W, instead of the imperfect system state x ∊ X:

y* = D(w), whereas y = D(x)
In using y* instead of z, the effect of decision-making is “cancelled out”, leaving the focus on Information Quality only, not algorithm performance.

The other extension is the intervention, T, which results in a revised system state, x’. This revised state is then presented to the same decision-function, D(•), to give a revised action, y’:

y’ = D(x’)
The intervention is modelled as a change to how the external-world state, w, is communicated to the system, resulting in a revised system state, x’ and hence a revised action, y’. This action is realised and compared with y*. The difference between the cost of the prior decision and cost of revised decision is the benefit of the intervention.
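Continuing the same illustrative sketch (decide, ACTIONS and PENALTY are the hypothetical objects defined above, not part of the framework’s specification), the per-customer benefit of an intervention can be expressed as the penalty avoided:

```python
def intervention_benefit(w: dict, x: dict, x_revised: dict) -> float:
    """Per-customer benefit of an intervention.

    w: perfect external-world state; x: system state before the intervention;
    x_revised: system state after the intervention.
    """
    y_star = decide(w)              # y*: the action D(.) would take with perfect data
    y_prior = decide(x)             # y : the action actually taken before the intervention
    y_revised = decide(x_revised)   # y': the action taken after the intervention

    cost_prior = PENALTY[ACTIONS.index(y_prior), ACTIONS.index(y_star)]
    cost_revised = PENALTY[ACTIONS.index(y_revised), ACTIONS.index(y_star)]
    return cost_prior - cost_revised  # penalty avoided by acting on the revised state
```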

The relations between these constructs, grounded in Information Theory and Utility Theory, were outlined in Chapter 5 (Conceptual Study):

Stake. The “value at risk” over time of the customer processes.

Influence. The degree to which an attribute “determines” a process outcome.

Fidelity. How well an external-world attribute corresponds with the system attribute.

Traction. The effectiveness of an intervention upon an attribute.

These broad measures were refined and examined in detail in Chapter 6 (Simulations). The starting point was to model the communication between the external world and the system as subject to noise. An intervention can then be modelled as the elimination of (a degree of) noise from this communication. This leads to the first statistical model, that of garbling.

A garble event is when a state-value is swapped with another drawn “at random”. The garbling process has two parameters: γ (the garbling parameter) is an intrinsic measure of an attribute and g (the garbling rate) parameterises the degree of noise present. Together, these capture the notion of Fidelity by giving the probability of an error event, ε, a disagreement between the external-world and system state values:

P(ε) = γ g
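The garbling process can be simulated directly. The sketch below is an illustration only – the garbling rate g and the attribute domain are assumed inputs rather than values from the simulations – and it simply reports the observed rate of disagreement between external-world and system values:

```python
import random

def garble(values, domain, g, seed=0):
    """Return a noisy copy of `values`: with probability g each value is
    replaced by one drawn at random from `domain` (a garble event)."""
    rng = random.Random(seed)
    return [rng.choice(domain) if rng.random() < g else v for v in values]

domain = ["low", "mid", "high"]
rng = random.Random(1)
world = [rng.choice(domain) for _ in range(10_000)]   # external-world values
system = garble(world, domain, g=0.2)                 # system values after garbling

# Observed error rate: the proportion of disagreements between world and system.
error_rate = sum(w != s for w, s in zip(world, system)) / len(world)
print(round(error_rate, 3))   # roughly g * (1 - 1/|domain|), since a swap can land on the same value
```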


Some error events result in a mistake event (where the action does not agree with the “correct” one), with probability μ:

μ = α P(ε) = α γ g



This proportion, α, of error events translating into mistake events characterises the Influence of the attribute within that process. Since α is difficult and costly to ascertain for all possibilities, the cheaper proxy measure information gain ratio (IGR) is used.
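Information gain ratio is the familiar entropy-based measure from decision-tree induction. A minimal hand-rolled computation is sketched below, with the attribute and outcome arrays as assumed illustrative inputs rather than data from the simulations:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a discrete variable."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain_ratio(attribute, outcome):
    """Information gain of `outcome` from knowing `attribute`, normalised by
    the attribute's own entropy (its 'split information')."""
    n = len(attribute)
    cond = 0.0  # conditional entropy H(outcome | attribute)
    for value, count in Counter(attribute).items():
        subset = [o for a, o in zip(attribute, outcome) if a == value]
        cond += (count / n) * entropy(subset)
    gain = entropy(outcome) - cond
    split_info = entropy(attribute)
    return gain / split_info if split_info > 0 else 0.0

attr = ["low", "low", "mid", "mid", "high", "high"]
out  = ["no",  "no",  "no",  "yes", "yes",  "yes"]
print(round(information_gain_ratio(attr, out), 3))
```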

Mistake events are priced using the penalties in the penalty matrix, Π. The expected per-customer per-instance penalty is M. This, in turn, is converted into discounted cash flows using the parameters β (proportion of customer base going through the process), f (frequency of operation of process), n (number of years of investment) and d (appropriate discount rate). This addresses the Stake construct.

Lastly, Traction – the effectiveness of a candidate intervention in removing noise – is parameterised by χ, the net proportion of garble events removed. Combining all these elements with a cost model (κF for fixed costs, κT for testing and κE for editing) yields an intervention valuation formula for the net benefit of an intervention in each year of operation. Where an annual discount rate of d needs to be applied, the discounted total aggregated value is obtained by summing each year’s net benefit, discounted at rate d, over the n years of the investment – a standard Net Present Value calculation.
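For illustration only, the discounted cash flow step can be sketched as follows. The parameter names echo those in the text (β, f, χ, M, n, d and the κ costs), but the way the annual benefit is assembled from them here is an assumption made for the example, not the framework’s valuation formula.

```python
def intervention_npv(annual_benefit, n, d, fixed_cost, annual_cost=0.0):
    """Discounted total aggregated value: each year's net benefit is
    discounted at annual rate d and the up-front fixed cost subtracted."""
    npv = -fixed_cost
    for t in range(1, n + 1):
        npv += (annual_benefit - annual_cost) / (1 + d) ** t
    return npv

# Assumed, illustrative inputs -- not figures from the thesis.
customers = 500_000
beta, f = 0.4, 2            # share of customers in the process, runs per year
chi, M = 0.6, 1.50          # proportion of garbles removed, expected penalty per instance
annual_benefit = customers * beta * f * chi * M

print(round(intervention_npv(annual_benefit, n=3, d=0.10,
                             fixed_cost=400_000, annual_cost=50_000), 2))
```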

These parameters characterise the statistical and financial models which “flesh out” the broader conceptual constructs. The final component of the framework is a method for analysts to follow when performing the analysis in their organisation.

In Chapter 6, Section 6, the method is spelled out in some detail. The idea is to efficiently identify the key processes, attributes and interventions for analysis in order to avoid wasted analytical effort. The method comprises a sequence of iterative steps where successive candidate elements (processes, attributes and interventions) are selected based on the above parameters and their value assessed. The recommended order for analysis is Stake, Influence, Fidelity and Traction (“SIFT”), a handy mnemonic[22]. To help with this task, a tool (Actionability Matrix) is proposed and illustrated, which keeps track of the model parameters as they are measured or derived and facilitates the selection of the next element to analyse.
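One way to picture the Actionability Matrix is as a simple table keyed by process and attribute, with the SIFT measures recorded as they become available. The sketch below is an assumed illustration of such a record-keeping structure, not the tool as specified in Chapter 6.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActionabilityEntry:
    process: str
    attribute: str
    stake: Optional[float] = None      # Stake: value at risk of the process
    influence: Optional[float] = None  # Influence: e.g. information gain ratio
    fidelity: Optional[float] = None   # Fidelity: e.g. estimated error probability
    traction: Optional[float] = None   # Traction: noise an intervention would remove

matrix = [
    ActionabilityEntry("loan_approval", "income_bracket", stake=2.1e6, influence=0.42),
    ActionabilityEntry("churn_campaign", "post_code", stake=0.4e6),
]

# SIFT ordering: attend to the highest-Stake, most-Influential cells first.
next_candidate = max(matrix, key=lambda e: (e.stake or 0) * (e.influence or 0))
print(next_candidate.process, next_candidate.attribute)
```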

Figure 30 Process Outline for Value-Based Prioritisation of IQ Interventions (Customer Processes, Data Attributes, Quality Interventions and Value Assessments, tracked in the Actionability Matrix)

A modified form of the “Framework for Comparing Methodologies” (Avison and Fitzgerald 2002) is used as an outline to summarise this framework:

 

1.        Philosophy

a.        Paradigm

The underpinning philosophy in this research is Critical Realism. However, the particular ontological and epistemological stances taken for the purposes of building and evaluating this framework are not required by end-user analysts employing it in practice.

For example, analysts may wish to adopt a scientific stance (a Realist ontology and a Positivist epistemology) for the purposes of comparing the system attribute values with the “external world” values.

Where the framework does constrain analysts is the requirement that the situation being modelled can be decomposed into customers and processes, with well-defined states and decisions, respectively.

b.       Objectives

The framework is not focused on developing a particular system, but on improving the value to the organisation of its customer information through its use in customer processes. This is measured through widely-used, investment-focused financial metrics.

c.        Domain

The domain addressed by the framework is at the level of organisational planning and resource allocation. It is not concerned with how one can (or should) improve customer IQ, but with capturing and articulating the expected benefits and costs of such initiatives.

d.       Target

The kinds of organisations, systems and projects targeted are large-scale, information-intensive ones with automated customer-level decision-making. Such environments would typically have call-centres, web sites and data warehouses supporting CRM activities.

2.       Model

Figures 27-29 above express the model of the framework. This comprises a high-level view of the customer processes (Figure 27), the augmented Ontological Model of IQ (Figure 28) and a model of the impact of IQ interventions (Figure 29). These models are operationalised with related construct definitions and formulae.

3.        Techniques and Tools

The principal techniques expressed here are the statistical sampling and measurement of the performance of the system in representing customer attributes, and the “downstream” impact on customer decision processes. The proposed metrics – Stake, Influence, Fidelity and Traction – are also tools to help the analyst assess and prioritise IQ interventions.

The Actionability Matrix tool makes the evaluation and recording of these metrics more systematic, improving the search process and assisting with re-use across time and on multiple evaluation projects.

 

4.       Scope

The framework’s method begins with a review of the organisation’s set of customer decision processes and works through the “upstream” information resources and candidate interventions. It ends with a financial model of the costs and benefits of possible improvements to IQ.

This financial model is intended to be used in a business case to secure investment from the organisation’s resource allocation process to implement the preferred intervention. It could also be used to track post-implementation improvements (budget vs actual) for governance purposes.

5.        Outputs

The key outputs of the framework are 1) a “map” of high-value customer processes and their dependency on customer attributes, in the form of an up-to-date populated Actionability Matrix; and 2) a generic financial model of the expected costs and benefits associated with customer IQ interventions.

This section has outlined the conceptual constructs, statistical and financial models and sequence of method steps for the framework. The next section applies the chosen guidelines for evaluating the research.

7.4        Assessment Guidelines

This section applies the Design Science evaluation guidelines from Hevner et al. (2004) to the research project. The goal is to show how the research (including the process and the product) satisfies each of the criteria.

7.4.1         Design as an Artefact

Design-science research must produce a viable artefact in the form of a construct, a model, a method, or an instantiation. (Hevner et al. 2004, p. 347)

As outlined in Section 7.3 above, the artefact is a framework comprising constructs, a series of models (statistical and financial) and a method. The viability of the artefact is demonstrated by the fact that it can be expressed and applied to the problem domain. Further, its feasibility is argued from the computability of the models and the derivation of key metrics from real-world datasets.

 

7.4.2         Problem Relevance

The objective of design-science research is to develop technology-based solutions to important and relevant business problems. (Hevner et al. 2004, p. 347)

This question of problem relevance is discussed in Section 7.2. The extensive contextual interviews (Chapter 4) identify the inability to financially appraise customer IQ interventions as an “important and relevant business problem”. Specifically, the problem is persistent, widespread – and largely unsolved. The inability to quantify the benefits, in particular, in business terms like Net Present Value (NPV) and Return on Investment (ROI) is especially problematic in situations where organisations take an investment view of such projects.

The framework does not constitute a “technology-based solution” in itself. Rather, the objects of analysis (the host organisation’s Information Systems and wider customer processes) are implemented using a range of technologies. The steps of analysis prescribed in the framework are made much easier by using technology to populate the models.

7.4.3         Design Evaluation

The utility, quality, and efficacy of a design artefact must be rigorously demonstrated via well-executed evaluation methods. (Hevner et al. 2004, p. 347)

The specific evaluation of the key elements of the framework is the simulation study (Chapter 6). Hevner et al. define simulation as a type of experiment, where the researcher executes the artefact with “artificial data”. Here, the artefact is executed with “synthetic data”: real data with “artificial noise” added to it. In this way, the behaviour of the models (relationship between different parameters and measures) can be explored.

Chapter 2 (Research Method and Design) explains how this is the most suitable method for evaluating the artefact since access to a reference site to perform such a disruptive, sensitive and complicated analysis is not practicable. This constraint eliminates case and field studies, as well as white- and black-box testing. However, the goal of rigour (especially internal validity) requires that the underpinning mathematical and conceptual assumptions be tested. Purely analytical or descriptive approaches would not test these: the framework must be given a chance to fail. Reproducing “in the lab” the conditions found in the external-world is the best way to do this, providing sufficient care is taken to ensure that the conditions are sufficiently similar to invoke the intended generative mechanisms.

As is the norm with DS research, the evaluation and development cycles are carried out concurrently so that, for example, the “Fidelity” construct is re-cast to reflect the garbling process used to introduce controlled amounts of artificial noise. The garbling algorithm is developed to meet design criteria and is evaluated using a mathematical analysis (including the use of probability theory, combinatorics and calculus). This derivation is then checked against computer simulations with the “synthetic” data (ie real data with artificial noise). The correctness of the derivations and the validity of the assumptions are checked using a range of “closeness” metrics: absolute differences, Root Mean Square Error (RMSE) and Pearson correlation.
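These “closeness” checks are standard calculations; the short sketch below, with two small arrays standing in (as assumed placeholders) for the derived and simulated quantities, shows how they can be computed:

```python
import numpy as np

derived = np.array([0.12, 0.25, 0.40, 0.55, 0.71])    # values predicted by the derivation
simulated = np.array([0.11, 0.27, 0.38, 0.57, 0.70])  # values observed in the simulation

abs_diff = np.abs(derived - simulated)                  # absolute differences
rmse = np.sqrt(np.mean((derived - simulated) ** 2))     # Root Mean Square Error
pearson_r = np.corrcoef(derived, simulated)[0, 1]       # Pearson correlation

print(float(abs_diff.max()), round(float(rmse), 4), round(float(pearson_r), 4))
```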

In a similar vein, the information gain ratio (IGR) is used to measure the “Influence” construct instead of the information gain (IG), as initially proposed in Chapter 5 (Conceptual Study). This is because IGR performs better as a proxy for actionability across a number of performance measures: Pearson correlation, Spearman (rank) correlation and the “percentage cumulative actionability capture” graphs.

Hevner et al. note that “[a] design artefact is complete and effective when it satisfies the requirements and constraints of the problem it was meant to solve” (Hevner et al. 2004, p352). In this context that means that a suitably-trained analyst can apply the framework to an IQ problem of interest and efficiently produce a valuation of possible interventions in a language readily understood by business: Net Present Value, Return on Investment or related investment-linked measures.

7.4.4         Research Contributions

Effective design-science research must provide clear and verifiable contributions in the areas of the design artefact, design foundations, and/or design methodologies. (Hevner et al. 2004, p347)

The principal research contribution is the framework, encompassing the constructs, models and method. Hevner et al. specify that “[t]he artefact must enable the solution of heretofore unsolved problems. It may extend the knowledge base … or apply existing knowledge in new and innovative ways”. They suggest looking for novelty, generality and significance across the artefact itself (research product), “foundations” (theory) and methodology (research process).

Firstly, this framework enables the valuation of IQ interventions – an important, widespread, persistent and unsolved problem in the business domain. While this has direct applicability to industry, it may also prove useful to research, in that it provides a generic, theoretically-sound basis for understanding and measuring the antecedents and consequents of IQ problems in organisations:

Finally, the creative development and use of evaluation methods (e.g., experimental, analytical, observational, testing, and descriptive) and new evaluation metrics provide design-science research contributions. Measures and evaluation metrics in particular are crucial components of design-science research. (Hevner et al. 2004, p. 347)

The formal entropy-based measurement of attribute influence (and actionability) within a decision process is of particular significance here. Further, it extends the knowledge base by introducing a practical use for Shannon’s Information Theory (Shannon and Weaver 1949) into IS research and practice. Information Theory is not widely used as a “kernel theory” (or reference discipline) in Information Systems so this constitutes a novel adaptation of a large and well-developed body of knowledge to a class of IS problems.

Lastly, from a methodological perspective, the use of Critical Realism as a philosophy to underpin hybrid (qualitative and quantitative) research focused on designing an abstract framework is a contribution to the academic knowledge base. Using CR concepts like “generative mechanisms” to articulate and explain both the simulations and contextual interviews meant that a unified approach could be taken to analysing quantitative and qualitative data. A demonstration of how CR is not anathema to a carefully executed study incorporating detailed mathematical and statistical analyses is also a contribution to knowledge.

7.4.5         Research Rigour

Design-science research relies upon the application of rigorous methods in both the construction and evaluation of the design artefact. (Hevner et al. 2004, p. 347)

The rigour of the research relates to the research process and how it impacts upon the resulting claims to knowledge. In the empirical work, the interviews (Chapter 4) determined the nature of the problem, the existing “state of the art” and the requirements and constraints for any proposed solution, following prescribed qualitative sampling, data capture and analysis methods. Chapter 6 (Simulations) employed careful experimental procedures to reduce the possibility of unintended mechanisms interfering with the invocation of the generative mechanisms under study. It also used rigorous mathematical models and tested assumptions and approximations, complying with the expected conventions in doing so. In both cases, sufficient detail is provided to allow subsequent researchers to reconstruct (and so verify through replication) the findings, or to make an assessment as to their scope, applicability or validity.

In the conceptual work, rigour is applied in the design of the overall study through the selection of Critical Realism as a unifying philosophical “lens” and the adoption of Design Science to describe and evaluate the study. The literature review (Chapter 3) and conceptual study (Chapter 5) draw upon existing knowledge bases and synthesise a framework from them. This chapter (Evaluation) involves reflecting on the research as a whole, how its constituent parts fit together and how it fits within a wider knowledge base.

Throughout the research project, the articulated ethical values were upheld, adding to the rigour of the process. These values include, for example, fair and honest dealings with research participants and stakeholders and academic integrity in acknowledging other people’s contributions and reporting adverse findings.

7.4.6         Design as a Search Process

The search for an effective artefact requires utilizing available means to reach desired ends while satisfying laws in the problem environment. (Hevner et al. 2004, p. 347)

The framework can be conceived as the result of an extensive search process. The conceptual study (Chapter 5) surveys a wide range of candidate concepts from engineering, economics and philosophy (introduced in the Literature Review, Chapter 3) and synthesises a suitable sub-set of them into a broad outline, the conceptual framework. This is further refined iteratively through the Simulations (Chapter 6) to produce the constructs, models and method that comprise the framework. Doing so requires the judicious selection, use and testing of mathematical assumptions, approximations, techniques and other formalisms focused on the “desired ends” (in this case, a transparent value model of IQ costs and benefits).

Throughout, the constraints of the problem domain (organisational context) are kept in mind. This includes access to commercial information, measurability of operational systems, mathematical understanding of stakeholders and the expectations and norms around organisational decision-making (ie business case formulation). For example, the contextual interviews (Chapter 4) show clearly that decision-makers in organisations expect to see value expressed as Net Present Value or Return on Investment (or related discounted cash flow models).

Seen in this light, the framework is a “satisficing” solution (Simon 1996) to the problem of valuing investments in customer IQ interventions. In particular, the use of the information gain ratio as a proxy for actionability to yield quicker, cheaper (though less accurate) value models constitutes a pragmatic approach to the problem.

7.4.7         Communication as Research

Design-science research must be presented effectively both to technology-oriented as well as management-oriented audiences. (Hevner et al. 2004, p. 347)

Chapter 2 (Research Method and Design) explains why, at this stage of the research, it is not feasible to present the framework to practitioners (of either orientation) to assess the effectiveness of the presentation, how readily it could be applied or their inclination to deploy it. In short, the time available to present the complex material means rich, meaningful discussion is unlikely.

So, as a piece of scholarly research, the framework is presented to an academic audience for the purpose of adding to the knowledge base. (See “Research Contributions” above.) There is sufficient detail in the presentation of the constructs, models and method to allow researchers to reproduce the mathematical analyses and derivations and computer simulations, in order to verify the reported results and extend or refine further the artefact.

It is unlikely (and not the intent) that “technology-oriented” audiences would understand, evaluate or apply the framework as it is not directly concerned with databases, programming, networking or servers. Rather, the technical skills required (or “technologies”, in a very broad sense) are in business analysis: modelling customers, processes and data, measuring performance, preparing cost/benefit analyses and working with allied professionals in marketing, operations and finance. The framework presented above is amenable to assessment for deployment within a specific organisation, providing the analyst has a strong mathematical background – for example, from operations research, statistics or data mining.

“Management-oriented” audiences are the intended beneficiaries of this framework. For them, the framework can be treated as a “black box”: proposals, assumptions and organisational measurements go in and investment metrics (NPV, ROI) come out. They may appreciate the constructs at a high-level, but the detail underpinning the statistical and financial models is not required to use the value models produced by the framework. These goals and constraints emerged during the contextual interviews (Chapter 4) with managers and executives, and so were guiding requirements for the development and refinement of the framework.

This section has assessed the research process and product against the seven DS guidelines advanced by Hevner et al. The research is found to meet the criteria for DS as it has the requisite features or elements:

·         produced an artefact (the framework with its constructs, models and method),

·         that tackles an important, widespread and persistent problem (the valuation of IQ interventions),

·         through an iterative development/refinement cycle (conceptual study and simulations),

·         with a rigorous evaluation of the artefact (mathematical analysis and simulations),

·         which draws upon and adds to the knowledge base (Information Theory),

·         resulting in a purposeful, innovative and generic solution to the problem at hand.



[22] This process of identifying and prioritising IQ interventions is akin to triage in a medical context. Interestingly, “triage” comes from the French verb “trier”, which can be translated as “to sift”.
