Quality . . .
How Can I Manage My Suppliers and Delight My Users?
Before deciding that failure to use your firm's expensive new reporting environment is a disciplinary matter, it's worth remembering that users may have good reasons for avoiding the reporting system. After all, the highly discretionary nature of report use means that their perceptions of quality matter! So, let's look at two different views on what quality means.
The first definition is Conformance to Specifications, which takes the view that reports and the reporting environment are specified (with tolerances). The actual reports are compared against the specs, and any differences are deemed deficiencies. Assessment tends towards objective measures such as defect rates and uptime. This view seems to fit well in work cultures with a strong technical focus, such as those found amongst people with backgrounds in engineering or accountancy.
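To make the conformance view concrete, the objective measures mentioned above (defect rates and uptime) can be computed mechanically once the specs define what counts as a defect or an outage. The function names, counts, and hours below are illustrative assumptions, not taken from any real reporting environment:

```python
# Hypothetical conformance-to-spec metrics for a reporting environment.
# The figures used in the example are invented for illustration.

def defect_rate(reports_delivered: int, reports_with_defects: int) -> float:
    """Fraction of delivered reports that deviated from their specification."""
    return reports_with_defects / reports_delivered

def uptime_pct(total_hours: float, downtime_hours: float) -> float:
    """Percentage of the period during which the reporting system was available."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

# Example: 3 defective reports out of 250 delivered;
# 2 hours of downtime in a 720-hour month.
print(round(defect_rate(250, 3), 3))  # 0.012
print(round(uptime_pct(720, 2), 2))   # 99.72
```

The point is not the arithmetic but the objectivity: an independent third party can verify these numbers, which is exactly why they suit contracts and supplier agreements.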
The second definition is Meeting (or Exceeding) Customer Expectations. Here, the referent (i.e. the yardstick for comparison) is not an ideal report, but what the actual information consumers are expecting before they access their reports. Under this view, quality problems aren't so much defects as disappointments. Here, quality is better assessed through focus groups and interviews, and perhaps subjective measures such as surveys using Likert scales. As a crude generalisation, this view is prevalent in work cultures where people have a background in marketing, hospitality, education and other "people-oriented" disciplines.
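Likert-scale surveys of the kind mentioned above yield ordinal data that is typically summarised with simple statistics. A minimal sketch, with invented responses on an assumed 5-point satisfaction scale:

```python
# Hypothetical summary of a 5-point Likert survey on report satisfaction.
# The responses are invented for illustration.
from statistics import mean, median

# 1 = very dissatisfied ... 5 = very satisfied
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print(mean(responses))    # 3.9
print(median(responses))  # 4.0
# Share of users answering "satisfied" or better (4 or 5),
# a common "top-two-box" measure in survey analysis
print(sum(r >= 4 for r in responses) / len(responses))  # 0.7
```

Unlike defect rates, these numbers capture perceptions rather than deviations from spec, which is why they resonate with end-users but are harder to write into supplier contracts.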
As a practical note, it is easier to strike agreements with data suppliers and tool vendors using the first perspective (objective measures of deficiency). This is because the contracts and other agreements require objectivity. (They may not measure the right thing, but an independent third party can agree on the measures!) On the other hand, when assessing the usability or otherwise of reporting systems and features, the second view (subjective perceptions of disappointments) is far more likely to strike a chord with end-users and the developers actually building the interfaces.
Product or Service?
Now, the other perspective to consider is: How do report users understand the reporting function? Do they see it as a product or a service? This will frame their quality assessment, and hence their propensity to use the reports. Typically, users in larger organisations will see it as reporting - that is, a service. Consequently, their expectations are different: if they forget their password, or dispute the source data, or don't understand a trend line, they want to be able to speak to a person to resolve their query. Users in smaller organisations are more inclined to see the reporting function as a factory for producing a product: a set of reports. Consequently, they take responsibility for getting value out of the information. The difference is like that between the phone book (a product) and directory lookup (a service).