In Part 1, I discussed the need to begin by understanding Critical Success Factors and Key Performance Questions when you’re setting up a metrics program. In this article, I’ll talk about iterative approaches to implementation.
Once you have figured out what performance questions you’re trying to answer, you can then determine which measures different stakeholders will use to answer those questions and how to present the numbers. Very quickly, you will also need to identify data sources that enable you to always present the best version of “the truth.” Automate wherever possible: avoid the need to compile or manipulate data before they can be used. Many of the data points you’ll need are already being generated or captured in systems such as your CTMS or IxRS. Wherever possible, use automated feeds (or exports) of data from source systems to avoid duplicate work.
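As a minimal sketch of what an automated feed can replace, the snippet below computes a per-site metric directly from a raw system export rather than from hand-compiled numbers. The column names and file layout are illustrative assumptions, not a real CTMS or IxRS schema.

```python
import csv
import io

# Hypothetical CTMS export of site-level enrollment (illustrative schema).
ctms_export = """site_id,subjects_screened,subjects_enrolled
S001,40,28
S002,25,10
S003,60,51
"""

def enrollment_rates(export_text):
    """Compute each site's screen-to-enrollment rate straight from the
    export, so nobody re-keys numbers into a spreadsheet by hand."""
    rates = {}
    for row in csv.DictReader(io.StringIO(export_text)):
        screened = int(row["subjects_screened"])
        enrolled = int(row["subjects_enrolled"])
        rates[row["site_id"]] = round(enrolled / screened, 2) if screened else 0.0
    return rates

print(enrollment_rates(ctms_export))
# {'S001': 0.7, 'S002': 0.4, 'S003': 0.85}
```

In practice the export would be a scheduled file drop or API pull rather than an inline string, but the principle is the same: the metric is recalculated from source data on every refresh, never maintained by hand.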
From the start, think of the development of your metrics dashboard or scorecard as an iterative process. You won’t get it right the first time and you will need to be flexible. Work with your stakeholder groups to figure out whether their respective performance questions are being answered adequately, and if they’re getting the information they need to assess their Critical Success Factors.
Because of the need for flexibility, it’s imperative to choose the right tools to create your metrics dashboard or read-outs. Larger companies may leverage data management or analytics groups to do this. Ad hoc query tools (like Spotfire and others) are flexible and can be set up and modified quickly. That’s key. Smaller organizations may need to rely on Excel. That could be fine, but it’s important to get the measures right before investing in extensive programming. You know things will change, so plan for change from the start.
My final thought on implementation is not to be afraid to take things away. It sounds like I am getting ahead of myself, I know. But quite often, a data or analytics group will happily produce a suite of standard reports to meet stakeholder needs, then add to those reports as time goes by. The problem is that new reports pile up alongside older ones, and we end up with a long list of standard reports that intimidates users and scares them off.
Understand what’s being used and what’s not. Take away the less effective reports so users are able to find the best ones. Unfortunately, understanding what’s most useful is not as simple as monitoring traffic (as with so much in life, wisdom doesn’t necessarily live in the majority). You will also need to identify your lead users and understand what they use, how they use it and why, then use that information to educate other users.
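The idea of looking past raw traffic can be sketched in a few lines: count accesses per report, but also compute a score that weights lead users more heavily. The access log, report names, and the weighting factor here are all hypothetical, purely to illustrate the two views side by side.

```python
from collections import Counter

# Hypothetical access log of (user, report) pairs; names are illustrative.
access_log = [
    ("ana", "enrollment_trend"), ("ana", "site_activation"),
    ("ben", "enrollment_trend"), ("ben", "enrollment_trend"),
    ("cam", "legacy_summary"),   ("ana", "enrollment_trend"),
]
lead_users = {"ana"}  # identified separately, e.g. through interviews

def report_scores(log, leads, lead_weight=3):
    """Return raw traffic per report alongside a score that counts
    lead-user accesses more heavily (traffic alone isn't wisdom)."""
    raw, weighted = Counter(), Counter()
    for user, report in log:
        raw[report] += 1
        weighted[report] += lead_weight if user in leads else 1
    return raw, weighted

raw, weighted = report_scores(access_log, lead_users)
print(raw)       # enrollment_trend leads on volume alone
print(weighted)  # lead-user usage separates the keepers from the rest
```

A report with low raw traffic but heavy lead-user use (like `site_activation` above) is a candidate for promotion and education; one that is low on both counts is a candidate for retirement.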