What are the top errors made by AR teams trying to develop measurements and integrate them into balanced scorecards? Carter and Dave have some suggestions, but while they certainly pick up on challenging aspects of measurement, I’m not sure they are all mistakes – let alone the most serious ones.
To pick up on just two of their ‘top five’:
- Personally, I rarely come across AR managers trying to shift metrics they can’t affect. Sometimes you see teams that aspire to measure things that are hard to measure (sales, for example), but those things end up not being measured at all.
- Just because data are granular, that doesn’t mean they are hard to collect. Surveys of analysts are much harder to run than tracking highly granular items such as reports, citations and so on. It’s much more common to find measurement methods that are hugely under-developed, and too simple to give useful guidance.
Much more serious problems are:
- Avoiding metrics altogether. Some firms, even large ones, say that they don’t need to measure their AR because the function has strong corporate support or because they know what the analysts think. Sometimes this just reflects the (understandable and real) fear of knowing, or of being held accountable for, performance. However, it exposes the AR team to substantial risk, especially if there is a turnover of management.
- Using measurements that are biased towards you. One large vendor, for example, tracks the profile of competitor brands in reports that mention its own brand name. By definition, the method means that every report mentions the vendor itself, so it is guaranteed to come first. A more common approach, especially with analyst perception audits, is to survey not a random sample of analysts but a hand-picked set of analysts you consider important. That means you miss analysts who could be equally or more influential on your clients, but of whom you are less aware (because, for example, you are not paying them).
- Collecting overwhelmingly domestic data. I see organisations in North America basing their measurements on samples that are 85% or 90% North American, when that reflects neither the influence of analysts, nor the universe of analysts they are targeting, nor their firm’s revenues. If you’re collecting share-of-voice data, for example, there’s no reason not to look at German-language research. You might not speak German, but you can still search to see how often you and your competitors are mentioned. If you see something interesting, get a German-speaking student in and put a pizza in front of the screen.
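The sampling point above is easy to operationalise: draw your audit panel at random from the full universe of analysts you track, rather than hand-picking the names you already know. Here is a minimal sketch; the analyst records and field names are invented for illustration.

```python
import random

# Hypothetical analyst universe -- names, firms and regions are illustrative only.
analysts = [
    {"name": "Analyst A", "firm": "Firm 1", "region": "NA"},
    {"name": "Analyst B", "firm": "Firm 2", "region": "EMEA"},
    {"name": "Analyst C", "firm": "Firm 1", "region": "APAC"},
    {"name": "Analyst D", "firm": "Firm 3", "region": "EMEA"},
    {"name": "Analyst E", "firm": "Firm 2", "region": "NA"},
    {"name": "Analyst F", "firm": "Firm 3", "region": "APAC"},
]

def sample_panel(universe, k, seed=None):
    """Draw a simple random sample of k analysts from the full universe,
    so panel selection doesn't favour the analysts you already deal with."""
    rng = random.Random(seed)  # seeded for a reproducible draw
    return rng.sample(universe, k)

panel = sample_panel(analysts, 3, seed=42)
```

A stratified draw (e.g. by region or firm) would be a natural refinement, but even a plain random sample removes the bias of only surveying analysts you pay.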
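The share-of-voice search described above really doesn’t require reading the language: counting mentions of brand names works on German (or any other) text just as well as on English. A minimal sketch, with invented brand names and document snippets:

```python
import re
from collections import Counter

def share_of_voice(documents, brands):
    """Count how often each brand name appears across a set of research
    documents and return each brand's share of total mentions. Proper
    nouns can be counted without understanding the surrounding language."""
    counts = Counter()
    for doc in documents:
        for brand in brands:
            # Word-boundary match so "Acme" doesn't also match "Acmeville".
            counts[brand] += len(re.findall(rf"\b{re.escape(brand)}\b", doc))
    total = sum(counts.values())
    return {b: (counts[b] / total if total else 0.0) for b in brands}

# Illustrative German-language snippets; the vendors are fictional.
docs = [
    "Acme bleibt Marktführer, während Globex aufholt.",
    "Globex und Initech erweitern ihre Cloud-Angebote; Acme reagiert.",
]
sov = share_of_voice(docs, ["Acme", "Globex", "Initech"])
# Acme and Globex each get 2 of 5 mentions; Initech gets 1.
```

A real exercise would pull the documents from a research portal and handle name variants, but the core counting step is this simple.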
What are your thoughts?