A non-profit client was concerned because the number of applicants for its college scholarships had declined from 880 the previous year to 840. The client, a foundation, awarded only about 35 of these scholarships per year, so it hardly lacked for applicants. Still, they wondered whether the decline in applications meant that the money they had spent promoting the scholarships over the previous year had been wasted. They also wanted to know how to increase the number of people applying.
As we examined the situation, I pointed out that they were looking at the wrong metrics. Here’s why:
The ideal situation for this foundation—that is, the most efficient way for them to award their scholarships—would be for only 35 extremely qualified students to apply for the 35 awards. That, of course, is incredibly unlikely, perhaps even impossible. The worst situation would be to have a very large number of unqualified students apply for the scholarships, resulting in extra work for no benefit.
In other words, the total number of applicants is not a good measure of the foundation’s outreach efforts. It’s more sensible to look at the number of highly qualified applicants within that larger pool. If, for example, the number of strong applicants grew year over year from 45 to 55, then the foundation’s outreach was probably effective even if the total number of applicants went down. (I say “probably” because we don’t yet know whether the promotion actually caused the increase in strong applicants.)
But if the number of strong applicants went down from 45 to 40, it wouldn’t matter if the total number of applicants had doubled, or even tripled. That’s a sign that the outreach was either ineffective or counterproductive. Unfortunately, it’s all too common for organizations to focus on the “gross numbers” instead of asking whether those are the right metrics for measuring success.
So, before you either worry or celebrate, ask yourself if you’re measuring the right things.