Benchmarks are Bullsh*t
Benchmarks are bullshit.
There — I’ve said it.
The benchmarks that we all live and die by — the well-annotated, appendiced-to-death, 97-page Annual Findings that determine whether you are successful — are crap.
Why? Because they don’t represent anyone’s reality. Instead, they embody the reality of a perfectly curated, exactly-representing-the-market slice of organizations. And because each benchmark is calculated from those organizations’ data compiled together, practically every outlier is removed.
However, if your experience is anything like what I’ve seen in every nonprofit developing its KPIs and setting expectations, outliers are reality.
Has there ever been a time in your organization when your fundraising performance wasn’t affected — for good or for bad — by:
- A change in key development staff?
- A new channel being integrated for the first time?
- A fantastic but unexpected messaging opportunity that motivated donors more than you could have imagined?
- Data security restrictions?
- The biggest nightmare of all — a database and website migration, at the same time?
My guess is no.
If you’re like me, you find yourself reading industry benchmarks the moment they are published, furiously combing through to find one KPI, one response rate, one cost-per-dollar-raised ratio, one website conversion metric that you can compare to your own performance, which will either:
- A) Prove that your fundraising is great! (“Board Members, please give us more funding so we can continue doing magnificent things!”)
- Or B) Prove that your fundraising is terrible. (“Board Members, please give us more funding so we can stop the bleeding, revamp our fundraising strategy, and add 13 additional digital channels for a true 360-degree donor experience.”)
Can I suggest something different?
Not to discount the value that can be found in time-honored, well-respected benchmark studies, but as an industry, we need to be realistic about how we use them.
Are they useful for painting an over-arching view of the nonprofit space and how nonprofit organizations are performing compared to yesterday? Absolutely.
Are they effective for analyzing donor behavior and how that behavior differs between sectors, countries, and channels? Of course.
Should we cut and paste the KPIs from said benchmark reports and establish them as our goals for the next fiscal year? That’s just silly.
So, let’s take a different approach. Let’s take benchmark reports down from their holy pedestal, and instead focus on realistic growth from OUR OWN organization’s prior KPIs, week over week, month over month, and year over year.
Let’s take a look at our own budget — and not the budget of the five largest charities who have been in existence since 1756 — and determine what level of growth is attainable, given the monetary constraints and the personnel skills that exist within our current teams.
Let’s stop waiting for an annual report to be published — one that typically represents data that is already 18 months old by the time we read it — to determine if we’re falling behind, or ahead of, the curve.
Because at the end of the day, it’s about YOUR mission. YOUR donors. The impact of YOUR fundraising program.
So, spend 15 minutes each morning reviewing how your campaigns are doing. Are they on track? Spend an hour each week meeting with your team to check in on performance. What can you do more of? What should you do less of? Do this each month, each quarter, and each year.
Create your own benchmarks and hold yourself accountable. Because measuring your real, bled-for results against those well-intentioned but unrealistic ideals is like pasting the cover of Sports Illustrated’s swimsuit edition or People magazine’s “Sexiest Man Alive” to your bathroom mirror — you’re going out of your way to torture yourself, in a way that is counterproductive to actually achieving results.