While the nonprofit and funding communities have intensely discussed reporting on inputs and outputs for years, the evaluation community has been squabbling over outcomes, objectives, goals, results and impact.
Despite the confusing jargon and the long hours spent poring over data and writing the same report 10 different ways, funders and nonprofits agree that what they’re trying to do is answer a few simple questions:
1) Is “x” better as a result of “y” activity(ies)?
2) If “x” is better as a result of “y” activity(ies), what was learned in the process?
3) How do we continue to learn and improve so that resources support what works?
That is evaluation.
Most nonprofit organizations are already doing some type of reporting on their outputs: the tangible products that result from a program’s activities, such as the number of workshops held, the number of people who attended a training seminar, or the number of pamphlets distributed. Reporting on these tangibles helps measure performance and accountability, demonstrating that activities happened and that funds were spent as planned.
But nonprofits and funders are not just in the business of accountability.
We want to create change and to demonstrate that inputs and outputs can logically link to that change, which is most often described in the lingo of outcomes, results, and impact.
So how do we do this?
I was pleased to learn that Washington Area Women’s Foundation’s Stepping Stones is, for the most part, getting evaluation right, according to a recent Stanford Social Innovation Review article, “Drowning in Data.”
Sharing the Burden
As a public foundation, we definitely share the burden of evaluation with our Grantee Partners and, as the article implies, we are evaluating our own work through Stepping Stones and using those evaluation findings in our own decision making.
Further, we are building cohorts of Grantee Partners to evaluate the overall effectiveness of Stepping Stones, rather than evaluating each individual program in isolation.
Each program, however, is still required to conduct its own evaluation.
To do this, we are partnering with Innovation Network, a leader in the field of participatory evaluation, to evaluate Stepping Stones as a whole and to provide evaluation training and technical assistance that builds the capacity of Stepping Stones Grantee Partners.
We’re not doing this just to lessen the blow, share in the agony, or impose “our evaluation requirements” on our grantee organizations.
Rather, by providing resources and building capacity, we hope that Grantee Partners will learn practical ways to incorporate evaluation into their organizational structures because, let’s face it, evaluation isn’t going away anytime soon.
Standardizing the Standards
Unfortunately, it is hard to know whether nonprofit organizations are actually building institutional evaluation capacity and structure when they must be responsive to a variety of funders. As both a funder and a nonprofit organization, we know first-hand what it is like to have to be responsive, but we also have an opportunity to influence the field when it comes to discussing the challenges of evaluation.
There’s great hope in the “Outcome Indicators Project,” recently completed by the Center for What Works and the Urban Institute, which identifies common outcomes and indicators for nonprofit organizations providing direct services in fields such as employment training, adult education and literacy, and community organizing, among others.
The project may not eliminate the intense staff time and resources required to collect data and report on it, but it definitely eases the heavy front-end burden of developing logic models and data collection methods.
It also moves funders and nonprofit organizations closer to more standardized evaluation and helps them incorporate evaluation activities into their organizational structures and program operations.