Washington Area Women's Foundation

Evaluation: Beyond accountability to change.

While the nonprofit and funding communities have intensely discussed reporting on inputs and outputs for years, the evaluation community has been squabbling over outcomes, objectives, goals, results and impact.

Despite the confusing jargon and long hours spent poring over data and writing the same report 10 different ways, funders and nonprofits agree that what they’re trying to do is answer a few simple questions:
1)  Is “x” better as a result of “y” activity(ies)?
2)  If “x” is better as a result of “y” activity(ies), what was learned in the process?
3)  How do we continue to learn and improve so that resources support what works?

That is evaluation.

Most nonprofit organizations are already doing some type of reporting on their outputs: the tangible products that result from a program’s activities, such as the number of workshops held, the number of people who attended a training seminar, or the number of pamphlets distributed.  Reporting on these tangibles helps measure performance and accountability, demonstrating that activities happened and monies were spent as planned.

But nonprofits and funders are not just in the business of accountability.

We want to create change, and to demonstrate that inputs and outputs, the lingo most often used to describe our work, can logically link to that change.

So how do we do this?

I was pleased to learn that Washington Area Women’s Foundation’s Stepping Stones is, for the most part, getting evaluation right, according to a recent Stanford Social Innovation Review article, “Drowning in Data.”

Sharing the Burden 
As a public foundation, we definitely share the burden of evaluation with our Grantee Partners and, as the article suggests, we evaluate our own work by using Stepping Stones evaluation findings in our own decision making.

Further, we are building cohorts of Grantee Partners to evaluate the overall effectiveness of Stepping Stones rather than each individual program. 

Yet we are still requiring each program to conduct its own evaluation.

To do this, we are partnering with Innovation Network, a leader in the field of participatory evaluation, to evaluate Stepping Stones overall and to provide evaluation training and technical assistance that builds the capacity of Stepping Stones Grantee Partners.

We’re not just doing this to lessen the blow, share in the agony, or enforce “our evaluation requirements” onto our grantee organizations.

Rather, by providing resources and capacity, our hope is that Grantee Partners are learning practical ways to incorporate evaluation into their structure because, let’s face it, evaluation isn’t going away anytime soon.

Standardizing the Standards
Unfortunately, it is hard to know whether nonprofit organizations are actually building institutional evaluation capacity and structure when they must be responsive to a variety of funders.  As both a funder and a nonprofit organization, we know first-hand what it is like to have to be responsive, but we also have an opportunity to influence the field when it comes to discussing the challenges of evaluation.

There’s great hope in the “Outcome Indicators Project,” recently completed by The Center for What Works and Urban Institute, which identifies common outcomes and indicators for nonprofit organizations providing direct services in fields such as employment training, adult education and literacy, and community organizing, among others.

This may not eliminate the intense staff time and resources necessary to collect data and report, but it definitely helps address the heavy burden of developing logic models and data collection methods on the front end of evaluation.

It also allows funders and nonprofit organizations to move closer to standardized evaluation and to incorporate evaluation activities into their organizational structures and program operations.

  • Lisa Kays

    C, I couldn’t agree more…one of my favorite terms in a past life working on African development issues was the concept of “mutual accountability,” and of doing evaluations from the bottom up, as well as top down–which I think we also apply here at The Women’s Foundation since we constantly evaluate ourselves, and not just our partners.

    And invite our partners to evaluate us, which is crucial (and scary). 😀

    I also remember the many, many issues that arose from our NGO partners in Africa who talked about how much time and waste was generated when they had to constantly be altering their benchmarks, reporting styles, etc. for various funders–not to mention how much information was lost.

    Moving towards a landscape where common indicators and reports are the norm would do so much not only to save time and energy, but also move everyone towards really defining and then demonstrating whether social change is taking place or not across a broad swath of cities, communities, etc…and not just across projects.

    I do wonder if we’ll ever get there, and how. It’s a scary call on many levels and I sometimes wonder if debating the intricacies is a safe way to really keep ourselves from owning up and looking at whether our work is achieving the results we want.

    It’s a scary question, with lots of scary implications if the answer isn’t what we want it to be.
