As a sector, we’ve gone through much hand-wringing and many fads trying to work out how to demonstrate the impact we’re having. I’ve been asked to comment on the five very different approaches to evaluation outlined in this series of articles. My own belief is that every evaluation approach should incorporate the views of stakeholders – especially the intended beneficiaries of the service or product – so the degree to which they do so is something I will highlight in my comments.
Why is it important to get beneficiary feedback? First, our sector aims to ‘empower’ the people or causes we serve. Yet when was the last time a non-profit you know asked its clients whether it was doing a good job, or conducted a ‘customer satisfaction survey’? Feedback like this can help non-profits to improve and hold them more accountable. Second, ‘word on the street’ information gives you a different perspective. Think about how you make decisions in your personal life. When was the last time you bought a book on Amazon without reading reviews by other book buyers?
Lastly, and perhaps most importantly, stories told by people who have felt the impact of a non-profit first hand are immediate and will appeal to potential donors and volunteers. After Hurricane Katrina, Stanford Social Innovation Review (of which I was then publisher) sent a reporter to Biloxi, Mississippi, who simply asked people, ‘Who’s been here giving out blankets? Who’s been here to help you find a shelter?’ One guy told him how he had broken his leg and had been living in his car until volunteers from a local non-profit found him and took him to the doctor. Would I be more likely to give to or volunteer for that non-profit after hearing this story? Absolutely. Stories and experiences are not the whole picture, but they are important and can, in most cases, be easily added to existing evaluation methods.
CEP: Grantee Perception Reports
The Center for Effective Philanthropy’s core product, the Grantee Perception Report, is the gold standard for using personal experiences: it gathers them, aggregates them and compares the results to provide a methodologically solid system for rating foundations. It has injected a new sense of accountability into foundations. The anonymous, aggregated survey method allows grantees to give honest feedback without fear of offending a funder, while the benchmarking allows a foundation to be compared with its peers on the same dimensions.
The Grantee Perception Reports are widely respected, and many foundations have changed their grantmaking processes as a result of them. I’m eager to see CEP extend its methods to other stakeholders in philanthropy – especially beneficiary perception reports.
SVT: Measuring Social Return on Investment
When people hear the term SROI, they tend to think of the original model, which tried to translate all social impact into dollar figures. While SROI as used by SVT still focuses on the value created per investment amount, it incorporates other elements as well. For instance, in one project working with 600 communities with little phone or internet access, SVT attended community meetings and had volunteers distribute surveys. The surveys then helped inform the final SROI analysis.
Recognizing the impossibility of using solely quantitative measures to assess outcomes whose value is essentially subjective – the value, for instance, of a homeless person having a place to go for a free breakfast and to meet friends – SVT incorporates qualitative, narrative elements when describing these difficult-to-quantify benefits.
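For readers unfamiliar with the mechanics, the heart of any SROI analysis is a simple ratio; the figures below are purely hypothetical and are meant only to illustrate the arithmetic, not to represent SVT’s actual model:

\[ \text{SROI ratio} \;=\; \frac{\text{present value of the social outcomes created}}{\text{value of the inputs invested}} \]

So if a programme that cost $100,000 were judged – using financial proxies informed, in SVT’s approach, by survey and narrative evidence – to have created $350,000 of social value, its SROI ratio would be 3.5:1.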
Acumen Fund: Measuring market-based approaches
Acumen Fund’s evaluation approach is consistent with its goal of building financially sustainable and scalable organizations that serve poor people. Acumen is upfront about its assumption that the market will provide feedback on whether a service or product is useful. The organization’s criterion for evaluating an investment is whether it is the best market-driven and sustainable method of addressing the social problem.
The famous bednets project, for instance, is cash-flow positive and has been able to access private capital, including local bank financing and investment from Sumitomo. Acumen also measures cost-effectiveness, looking, for instance, at how many bednets its investment can deliver, dollar for dollar, compared with UNICEF’s programme. It doesn’t worry about whether people hang the nets properly; the assumption is that if the bednets don’t work, people won’t buy them.
I worry about this assumption because there are many cases where the market doesn’t provide perfect feedback, especially in developing countries where competitive products and full consumer information may be lacking. If the market doesn’t provide the necessary feedback, other evaluation methods would be valuable – asking bednet users, for example, whether they find the product useful or how it could be improved. Customer satisfaction is a leading predictor of company performance, while sales and revenue are often lagging indicators. Sales and other financial data can be collected alongside customer satisfaction measures to paint a fuller picture.
Ricardo Wilson-Grau: Participatory evaluation of social change organizations
Of the five articles, this is the only one that explicitly focuses on trying to measure systemic change. The Global Water Alliance’s (GWA) awareness that non-profits can’t do it alone is evident in the way it defines big-picture impact: it looks for changes in ‘other social actors that together represent a pattern of change’.
Ricardo Wilson-Grau, GWA’s evaluation consultant, understands the challenges of measurement, particularly in the real world where many variables lie outside GWA’s control. If GWA’s success lies in its ability to influence a variety of stakeholders – government, business, non-profits – how would one measure this? One can infer that the Government of Punjab is taking action and providing money owing to a change in attitude brought about by GWA. But what about other stakeholders? What constitutes a ‘pattern of change’ and how long would it take to see such a pattern? The impact of the environmental movement is only now truly visible, decades after the first Earth Day march.
I am a big believer in the need for social change at a systemic level, but I think Wilson-Grau could supplement his evaluation method with scales to measure improvements along the way. He already involves GWA’s key staff and partners in designing the evaluation and analysing the data. I hope he takes the next step and goes directly to the people living in these communities, asking whether they have seen a change in the availability of water in their community and how water could be better conserved. This sort of feedback could enable ongoing improvements in programmes.
Sustainable Food Lab: Using feedback in formative evaluation
Sustainable Food Lab is very interested in feedback – much of it collected informally. Each project has its own evaluation feedback, and many of them – primarily the value chain projects – are in fact anchored in participatory assessment of the value chain and participatory co-design of interventions.
The Lab as a whole collects feedback from members and beneficiaries, primarily through interviews with those who are most involved and with some of their colleagues who are not. As a membership association, it also surveys participants after meetings and workshops.
The feedback is mostly used informally and shared with the Lab’s steering committee, which represents all the key stakeholders. It is used in iterative strategic planning, a relatively informal process of dialogue among those present at each major meeting. It would be useful for the Food Lab to adopt more rigorous and more structured ways of collecting stakeholder feedback; this would enable it to set benchmarks and make comparisons between programmes.
Why does evaluation matter?
It’s vitally important that we as a sector find ways to set benchmarks, measure ourselves and hold ourselves accountable. Let me illustrate this from my own experience. When my family immigrated, we had about $100, and if you look at photos of me when I was a kid, practically everything I wore came second hand from non-profits. My cavities were filled for free at a non-profit community dental clinic. I think of these things when people wonder if measuring the results of a non-profit is just an academic exercise.
The importance of collecting qualitative, narrative feedback has driven me to launch GreatNonprofits (http://www.greatnonprofits.org), a new non-profit technology provider and website (think of it as a ‘Zagat guide’ to non-profits) that helps non-profits collect feedback easily. We have many challenges to work out. If qualitative feedback is to be useful, for instance, it needs to be rigorous and sufficiently structured, and there needs to be enough of it to provide a representative sample. Yet the possibility of finding practical and compelling ways for non-profits to demonstrate their social impact, and for donors and volunteers to easily find the non-profits doing the best work, is real and closer than ever before.
Perla Ni is founder and CEO of GreatNonprofits. Email perlani@greatnonprofits.org
Disclosure: Phil Buchanan of CEP is on GreatNonprofits’ advisory board.