As captured in the December issue of Alliance, many foundations are adapting their evaluation practice in order to understand how to improve their work rather than to isolate the impact of their investments. The aim is increasingly to use evaluation outcomes to take action while programmes can still be adjusted and impact increased.
What the Alliance issue perhaps did not bring out strongly enough was the profound effect this shift has on design: both the evaluation design and the way results are communicated will be very different if the primary objective is to plumb the past for insights that guide future action rather than to isolate the impact of past grants.
In From Insight to Action: New directions in foundation evaluation, FSG Social Impact Advisors’ report on evaluation, which drew on discussions with over 100 foundation leaders and advisers, we found that evaluation is most effective when used to answer three questions addressing different stages of the grantmaking process: How can we better plan our work? How can we improve implementation? How can we track progress towards our goals? Together, these questions form a cycle of continual performance improvement in which data needs to be processed and interpreted at regular intervals to guide mid-course corrections and to design future programmes for greater impact.
Evaluation that assists planning defines outcomes and establishes baselines, drawing lessons from past grantmaking, summarizing existing research and assessing demand for proposed services. The evaluation process at this stage provides the hard data that foundation leaders need to make new investments.
Where the aim is to assist implementation, evaluation covers activities like bringing grantees together to share experience, providing technical assistance, monitoring changes in context and improving the foundation’s internal operations. The objective is to adjust funding and interventions to increase initiatives’ potential for impact.
Evaluation that measures progress uses publicly available data to track indicators, generates new data to monitor progress, and collects feedback from grantees and their beneficiaries to understand how best to re-orient a programme for success.
Many of these practices go beyond what is typically thought of as evaluation, yet all are referred to as ‘evaluation’ within the foundation field. Each can be used to better inform a foundation’s work. Each involves a pragmatic effort to gather knowledge in order to shape future behaviour. The challenge foundations currently face in evaluation is to understand the full range of choices available, the different purposes they serve, and the circumstances in which they are relevant.
Much of the confusion surrounding evaluation seems to result from the search for a single ‘right answer’, which does not exist. Furthermore, foundation boards, programme officers, evaluators and grantees typically bring different needs and expectations to the evaluation process. Developing a common understanding among all participants enables more effective practice: ideally, the approach should be developed jointly, or at least discussed extensively, to clarify and align expectations.
Marc Pfitzer
Managing Director, FSG Social Impact Advisors