The status quo in development evaluation is not good enough. While evaluation is relatively strong in North America and Europe, in other parts of the world, it is quite weak. If evaluation is to blossom in the South (Africa, Asia, Latin America), evaluation research cannot remain the preserve of northern-based institutions with northern values.
This simply reinforces the status quo and strengthens existing inequalities in evaluation practice. If donors want evaluation to be part of the development process and not simply for the purpose of donor accountability and learning, they need to support researchers in the South. It is critical that the citizens, development researchers and professionals of southern nations lead the way in building the field of evaluation research and practice in their regions. That is the real evaluation gap.
All evaluation approaches, from participatory to experimental, have their limitations. They all require assumptions at certain stages, and all are limited by the factors a study leaves out. We therefore work with imperfect knowledge and make the best decisions we can at the time. What we should be seeking, then, is not the perfect approach to evaluation, but approaches that respond to the specific question at hand, that can absorb change and new information, and whose findings can be triangulated with those of other studies.
The most challenging question concerns who does this work. I want to make the case that it is best done by locally based researchers and organizations who know the culture and context, live with the consequences, and have a responsibility to build the capacity of local institutions, whether governmental, corporate or non-governmental, to use research in decision-making.
As Levine and Savedoff note elsewhere in this issue (see p53), a paltry sum is spent on evaluation of development assistance. Like them, I am happy to see the increased discussion of evaluation and hope that it will lead to greater use of evaluation in development assistance decisions and a deeper understanding of rigour. But the real challenge and the real opportunity lie in helping developing country nationals to use evaluation in support of national development strategies and programmes. This article proposes four key issues that need to be considered in building evaluation that makes a difference in national development. I conclude with comments on moving forward.
What is evaluation for?
The first issue concerns use. Who is the evaluation for, and what purpose will it serve? If the primary purpose is to ensure donor accountability and learning, then the status quo – with some improvements in quality – is adequate. Development evaluation emerged from this agenda. If, as we hope, evaluation is meant to serve the development process, then it needs to address the needs of local decision-makers and governance systems as well. Simply tweaking the existing donor-oriented, donor-designed systems is not enough. The shift that is needed is not a technical one but a socio-organizational one.
What method will be used?
The second issue concerns the conduct of evaluation. Since Carol Weiss started writing about the use of evaluation in policy in the 1970s, most who have studied this issue have recognized the central importance of engagement with the users, both programme staff and decision-makers; understanding of the context; application of new constructs and perspectives; and follow-up after the study is complete. Trust plays a key role in this process, and trust is significantly enhanced through local engagement. The purpose of an evaluation should play a key role in determining the most appropriate method. Whatever the method, the conduct of the study should take account of local conditions and ensure that teams can respond to and engage with the programmes under review on an ongoing basis, including well after the study is complete.
The need for rigour
Proponents of active local engagement in evaluation frequently face the criticism that their approaches lack independence. The third issue to consider is that ‘independence’ is a useful myth – an ideal towards which we strive but which we never achieve. We try to create it in many ways, such as having evaluation groups report to a board rather than to management, or using an external evaluator, whether an individual or an organization. However, because individuals, boards and organizations all have values, beliefs and needs for survival, their independence is always limited. What is crucial to understand is that if the values and beliefs of one society are imposed on another through evaluation, the result is likely to be errors, resentment and misunderstanding. More important than independence is the need to embrace methodological and intellectual honesty – rigour. Rigour is not the purview of a single methodology; it is embedded in all evaluation methods and in the care taken in design, data-gathering and analysis.
Getting the timing right
The fourth issue is timing. Development has its own pace and rhythms, and these are seldom in line with project deadlines and work plans. This makes it essential to know more about how to engage with the processes affecting an intervention. As for who needs to know, those who can follow development closely over time are again the critical actors: the researchers, decision-makers and interested parties who have to live with the consequences of a policy or practice change on a daily basis.
The evaluation gap revisited
If funding organizations want to support the use of evidence in decision-making as part of the development process, it is crucial to build the field of evaluation in the South. To clarify: building a field is neither training nor technical transfer. It involves enabling nations, their citizens, researchers and evaluation professionals to build indigenous evaluation cultures and capabilities that contribute to improved decision-making.
Evaluation in development will remain relatively weak as long as it is driven primarily by donors and conducted through institutions based in and created by the North. For country-led evaluation to take hold, the field needs to be built so that more and better evaluators are working on the ground on issues of development.
There is no simple solution to this challenge, nor is it short term. It calls for those supporting development efforts to consider whether they see evaluation as a serious part of the development landscape or merely as a tool for donor accountability and learning. If we do not fill this gap, evaluation will continue to struggle with the contradiction that it is meant to support development, but in operation it supports the status quo and reinforces the gap between North and South.
Fred Carden is Director, Evaluation, International Development Research Centre and Research Fellow in Sustainability Science at Harvard’s Center for International Development. Email Fred_Carden@ksg.harvard.edu
The views expressed are those of the author and do not necessarily reflect the views of the centres with which he is affiliated.