Competencies can be defined as “clusters of related knowledge, skills, abilities, and other requirements necessary for successful job performance.” This definition, taken from the United Nations Evaluation Group, suggests a concept of competency defined at the individual level. One challenge as we look ahead is to also explore how organizational and system-level supports can help enhance evaluation “job performance.”
The Development Monitoring and Evaluation Office (DMEO) of NITI Aayog in India organized a panel titled “Systemic Essentials for Strengthening Government Evaluation Capacities” in June 2021. In this blog I reflect on some questions about evaluation competencies that can help sharpen our focus.
Learning from other discussions on evaluation competencies
A good starting point is to explore some examples of evaluation competencies. The United Nations Evaluation Group, for example, with its 2016 focus on achieving the Sustainable Development Goals, gender equality, and human rights, listed the following five domains of evaluation competencies:
- Professional Foundation – ethics and integrity, evaluation norms and standards
- Technical Evaluation Skills – evaluation approaches, methods, and data analysis
- Management Skills – work planning, coordination, and supervision
- Interpersonal Skills – communication, facilitation, and negotiation
- Promoting a Culture of Learning for Evaluation – a focus on utilization and on integrating evaluation into policy and programs
The American Evaluation Association’s 2018 competencies focus on the following five domains:
- Professional Practice
- Methodology
- Context
- Planning and Management
- Interpersonal
Any policy maker or evaluator interested in which evaluation capacities matter is also advised to explore other competency frameworks, including the Aotearoa New Zealand Evaluation Association’s competency document from a decade ago (2011). One focus there is on the role of cultural values in shaping competencies: “… our intention is to ensure that cultural competency is not treated like a peripheral or marginalised aspect, rather a central component of the development of our framework of evaluation competencies and practice standards.” They go on to add: “Cultural competency is central to the validity of evaluative conclusions as well as the appropriateness of the evaluation practice.”
Similarly, the 2001 article by Jean King and co-authors, “Toward a Taxonomy of Essential Evaluator Competencies,” is a must-read for anyone interested in the challenge of defining evaluation competencies. Jean is a friend, and we continue to collaborate and communicate. I know the question that continues to engage her: How can systems and organizations sustain evaluation capacities over time? The big challenge as I see it is not about competencies at the individual level, but what it would take to ensure that the capacities of organizations and the broader ecosystem are sustained over time.
Questions to precipitate action
All of the above serves as a backdrop to what I think are the essential questions that the field of evaluation and organizations like the DMEO will need to address:
The Opportunity
1) Focusing on Systems and Organizations That Sustain Capacities: What programs of work can help build evaluation capacities in the broader ecosystem? How can such work build capacities at the organizational level so that they are sustained even when key individuals leave? A good starting point is to recognize that we do not yet have good answers to these questions.
2) Embracing the Sustainable Development Goals (SDGs): The SDGs provide a remarkable opportunity for the entire field of evaluation to prove its salience and utility. What evaluation skills can help practitioners promote an understanding of system dynamics, coherence across interventions, sustainable impacts, and “no one left behind,” and thereby enhance evaluation’s contribution to achieving the SDGs? The relevance of each of these concepts was discussed in a remarkable set of webinars hosted by the Evaluation Centre for Complex Health Interventions in Toronto.
3) Competencies for Evaluators in the Public Sector: The enormous opportunity that the dialogue led by DMEO presents is to raise questions about the specific competencies needed by evaluators working within governments. How can a program of work on building evaluation capacities help an evaluator working within the Central or a State government do their job better?
Some Contemporary Challenges
4) Focus on Diversity, Equity, and Inclusion: Much of our dialogue around evaluation capacities and capabilities is occurring at a time when we face deep discussions and divides around inequities, hierarchies, and privilege. How can evaluators create greater voice and inclusiveness in understanding the impacts of programs and policies? I do think that as a field we need to understand more clearly evaluation’s role in addressing inequities and promoting inclusion.
5) Understanding the Architecture of Complex Programs and Policies: My view is that most evaluators have a poor grasp of how to represent the complexities of social programs and policies; our tools for understanding these architectures are quite limited. How do we promote understanding of the theories of change of complex interventions? How do we build competencies in understanding the program mechanisms and processes that make a difference to impacts? How do we more clearly represent and understand context and its vital role in the success of policies and programs?
6) Enhancing Interpersonal Skills: I find that evaluation as a profession, in both the North and the South, remains focused primarily on the technical. I am not sure we have worked out how to build interpersonal skills. How does one build a program of work that enhances the interpersonal skills of evaluators?
7) Towards Complexity-Informed M&E: Models of Continuous Improvement – While much of our preoccupation has been with ‘what works’ and ‘what doesn’t work,’ I am not convinced that we have paid as much attention to how systems and organizations improve in an ongoing manner. Most interventions and organizations are ‘complex systems thrust amidst complex systems.’ What programs of evaluation capacity building can help inform more complexity-informed monitoring and evaluation?
8) Skill Sets to Synthesize Evidence and Build an Ecology of Evidence: Another ability I find missing is the capacity to synthesize very disparate sources of information so as to create a policy environment in which an ecology of evidence informs decisions. Despite all of our focus on mixed-methods approaches, I am not convinced that we have trained individuals to synthesize evidence in ways that build such an ecology.
Looking ahead
I do want to end by reiterating that this is an exciting time to raise questions about evaluation competencies. With the focus on the SDGs and recent debates on which evaluation criteria matter, this rich dialogue around competencies can lead towards a more vibrant field of evaluation, both in India and globally. It is perhaps fitting to close by remembering one of the founders of the field of modern-day evaluation, Donald Campbell, whose vision of evaluation was that of a “mutually monitoring, disputatious community.” It is important to recognize that our work can lead towards the evolution of knowledge and systems and, most importantly, improved lives and a more sustainable world. There will be differences and disputes as we argue about different views of progress. How do we promote an ecosystem of evaluators who have both the confidence and the grace to dispute diverse views of what constitutes progress?