Engage the broader architecture of evaluating practices
Need to consider alliances with other applied researchers who are engaged in activities of valuing, judging, and recommending.
Develop capacities to work in transdisciplinary/interdisciplinary teams: working with a variety of knowledge producers, and blurring boundaries between practices to build integrative knowledge that addresses complex problems.
Do collaborative knowledge work
Participatory approaches draw on the experiential and relational understanding of practitioners who are engaged with situations that they themselves are part of.
Learning to work in uncertain futures, deal with contingent scenarios, and address the wellbeing of underprivileged people in contexts marked by social and political change.
Expanded notions of collaborative knowledge work include things like: collective social learning, co-production, collaborative adaptive management, and triple loop learning.
Expand the repertoire of questions evaluators ask
Focus tends to be on questions of what works, effectiveness, efficiency and impact, and to some extent sustainability.
Need to ask questions like: What assumptions underlie our understandings of the problem we are addressing and our efforts to address it? Who gains and who loses from what we plan to do or have done? What should we do to address potential exclusion and marginalization of peoples in our activities?
Develop epistemic fluency
We work on real-world problems, and to do so we require different combinations of specialized and context-dependent knowledge – different ways of knowing.
Develop ethical and political fluency
Developing ethical fluency involves developing moral expertise and capacity for normative analysis. It is the competency to state and clarify moral questions, and provide justified answers to those questions.
Moral expertise involves conceptualizing and elaborating on the meaning of norms, values and ends that are at stake in a particular intervention.
Developing political fluency means that evaluators focus on the political dimensions of acting and learning, as well as learning to deal with policy discord and moral disharmony.
Current training in evaluation largely assumes that evaluators are dealing with values in a world of facts. We need to pay far more attention to the value dimension of things, not just the factual dimension.
Diverse competencies on our teams are extremely important – great for professional development, as we learn from each other on the job
Our response to the manifestation of discontinuities in our work and the competencies required:
Change approach mid-project – Developmental Evaluation vs. Implementation and Outcome Evaluation. Understanding different approaches was critical.
Change role – Facilitator and Problem-solving partner, sometimes Activist vs. Evaluator and Subcontractor. Process competencies become very important.
Change focus – What is the big picture? What else is happening vs. what is the project trying to achieve? Requires understanding of systems theory + complexity.
Get comfortable with uncertainty. Requires a resilient disposition comfortable with constant change and chaos.
Developing evaluation competencies is a systems thing!
What changed with capacity development?
VOPE activities moved online – Networking less effective, circles narrowed.
Academic programs with some contact time moved fully online – Peer learning and networking diminished.
Staff in organizations worked more offsite and more discontinuously – Mentoring and on-the-job training reduced.
Building teams tended to be with established relationships – Referrals to “weak ties” reduced.
Erosion of the settings for capacity building. This disruption is likely to continue for some time, and our evaluation systems will become more fragile. This has implications for equity and diversity.
Implications for training evaluators in 2021 and beyond?
Resilience and Adaptability, at individual as well as system level.
Limits of social suppression interventions exposed through complexity theory – 8 modes:
Disparate Command and Control Systems
Interaction and Emergence
Policy Discord and Moral Disharmony
Contextual Heterogeneity
Implementation Heterogeneity
Ambiguity in Relations and Guidelines
Temporal Change in Public Attitudes
Exit and Sustainability Effects
A Calamitous Conclusion?
No public policy has ever been subject to more effort, management, investment, and scrutiny than the social interventions to overcome COVID-19.
Yet only a halting, intermittent solution was provided.
Complexity dynamics and the oscillating impact of major policy interventions: Have we seen the merry-go-round before?
Is there light at the end of the tunnel?
For policymakers: Remember you are designing complex, adaptive, self-transformative systems. The key task is to try to anticipate the complexity dynamics.
For evaluators: Remember you are researching complex, adaptive, self-transformative systems. The key task is to try to trace the complexity dynamics.
The pandemic has given rise to different vulnerabilities and inequities (e.g., in India, the lockdown prompted a flood of reverse migration). It has also brought about many changes in the functioning of public systems and the services we assumed would always be available.
Any assumptions we make about the context may no longer be valid. An understanding of the change context becomes very important as well as factoring this into our evaluations.
M&E has been largely project-focused/project-driven. It has been less focused on unravelling and bringing to light new vulnerabilities or rising inequities. To do this requires more independence in framing the M&E agenda, enlarging the scope to not only look at program activities but also new issues that may have arisen.
COVID-19 has spurred us to use data for developmental purposes and internal learning. For this, acceptance of the data both within the organization and outside becomes very important. Building collaboration, and the skills required to build it, thus assumes much greater significance.
Real-time monitoring has assumed a much greater role. As evaluators, we need to proactively equip ourselves with new technologies. There is often resistance to accepting the data, so the first step in bringing about change is thinking about how to overcome this resistance. We need more collaboration, sharing, and transparency about our methods, and more rigour in our data validation.
COVID-19 has made us more conscious of the importance of humanitarian work. Our monitoring systems have also been nudged to respond to the demands of measuring humanitarian work and integrating it with the other kinds of development work we have been doing.
There is a need for leadership in evaluation, because any one organization may not have the capacities or the bandwidth to do this on its own.