
Author: Sanjeev Sridharan

Rethinking Evaluation Competencies in 2021: Questions to Focus Our Efforts

Competencies can be defined as “clusters of related knowledge, skills, abilities, and other requirements necessary for successful job performance.” This definition, taken from the United Nations Evaluation Group, suggests a concept of competency defined at the individual level. One challenge as we look ahead is also to explore how organizational and system-level supports can help enhance evaluation “job performance.”

The Development Monitoring and Evaluation Office (DMEO) of NITI Aayog in India organized a panel titled “Systemic Essentials for Strengthening Government Evaluation Capacities” in June 2021. In this blog I reflect on some questions about evaluation competencies that can help sharpen our focus.

Learning from other discussions on evaluation competencies

A good starting point is to explore some examples of evaluation competencies. The UN Evaluation Group (2016), for example, with its focus on achieving the Sustainable Development Goals, gender equality, and human rights, listed the following five domains of evaluation competencies:

  • Professional Foundation – ethics and integrity, evaluation norms and standards
  • Technical Evaluation Skills – evaluation approaches, methods, and data analysis
  • Management Skills – work planning, coordination, and supervision
  • Interpersonal Skills – communication, facilitation, and negotiation
  • Promoting a Culture of Learning for Evaluation – a focus on utilization and the integration of evaluation into policy and programs

The American Evaluation Association's 2018 competencies focus on the following five domains:

  • Professional Practice
  • Methodology
  • Context
  • Planning and Management
  • Interpersonal

Any policy maker or evaluator interested in which evaluation capacities matter is also advised to explore other competency frameworks, including the Aotearoa New Zealand Evaluation Association's competency document from a decade ago (2011). One focus here is on the role of cultural values in shaping competencies: “… our intention is to ensure that cultural competency is not treated like a peripheral or marginalised aspect, rather a central component of the development of our framework of evaluation competencies and practice standards.” They go on to add: “Cultural competency is central to the validity of evaluative conclusions as well as the appropriateness of the evaluation practice.”

Similarly, the 2001 article by Jean King and co-authors, “Toward a Taxonomy of Essential Evaluation Competencies,” is a must-read for anyone interested in the challenge of defining evaluation competencies. Jean is a friend, and we continue to collaborate and communicate. I know what continues to engage her is this: How can systems and organizations sustain evaluation capacities over time? The big challenge as I see it is not competencies at the individual level, but what it would take to ensure that the capacities of organizations and the broader ecosystem are sustainable over time.

Questions to precipitate action

All of the above serves as a backdrop for what I think are the essential questions that the field of evaluation and organizations like the DMEO will need to address:

The Opportunity

1) Focusing on Sustainable Capacities in Systems and Organizations – What programs of work can help build evaluation capacities in the broader ecosystem? How can such work build capacities at the organizational level so that they are sustained even if key individuals leave? A good starting point is to recognize that we do not yet have good answers to these questions.

2) Embracing the Sustainable Development Goals (SDGs) – The SDGs provide a remarkable opportunity for the entire field of evaluation to prove its salience and utility. What kinds of evaluation skills can help practitioners promote an understanding of system dynamics, coherence across interventions, sustainable impacts, and “no one left behind,” and thereby enhance evaluation's contribution to achieving the SDGs? The relevance of each of these concepts was discussed in a remarkable set of webinars hosted by the Evaluation Centre for Complex Health Interventions in Toronto.

3) Competencies for Evaluators in the Public Sector – The enormous opportunity that the DMEO-led dialogue presents is to raise questions about the specific competencies needed by evaluators working within governments. How can a program of work on building evaluation capacities help an evaluator working within the Central or a State government do their job better?

Some Contemporary Challenges

4) Focus on Diversity, Equity, and Inclusion – Much of our dialogue around evaluation capacities and capabilities is occurring at a time of deep discussions and divides around inequities, hierarchies, and privilege. How can evaluators create greater voice and inclusiveness in understanding the impacts of programs and policies? I do think that as a field we need to understand more clearly evaluation's role in addressing inequities and promoting inclusion.

5) Understanding the Architecture of Complex Programs and Policies – My view is that most evaluators struggle to represent the complexities of social programs and policies. Our tools for understanding the architectures of complex programs and policies are quite limited. How do we promote understanding of the theories of change of complex interventions? How do we build competencies in understanding the program mechanisms and processes that make a difference to impacts? How do we more clearly represent and understand contexts and their vital role in the success of policies and programs?

6) Enhancing Interpersonal Skills – I find evaluation as a profession, in both the North and the South, still focused primarily on the technical. I am unsure that we have cracked how to build interpersonal skills. How does one build a program of work that enhances the interpersonal skills of evaluators?

7) Towards Complexity-Informed M&E: Models of Continuous Improvement – While much of our preoccupation has been with ‘what works’ and ‘what doesn’t work,’ I am unconvinced that we have paid as much attention to how systems and organizations improve in an ongoing manner. Most interventions and organizations are ‘complex systems thrust amidst complex systems.’ What programs of evaluation capacity building can help inform more complexity-informed monitoring and evaluation?

8) Skill Sets to Synthesize Evidence and Build an Ecology of Evidence – Another area I find missing is the ability to synthesize very disparate sources of information so as to create a policy environment in which an ecology of evidence is used to make decisions. Despite all of our focus on mixed-methods approaches, I am not convinced that we have trained individuals to synthesize evidence in ways that help build an ecology of evidence.

Looking ahead

I do want to end by reiterating that this is an exciting time to raise questions around evaluation competencies. With the focus on the SDGs and recent debates on which evaluation criteria matter, this rich dialogue around competencies can lead towards a more vibrant field of evaluation, both in India and globally. It is perhaps fitting to close by remembering one of the founders of modern-day evaluation, Donald Campbell. His vision of evaluation was that of a “mutually monitoring, disputatious community.” It is important to recognize that our work can lead towards the evolution of knowledge and systems and, most importantly, improved lives and a more sustainable world. There will be differences and disputes as we argue about different views of progress. How do we promote an ecosystem of evaluators who have both confidence and grace while disputing diverse views of what constitutes progress?

Accelerating towards a Post-Normal Evaluation?

I was fortunate to be a discussant for Thomas Schwandt's presentation, “Indications of Post-Normal Evaluation,” organized by the International Evaluation Academy in May 2021. The presentation built on Schwandt's far-reaching ideas on ‘post-normal evaluation’ developed in his 2019 article. This is how Schwandt introduces the idea: “Post-normal evaluation can be seen, on the one hand, as a result of the failure of normal evaluation to rationalize the world and, on the other hand, as the amplification of the inevitability of and capacity for constant change which can only be managed. We will gauge the success of the future practice of evaluation in terms of whether it is resilient enough to adapt to the ontological realities of complexity, uncertainty, and contradiction in ways other than being methodologically innovative. That resilience is largely an ethical matter, of evaluators taking full responsibility for the choices they make in framing and bounding the evaluations they conduct with the public.”

The need for a post-normal evaluation predates the COVID-19 pandemic. The urgency of a post-normal evaluation that comes to terms with the uncertainties and incompleteness of the knowledge that informs action has only grown with the cold realities of COVID-19. The challenges of COVID-19 underscore Schwandt's call to come to terms with unpredictability, instability, and incompleteness in our knowledge when determining value.

Over the past three years I have been working in India on a range of interventions focused on improving maternal, newborn, and child health. A number of these interventions focus on strengthening health systems. Working at the interface between interventions and systems, I have been interested in how evaluations can help build capacities and stronger systems. I have found the traditional evaluation skill set, centered on design and individual-level surveys, inadequate for understanding system-level constructs and the complex dynamics of system-level change.

The goal of this blog is to raise questions that can help sharpen the focus on addressing the SDGs and COVID-19. My hope is that these questions will precipitate thinking on the types of skills evaluators need in order to contribute to responding to the ongoing pandemic and to enhancing local and global responses to the SDGs.

Responding to COVID-19

It is hard to think about a post-normal evaluation without asking how evaluation as a field could have contributed more to averting the worst impacts of COVID-19. This blog is being written at a time when India and many parts of South America are being ravaged by the virus.

Despite the many insightful blogs written on the implications of COVID-19 for evaluative practice, there is a need for more focused dialogue on how evaluations could have been more useful during the pandemic. One guiding question is: In what ways is evaluation as a field contributing to the search for an architecture of a post-normal COVID-19 world?

There are three issues that have emerged as I reflect on systemic responses during COVID-19:

  1. As we witness the collapse of systems in multiple settings during the pandemic, how can the evaluator's gaze move from a focus on projects and interventions towards system-level capacities and resilience? In my experience, most evaluations focus primarily on projects and interventions; there is limited focus on understanding systems and system-level resilience.
  2. There is also a need to pay greater attention to dynamics. Evaluation typically makes judgments over the short run. As a field, we have paid limited attention to problems of dynamics. While there is growing attention to evaluating complex interventions, there is far less focus on understanding the anticipated dynamics of interventions. COVID-19 has highlighted the need to understand non-linearities in systems change. What can we do as a field to elevate the focus on dynamics?
  3. Perhaps most importantly, our preoccupation with saying what works or doesn’t work has often failed to represent and communicate the uncertainties in our knowledge. As a field, I think we could do a much better job representing, estimating, and communicating uncertainties; COVID-19 has highlighted their ubiquity. Given the uncertainties in responding to almost every facet of the pandemic, what role can evaluators play in creating greater comfort around uncertainties in social decision-making and social navigation? (A small illustration follows this list.)
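As a minimal illustration of this third point, here is a small sketch (my own hypothetical example, not from the post; the data and effect sizes are invented) of one way an evaluation might report an interval around an estimated effect rather than a bare verdict of “works” or “doesn’t work”:

```python
# Hypothetical illustration: report an interval around an estimated effect
# rather than a bare point estimate. All data here are simulated.
import random

random.seed(42)

# Simulated outcomes for a comparison group and an intervention group.
control = [random.gauss(50, 10) for _ in range(200)]
treated = [random.gauss(54, 10) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_effect_interval(treated, control, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap interval for the difference in group means."""
    diffs = []
    for _ in range(n_boot):
        t = [random.choice(treated) for _ in treated]
        c = [random.choice(control) for _ in control]
        diffs.append(mean(t) - mean(c))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot)]
    return mean(treated) - mean(control), (lo, hi)

effect, (lo, hi) = bootstrap_effect_interval(treated, control)
# Communicating the range is the point, not the particular method.
print(f"Estimated effect: {effect:.1f} (95% interval: {lo:.1f} to {hi:.1f})")
```

The particular method (a percentile bootstrap here) matters less than the habit it illustrates: estimating and communicating a range of plausible effects alongside the point estimate.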

Embracing a Deeper Understanding of Contexts

As a realist evaluator, I am interested in understanding the contexts, mechanisms and outcomes associated with interventions.  One insight from realist evaluation is that it’s not interventions that bring about change; rather, it’s interventions under the right context, conditions and support structures that bring about change.  Yet, what strikes me as I work on challenging problems of health and nutrition in a range of settings is that, for the most part, we can do far more as a field to focus on multiple dimensions of contexts and support structures in the planning, implementation, and sustainability of interventions. 
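To make this realist framing concrete, here is a small, hypothetical sketch (my own illustration, not an established schema from the realist literature) of how a context-mechanism-outcome (CMO) configuration might be recorded so that the conditions under which an intervention works stay explicit:

```python
# Hypothetical sketch: recording realist context-mechanism-outcome (CMO)
# configurations so "what works under which conditions" stays explicit.
from dataclasses import dataclass, field

@dataclass
class CMOConfiguration:
    """One realist hypothesis: in this context, this mechanism fires, producing this outcome."""
    context: str    # conditions and support structures in place
    mechanism: str  # the response the intervention triggers
    outcome: str    # the observed or expected change
    evidence: list = field(default_factory=list)  # sources supporting the hypothesis

# Illustrative (invented) example for a maternal-health intervention.
cmo = CMOConfiguration(
    context="facilities with a dedicated nurse mentor and reliable supplies",
    mechanism="frontline staff gain confidence to follow delivery protocols",
    outcome="higher adherence to safe-delivery checklists",
    evidence=["facility observation notes", "mentor interviews"],
)
print(f"{cmo.context} -> {cmo.mechanism} -> {cmo.outcome}")
```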

Focusing on Inequities, Sustainability and Heterogeneities

Interventions and system-reform efforts focused on achieving the SDGs will need to pay attention to problems of inequities, heterogeneities of contexts, and finding solutions that can have sustainable impacts. Addressing problems of inequities will require a focus on what ‘works for whom under what conditions’ and also attention to the heterogeneous needs of different intersectional segments. My sense is that, as a field, we have not done enough to bring a sufficient focus to problems of inequities, sustainability, and heterogeneous solutions. Questions that I have raised in other settings include: What is a theory of change that is serious about addressing inequities? How does a theory of change focused on sustainable impacts differ from a theory of change of immediate or intermediate impacts? How do we represent and translate insights on the heterogeneity of solutions needed to address diverse needs across different contexts?

Towards an Ecology of Evidence

We also need to ask tough questions about what types of evidence are useful. It is perhaps not enough to simply say that we are moving from evidence-driven interventions towards evidence-informed interventions. I think we need to raise deeper questions around the ecology of evidence needed to implement and sustain systems and interventions over time. What types of evidence are needed to make a difference to inequities? To help plan for sustainability? Given the multiple definitions of complexity, including contextual, dynamic, and multi-component complexity, how do we promote a view of evaluation that generates knowledge on the contexts, processes, and impacts associated with systems, programs, and policies?

Evaluation as Way-Finding: Towards an Ecology of Solutions

An important concept that Schwandt and others have raised is that of evaluation as ‘way-finding.’ Schwandt framed this as a matter of being focused on solving problems, not just being preoccupied with specific solutions. Development economist Lant Pritchett's insight is especially helpful here: “We should be in the business of solving problems, not just of selling solutions.” Addressing challenging problems like maternal mortality, hunger, and malnutrition will require more than a singular solution; an ecology of solutions tailored to the needs of specific contexts may be needed. Further, such solutions have an essential, dynamic, long-term aspect, because solutions to difficult problems like inequities, maternal mortality, and malnutrition are unlikely to be found solely through quick fixes. As a field, evaluation needs to explore more clearly what evaluation as way-finding looks like: we need to shift our focus to “what does it take to solve a problem in specific contexts?” rather than the more commonly addressed “does intervention x work?” Relatively recent approaches like Developmental Evaluation and Principles-Focused Evaluation offer great promise in providing insights on evaluation as way-finding.

Questioning the Roles of Evaluators: Understanding Influence

In my view, any focus on a ‘post-normal’ evaluation needs to address the roles that evaluators play and how those roles are changing. If one accepts that we should pay attention not only to interventions and projects but also to a broader range of systems, what should be the role of evaluations and evaluators in building the capacities of systems? Could evaluation as a field have done more to promote a focus on the ‘known unknowns’ and ‘unknown unknowns’ of societal responses to COVID-19? While there has been some focus on the different roles of evaluations, we can do more to discuss the multiple roles evaluation plays in navigating and discovering solutions for complex problems. Much of our discussion has been about solutions in specific contexts. As evaluation seeks to have influence in the technical, social, and political environments in which it is located, there is a need to reflect not just on the pathways of influence of a single evaluation, but also on how a variety of evaluative products and approaches can help advance the understanding and solving of a problem. I find the concept of a ‘fractured information ecology’ especially useful in locating both the role of evaluations and how evaluations can have influence: “However, there have been even more worrying signs of a fracturing information ecology at the interface of science, policy and public discourse.”

We need better narratives on the boundaries of our influence and how evaluations can have influence despite the fractured information ecologies in a number of decision-making environments.

Urgency in Action

There needs to be urgency in rethinking our role as evaluators and how we can be simultaneously disruptive and constructive as we respond to the fractured information ecology by promoting collaborative problem-solving. “The damaged information ecology in which global and national responses to COVID-19 are currently playing out should be of deep concern to all those working towards the adoption of more sustainable policies, economics and ways of life.” As a field, we need to surface actual examples of how evaluators have navigated change in fractured contexts, built an ecology of evidence, and negotiated the boundaries of influence as the contexts of learning have changed, shrunk, or expanded.

Adaptations and Nimbleness at the Time of Crisis: Some Questions for Evaluators

Adaptiveness as a Clunky Dance

The pandemic provides a reminder of how dynamic the world can be. It continues to surprise and challenge the creativity of even the most experienced program implementers working in community settings. In almost all of the public health interventions I know of, the pandemic has forced a rethink of implementation plans. It has also posed critical challenges for how evaluation as a field can be adaptive and nimble when faced with flux and the need to drastically change the playbook. The pandemic gives us an opportunity to ask how evaluations can be adapted to be helpful at a time of crisis. Over the past month, I have had a chance to connect online with multiple implementers working in both community and policy settings. Despite the severity of the crisis, the creativity of many organizations in adapting to the pandemic has been striking.

Adaptiveness

One program implementer refers to her organization's adaptiveness as a “clunky dance”; another speaks of ‘muddling towards an authentic response.’ This blog is written from the perspective that evaluators have a chance to learn from our implementation partners' nimbleness and to incorporate their adaptiveness into the frameworks we use to value interventions. Given the challenges some community organizations are experiencing, evaluators may have a role to play in creating an enabling environment by raising questions about changing needs and the need for systemic, coordinated responses.

Some Questions

Based on multiple dialogues with policy and community partners, I raise six sets of questions:

1. Narratives of programmatic adaptations
2. Measuring nimbleness and adaptiveness
3. Deeper learnings about strengthening systems
4. Re-thinking theories of systemic change
5. Minimal components needed for an intervention to work
6. Intersectionalities and cracks in the network

1. Narratives of programmatic adaptations

Most interventions have had to adapt during the pandemic. One relatively straightforward way in which evaluation can be responsive is to highlight, through simple, succinct narratives, the adaptations that have occurred, the drivers of those adaptations, and whether the adaptations succeeded in responding to the emerging crisis. A number of the programs I am aware of are already documenting such adaptations. This can be both straightforward and challenging, and it requires coordination between program and evaluation teams. Attention needs to be paid to the constraints facing programs, the timeframes within which programs had to adapt, the doors the pandemic closed on earlier service-delivery platforms, and how new platforms were created to continue providing services. Evaluation teams need to leverage such narratives and help programs tell their stories systematically. What are exemplars of good evaluation stories of the adaptiveness and nimbleness of specific interventions?
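As a purely hypothetical sketch of what such documentation might capture (the fields and values below are my own invention, not a standard template), an adaptation narrative could record the trigger, constraints, timeframe, closed platforms, and new platforms in a consistent structure:

```python
# Hypothetical template for documenting one programmatic adaptation,
# following the elements named above: constraints, timeframe, the delivery
# platforms that closed, and the new platforms created. Values are invented.
adaptation_record = {
    "intervention": "community nutrition counselling",
    "trigger": "lockdown closed village group sessions",
    "constraints": ["staff travel restrictions", "limited phone access"],
    "timeframe_weeks": 3,
    "platforms_closed": ["in-person group sessions"],
    "platforms_created": ["scheduled phone calls", "voice-message counselling"],
    "was_successful": "reached roughly 70% of enrolled mothers (illustrative)",
}

for key, value in adaptation_record.items():
    print(f"{key}: {value}")
```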

2. Measuring nimbleness and adaptiveness  

Few evaluation frameworks include adaptiveness and nimbleness as criteria for judging the success of interventions. As an example, it is notable that the newly revised DAC evaluation criteria (from the OECD's Development Assistance Committee; https://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm) include neither nimbleness nor adaptiveness as desired attributes of interventions or as criteria for evaluation.

Here is my attempt at defining these terms. Adaptiveness can be defined as the extent to which an intervention morphs to respond to changes in its context; in my judgement, adaptiveness is related to the contextual awareness of an intervention. Nimbleness can be defined as the agility with which an intervention responds to changes in perceived needs; there is a temporal aspect to nimbleness. Based on a quick scan of the literature, neither construct has been a key focus of evaluation frameworks. The pandemic provides an opportunity to learn from how programs have both conceptualized and operationalized adaptiveness and nimbleness. It reminds us that the needs of individuals are dynamic, and that the opportunities for community, food, social protection, and health systems to respond to those needs are constrained by multiple factors. I have found dialogues with community organizations helpful in surfacing critical ideas on how we can measure adaptiveness and nimbleness. I raise this issue because evaluators have an opportunity to elevate the salience of adaptiveness and nimbleness as important criteria in evaluating interventions. What are innovative examples of measures of the adaptiveness and nimbleness of programmatic responses during the pandemic?
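As one hypothetical way to operationalize the temporal aspect of nimbleness (my own sketch; the adaptation episodes and dates are invented), an evaluation team might track the lag between a detected change in need and the adapted response:

```python
# Hypothetical measure of nimbleness: days between a detected change in need
# and the deployment of an adapted response, summarized across episodes.
from datetime import date

# Invented adaptation episodes: (need detected, adaptation deployed).
episodes = [
    (date(2020, 3, 20), date(2020, 4, 3)),   # e.g., home delivery of rations
    (date(2020, 4, 10), date(2020, 4, 17)),  # e.g., phone-based counselling
    (date(2020, 6, 1), date(2020, 6, 29)),   # e.g., reopening under new protocols
]

lags = sorted((deployed - detected).days for detected, deployed in episodes)
print(f"Median response lag: {lags[len(lags) // 2]} days")
print(f"Slowest adaptation: {lags[-1]} days")
```

Adaptiveness, the extent of morphing to context, would need a complementary qualitative judgment; the lag above captures only the agility dimension.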

3. Deeper learnings about strengthening systems  

Interventions are embedded within broader systems; no intervention is an island. The pandemic has highlighted the need to better understand the connections between an intervention and its underlying systemic contexts and support structures. One of the most striking insights from dialogues with policy and community partners has been that a crisis can sometimes accelerate systemic and coordinated responses to health and social problems. There is a literature on crisis-driven learning (see https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-5973.2009.00578.x).

Some stakeholders have mentioned that the systemic response has come into sharper focus during this crisis. Others have noted that coordination between different organizations has actually been enhanced during the pandemic. Still other organizations have described how the crisis has paralyzed their ability to respond to needs and limited their capacity to coordinate with other organizations. My view is that evaluation as a field has a role to play in exploring if and how crisis-focused coordination can be enhanced. Even in the happy situation where coordination has been enhanced, evaluation can still play a role by exploring what is likely to happen when the intensity of the crisis recedes. Are there examples of evaluations that have taken a developmental approach to enhancing coordination between organizations at this time of crisis?

4. Re-thinking theories of systemic change

Many stakeholders have also conjectured that the crisis has helped them re-imagine what is possible with system-level efforts. A number of community organizations are beginning to re-think their own theories of change, the roles of different partners, and how their organization relates to the overall system. In some settings, the pandemic has highlighted and built support for interventions focused on poverty and social isolation in a way that did not exist before. It may be argued that responding to the pandemic has provided a greater understanding of the needs of the vulnerable and a deeper lived experience of what social isolation means. If so, what does this suggest for refining interventions focused on poverty and social isolation? Are there examples of intervention-level theories of change that have been refined in response to the pandemic?

5. Minimal components needed for an intervention to work

Evaluations often focus on the optimal set of components needed for an intervention to produce favorable impacts. We rarely focus on the minimal set of components required for an intervention to function. An important insight from a few community organizations has been that the pandemic has taught them how to simplify their programs; it has also provided insights into the minimal set of components required to meet the needs of individuals, especially the most disadvantaged. In the behavioral medicine literature, there is the concept of the MINC (minimum intervention needed to produce change; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3958586/). Once the intensity of COVID-19 has abated, a focus on the minimal set of components needed might help us think about our social programs in a richer light, especially if the focus is on using limited resources to address the needs of disadvantaged stakeholders. Can a focus on the minimal set of components needed to produce change help sharpen a focus on meeting the needs of the disadvantaged given limited resources?

6. Intersectionalities and cracks in the network

One consistent piece of feedback from a number of community organizations is that COVID-19 has surfaced or highlighted key cracks in existing “networks of care.” There is a need to pay attention to individuals who fall through such cracks, and also to individuals who fall at the ‘intersections’ of multiple categories of deprivation. A number of interventions have adapted to respond to such intersectionalities and perceived cracks in the network. Over time, there is a need to re-think how our strategies address the needs of individuals who fall through these cracks. While many interventions focus on the needs of disadvantaged individuals and groups, in my experience their theories of change rarely incorporate, with any sharpness, the mechanisms by which the interventions can address the needs of individuals at these intersections.

Similarly, there is a need to move beyond static definitions of vulnerability towards paying attention to the dynamics of vulnerability (https://www.ncbi.nlm.nih.gov/pubmed/23549696). The issue is not just that interventions meet the needs of the vulnerable but, additionally, that they help disrupt the generative mechanisms that COVID-19 might have amplified (https://www.brookings.edu/blog/future-development/2017/06/19/pandemics-and-the-poor/). Now, more than ever, there is a need to pay attention to syndemic processes (https://www.thelancet.com/series/syndemics) between poverty and social problems like homelessness that might exacerbate problems over time. Some important evaluation questions to consider: Do the proposed solutions incorporate the lived realities of individuals who fall through systemic cracks? Are the systems, structures, and processes being set up to address the needs of such individuals consistent with the needs and expectations of the clients they are intended to serve? Does the proposed solution pay attention to the dynamics of vulnerability that might be especially acute for marginalized individuals during and after the pandemic?

Looking Ahead

As we go forward, one ethical principle that should guide our work is that measurement and valuing should not interfere with the programmatic response during this crisis.

The above set of issues should be seen as questions, opportunities, and challenges for the field of evaluation. In my view, much of the recent measurement work during this time of crisis has surfaced the problem space of COVID-19. Surfacing the pandemic's impacts on individuals, especially highly vulnerable individuals, has also raised deep questions about how our existing systems of care might not be adequate. Accompanying this crisis-driven problem space, however, has been a creative response: within existing constraints, programs have adapted in a timely manner to growing sets of needs. As a field, evaluation needs to help tell the story of such emergent solutions. Such a focus on the solution space (https://www.sciencedirect.com/science/article/abs/pii/S0149718912000213) can help grow the salience of evaluations as a useful tool for bridging problems and solutions during times of crisis.