As efforts at driving change become more diffuse, involve more actors, and have more transformational goals, we need a radically different approach to thinking about and assessing what effective advocacy looks like. Clear answers and simple tools are appealing, but they ultimately won't result in good representations of reality or provide the knowledge advocates need. This brief proposes adjustments for how we think about and approach advocacy monitoring, evaluation, and learning.
By Jim Coe and Rhonda Schlangen
In our brief, No Royal Road, we explore the tensions between wanting clear answers about investments in advocacy and the inherent uncertainties around how social and political change happens. We argue that monitoring and evaluation can better support effective advocacy by focusing on these tensions rather than trying to evade them.
The mathematician Euclid is said to have responded to King Ptolemy’s command to provide a quick route to knowledge by saying there is no “royal road” to geometry. We are making the case that there’s no royal road to advocacy evaluation.
Like the king, we might want things to be simple and easy to navigate. Clear answers, certain judgments, and simple tools are appealing. But if they fail to reflect the realistic scope of influence of one actor within a complex ecosystem of interacting factors and relationships, then they ultimately won’t result in good representations of reality.
We propose six ways to reorient advocacy monitoring and evaluation to help advocates, evaluators, and funders make it more credible, reliable, and useful.
1. Better factor in uncertainty.
Evidence related to advocacy is often subjective and rarely definitive, so we should accept and make explicit that there is an inevitable element of uncertainty in judgments reached. That means bringing rigor to interpretation as well as to information gathering, and, where appropriate, stating conclusions probabilistically.
2. Plan for unpredictability.
We should start from what needs to change and the barriers to change, and situate our efforts within that orbit, rather than imagining everything flows from what we do. From that we can identify a way forward. And then iterate constantly.
3. Redefine contribution as combinational and dispositional.
In reality, contribution is not singular and additive; instead multiple interacting causes make an effect more likely. Recognizing this, we should shift the lens from the “amount” of contribution a single actor makes to an understanding of the typologies of the different actors and how they combine to contribute to change.
4. Parse outcomes and their significance.
Advocacy happens in widely varying contexts, working against all sorts of different prevailing winds, so an outcome being achieved (or not) gives only a partial picture. We should place any assessment of outcomes in their context. And to ground this understanding, we should be looking across different dimensions of change, like shifts in power and changes in public agendas, as well as more specific policy changes.
5. Break down barriers to engaging advocates in monitoring and evaluation.
Advocates are expert observers but not always enthusiastic contributors to monitoring and evaluation processes. For evaluators, the first question should be, “How can we equip practitioners to make good decisions?” That means finding the best ways to create space for busy people to reflect critically on their work and apply the learning, and addressing the cultural and management challenges that make this difficult to do.
6. Think differently about how we evaluate more transformational advocacy.
When it comes to more transformational advocacy, as with broad-based movements aimed at achieving systemic change, results are unpredictable and contingent on a whole range of factors, so focusing on the specific contribution of an individual or group of actors to a set of definable outcomes has questionable utility. We can, however, investigate an actor or group’s fitness for purpose by exploring whether they exhibit the characteristics that make them likely to be effective in such contexts.
We’re proposing a set of ideas that address the challenge of how best to support and learn from influencing work that leads down unpredictable pathways and has uncertain results. We hope these approaches can help ensure that monitoring and evaluation serves advocacy in the ways it should.