Contribution Analysis in Policy Work: Assessing Advocacy’s Influence

Published: November 2017

Type: Publication

How do we know what difference advocacy really makes? This brief explores the methodological application of contribution analysis to advocacy and offers guidance for evaluators considering this approach.

Authors: Robin Kane, Carlisle Levine, Carlyn Orians, Claire Reinelt

The fluid nature of advocacy places challenging demands on evaluators and raises legitimate questions about who is responsible for results and who gets credit for them. As a field, advocacy evaluation has lacked rigorous examples of how to examine the cause-and-effect relationship between advocacy and policy change.

Contribution analysis, a non-experimental impact evaluation method, holds promise for filling this gap.

Using specific examples from evaluation practice in a U.S. policy context over the past two years, this brief explores the suitability and application of contribution analysis for advocacy evaluation and offers guidance for evaluators considering this approach.

Contribution analysis brings rigor to thinking critically about the relationship between a central actor's activities and the other influences at work in a complex, multi-actor environment. It also facilitates a more purposeful, deeper, and systematic consideration of the myriad potential influences on policy outcomes.

The brief provides background on contribution analysis and its applications, and introduces specific cases in which the method was used to assess whether and how advocacy efforts made a difference. It details the method's six steps, as well as a seventh step that focuses on use:

  1. Set out the cause-effect issue to be addressed.
  2. Develop the postulated theory of change and the risks to it, including rival (alternative) explanations.
  3. Gather the existing evidence on the theory of change.
  4. Assemble and assess the contribution claim, and challenges to it.
  5. Seek out additional evidence.
  6. Revise and strengthen the contribution story.
  7. Use the contribution claims to review, learn, and make program improvements.

The brief also highlights practical conclusions to help evaluators who embark on this kind of work.