Evaluating Community Organizing: What to Consider and Capture

Advocacy Evaluation Update | 2010-08

Catherine Crystal Foster and Justin Louie of Blueprint Research + Design, Inc. offer their perspective on what is unique about community organizing compared to policy advocacy, and the components that should be considered when evaluating organizing work.

Community organizing for social change shares many characteristics with policy advocacy, but it differs in significant ways, and the approaches to evaluating the two also differ. As evaluators, we have partnered with organizers, advocates, and their funders over the last five years, and have seen these differences firsthand.

What is Community Organizing?

Community organizing is a democratically governed, values-driven process that catalyzes the power of individuals to work collectively to make the changes they want to see in their communities. Community organizers honor and develop the leadership potential in everyday people by helping them identify problems and solutions, and then by supporting them as they take action to make those solutions a reality. In so doing, organizing challenges the existing power structure.

Relationships lie at the heart of organizing, and the “one-to-one” relational conversation between an organizer and a community member is the building block of organizing. As those community members participate in social change work, build skills, and take on responsibilities, they become “leaders” within the organizing group. Developing these leaders and building the “base” of leaders and other community members is an ongoing focus of community organizing.

How is Community Organizing Different from Policy Advocacy?

Organizing and advocacy differ at a core level. Community organizing is emphatically bottom-up. Community members select the issues, proffer the solutions, and drive strategy and execution. Most advocacy is fundamentally top-down, even if the work is authentically undertaken on behalf of community members. Advocates speak for others, while organizers inspire community leaders—everyday people—to speak for themselves. Organizers and leaders also believe that community members can be experts, and that expertise is not the sole domain of policy professionals.

The leader-focused lens also points to another difference from advocacy. In organizing, leadership development is a central concern and a key outcome in addition to policy change objectives. This has major implications for priorities and goals. It makes capacity development look different in organizing than in advocacy, since the capacities to attract and develop leaders are a top priority in organizing.

Finally, certain logistical aspects of organizing differ from advocacy in a significant way. Organizers operate in a predominantly oral culture, in contrast to the more archived, written culture of advocacy. Organizing often places a premium on process and ritual, particularly as it concerns base-building and direct actions. In addition, organizing takes place in a more diffuse setting: in homes, churches, schools, or community venues, rather than in a central office or the corridors of the state house.

What are the Implications for Evaluation?

For organizing, evaluation requires additional considerations that reflect the particular qualities of the work. Most important, the bottom-up nature of organizing—driven by the community, not by organizational managers or external professionals—creates a whole new set of complexities. This orientation collides with the inherently top-down nature of traditional third-party evaluation, in which outside experts ask the questions, set the terms, and make judgments. As we have noted, organizers have a fundamentally different view from advocates not only of how decisions are made and priorities are set but also of where expertise resides. That affects how organizers view evaluation generally, and what role they see for themselves and their leaders in that process.

If the community-defined, bottom-up goals for the work do not align completely with a funder’s goals, an evaluator measuring against those goals faces the difficult task of navigating between the two. When the work unfolds as part of a multi-site initiative in which multiple communities have been funded to work on an issue, those complications are compounded.

Since the goals, strategies, and tactics of organizing bubble up in ways that are highly context-specific, multi-site evaluation of an organizing effort is particularly hard. It is quite difficult to standardize methodologies and roll up results when the work and processes are driven by the needs and approaches of each community.

Finally, certain practical considerations implicit in organizing work can affect evaluation. Many organizers value reflection quite highly, and incorporate it in their work more explicitly than some advocates do. As a result, evaluation may be more about systematizing informal reflection and focusing it on impact rather than process, not about teaching its value. Yet, while they do reflect regularly, organizers have very little time for formal evaluation and the rigorous, uniform, and documented processes of data collection and analysis that formal evaluation can imply. They pride themselves on never being in the office, instead spending their time in the community. Leaders who carry out the work are community members who may have entirely separate day jobs, making systematic evaluation far more challenging than when partnering with advocates working in a more traditional office environment.

What Can We Measure for Organizing?

The intense focus on leadership development in organizing and the emphasis on process within some schools of organizing lead to the identification of interim benchmarks and goals that often differ from those in an advocacy campaign targeting similar policy change objectives. Organizing requires additional benchmarks and goals related to the processes of growing leadership and power, and organizers may prioritize them differently from advocates.

When determining evaluation questions, setting benchmarks, and selecting data collection methods, it helps to categorize the work in a way that incorporates the unique values and orientation of organizing. One useful framework that lays out important categories to track has been developed by the Alliance for Justice (AFJ).1 The table below illustrates examples of benchmarks we have used to capture these categories, as well as data collection methods that can be used to track progress.

Table 2: Sample Benchmarks and Data Collection Methods for the Core Components of Organizing
Base-Building
Benchmarks:
  • Changes in numbers, demographics, or location of members
  • Changes in attendance (numbers, types of events, who attends)
Methods for Tracking:
  • Membership tracking (including demographic and geographic info)
  • Attendance tracking

Leadership Development
Benchmarks:
  • Changes in attitudes, skills, and knowledge
  • Changes in self-esteem and self-efficacy
  • Changes in stature within community or among decision makers
Methods for Tracking:
  • Documenting elements of growth along leadership ladders
  • Organizer check-ins and debriefs
  • Documenting 1-to-1s
  • Journaling/portfolios
  • Focus groups

Building Power
Benchmarks:
  • Development of relationships with decision makers, media, and influential figures
  • Changes in stature within community or among decision makers
  • Changes in membership
  • Changes in turnout
Methods for Tracking:
  • Power analysis
  • Relationship/champion scales and rubrics
  • Base-building/mobilization tracking
  • Media tracking
  • Policy developments tracking
  • Interviews
  • Critical incident debriefs or case studies

Policy Change
Benchmarks:
  • Policy wins
  • Shifts in norms or content of debate
  • Holding the line against negative actions
Methods for Tracking:
  • Policy tracking
  • Collection of archival documents
  • Media tracking
  • Critical incident debriefs or case studies

Policy Implementation and Community Impact
Benchmarks:
  • Implementation of policies
  • Changes in practices
  • Public accountability for action or inaction
  • Sustained shifts in norms or content of debate
  • Impact on community
Methods for Tracking:
  • Policy implementation tracking
  • Community indicators tracking
  • Action research (accountability surveys, interviews, focus groups)
  • Interviews
  • Critical incident debriefs or case studies

Organizational Capacity
Benchmarks:
  • Changes in staffing
  • Changes in infrastructure
  • Changes in skills
  • Changes in resources
Methods for Tracking:
  • Organizational capacity assessments
  • Most Significant Change2
  • Interviews and check-in calls or meetings

Internal Learning and Reflection
Benchmarks:
  • Building on and systematizing internal processes
  • Infusing data and documentation into reflection
  • Use of data in refinement of strategy or tactics
Methods for Tracking:
  • Interviews and check-in calls or meetings
  • Collection of assessment documents or examination of systems

2 This is a form of participatory monitoring and evaluation developed by Rick Davies and Jess Dart that involves the participants’ collection and discussion of stories about the most significant changes resulting from a program or action. See http://www.mande.co.uk/docs/MSCGuide.pdf.

This article is excerpted from a longer brief on this topic published by the Center for Evaluation Innovation and Blueprint Research + Design, Inc. Download the brief at: http://www.innonet.org/client_docs/File/center_pubs/evaluating_community_organizing.pdf. Email Catherine at catherine@policyconsulting.org. Email Justin at justin@blueprintrd.com.

Resources for Evaluating Community Organizing

Alliance for Justice maintains a web-based compendium of resources on evaluating community organizing at: www.afj.org/reco. Recently added resources include:

  • An Independent Governance Assessment of ACORN: The Path to Meaningful Reform by Scott Harshbarger and Amy Craft
  • Measuring the Impacts of Advocacy and Community Organizing: Application of a Methodology and Initial Findings by Lisa Ranghelli
  • Analyzing and Evaluating Organizing Strategies by University of Massachusetts Boston Labor Resource Center
  • Strengthening Democracy, Increasing Opportunities: Impacts of Advocacy, Organizing, and Civic Engagement in Minnesota, New Mexico, and North Carolina by Julia Craig, Gita Gulati-Partee, and Lisa Ranghelli
  • Evaluation of NPI’s Community Organizing Support Program
Author Name: 
Julia Coffman