Resources

  • publication Dec 2017

    Evaluating Coalitions and Networks: Frameworks, Needs, and Opportunities

    An impressive array of approaches, frameworks, and tools has been developed to evaluate coalitions and networks. This brief critically examines these resources and points to the challenges and opportunities that remain in efforts to assess coalition and network effectiveness and impact.

    By Mona Younis

  • publication Nov 2017

    Contribution Analysis in Policy Work: Assessing Advocacy’s Influence

    How do we know what difference advocacy really makes? This brief explores the methodological application of contribution analysis, a non-experimental impact evaluation method, to advocacy and offers guidance for evaluators considering this approach.

    By Robin Kane, Carlisle Levine, Carlyn Orians, Claire Reinelt

  • presentation Sep 2017

    Better, Faster, Results – Supporting Learning for Multiple Audiences

    How can foundations build a learning approach that is grounded in robust data and information and supports the learning needs of multiple actors? This presentation for the 2017 Evaluation Roundtable convening focuses on strategies for supporting learning in philanthropy. It presents questions, challenges, and implications for foundation practice.

    By Julia Coffman, Kat Athanasiades, Tanya Beer

  • publication Sep 2017

    On the Other Side of Complexity: The McKnight Foundation’s Collaborative Crop Research Program

    Under the direction of a new president, The McKnight Foundation embarked on a process to embed learning at every level of one of its signature international programs: the Collaborative Crop Research Program (CCRP). This teaching case explores the creation, implementation, and ongoing refinement of an evaluation and learning approach for CCRP from 2008 to 2017.

    By Susan Parker

  • insight Jul 2017

    FaceTime, Open Workspace, and Electronic Calendars are a Few of My Least Favorite Things: Learning and the Way We Work

    Supporting learning is about much more than facilitation and what happens at learning-focused events or meetings. We have to expand our thinking about what it takes to support effective learning in philanthropy. This article on Medium explores some unexpected sources, such as time management, physical and virtual space, and technology.

    By Julia Coffman

  • publication May 2017

    Systems Change Evaluation Forum

    Complex systems change efforts create practical challenges for evaluation. The Gordon and Betty Moore Foundation convened its full staff and a group of external evaluators who work in philanthropy for a Systems Change Evaluation Forum. This report provides observations, tools, approaches, and advice for addressing systems evaluation challenges.

    By Tanya Beer

  • publication May 2017

    Strategy Behind the Design of Advocacy Communications Support

    Foundations that fund advocacy understand the critical role communications play in policy change efforts. How can foundations best support grantees’ advocacy communications? This brief offers lessons based on the experiences of advocacy grantmaking initiatives.

    By Meghann Flynn Beer, Ed Walz

  • publication Nov 2016

    How Do You Measure Up? Finding Fit Between Foundations and Their Evaluation Functions

    Foundations need to think carefully about how the structure, position, focus, resources, and practices of their evaluation functions can best fit their own needs and aspirations. This Foundation Review article, based on 2015 benchmarking research, poses questions foundations can ask to assess what they need from an evaluation function and identifies common areas of misalignment between needs and approaches.

    By Julia Coffman, Tanya Beer

  • publication Oct 2016

    No More Half Measures

    Funders can get better at designing grantmaking strategies that meet the dual demands of advocacy capacity needs and the realities of the policy and political landscape. This brief explores how funders can support policy campaigns while also building advocacy capacity, rather than choosing one approach over the other.

    By Scott Downes

  • publication May 2016

    Benchmarking Foundation Evaluation Practices: 2015 Benchmarking Data

    The result of a partnership between the Center for Effective Philanthropy and the Center for Evaluation Innovation, this report offers a comprehensive look at foundation evaluation practices, based on 2015 survey data from 127 senior evaluation and program staff at foundations.

    By Julia Coffman, Tanya Beer

  • insight Apr 2016

    Oh for the Love of Sticky Notes! The Changing Role of Evaluators Who Work with Foundations

    Julia Coffman, founder of the Center for Evaluation Innovation, illustrates evaluators’ changing roles in the context of her own experiences as a long-time evaluator in philanthropy. This article published on Medium examines the history of evaluation in philanthropy, and the implications of role changes for evaluator skills, expertise, and boundaries.

    By Julia Coffman

  • publication Apr 2016

    Measuring Political Will: Lessons from Modifying the Policymaker Ratings Method

    Advocates often seek to build political will as a way to cultivate champions who will drive their issue through the policy process. This brief shares three scenarios in which a method for measuring political will was modified. It offers lessons about the tool, the process, and the kinds of products that can be created.

    By Sarah Stachowiak, Sara Afflerback, Melissa Howlett
