While the thought of “evaluation” can be daunting, even intimidating, for many domestic violence (DV) programs, there are many good reasons to evaluate the job we are doing. The most important reason, of course, is to understand the impact of programs and practices on the lives of survivors and their children, in order to build upon the efforts that survivors say are helpful to them and to stop putting time and resources into efforts that are not helpful or relevant to them. Evaluation is also important because it provides “hard evidence” to present to funders, policymakers, and allied organizations, encouraging them to continue and increase the resources available to support effective programs and approaches.
The materials and tools highlighted below (with more to be added over the coming year) were selected to help DV advocates and allied organizations understand the different types of evidence that can inform our work, and to provide practical considerations and strategies for approaching evaluation in domestic and sexual violence organizations. These resources also explore the unique challenges of evaluating, and making the case for, intervention and prevention initiatives.
NRCDV Evaluation Issue Briefs
Knowledge is power. The more DV service providers and advocates know about designing and conducting evaluation efforts, the better those efforts will be. Well-structured evaluations of DV services can provide the valuable information we need to continually improve our programs. The following Issue Briefs, prepared by the NRCDV, are short discussions of related conceptual and practical topics that can help DV advocates more fully understand the benefits of evaluating their services and identify important considerations before engaging in evaluation.
- Outcome Evaluation for DV Programs #1 – Why Should We Want to Evaluate Our Work?
- Outcome Evaluation for DV Programs #2 – What is the Difference Between Research and Evaluation and Between Process and Outcome Evaluation?
- Outcome Evaluation for DV Programs #3 – How Do We Attend to Safety, Confidentiality and Diversity?
- Outcome Evaluation for DV Programs #4 – Outcomes Evaluation – What Effects Are We Having?
- Outcome Evaluation for DV Programs #5 – How Do We Approach Gathering, Maintaining and Analyzing Data?
- Outcome Evaluation for DV Programs #6 – How Can We Make Our Evaluation Findings Work for Us?
Outcome Evaluation Guides and Toolkits
This 1998 guide was developed by Cris Sullivan, Ph.D., for the Pennsylvania Coalition Against Domestic Violence as a resource to help domestic violence service providers examine the effectiveness of their programs in a straightforward manner. Although many programs feel external pressure from funding sources to conduct outcome evaluation, this guide is useful not just for convincing external sources of a program’s importance but also for enhancing program effectiveness.
This brief guide was created by Lisa Goodman, Ph.D., Kristie Thomas, Ph.D., and Deborah Heimel, M.S., in 2015 to assist domestic violence programs and other community-based organizations interested in assessing program participants’ progress and outcomes. The development of MOVERS was a highly collaborative process involving academics, national experts, advocates, and survivors. The scales are available in both English and Spanish.
Released in 2015 by the National Latin@ Network for Healthy Families and Communities, a project of Casa de Esperanza, this toolkit is the product of a team effort involving other community-based organizations seeking to respond to the need to evaluate their work in ways that reflect the cultural dimension of their approach to engaging with the community. In other words, “there is a need to take into account what distinguishes us from others and what constitutes the essence of culturally specific work.”
This brief guide, available in both English and Spanish, was created in 2015 by Cris Sullivan, Ph.D., and Lisa Goodman, Ph.D., to assist nonprofit domestic violence programs and similar organizations interested in using the Trauma Informed Practices (TIP) Scales to examine and improve upon their work. The development of the TIP was a highly collaborative process involving academics, national experts, advocates, and survivors.
This guide, developed by Nancy Smith, Charity Hope, and Alice Chasen for the Vera Institute of Justice in 2015, is designed to help domestic and sexual violence service providers assess their evaluation capacity and identify areas of strength as well as areas for improvement. In addition to the publication, Vera has created a resource hub on its website giving domestic and sexual violence service providers access to five webinars that explore a number of topics addressed in the guide and provide an inside look at how organizations have applied these lessons in the field.
This guide was developed in 2000 by Cris Sullivan, Ph.D., and Suzanne Coats, MSW, for the Michigan Coalition Against Domestic and Sexual Violence as a resource to help service providers in the state explore effective outcome evaluation strategies. Some of the content is adapted from the 1998 outcome evaluation guide developed for the Pennsylvania Coalition Against Domestic Violence (see above).
This 2009 manual, developed by the National Center for Injury Prevention and Control at the Centers for Disease Control and Prevention, is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies. It is intended for state and local leaders and staff members of organizations, coalitions, government agencies, and partnerships working to prevent sexual violence, intimate partner violence, youth violence, suicide, and/or child maltreatment.
Tips and Tools contains practical and concrete guidance and tools to help you evaluate a variety of services commonly provided to and with survivors, as well as Train the Trainer (TOT) workshops.
For help in explaining the Theory of Change guiding domestic violence services as well as these related outcomes and measures, please click here.
The first three sections of Tips and Tools focus on services and programs for survivors of domestic violence.
- Section 1 contains handouts that you can copy and distribute to your staff. These handouts were designed to help you and your colleagues decide for yourselves how, when, and how often to evaluate your services.
- Section 2 provides some examples of actual outcome evaluation surveys you might use or modify to evaluate different types of services.
- Section 3 includes several standardized (and validated) measures of quality of life, social support, hope, life satisfaction, and housing instability. You do not need to use these for program evaluation, but may find them useful if you have access to resources for a program evaluation and are working with a program evaluator to examine whether any of these outcomes occur for survivors using your services.
As many DV programs and coalitions use Train the Trainer (TOT) approaches, Section 4 provides tools that can be used to measure their effectiveness.
SECTION 1: HANDOUTS FOR STAFF
You can copy and use these handouts with staff to help you decide how, when, and how often to evaluate your services.
- Handout 1 – Creating a Plan with Staff for Collecting Outcome Data
- Handout 2 – Inviting Program Participants to Complete Program Evaluation Forms
- Handout 3 – Example Outcomes for Domestic Violence Programs
SECTION 2: SAMPLE OUTCOME EVALUATION TOOLS
These tools have all been created by or modified from surveys first developed by Dr. Cris Sullivan. Cris has worked with advocates and survivors for over 20 years to create and modify outcome evaluation surveys that are meaningful to programs and accepted by external funders. The brief surveys provided here focus on shelter, counseling, support groups, and advocacy, but many of the survey items could be used for other services. Please feel free to modify any of these tools as needed for your own program, including removing the headers and footers and replacing them with your program-specific information.
- Advocacy Feedback Form for Survivors of Domestic Violence [Word Version]
- Evaluation Form for Shelter Residents [Word Version]
- Individual Counseling Evaluation Form for Survivors of Domestic Violence [Word Version]
- Support Group Evaluation Form for Survivors of Domestic Violence [Word Version]
- Parenting Support Group Evaluation Form for Survivors of Domestic Violence [Word Version]
- Individual Counseling Evaluation Form for Survivors of Sexual Assault/Abuse [Word Version]
- Support Group Evaluation Form for Survivors of Sexual Assault/Abuse [Word Version]
SECTION 3: STANDARDIZED MEASURES RELATED TO THE THEORY OF CHANGE
Included here are several standardized (and validated) measures of quality of life, social support, hope, life satisfaction, and housing instability. You may find them useful if you have access to resources for a program evaluation and are working with a program evaluator to examine whether any of these outcomes occur for survivors using your services. These are NOT necessary for you to use in your basic program evaluation.
- Quality of Life Questionnaire
- Satisfaction with Life Scale
- Social Support Scale
- General Self Efficacy Scale
- Hope Index
- Measure of Victim Empowerment Related to Safety (MOVERS) Scales, see Full MOVERS Report
- Financial Worries Scale
- Housing Instability Index
- Trauma Informed Practice (TIP) Scales
SECTION 4: TRAIN THE TRAINER (TOT) EVALUATION TOOLS
For those who provide Train the Trainer (TOT) workshops on any topic, these three documents may be useful. The first offers tips and options to consider when you need to evaluate the effectiveness of your TOT workshop. The second and third provide templates for creating pre-workshop and post-workshop surveys.
- Tips on Evaluating “Train the Trainer” Workshops
- TOT Pre-Evaluation Survey Template [Word Version]
- TOT Post-Evaluation Survey Template [Word Version]
The Best Available Research Evidence enables researchers, practitioners, and policymakers to determine whether a program, practice, or policy is actually achieving the outcomes it aims for, and in the way it intends. The more rigorous a study’s research design (e.g., randomized controlled trials, quasi-experimental designs), the more compelling the research evidence. However, the literature also points to two other forms of evidence, experiential (or practice-based) evidence and contextual evidence, which are distinct from research evidence yet overlap with it.
This working protocol was drafted by a group of researchers and practitioners connected to Michigan State University and the Michigan Coalition Against Domestic and Sexual Violence. The protocol recognizes that while there is a growing need for domestic violence service providers to become active in assessing the value and process of research, these programs have an obligation to assess both the risks and benefits of each research study and to ensure that the safety and confidentiality of participants will be maintained.