While the thought of “evaluation” can be daunting, if not downright intimidating, for many domestic violence (DV) programs, there are many good reasons to evaluate the work we do. The most important reason, of course, is to understand the impact of programs and practices on the lives of survivors and their children, so that we can build upon the efforts survivors say are helpful to them and stop putting time and resources into efforts that are not helpful or relevant. Evaluation is also important because it provides “hard evidence” to present to funders, policymakers, and allied organizations, encouraging them to continue and increase the resources available to support effective programs and approaches.
The materials and tools highlighted below (with more to be added over the coming year) were selected to help DV advocates and allied organizations understand the different types of evidence that can inform our work and to provide practical considerations and strategies for approaching evaluation in domestic and sexual violence organizations. These resources also explore the unique challenges of evaluating, and making the case for, intervention and prevention initiatives.
Evaluation Issue Briefs
Knowledge is power. The more DV service providers and advocates know about designing and conducting evaluation efforts, the better those efforts will be. Well-structured evaluations of DV services can provide the valuable information we need to continually improve our programs. The following Issue Briefs, prepared by the NRCDV, are short discussions of related conceptual and practical topics that can help DV advocates more fully understand the benefits of evaluating their services and identify important considerations before engaging in evaluation.
- Outcome Evaluation for DV Programs #1 – Why Should We Want to Evaluate Our Work?
- Outcome Evaluation for DV Programs #2 – What Is the Difference Between Research and Evaluation and Between Process and Outcome Evaluation?
- Outcome Evaluation for DV Programs #3 – How Do We Attend to Safety, Confidentiality and Diversity?
- Outcome Evaluation for DV Programs #4 – Outcomes Evaluation – What Effects Are We Having?
- Outcome Evaluation for DV Programs #5 – How Do We Approach Gathering, Maintaining and Analyzing Data?
- Outcome Evaluation for DV Programs #6 – How Can We Make Our Evaluation Findings Work for Us?
Outcome Evaluation Guides
This Guide was developed in 2000 by Cris Sullivan, Ph.D., and Suzanne Coats, MSW, for the Michigan Coalition Against Domestic and Sexual Violence as a resource to help service providers in the state explore effective outcome evaluation strategies. Some of the content is adapted from the 1998 outcome evaluation guide developed for the Pennsylvania Coalition Against Domestic Violence (see below).
This 1998 Guide was developed by Cris Sullivan, Ph.D., for the Pennsylvania Coalition Against Domestic Violence as a resource to help domestic violence service providers examine the effectiveness of their programs in a straightforward manner. Although many programs feel external pressure from funding sources to conduct outcome evaluation, this guide is useful not just for demonstrating our value to external sources but also for enhancing program effectiveness.
This 2009 manual, developed by the National Center for Injury Prevention and Control at the Centers for Disease Control and Prevention (CDC), is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies. It is intended for state and local leaders and staff members of organizations, coalitions, government agencies, and/or partnerships working to prevent sexual violence, intimate partner violence, youth violence, suicide, and/or child maltreatment.
This site contains practical and concrete information and tools to help you evaluate (1) a variety of services commonly provided to and with survivors, and (2) Train the Trainer workshops [section 4].
A separate resource is available to help you explain the Theory of Change guiding your services.
The first three sections focus on services and programs for survivors of domestic violence. The first section contains handouts that you can copy and distribute to your staff – these handouts were designed to help you and your colleagues decide for yourselves how, when, and how often to evaluate your services.
The second section provides some examples of actual surveys you might use or modify to evaluate different types of services.
And the third section includes some standardized (validated) measures of quality of life, social support, hope, life satisfaction, and housing instability. You do not need to use these for program evaluation, but you may find them useful for various projects.
SECTION 1: TOOLS FOR DESIGNING THE EVALUATION PROCESS
- Handout #1 – Creating a Plan with Staff for Collecting Outcome Evaluation Data
- Handout #2 – Inviting Clients to Complete Program Evaluation Forms: Directions for Staff
- Handout #3 – Menu of Potential Outcomes To Choose From
SECTION 2: SAMPLE SURVEYS TO USE OR MODIFY
- Domestic violence shelter
- Domestic violence counseling
- Sexual assault counseling
- Domestic violence support group
- Domestic violence parenting support group
- Sexual assault support group
For Word versions of these surveys, please contact the NRCDV.
SECTION 3: STANDARDIZED MEASURES RELATED TO THE THEORY OF CHANGE
These standardized (validated) measures are NOT required for evaluating your programs. However, a number of advocates have asked to see surveys that measure such things as quality of life, social support, hope, life satisfaction, trauma-informed practice, and housing instability.
- Quality of life
- Social support
- Life satisfaction
- Trauma-informed practice
- Trauma-informed practice (Spanish)
- Housing instability
Those who provide “training of the trainer” (TOT) workshops on any topic may find these three documents useful. The first offers tips and examples to consider when evaluating the effectiveness of your Train the Trainer workshop. The second and third provide templates for creating pre-workshop and post-workshop surveys.
- Tips on Evaluating Train the Trainer Workshops
- TOT Pre-Evaluation Survey Template
- TOT Post-Evaluation Survey Template
The Best Available Research Evidence enables researchers, practitioners, and policymakers to determine whether a program, practice, or policy is actually achieving its intended outcomes, and doing so in the way it intends. The more rigorous a study’s research design (e.g., randomized controlled trials, quasi-experimental designs), the more compelling the research evidence. However, the literature also identifies two other forms of evidence, experiential evidence (or practice-based evidence) and contextual evidence, which are distinct from research evidence yet overlap with it.
This working protocol was drafted by a group of researchers and practitioners connected to Michigan State University and the Michigan Coalition Against Domestic and Sexual Violence. The protocol recognizes that while there is a growing need for domestic violence service providers to become active in assessing the value and process of research, these programs have an obligation to assess both the risks and benefits of each research study and to ensure that the safety and confidentiality of participants will be maintained.