Ensuring Evidence-Based Results through Program Fidelity
Daniel Lee, PhD
Indiana University of Pennsylvania
Evidence-based programming has been stressed in many disciplines, but it is especially critical when public funds are used to develop, implement, and evaluate therapeutic programs administered within the criminal justice system. Rehabilitation and therapeutic programs administered by criminal justice agencies focus on outcomes that bring measurable improvements to offenders' lives, limit recidivism, and increase public safety. Evidence of meeting these goals is encouraging and can be used to support the continued administration and replication of existing programs, but administrators of criminal justice agencies also need to ensure that evidence-based programs are implemented accurately by attending to the issue of program fidelity.
Program fidelity is strict adherence to program design. The expectation is that by mirroring the design and implementation of successful programs, their success can be replicated; not all results, however, can be easily generalized to different settings or clientele. These issues are discussed in a recent series of essays published in Criminology & Public Policy, and those essays are reviewed here. The essays focus attention on the success of a cognitive behavioral therapy program administered to female inmates in Minnesota and on how that success declined when administrators altered important programmatic characteristics.
Grant Duwe and Valerie Clark (2015) of the Minnesota Department of Corrections presented an evaluation of the Moving On program as administered to female prisoners in the Minnesota Correctional Facility-Shakopee. Moving On is described as a gender-responsive cognitive behavioral therapy (CBT) program. For the first ten years the program was available, it was administered quarterly and on a voluntary basis, in small classes, to prisoners in the last half of their sentences. In more recent years, the program's administration was altered substantively: the program was offered to prisoners at intake, its length was trimmed from 12 weeks to 3 weeks, the total number of hours of participation was cut, and enrollments grew from small groups to large groups.
The changes to the structure and administration of the Moving On program allowed Duwe and Clark (2015) to analyze the effect of program fidelity on the program's success using multiple measures of recidivism (i.e., rearrest, reconviction, reincarceration for a new arrest, and reincarceration for technical violations). In their analysis, offenders who completed the program during the first ten years of administration were considered participants in the "high fidelity" period, whereas participants in more recent years were considered to have participated during the "low fidelity" period. By comparing the recidivism measures across the two program groups and against the recidivism of matched prisoners who did not participate in Moving On, Duwe and Clark (2015) determined that participants in the early, high-fidelity administration of the program showed reductions in recidivism. Their analysis directly comparing the later, low-fidelity participants to nonparticipants, however, suggested no differences in recidivism. Duwe and Clark (2015) used this evidence to conclude that the Moving On program was effective, but that its effect was limited to the period when it was administered in accordance with the program's design. More succinctly, they found that how the program was administered mattered. These findings, they suggested, provide evidence that CBT programs can be effective but that program integrity and fidelity matter; practitioners can take a good program and render it ineffective by altering its structure and administration.
Emily Salisbury (2015) provided a response to the Duwe and Clark (2015) evaluation, stressing the importance of identifying the context within which programs are administered and evaluated. While she lauded Duwe and Clark (2015) for stressing the importance of program fidelity, she argued for greater elaboration on what program fidelity and integrity might mean, particularly within programs considered gender-responsive. Duwe and Clark (2015) were able to tie programmatic differences to specific time periods, but their identification of fidelity was limited to a subjective assessment of "high" and "low." Salisbury suggested that evaluators measure the integrity of program administration with standardized assessment instruments such as the Gender Responsive Policy and Practice Assessment (GRPPA) protocol, the Gender Informed Practices Assessment (GIPA), the Gender-Responsive Program Assessment, or the Gender Responsive Correctional Program Assessment Inventory. These tools are designed to identify the theoretical and practical components of specific programs and whether those components are being administered faithfully. Using these tools to assess program integrity can ensure that program administrators are doing what the design requires and that program evaluators are mindful of which components might be driving (or limiting) the success of programs determined to be evidence-based.
In a separate response essay, J. Mitchell and Holly Miller (2015) elaborated on researchers' ability to identify the fidelity of a program. In an evaluation environment driven by the need to produce evidence of success through numerical outcomes, evaluators often neglect the importance of a process evaluation. Process evaluations examine the steps that administrators take to offer successful programs; by documenting the steps and strategies used to implement a program, they allow successful programs to be replicated. Miller and Miller (2015) referenced the Justice Program Fidelity Scale as a customizable tool that can be used to measure program fidelity systematically. Miller and Miller (2015) stated that process evaluations must precede outcome evaluations to provide the proper temporal sequence for evidence; that is, a process evaluation is a necessary step in confirming that a program has been offered as designed. Otherwise, the outcome evidence cannot be accurately attributed to the program. In their view, process evaluations are too often completed in conjunction with an outcome evaluation or after the fact. Evaluators need to be mindful that thorough and appropriate process evaluations are as necessary to the evidence-based philosophy as the more common outcome evaluations.
Increasingly, evidence-based programs are being identified and replicated. This is good for criminal justice administrators, offenders, and the general public. The importance of these programs in improving the lives of offenders, enhancing public safety, and managing the efficient use of scarce funds cannot be overstated. The significance of Duwe and Clark's (2015) evaluation of Moving On in Minnesota, and of the responses to their report, is that collectively they have shifted the discussion from one limited to identifying what works in criminal justice toward one that includes how programs worked in certain situations and whether they will work in others.
References
Duwe, G. & Clark, V. (2015). Importance of program integrity: Outcome evaluation of a gender-responsive, cognitive behavioral program for female offenders. Criminology & Public Policy, 14:301-328.
Miller, J.M. & Miller, H. (2015). Rethinking program fidelity for criminal justice. Criminology & Public Policy, 14:339-349.
Salisbury, E. (2015). Program integrity and the principles of gender-responsive interventions: Assessing the context for sustainable change. Criminology & Public Policy, 14:329-338.