IIR 09-368
Comparison of Fidelity Assessment Methods
Angela L. Rollins, PhD
Richard L. Roudebush VA Medical Center, Indianapolis, IN
Funding Period: November 2010 - October 2013
BACKGROUND/RATIONALE:
National policy has dramatically increased the emphasis on implementing evidence-based mental health services to meet the needs of people with severe mental illness, and the VHA has made great strides in providing effective, community-based services. One of the cornerstones of the VHA approach is Mental Health Intensive Case Management (MHICM), a model based on one of the most well-defined and empirically supported approaches: assertive community treatment. Most recently, VHA policy shifts have resulted in a proposed set of uniform mental health services to ensure access to a standard set of high-quality mental health services, such as MHICM, across the entire VHA. However, successful implementation of evidence-based practices on a broad scale requires psychometrically valid, yet practical, ways to assess and monitor degree of implementation (i.e., fidelity). Currently, the only rigorous "gold-standard" method to monitor implementation is an on-site fidelity visit, which is a very time-intensive and expensive approach for both the assessor and the program.
OBJECTIVE(S):
The primary objective of this study was to examine the effectiveness of innovative and potentially cost-effective methods to ensure the quality of mental health services for disabled veterans with mental illness. This study examined the reliability, concurrent validity, and incremental predictive validity of expert-rated self-report, phone, and on-site fidelity assessments for assertive community treatment. The study also explored the relative costs of each approach (cost identification).
METHODS:
We recruited 32 of VA's 111 MHICM teams to participate in our study. Volunteer sites participated in a phone-based and an on-site fidelity assessment with experienced fidelity assessors using the Dartmouth Assertive Community Treatment Scale (DACTS). The DACTS is a 28-item scale, rated from 1 to 5, where 1 indicates low adherence to the model and 5 indicates full adherence. The order of phone and on-site assessments was counter-balanced, with blinded assessors, to reduce potential bias. Sites reported information about their team's functioning prior to the initial phone or on-site assessment; a separate pair of experienced DACTS raters subsequently used this information as the basis for DACTS ratings, resulting in three assessment types: on-site, phone, and expert-rated self-report. For site recruitment, we used a stratified random sampling technique based on the type of VA facility served and the previous year's self-reported fidelity using global self-scoring. We examined the level of agreement between fidelity approaches with intraclass correlations (see the sketch following the findings summary below). To determine the incremental predictive validity of each fidelity method for hospital reduction, we used zero-inflated binomial regression to compare reductions in hospitalization for veterans from the year prior to MHICM intake to the most recent year preceding the fidelity assessment (a sketch of this style of analysis follows the IMPACT section). We compared costs across the three assessment methods using personnel and travel costs. We also included a formative evaluation to inform future dissemination of fidelity assessment methods in the VHA and elsewhere.
FINDINGS/RESULTS:
Teams showed modest fidelity to the assertive community treatment model under the on-site fidelity assessment method. DACTS score means were 3.38 (SD=.41) for the Human Resources subscale, 3.76 (SD=.38) for the Organizational Boundaries subscale, 2.66 (SD=.33) for the Services subscale, and 3.22 (SD=.28) for the total DACTS score.
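As a concrete illustration of the agreement analyses described under METHODS, here is a minimal sketch of computing an intraclass correlation between two assessment methods in Python. The study's actual analysis code is not published; the table layout, column names, and data below are hypothetical, and the `pingouin` package's `intraclass_corr` function is just one way to compute ICCs.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one DACTS total score per team per method.
scores = pd.DataFrame({
    "team":   ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E"],
    "method": ["onsite", "phone"] * 5,
    "dacts":  [3.4, 3.3, 2.9, 3.0, 3.6, 3.5, 3.1, 3.2, 3.3, 3.4],
})

# Compute all ICC variants; ICC(2,1) -- two-way random effects, absolute
# agreement, single rating -- is a common choice for method-agreement
# questions like this one.
icc = pg.intraclass_corr(data=scores, targets="team",
                         raters="method", ratings="dacts")
print(icc[icc["Type"] == "ICC2"])
```

The same call would be repeated for each subscale and each pair of methods (phone vs. on-site, self-report vs. on-site, phone vs. self-report) to fill out the agreement results reported below.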
Inter-rater reliability
Analyses indicated good inter-rater agreement for both phone and expert-rated self-report assessments. For the Human Resources subscale, Organizational Boundaries subscale, Services subscale, and total DACTS score, respectively, intraclass correlations for inter-rater agreement were .96, .81, .78, and .92 between phone raters, and .92, .87, .84, and .91 between expert-rated self-report raters.
Concurrent validity
Agreement among the phone, expert-rated self-report, and on-site methods was high for the total DACTS score and most subscales. The Organizational Boundaries subscale for expert-rated self-report was the only subscale that did not reach our a priori cut-off of .70 for minimum agreement. Intraclass correlations indicating agreement between phone and on-site methods were .92, .85, .84, and .88 for the Human Resources, Organizational Boundaries, Services subscale, and total DACTS score, respectively. Intraclass correlations indicating agreement between expert-rated self-report and on-site methods were .92, .67, .79, and .84, respectively. Intraclass correlations also indicated high agreement between the phone and expert-rated self-report methods: .95, .86, .76, and .91, respectively.
Predictive validity
We found no differences in incremental predictive validity among methods; all programs in the study showed significant reductions in hospital days for consumers.
Cost identification
Costs for on-site and phone assessments were analyzed using the method administered first at each site, to reduce any bias to personnel effort from participating in two successive assessments. To estimate costs that would translate to real-world use of phone or expert-rated self-report assessment with a single rater, we averaged the time devoted by the two assessors and used that effort in the assessor cost calculation. On-site assessments administered first (n=19) averaged $2,579, including an average of $1,663 in personnel costs and $916 in travel costs. Phone assessments administered first (n=13) averaged $571, and expert-rated self-report assessments (n=32) averaged $553.
Formative evaluation results
Despite the favorable results for the remote fidelity assessment methods, most respondents (75%) in follow-up interviews expressed a preference for on-site over phone assessment, citing assessor traits such as knowledge of the clinical model; greater perceived accuracy for on-site assessments (e.g., it is easier to communicate about the program in person, and the assessor can "see" the program in action); the personal contact provided; and informal feedback throughout the visit, particularly from an "outsider." Negative feedback regarding on-site visits included the amount of time required to complete the assessment. Positive comments about phone assessments commonly cited the minimal time spent away from clinical duties.
IMPACT:
Phone and expert-rated self-report fidelity assessments compared favorably to on-site methods in terms of reliability, concurrent validity, and cost. Used appropriately, these alternative protocols hold promise for monitoring program fidelity at large scale with limited resources. This project addresses a critical need in the VA system to effectively and efficiently monitor mental health program adherence to model standards for quality.
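The predictive validity analysis referenced under METHODS used zero-inflated regression on hospitalization counts. The sketch below shows that style of model in Python using a zero-inflated negative binomial from `statsmodels`, one plausible realization of the zero-inflated count regression named in the abstract; the variable names and simulated data are hypothetical and stand in for the study's actual pre/post hospital-day measures.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(42)
n = 200

# Hypothetical predictors: hospital days in the year before MHICM intake
# and the team's total DACTS fidelity score.
pre_days = rng.poisson(20, n)
fidelity = rng.normal(3.2, 0.3, n)

# Hypothetical outcome: hospital days in the most recent year, with the
# excess zeros (many veterans avoid rehospitalization entirely) that
# motivate a zero-inflated model.
post_days = np.where(rng.random(n) < 0.5, 0, rng.poisson(8, n))

# Count component predicts post-intake days from prior use and fidelity;
# the inflation component models the probability of a structural zero.
X = sm.add_constant(np.column_stack([pre_days, fidelity]))
model = ZeroInflatedNegativeBinomialP(
    post_days, X, exog_infl=sm.add_constant(fidelity)
)
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```

Comparing such models fit with fidelity scores from each assessment method (phone, self-report, on-site) is one way to test whether any method adds incremental predictive value over the others, as examined in the study.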
External Links for this Project
NIH Reporter Grant Number: I01HX000208-01A1
Link: https://reporter.nih.gov/project-details/7874099
DRA: Health Systems
DRE: Research Infrastructure
Keywords: none
MeSH Terms: none