Session number: 1083
Abstract title: Two software programs for providing training in Global Assessment of Functioning (GAF) ratings
Author(s):
JW Davison - VA Puget Sound Health Care System, Seattle, WA
RK Blashfield - Auburn University, Auburn, AL
DR Kivlahan - VA Puget Sound Health Care System, Seattle, WA
Objectives: VHA Directive 97-059 mandates use of the Global Assessment of Functioning (GAF)
scale for all veterans receiving mental health services, for purposes such as
measuring treatment outcomes and determining benefit eligibility. Ensuring
acceptable reliability of GAF ratings requires adequate training for VA mental
health clinicians, and toward this end the VA Office of Education recently
conducted nationwide GAF training via satellite broadcast. GAF rating software
programs used as training tools may also improve GAF score reliability.
This study compared the performance of two computerized GAF rating instruments:
the GAF Report developed by Michael First and Multi-Health Systems, and the
Computerized-Modified GAF (CM-GAF), a Visual Basic program designed by the
principal investigator.
Methods: Over two rating sessions spaced two weeks apart, eight clinical psychology
graduate students (1) completed brief GAF orientation, (2) rated 10 vignettes
from DSM casebooks as part of a pretest measure, and (3) rated 34 vignettes
from the Health Sickness Rating Scale (a forerunner of the GAF) using the two
software programs. Intraclass correlation coefficients (ICCs) indexed the
reliability of clinicians' pretest ratings across sessions and the reliability
of ratings made with each program. Participants also provided qualitative and
quantitative end-user evaluations.
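(Analysis note: the study's actual analysis code is not reproduced here. The following is a minimal Python sketch of a Shrout-Fleiss ICC(2,1) computation of the kind used to index inter-rater reliability for a vignettes-by-raters matrix; the function name icc_2_1 and the simulated ratings are illustrative assumptions, not study data.)

    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single rater,
        for an (n_targets x n_raters) matrix of scores."""
        x = np.asarray(ratings, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)   # per-vignette means
        col_means = x.mean(axis=0)   # per-rater means
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between-vignette MS
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between-rater MS
        sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                          # residual MS
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Illustrative only: 34 vignettes scored on a 1-100 GAF-style scale by 8 raters.
    rng = np.random.default_rng(0)
    true_severity = rng.uniform(20, 90, size=(34, 1))
    ratings = true_severity + rng.normal(0, 5, size=(34, 8))
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")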
Results: Pretest rating reliability improved between sessions from .77 to .86, and
variability in ratings decreased for all clinicians except one. The two
programs yielded equally reliable ratings (both ICCs = .84). Rating time was
significantly shorter with the CM-GAF than with the GAF Report (p < .05). No
significant differences were found between users' quantitative evaluations of
the two programs, but all clinicians reported that the software improved their
proficiency in using the GAF.
Conclusions: GAF-rating software programs may be useful tools for training individuals to
make GAF ratings, and users' qualitative comments supported this conclusion.
Both programs were evaluated favorably and performed equally well.
Impact statement: Reliable GAF rating is a substantive, system-wide issue for VA mental health
service provision. The VA's effort to improve GAF rating reliability, already
advanced through a nationwide satellite broadcast, may be augmented by GAF-rating
software programs that give clinicians more structured practice in assigning
ratings.