Meterko M, Restuccia J, Stolzmann K, Mohr D, Glasgow J, Brennan C, Kaboli PJ. Response Rates, Non-Response Bias, and Data Quality: Results from a National Survey of Senior Healthcare Leaders. Poster session presented at: AcademyHealth Annual Research Meeting; 2013 Jun 24; Baltimore, MD.
Research Objective: Survey response rate is widely regarded as a key indicator of data quality. However, a recent meta-analysis of 59 methodological studies of non-response concluded that response rate is not necessarily predictive of non-response bias, calling into question the utility of response rate as a measure of survey quality. In the present study we assessed the relationship between response rate and non-response bias by investigating whether early responders to a survey with an unusually high response rate provided significantly different values on both objective and subjective measures compared with respondents who received multiple reminder messages.

Study Design: We conducted a web-based self-report survey of hospital Chiefs of Medicine (COMs) with up to four follow-ups of non-respondents after the initial invitation to participate. We compared demographic and facility characteristics of the resulting five waves of respondents, as well as their average proportion of survey items completed. We also compared respondent waves on the distribution of answers to three key types of survey questions: factual reports, single-item evaluations, and multi-item scales; the latter two types involved Likert response scales. Respondent waves were compared using chi-square tests for categorical outcomes and analysis of variance (ANOVA) for continuous outcomes.

Population Studied: COMs at all Department of Veterans Affairs (VA) acute care hospitals.

Principal Findings: Of 124 COMs, 118 (95%) responded: 35 (29.7%) to the initial contact, followed by 23 (19.5%), 14 (11.9%), 12 (10.2%), and 34 (28.8%) in response to the four subsequent reminders, respectively. Respondent waves did not differ with regard to demographic or facility characteristics or proportion of missing data. The response distributions on two categorical factual-report questions did differ by wave, but the differences were not systematic. No significant differences were observed by wave on either the single-item or multi-item scale measures of attitudes; "what if" analyses of successive cumulative results by wave indicated that the same conclusions would have been reached if data collection had been halted at any point. However, as expected, the precision and statistical power of the survey results increased steadily as respondents accumulated over the course of the study.

Conclusions: The nearly perfect response rate achieved on this survey made it ideal for studying the relationship between response rate and non-response bias. High response rates are certainly desirable because of their effect on the precision and power of survey results. However, as survey fatigue increases among potential respondents in all fields, absolute thresholds representing "adequate" survey response rates may be unrealistic, and survey results should be considered on their merits rather than being uniformly disqualified for failing to meet a threshold rate.

Implications for Policy, Delivery, or Practice: While response rate has a direct incremental effect on the precision of survey data, its value as an indicator of data quality may be overestimated: a survey may accurately represent the attitudes of the target population even when the response rate falls below levels typically believed desirable. Efforts should always be made to assess the degree of non-response bias in any survey dataset, but response rate alone should not be used to dismiss results as uninformative.
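The wave counts and percentages reported in the Principal Findings can be checked with a short script. Only the wave sizes (35, 23, 14, 12, 34) and the eligible population (124) come from the abstract; the cumulative "what if" response rates are derived here for illustration and are not figures from the original analysis:

```python
# Wave sizes from the abstract: initial contact plus four reminders.
waves = [35, 23, 14, 12, 34]
eligible = 124  # all VA acute-care Chiefs of Medicine surveyed

total = sum(waves)              # 118 respondents
overall_rate = total / eligible # ~0.95, the reported 95% response rate

# Share of respondents arriving in each wave, as reported in the abstract.
shares = [round(100 * w / total, 1) for w in waves]
# shares -> [29.7, 19.5, 11.9, 10.2, 28.8]

# Illustrative "what if" cumulative response rate had data collection
# been halted after each wave (derived, not from the abstract).
cumulative = []
running = 0
for w in waves:
    running += w
    cumulative.append(round(100 * running / eligible, 1))
# e.g., stopping after the initial contact alone would have yielded ~28.2%.

print(total, shares, cumulative)
```

This makes concrete why the fifth wave mattered for precision: the final reminder alone moved the cumulative response rate from roughly 68% to 95%, even though the abstract reports that stopping earlier would have led to the same substantive conclusions.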