Attention by design: Using attention checks to detect inattentive respondents and improve data quality
Affiliation: 1. Department of Information & Operations Management, Mays Business School, Texas A&M University, 320 Wehner Building, 4217 TAMU, College Station, TX 77843-4217, United States; 2. Calvin E. and Pamela T. Zimmerman University Endowed Fellow, Department of Marketing, Smeal College of Business, The Pennsylvania State University, 444 Business Building, University Park, PA 16802, United States
Abstract: This paper examines attention checks and manipulation validations for detecting inattentive respondents in primary empirical data collection. These prima facie attention checks range from simple techniques, such as reverse scaling (first proposed a century ago), to more recent and involved methods, such as evaluating response patterns and timing responses via online data-capture tools. Validation approaches likewise range from easily implemented mechanisms, such as automatic detection through directed queries, to highly intensive investigation of responses by the researcher; the latter risks introducing inadvertent researcher bias, since the researcher's judgment may color the interpretation of the data. The empirical findings of the present work reveal that construct and scale validations yield consistently significant improvements in fit statistics, a result of particular use to researchers whose empirical models rest predominantly on scales and constructs. However, in the rudimentary experimental models employed in the analysis, attention checks generally do not show a consistent, systematic improvement in the significance of test statistics for experimental manipulations. This latter result indicates that attention checks may, by their very nature, trigger an inherent trade-off between loss of sample subjects (lowered power and increased Type II error) and capitalizing on chance alone (the possibility that previously significant results were in fact the result of Type I error). The analysis also shows that attrition rates due to attention checks, upwards of 70% in some observed samples, are far larger than typically assumed. Such loss rates raise the specter that studies that do not validate attention may inadvertently increase their Type I error rate.
The manuscript provides general guidelines for various attention checks, discusses the psychological nuances of the methods, and highlights the delicate balance among incentive alignment, monetary compensation, and the subsequently triggered mood of respondents.
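To make the classes of checks described above concrete, the following is a minimal illustrative sketch (not taken from the paper) of how a researcher might flag inattentive respondents. All column names, scale ranges, and thresholds are hypothetical assumptions; it covers a directed query, a timed-response floor, a reverse-scaled item pair, and a response-pattern (straight-lining) screen.

```python
# Hypothetical sketch: flagging inattentive respondents with four
# common attention checks. Field names and thresholds are assumptions,
# not values prescribed by the manuscript.

def flag_inattentive(respondent, min_seconds=60, max_identical_run=10):
    """Return the list of attention checks this respondent failed."""
    flags = []

    # Directed query: an instructed-response item such as
    # "Select 'Strongly agree' (5) for this question."
    if respondent["instructed_item"] != respondent["instructed_expected"]:
        flags.append("failed_directed_query")

    # Timed response: completion faster than a plausible minimum.
    if respondent["completion_seconds"] < min_seconds:
        flags.append("too_fast")

    # Reverse scaling: on a 1-7 scale, an item and its reverse-worded
    # counterpart should sum near 8; a large deviation suggests inattention.
    if abs(respondent["item_score"] + respondent["reverse_item_score"] - 8) > 3:
        flags.append("reverse_item_inconsistent")

    # Response pattern: a long run of identical answers (straight-lining).
    run, longest = 1, 1
    answers = respondent["scale_answers"]
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    if longest >= max_identical_run:
        flags.append("straight_lining")

    return flags
```

Note that each flagged respondent a researcher drops shrinks the usable sample, which is exactly the power/Type II trade-off the abstract describes; thresholds such as `min_seconds` should therefore be justified a priori rather than tuned after seeing the results.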
Keywords: Data validation; Attention checks; Manipulation checks; Response validation
This article is indexed in ScienceDirect and other databases.