Abstract: When Robots and Fraudulent Participants Enter Your Qualitative Research Study Recruitment: How to Screen and Mitigate Issues (Society for Social Work and Research 30th Annual Conference Anniversary)

When Robots and Fraudulent Participants Enter Your Qualitative Research Study Recruitment: How to Screen and Mitigate Issues

Schedule:
Friday, January 16, 2026
Congress, ML 4 (Marriott Marquis Washington DC)
* noted as presenting author
Sean Erreger, MSW, Program Evaluator / DSW Student, Kutztown University of Pennsylvania, Saratoga Springs, New York, United States
Background and Purpose:

Little has been written about the impact of robot and fraudulent responses on qualitative research, particularly in a social work context. During the recruitment phase, robot responses to qualitative studies are becoming increasingly problematic; they can skew results and call the validity of a study into question. After recruitment, fraudulent responses, from individuals posing as eligible subjects in an attempt to obtain compensation, created further complications. This presentation explores the researcher’s experiences attempting to detect and mitigate both robot and fraudulent responses within qualitative research. The presenter hopes to reduce the chances that others experience the significant delays these issues caused.

Methods:

The research was descriptive, documenting how recruitment was affected by robot responses to a pre-screening survey. Additionally, two participants got past the screening process and made fraudulent attempts to answer interview questions. Multiple steps were taken to mitigate these issues, including moving platforms twice in search of better security features such as hidden questions and verification, and editing the prescreening questions so that participants had to demonstrate expertise in the field. These steps also included the use of “paradata,” such as geolocation and IP addresses, which helped discern between robot and fraudulent responses. In addition, a more targeted email strategy was employed in place of social media recruitment.
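
As a concrete illustration of this kind of screening, a minimal sketch (not the presenter’s actual tooling; the field names, thresholds, and flagging rules below are hypothetical assumptions) might check a prescreening export for a filled-in hidden “honeypot” question, duplicate IP addresses, and implausibly fast completion times:

```python
# Hypothetical sketch of screening prescreening survey responses using
# paradata-style signals: a hidden question, duplicate IP addresses,
# and completion time. Field names and thresholds are illustrative only.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Response:
    respondent_id: str
    ip_address: str            # paradata captured by the survey platform
    honeypot_answer: str       # hidden question; human respondents should leave it blank
    seconds_to_complete: int   # time spent on the survey


def flag_suspicious(responses, min_seconds=60):
    """Return a dict mapping respondent_id to reasons the response looks automated."""
    ip_counts = Counter(r.ip_address for r in responses)
    flags = {}
    for r in responses:
        reasons = []
        if r.honeypot_answer.strip():
            reasons.append("answered hidden question")
        if ip_counts[r.ip_address] > 1:
            reasons.append("duplicate IP address")
        if r.seconds_to_complete < min_seconds:
            reasons.append("completed implausibly fast")
        if reasons:
            flags[r.respondent_id] = reasons
    return flags


if __name__ == "__main__":
    sample = [
        Response("A1", "203.0.113.5", "", 240),
        Response("A2", "203.0.113.9", "lorem ipsum", 18),
        Response("A3", "203.0.113.9", "", 25),
    ]
    for rid, reasons in flag_suspicious(sample).items():
        print(rid, "->", ", ".join(reasons))
```

In practice, flags like these would only prioritize responses for human review rather than automatically excluding them, since signals such as a shared IP address can also occur for legitimate participants.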

Results:

The researcher used various strategies to reduce robot responses from 165 in the first 48 hours of recruitment for qualitative interviews to one robot response per week in the prescreening about two months later. A survey platform adopted within the first 48 hours promised additional security, but this was not effective and yielded approximately another 100 responses over the next 48 hours. After a password-request intervention, this was reduced to about 5-12 emails per day; attempting to screen these out via email became untenable. A description of how the researcher could tell that answers to interview questions were fraudulent will be presented: there were long pauses between responses, a lack of general knowledge about the field, and answers that were frequently the result of searching the internet.

After this several-week pause to increase countermeasures, a combination of strategies reduced robot or fraudulent responses to 1 to 3 per week. Countermeasures included a change of platform to increase security, additional prescreening questions that reflected insider knowledge, the use of hidden questions, further communication to determine whether a participant was real, the use of “paradata,” and discouraging the use of social media.

Conclusions and Implications:

As social work researchers, we run the risk of receiving robot or fraudulent responses, and this researcher's experiences are becoming increasingly common. Developing countermeasures early is important, as responding to these issues caused significant delays in the presenter's research. This presentation provides opportunities to learn more about strategies to reduce robot and fraudulent responses in qualitative research. These countermeasures were an effective means of increasing research integrity.