Methods: Data were collected through four focus groups with a total of 24 undergraduate and graduate students enrolled in programs such as social work, psychology, education, and human development at a large public university in the Southern United States. Participants reflected on their exposure to and engagement with AI within academic, professional, and personal contexts. Thematic analysis revealed several intersecting themes related to both institutional and socioeconomic influences on students’ AI literacy.
Results: Students highlighted significant institutional barriers, including punitive policies, inconsistent faculty guidance, and stigmatization around AI, which created uncertainty about ethically appropriate use. Socioeconomic and systemic issues, such as rural residence, limited financial resources, and generational divides, further compounded these difficulties, affecting students' familiarity, comfort, and confidence with AI. Graduate students in particular noted institutional pressures to adopt AI without corresponding ethical discussion or support, amplifying discomfort around confidentiality, bias, exploitative labor practices, and environmental sustainability. In contrast, identified facilitators included transparent institutional guidance, supportive faculty mentorship, interdisciplinary dialogue, peer collaboration, and structured experiential learning opportunities. Students explicitly requested education addressing equity concerns, digital divides, and institutional transparency to strengthen their ethical confidence in using AI.
Conclusions and Implications: This study demonstrates the urgent need to address institutional, socioeconomic, and systemic barriers to AI literacy in the social sciences. Participants' concerns about punitive policies, unclear guidance, and unequal access reveal how current approaches fall short, particularly for students in ethically driven fields such as social work and psychology. Recommendations for social work and related fields include 1) establishing clear, supportive institutional policies that replace punitive measures with informed ethical guidance (Garrett et al., 2020); 2) prioritizing AI educational models that incorporate exploration and discussion of socioeconomic and generational digital divides; 3) ensuring equitable technology access; 4) structuring peer mentorship and supportive faculty guidance (Celik, 2023; Khan & Paliwal, 2023); and 5) designing interdisciplinary curricula that critically examine AI's broader societal, environmental, and ethical impacts (Long & Magerko, 2020; Laupichler et al., 2023). Future research should evaluate educational interventions designed to overcome these barriers, ensuring ethically responsible and socially equitable AI literacy practices in social science education.