Schedule:
Friday, January 16, 2026: 8:00 AM-9:30 AM
Independence BR H, ML 4 (Marriott Marquis Washington DC)
Cluster: Research Design and Measurement
Organizer:
Nari Yoo, MA, New York University
Speakers/Presenters:
Nari Yoo, MA, University of Michigan-Ann Arbor,
Cheng Ren, PhD, State University of New York at Albany and
Gaurav Sinha, PhD, University of Georgia
Large Language Models (LLMs) are a type of artificial intelligence (AI) model trained on vast amounts of text data to understand and generate human language. Popular LLMs currently include ChatGPT, Gemini, Copilot, and Claude, which are used for a variety of purposes, including writing assistance, code assistance, and translation and other multilingual tasks. The integration of LLMs into social work research offers considerable potential. LLMs have been used to conduct systematic reviews by automating literature screening and summarization, reducing the time required for evidence synthesis. In clinical settings, LLMs assist with clinical note summarization and classification, reducing the time clinicians spend writing notes (Lee et al., 2024). Social policy researchers have also used large language models for multilingual policy topic classification (Sebok et al., 2024). In client engagement, LLMs can enhance communication tools, provided they are integrated responsibly and ethically.

Despite these benefits, LLMs are not yet aligned with the Standards for Technology in Social Work Practice. Concerns regarding data privacy, accessibility, and ethics necessitate a shift toward open-source, locally deployable models. This workshop provides participants with practical knowledge and skills to deploy and use open-source LLMs on their personal computers, enabling research advances while maintaining data integrity and protecting participant privacy. The workshop will offer Python tutorials for social work researchers and practitioners interested in deploying open-source LLMs using tools such as Ollama and LM Studio on standard hardware. Practical considerations, including data privacy, algorithmic bias, and responsible/explainable AI, will be discussed throughout.
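To make the local workflow concrete, the following is a minimal sketch of how a locally deployed open-source model might be used to screen a study abstract for a systematic review. It assumes Ollama is installed and serving on its default local port and that a model such as llama3.1 has already been pulled; the abstract, inclusion criteria, and prompt wording are illustrative only, not workshop materials.

```python
# Minimal sketch: screening one abstract with a locally running open-source LLM.
# Assumes Ollama is installed, running locally, and a model (e.g., llama3.1) is pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

abstract = (
    "This randomized trial evaluated a school-based mentoring program "
    "for adolescents experiencing housing instability."
)

# Hypothetical inclusion criteria for an illustrative systematic review.
prompt = (
    "You are screening abstracts for a systematic review on school-based "
    "interventions for youth. Answer INCLUDE or EXCLUDE, then give one sentence of rationale.\n\n"
    f"Abstract: {abstract}"
)

response = requests.post(
    OLLAMA_URL,
    json={"model": "llama3.1", "prompt": prompt, "stream": False},
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # The model's screening decision and rationale
```

Because the model runs entirely on the researcher's own machine, the abstract text never leaves the local environment.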
Part 1: Foundations of LLMs
In this introductory segment, participants will develop a foundational understanding of LLMs, covering the structure and training processes of large language models, key parameters that influence model behavior (temperature, top-p sampling, max tokens, context window, and fine-tuning), and how these parameters can be adjusted to optimize model performance for specific research needs.
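As a preview of this segment, the sketch below shows how these generation parameters might be passed to a locally served model through Ollama's options field. The parameter values are illustrative examples, not recommendations, and the prompt is hypothetical.

```python
# Illustrative sketch: adjusting generation parameters on a local Ollama model.
# Parameter names follow Ollama's options field; the values are examples only.
import requests

options = {
    "temperature": 0.2,   # lower values -> more deterministic, focused output
    "top_p": 0.9,         # nucleus (top-p) sampling cutoff
    "num_predict": 200,   # maximum number of tokens to generate
    "num_ctx": 4096,      # context window size (tokens the model can attend to)
}

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Summarize, in two sentences, why local LLM deployment matters for confidentiality.",
        "options": options,
        "stream": False,
    },
    timeout=120,
)
print(response.json()["response"])
```

Re-running the same prompt with a higher temperature, or a smaller num_predict, makes the effect of each setting easy to observe.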
Part 2: Technical Implementation and Deployment
This session will guide attendees through installing and configuring open-source LLM tools such as Ollama and LM Studio, and through setting up and running models such as Mistral 7B and Llama 3.1 8B on standard personal computers, with step-by-step demonstrations of the local deployment process. The session will also cover hardware requirements (CPU/GPU specifications, minimum RAM, and storage needs).
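For orientation, the sketch below shows one way to verify a local Ollama installation from Python: pulling a model with the command-line client and then asking the local server which models it has available. The model tag (mistral) and the use of subprocess are assumptions for illustration; LM Studio exposes its own, different interface.

```python
# Illustrative sketch: verifying a local Ollama setup from Python.
# Assumes the Ollama CLI is installed and the server is running on the default port.
import subprocess
import requests

# Download a model (e.g., Mistral 7B) using the Ollama command-line client.
subprocess.run(["ollama", "pull", "mistral"], check=True)

# Ask the local server which models are installed.
tags = requests.get("http://localhost:11434/api/tags", timeout=30).json()
for model in tags.get("models", []):
    print(model["name"])  # e.g., mistral:latest
```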
Part 3: Practical Considerations
The final segment will focus on responsible implementation of LLMs in social work research, addressing practical considerations including data privacy, algorithmic bias, and responsible/explainable AI, while exploring best practices for aligning LLM applications with social work ethics and ensuring that data-sharing practices remain secure and transparent.
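To illustrate the kind of privacy-minded workflow this segment addresses, the sketch below redacts obvious identifiers (email addresses and phone numbers) from a note before it is passed to a locally running model. The regular expressions are simplistic placeholders; real de-identification requires far more rigorous procedures and institutional review.

```python
# Illustrative sketch: redacting obvious identifiers before local LLM processing.
# The patterns below are simplistic examples, not a complete de-identification method.
import re
import requests

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)            # email addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # US-style phone numbers
    return text

note = "Client can be reached at 555-867-5309 or jane.doe@example.org to schedule follow-up."
clean_note = redact(note)

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": f"Summarize this case note in one sentence:\n{clean_note}",
        "stream": False,
    },
    timeout=120,
)
print(response.json()["response"])
```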
Deploying LLMs locally allows social work researchers to customize models to their specific needs while enhancing data privacy and security. Given the importance of confidentiality in social work, running LLMs on local systems helps protect participants' and clients' data. Beyond privacy and customization, local deployment can streamline research workflows and support more informed decision-making, and it contributes to establishing ethical standards for technology use in social work research.