The Gap Between Knowing and Doing in OUD Care: Our Poster at ASAM 2026

The American Society of Addiction Medicine (ASAM) accepted 180 posters for its 2026 annual conference in San Diego. Ours was one of them. Most of the program focused on clinical care, treatment access, policy, harm reduction, and special populations. Clinician education came up, but usually as a side note. Out of 180 posters, maybe five dealt directly with how clinicians are trained or how their performance changes after education. We were one of those five. And nobody else was using AI.

Project Overview

The poster, co-authored with Dr. Aylin Madore from Pri-Med, presents outcomes from an AI virtual patient simulation embedded in a REMS-aligned CME curriculum for opioid use disorder.

The simulation put learners in a room with Jeremy Thompson, a 34-year-old warehouse supervisor back for a chronic pain follow-up who mentions his opioid medication isn’t lasting the whole month.

From there, clinicians had to do everything: build rapport, screen for non-medical opioid use, apply DSM-5 criteria, have a motivational interviewing conversation about MOUD, and build a treatment plan with harm reduction. Five steps. No scripts. Just a conversation with an AI patient.

Knowledge and Confidence Improved

390 primary care clinicians finished the activity in the first five weeks.

At baseline, 59% had never or rarely screened for OUD, and only 7.4% felt confident offering same-day buprenorphine.

After the didactic content, knowledge scores rose from 58.4% to 90.7%, and high confidence jumped by 29%.

But The Simulated Conversations Revealed A Different Story

When those same clinicians sat down with Jeremy and tried to use what they’d learned, average performance was 2.8 out of 5. The structured tasks went fine. DSM-5 assessment scored 4.5. Rapport and screening, 3.3.

But when the conversation got harder, the scores dropped. Brief intervention and MOUD discussion, 2.7. Treatment planning and harm reduction, 2.0.

Clinicians could identify OUD, but they struggled to talk about it.

We analyzed every transcript using AI, identified what clinicians said to Jeremy at each step, and pinpointed where the conversations fell apart. That kind of detail doesn’t come from a pre/post test.
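The poster doesn't publish the analysis pipeline, but the aggregation half of that kind of transcript review can be sketched roughly: once each transcript has a per-step score from an AI grader, cohort-level weak points fall out of a simple average. Everything below (the step names, the scores, the 3.0 threshold, the function names) is illustrative, not the study's actual data or code.

```python
from statistics import mean

# Hypothetical per-transcript rubric scores (1-5), one dict per learner,
# keyed by the five conversation steps from the simulation.
# These numbers are made up for illustration.
transcripts = [
    {"rapport": 4, "screening": 3, "dsm5": 5, "moud_discussion": 2, "treatment_plan": 2},
    {"rapport": 3, "screening": 4, "dsm5": 4, "moud_discussion": 3, "treatment_plan": 1},
    {"rapport": 4, "screening": 3, "dsm5": 5, "moud_discussion": 2, "treatment_plan": 3},
]

def step_averages(scored):
    """Average each rubric step across all transcripts."""
    steps = scored[0].keys()
    return {step: round(mean(t[step] for t in scored), 2) for step in steps}

def weakest_steps(averages, threshold=3.0):
    """Flag steps whose cohort average falls below the threshold."""
    return sorted(step for step, avg in averages.items() if avg < threshold)

avgs = step_averages(transcripts)
print(avgs)
print(weakest_steps(avgs))  # the conversation steps that need the most work
```

The point of rolling scores up this way is that the output names specific conversational skills, not just a pass rate, which is what a pre/post test can't give you.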

Majority of Clinicians Found The Experience Valuable

52% of learners rated the simulation very or extremely valuable. Another third said moderately valuable. 349 out of 390 clinicians left written comments, and they were substantive responses. People wrote about what surprised them, what they felt during the encounter, what they planned to change.

36% said they were likely or very likely to offer same-day buprenorphine after the activity. Up from 7.4%.

Understanding the guidelines and being able to use them in a patient conversation are different skills. Knowledge matters, but it won’t tell you whether someone can sit across from a patient who’s scared of buprenorphine and have the conversation that gets them into treatment.

So What Does This All Mean?

Teaching motivational interviewing the right way requires more than 16 hours of workshop time plus ongoing coaching. Most clinicians will never get access to that. Not in primary care, not in addiction medicine. Too scarce and too expensive. AI makes it possible to replicate the core of that model, realistic practice with structured feedback, and to run it at scale inside accredited education.

This study was about OUD, but the approach fits anywhere in behavioral health where the skill that matters most is a conversation.

We’ve spent decades teaching clinicians what to say. It might be time to give them a place to practice saying it.


Published: April 30, 2026