Friday, May 23, 2014

"Do Educational Conferences Work?" And Other Follies in Published Research

Impact of Formal Continuing Medical Education: Do Conferences, Workshops, Rounds, and Other Traditional Continuing Education Activities Change Physician Behavior or Health Care Outcomes? Davis D, O'Brien M, Freemantle N, Wolf F, Mazmanian P, Taylor-Vaisey A. JAMA. 1999;282(9):867-874.

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Many of you design or attend educational conferences and events, yes? And you dutifully evaluate your educational conferences and events, yes? (What!? OK, just nod - but email me. We can talk later.) This article - resurrected from the late '90s - asks whether stepping away from our practice to attend an educational event is, well, worth it.

Turns out, when Davis and colleagues asked whether educational conferences changed physician behavior, they found that - SPOILER ALERT - yeah, um... well, sure - you know, sometimes. Say it with me: "Well, I could have told you that."

Look, Davis and his colleagues ask a really important question, and they answer it appropriately by searching the literature. But the meat of this article is not the study itself. In fact, I might argue that this is the second time this week I've seen a qualitative question answered with a quantitative solution.

I could go into a diatribe on the Limitations and Implications sections of this article. (For example, I could ask, "You did a meta-analysis on RCTs evaluating educational events? Were you TRYING not to find anything?") But I won't.

Instead, I draw your attention to the Conclusions. They argue that formal CME interventions seeking to change provider behavior and patient outcomes must focus on "the complex intrapersonal, interpersonal, and professional educational variables that affect the [provider]-learner." In other words, your educational event is not occurring in a vacuum. 

They go on to say, in a nice way, that didactics don't work. 

And that's where I think they contradict themselves. 

Sure, some speakers are just better than others. But to say didactics don't work? I've attended many grand rounds where the speaker inspired me, motivated me, introduced me to new collaborators or new concepts, or prompted intellectual conversation with my neighbor. Did my practice improve that day? Perhaps not.

But educational events don't occur in a vacuum, remember? And it is perhaps not the didactic itself - or the fact that it was a didactic - that generates the effect. Instead, it's the cumulative effect of these educational pauses - these times for collegial reflection and intellectual conversation across roles and professions - that improves practice. 

In research-speak, this means the didactic is not the intervention that should be studied. Rather, the true intervention is the practice of coming together as a group of educators to talk about teaching as it affects all of us and our patients. Educational research is funny like that - RCTs just aren't always the best design.

Bottom Line:

Educational events don't occur in a vacuum, and studying them is not as simple as picking from a menu of clinical research designs. Evaluate your educational event as an educational event, and study it with educational research tools; you'll have a better chance of finding an impact on provider and patient outcomes if you know what you're looking for.