Wednesday, December 24, 2014

From the Field: Reconceptualizing the Feedback Dilemma in Clinical Education

This guest post comes to us from Jack R. Scott, EdD, MPH; Winthrop University Hospital – Stony Brook School of Medicine. Dr. Scott leads faculty development in teaching and educational research and scholarship, so you can rest assured that your next 2 minutes will be well spent in reading his thoughts. Now, put your feet up and dig in!


The "educational alliance" as a framework for reconceptualizing feedback in medical education. Telio S, Aijawi R, Regehr G. Acad Med. 2014.


Available online from the Baystate Health Sciences Library or from PubMed at your institution.


The Educational Alliance offers keen insights into our long-standing approach to assessing students’ clinical performance, namely formative feedback. Most will agree that our assessment methods have been ineffective, infrequent, and even haphazard when measuring observed clinical performance. While students often report that receiving feedback is among the most defining moments in their clinical rotations, they are quick to recognize the inefficiencies that we seem to stubbornly ignore. Perhaps this is due to our own lack of standardized approaches, infrequent observation opportunities, subjectivity, fear of giving negative comments, complex procedural logbook ‘sign-offs’, or even intimidating learning environments. Yet much as in apprenticeships, we believe that our judgments and advice are important no matter how flawed. Whatever the extant disconnect in authentic formative assessment, we need a method that approaches consistent reliability. The solution may be an adaptation of the therapeutic alliance in psychotherapy, namely the ‘educational alliance’.

The authors propose an innovative process led by clinician-educators sufficiently trained in giving constructive feedback in appropriate clinical and surgical education settings. This educational alliance offers a mentoring, relationship-based model that is best applied at multiple assessment points. Incorporating technology may likewise add authentic assessment opportunities (there must be a feedback-algorithm app for this).

We are all guilty of grade inflation when we eschew standardized rubrics or behavioral anchors and award students grades that are not truly deserved. Extrapolating our brief encounters into a grade is limiting, which is why multi-source feedback is ever more in vogue. Clerkship directors should assign an experienced faculty member who aspires to excellence in accurately assessing students’ clinical performance. Such a consistent approach ensures that ‘we act according to what we believe’ (Latin: agere sequitur credere). Let’s recognize, optimize, and reward exemplary learner-centered assessments that are coherent with the credible, relationship-based therapeutic alliance model.

Bottom Line:
Being a great teacher means giving great feedback (in all its forms) to students; the two are inextricably linked. This article offers one way that an alliance can start the process. Carpe diem!

Monday, October 20, 2014

Cultures, Identities, and Teams, Oh My!

Professional Differences in Interprofessional Working. Baxter SK, Brumfitt SM. J Interprof Care. 2008; 22(3): 239-251. 

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Quick game: What do you think of when I say "Culture"?!

Now, close your eyes and visualize it. 

If I close my eyes, I will have to stop reading your blog. 

Don't do that. Just close your eyes, make note of what image pops into your head, and then continue reading. 

If you are one of our friends from microbiology, you may have imagined a Petri dish. Otherwise, you may have visualized a group of people. Were those people in scrubs? 

Baxter and Brumfitt, in this article, note that "culture" can refer to physicians and nurses and PCTs and OTs and PTs and PAs, etc.

Crediting the word "tribalism" to Beattie (1995), Baxter and Brumfitt write: "Professional differences [between health care professions] have been described as 'tribalism,' developed as a result of professions evolving separately, with deeply rooted boundaries between them."

So, all of us in healthcare developed - through our educational experiences and our association with our own professional groups - a professional identity. 

But, if we want to work as effective teams for our patients, we have to include in that professional identity a secure understanding of what we bring to the team and an appreciation for the skills brought by our colleagues.

This article takes us through a multiple-case study, exploring "the nature of joint working practice" among health care providers. In other words, they looked at how different professions were able to work together. 

And? 

And, they found a lot to suggest that some of us see our "team" as the multi-professional group that cares for a patient, while others see their "team" as the colleagues within their own profession.

Ultimately, it's complex. Way too complex for a blog post. 

Well, you tried. 

But it's not so complex that we shouldn't all read this article to get a sense of how to better create effective interprofessional care teams. We need to start defining our "team" the same way that our patients do - as an interprofessional group of providers, each responsible for a different and essential element of their care.

Bottom Line:
Interprofessional practice requires the acknowledgement of our skills in concert with the skills of other practitioners. Read this article and reflect on how you can contribute to your TEAM.  

Wednesday, September 10, 2014

Did You Know There's a Map for That?

Adult learning theories: Implications for learning and teaching in medical education: AMEE Guide No. 83. Taylor DCM, Hamdy H. Med Teach. 2013; 35:e1561-e1572. 

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


For all the work you, as clinical teachers, do for your patients, your students, and your colleagues, it's nice to know that there are awesome resources just for you, isn't it? What? No. I'm not talking about this blog. This blog is for me. See, I'm talking about something much more serious - the AMEE Guides. 

Who's AMEE? All you need to know about AMEE (the Association for Medical Education in Europe) is that they are an international group of powerhouse brains in medical education who hold their annual conference in places like Vienna, Austria; Prague, Czech Republic; and Milan, Italy. Their membership represents some 90 countries and many of the top dogs of medical education research.

Not impressed yet? Fine, well how about some of the workshops that were offered at their most recent gathering:

Through the eyes of experts: What do "rich pictures" add to the understanding of surgical judgment?

The elephant in the room: Benchmarking the assessment of clinical competence

Influence of music on the teaching, learning processes in medical students

And my personal favorite:

Where is the line between sloppy and scientifically irresponsible? A discussion to promote excellence in medical education research 

AMEE is no joke. And, for those of us who can't quite make it to Milan, Italy, to learn about effective feedback and music and elephants, at least we can benefit from the work of this powerhouse organization while on break, right in the comfort of our very own Danskos: we can read the AMEE Guides.

AMEE Guides provide "information, practical guidance, and support." They are written for teachers and staff, and while AMEE is a group focused on medical education, their principles extend well into all health professions. Think of them like maps specifically designed for all of us teachers. 

And the best part? There's all kinds. Top-rate information, packed into a delicious, short paper, and available online. And in this format, AMEE teaches us about scholarship, teaching, feedback, research, learning theories, etc. Hungry for more? Chances are, there's a map for that, too.

Bottom Line:

Health professions education can be an overwhelming gig. Good to know that AMEE's got your back. Use the AMEE Guides like maps toward best practice, and advance your role as a teacher toward success.

Friday, July 18, 2014

Happy New Year! Now, Again... About Those Resolutions...

Looking back to the future: A message for a new generation of medical educators. Harden RM. Med Educ. 2011; 45:777-784. 

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Ah, July... A new crop of students or residents arrives for many of us educators, and even if your academic year is different, July is a fantastic time to make professional resolutions. You might remember I tried to encroach on your resolutions last year, and I enjoyed invading your personal goal-setting so much that I decided to make it an annual event.

SO, that brings me to Ronald Harden. Harden brings us "back to the future" in this article by reflecting on his career in medical education in order to offer his list of Lessons Learned.  

I know it's tempting, but try NOT to jump straight to the Lessons Learned. "But, Rebecca!" you'll say, "It's a list with short paragraphs and bolded headings!" I know, I know. But trust me when I pull out my inner qualitative researcher to tell you that "context matters." Read where he has been to recognize the value in his words.

And the value is great. Harden was an endocrinologist in Glasgow when he started as a junior doctor. His passion for medical education grew, and his perception of the influence of his colleagues, students, and environment is palpable. His reflection is part medical education history, part UK medical education yearbook, and part graduation speech, balancing a determination to improve education with the good fortune to be in a position to do it.

But let's be honest. The Lessons Learned in this article are what you're going to read on the elevator. 

And here's where your resolutions will evolve. Take heed that you, as a clinical educator, regardless of your profession, can be just a bit more awesome than you already are by reflecting on what Harden's lessons mean for you. Innovation. Nudges. Practicality. Collaborators. Funding. Publishing. Fun. These are not a menu of choices for educators: they are priorities for success. Make a resolution for each of these. For example:

Lesson 3: Nudges are important. What small, powerful innovation can you promote in your area?

Lesson 6: There is always something to learn outside of your practice. Too true. Go to a lecture or talk that is NOT given by your profession or your department. Crash someone else's grand rounds. 

Lesson 10: Have fun! Well, I do not condone this one, BUT if you must enjoy what you do, take your work, not yourself, very seriously. 

Ron Harden has built a successful, international career on advancing medical education. He has been a senior editor of journals and a keynote speaker at international conferences, and he has held several professional leadership roles. But Ron Harden started one July, many years ago, as a junior clinician. Like you.

So Happy New Year! Read this article and make some resolutions for your own career. Then write about your path, and I'll link to it. Promise. 

Bottom Line:

It's the New Year for someone right now, which means resolutions are ripe for the making. Grab a glass of champagne, your educational enthusiasm, and this article by Ron Harden. Then set some resolutions to make yourself a better teacher, a better scholar, a better clinician. Your students and your patients will benefit. 

Wednesday, June 11, 2014

Conference Inspiration without the Side Effects: A Rejuvenating Article for the "Off-Season"

The expert patient as teacher: an interprofessional Health Mentors programme. Towle A, Brown H, Hofley C, Kerton RP, Lyons H, Walsh C. Clin Teach. 2014; 11:301-306. 

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Attending professional conferences is a great way to get inspired. Agendas are usually filled with a gluttonous amount of great ideas and innovations that I end up consuming ferociously for my practice. 

Hm. 

Disturbing visual image aside, conferences are equal parts overwhelming and rejuvenating. Since we're months away from conference season, I thought this article - which reads like a conference-bound, broad-sweeping, well-rehearsed oral presentation of an innovation - might similarly inspire us in the "off-season" (and from the comfort of our own living room couches).

The authors present an innovative program in which patients are mentors for a small, inter-professional group of students. Is it novel that patients teach students how to care for patients? Not necessarily. 

But driving the innovation of this program are some considerations of program development that I have repurposed in the hopes that they inspire you.   

1. Reflection. Whether you view it as the icing on the cake or the cake itself, reflection is a key part of learning. Journaling and debriefing encourage both students and mentors to pause and draw meaning from their experiences and discussions. Anyone who has ever journaled or read the journal entries of others can attest to the power of reflective practice on learning.

2. Demand creativity. Students in this program are asked to cap off two semesters of conversations not with a test or an essay but with a "tweet" and a "creative, visual representation of their learning." Remember that pesky leveling of learning objectives? "Creative" exercises are up there with synthesizing new knowledge. Not a bad way to "test" the material...

3. Data, data, data. Note the ways that the program facilitators gather data and review it for program effectiveness. Facilitators here are keyed in to the qualities that make this program work and are diligently monitoring the program to see if it meets its goals, using both quantitative and qualitative data. In fact, I would argue that they could probably stand to collect some more quantitative data on students' development of competencies. I said it. But (and here's the confusing part), see #4...

4. Think hard before overthinking it. In their reflections and advice to other programs, the authors write: "Minimal instructions: keep it simple, trust the process, and resist demands for more structure and instructions." Who doesn't love more structure? Their sentiment is helpful, though: while you're collecting all of this great data, pause before you act too aggressively on it. (Just don't ask me how to do that.)

Bottom Line:

Let this article infuse you with the inspiration that normally only comes from the burnt coffee and beige hotel chairs at your professional conference. Their innovation is interesting, but the real reason to break out the highlighter is the advice around their program development: a true boost of professional development without that pesky name tag.

Friday, May 23, 2014

"Do Educational Conferences Work?" And Other Follies in Published Research

Impact of Formal Continuing Medical Education: Do Conferences, Workshops, Rounds, and Other Traditional Continuing Education Activities Change Physician Behavior or Health Care Outcomes? Davis D, O'Brien M, Freemantle N, Wolf F, Mazmanian P, Taylor-Vaisey A. JAMA. 1999; 282(9): 867-874. 

Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Many of you design or attend educational conferences and events, yes? And you dutifully evaluate your educational conferences and events, yes? (What!? OK, just nod - but email me. We can talk later.) This article - resurrected from the late '90s - asks if stepping away from our practice to attend an educational event is, well, worth it.

Turns out, when Davis and colleagues asked if educational conferences changed physician behavior, they found that - SPOILER ALERT - yeah, um... well, sure - you know, sometimes. Say it with me: "Well, I could have told you that."

Look, Davis and his colleagues ask a really important question, and they answer this question about the literature appropriately by searching the literature. But the meat of this article is not the study. In fact, I might argue that this is the second time this week I've seen a qualitative discussion answered with a quantitative solution. 

I could go into a diatribe on the Limitations and Implications sections of this article. (For example, I could ask, "You did a meta-analysis on RCTs evaluating educational events? Were you TRYING not to find anything?" But I won't.)

Instead, I draw your attention to the Conclusions. They argue that formal CME interventions seeking to change provider behavior and patient outcomes must focus on "the complex intrapersonal, interpersonal, and professional educational variables that affect the [provider]-learner." In other words, your educational event is not occurring in a vacuum. 

They go on to say, in a nice way, that didactics don't work. 

And that's where I think they contradict themselves. 

Sure, some speakers are just better than others. But to say they don't work? I've attended many grand rounds where the speaker has inspired me, or motivated me, or introduced me to new collaborators or new concepts or prompted intellectual conversation with my neighbor. Did my practice improve that day? Perhaps not. 

But educational events don't occur in a vacuum, remember? And it is perhaps not the didactic itself - or the fact that it was a didactic - that generates the effect. Instead, it's the cumulative effect of these educational pauses - these times for collegial reflection and intellectual conversation across roles and professions - that improves practice. 

In research speak, this means that the didactic is not the intervention that should be studied. Rather, it is the practice of coming together as a group of educators to talk about teaching as it affects all of us and our patients that is the true intervention to be studied. Educational research is funny like that - RCTs just aren't always the best design. 

Bottom Line:

Educational events don't occur in a vacuum, and studying them is not as simple as picking from a menu of clinical research designs. Evaluate your educational event as an educational event, and study it using educational research tools; you'll have a better chance at finding the impact on provider and patient outcomes if you know what you're looking for.

Wednesday, April 16, 2014

Mmmmmm, Low Hanging Fruit

Various.  Clinical Teacher, Medical Education, Medical Teacher, Teaching and Learning in Medicine, JAMA, Journal of Graduate Medical Education, Academic Medicine, American Journal of Nursing, Journal of Interprofessional Care. Available online from the Baystate Health Sciences Library or from PubMed at your institution.


Introduction. Methods. Results. Discussion. 

These four words! They're so... so... strict, yes? I mean, why do these four words hold so much power? That great idea of yours - no really, it's great! - is going to propel you to stardom in health professions education, but not if it can't be published. And it can't be published if it doesn't have an Introduction. Methods. Results. Discussion. 

Right? 

Wrong. See, your great idea? It's more of a...perspective, no? Or, maybe it's more like....like....a viewpoint. No, it's really a souped-up list: twelve tips on achieving stardom in health professions education! 

Well, my friend, there are outlets for you. There are opportunities where your writing can magically transform your brilliance into low hanging fruit. Yum. Before you embark on a full-scale research study (which you absolutely should consider) (with proper mentorship, of course) (call me), check out these peer-reviewed opportunities for writing up your truly brilliant idea into a form other than I-M-R-D. 

  • Insights (Clin Teach) - 800 words of "structured reflection"
  • How We... (Med Teach) - 2,500 words on how you did something; "what is involved in implementing a practical idea or topic in medical education" with a reflection
  • 12 Tips (Med Teach) - 3,200 words max giving practical tips or advice on something
  • Perspectives (JGME) - 1,200 words of an evidence-based opinion
  • On Teaching/On Learning (JGME) - 1,200 words of a personal reflection or essay of the physician experience
  • Really Good Stuff (Med Educ) - semi-annual opportunity for a 500-word write-up of a new innovation or idea
  • Short Report (J Interprof Care) - 1,000 words about an innovation or research in progress (before it becomes I-M-R-D)
  • Last Page (Acad Med) - one-page visual display of a concept, trend, program, policy, or person that is timely or timeless. (Guess where you'll find it in the journal?)
  • Developments (Teach Learn Med) - 2,000 words of a new innovation or, yes, a development
  • Viewpoint (JAMA) - About 1,000 words, well focused and evidence-based on an important topic
  • Viewpoint (Am J Nursing) - 700 words on a topic - may or may not be controversial (roll those sleeves up!)
  • Reflections (Am J Nursing) - 850 words of reflection on a personal nursing story (must be good writing - some people are so picky)

These are just a few. There are plenty more opportunities out there. Don't let the burden of an I-M-R-D article keep you from getting your word out there! Search the author guidelines of your favorite journal and find some of these other opportunities. You have a date with stardom!

Bottom Line:

Look, if your project was meant to be published as I-M-R-D, you wouldn't still be reading this. Step back and think about what you want to say. We may not all have one good novel in us, but we all have a great idea (wink, wink - yours is awesome!) waiting to see daylight. So, free yourself of the I-M-R-D doldrums and grab that low-hanging fruit.

Friday, January 24, 2014

From the Field: Relating Education Theory to Milestones/EPAs for Resident ‘Master Learners’

Check out this post "From the Field" by Jack R. Scott, EdD, MPH -- Assistant Dean, Winthrop University Hospital on Long Island; Stony Brook SOM Clinical Campus, and a partner in educational scholarship. Use it as a springboard to think about your practice and how much you allow - gulp! - learning theory to influence your teaching. Enjoy!

Developing the Master Learner: Applying Learning Theory to the Learner, the Teacher, and the Learning Environment. Schumacher DJ, Englander R, Carraccio C. Acad Med. 2013; 88: 1635-45. Available online from the Baystate Health Sciences Library or from PubMed at your institution.


When we think of competency-based learning for medical students and residents, the concept of Self-Directed Learning (SDL) naturally comes to mind. After all, this is how they become ‘master learners’. These authors pose an appealing and appropriate correlation between theory and the inherent factors of SDL, namely accurate self-assessment and self-efficacy, which gauge one’s progress toward mastery.

OK, before you stop here and dismiss the banal aspects of education theory, please consider that well-accepted constructs in adult learning are the cornerstone of medical education – collaborative and contextual learning, simulation practice, and individualized learning plans, among others.

The article’s section on self-determination theory relates well to our specific expectation for residents’ success in Entrustable Professional Activities (EPAs). Furthermore, reliance on self-assessment (another prime adult learning principle) must distinguish reflection-on-action from the more critical and accurate reflection-in-action. So, kindly consider the practical applications inherent in education theory that create credible teacher-learner relationships, supportive learning environments, and above all – reliable, self-directed master learners who attain the explicit goals of our comprehensive resident training.

Bottom Line:

Read the article and share a meaningful discussion on achieving mastery in teaching and learning. Discover the meta-cognitive practice of thinking about our teaching practices as we develop each resident’s unique competencies.

Tuesday, January 14, 2014

Megalodon Meets Rejection

Transforming teaching into scholarship.  Turner T, Palazzi D, Ward M, Lorin M. Clin Teach 2012; 9: 363-367. Available online from the Baystate Health Sciences Library or from PubMed at your institution.


I got a paper rejected today. There are two possible reasons for this. A) The enormity of my awesomeness and unique perspective are too immense for this journal, and accepting the article would have been too overwhelming an experience for the editors to handle, the equivalent of putting Megalodon into an aquarium. Or, B) I am a loathsome mess of failure in academic medicine (and life) who's never had an interesting thought or perspective on anything ever, and the world is collectively sighing now that someone has finally made me aware.

Of course, it's possible there's a third option. Something in the middle: A 1/2) The feedback I got from the reviewers was good feedback and advice and that, if I take this advice, I can put together a better manuscript that has a solid chance of being published. Not at this journal, mind you, but at a lesser, lower impact journal whose editors are monkeys or people who write professional blogs. 

And this is somewhat comforting. In fact, mentors guide us to meet rejection with some kind of perseverance. But the initial get-up-and-go required for putting together and submitting your clinical teaching as scholarly work is an immense animal (Megalodon, perhaps) to be tamed even before the publication decision. And, if the rejection comes, what are some other considerations? 

This article by Turner and colleagues is a useful read at just 5 pages (don't worry - there are at least two tables in there, only 9 references, and two photos of clinical teaching. Well, one photo of clinical teaching and one photo of a woman writing near a coffee, which must be clinical scholarship).

The authors bring up many things that should form the backdrop for a clinical educator seeking to disseminate their good work: Boyer and Glassick and the scholarship of teaching, working collaboratively, peer review, outlets for scholarship, and the educator's portfolio. This article is brief - like being offered one bite of an entire Carnival Cruise buffet - but it still may be enough to figure out how hungry you are. And having this sense of the possibilities for educational scholarship is helpful for keeping your projects going (see A 1/2, above).

Bottom Line:

Take ten minutes out of your day to read this article and make sure it's nothing new for you. This snapshot of educational scholarship could help frame your perspective so that the immense project taking up space on your To Do List won't die there.

Monday, January 6, 2014

Instrument Construction, or "If It Was Easy, You Did It Wrong"

Thriving in long-term care facilities: instrument development, correspondence between proxy and residents' self-ratings and internal consistency in the Norwegian version. Bergland A, Kirkevold M, Sandman PO, Hofoss D, Vassbo T, Edvardsson D. J Adv Nurs. 2013; Early Online. Available online from the Baystate Health Sciences Library, or from PubMed at your institution.

In educational research, we measure things that Paul Visintainer might call "soft." These things are constructs (a term which might seem familiar because Tony Artino gave us some insight into writing surveys not too long ago). Constructs help us describe our learners and our patients. Well, this article demonstrates a comprehensive look at developing instruments to measure constructs (no, no - stay with me!).

There are many ways to develop and validate your instrument. (Yes, I know, the instrument itself is not valid; it produces valid DATA. I know this. I preach this. But in the interest of shortening the sentence, I said "validate your instrument." As long as we're all on board that an instrument which produces valid scores for one group doesn't automatically produce valid scores for another, you'll please bear with me on the details.)

So, there are many ways to develop an instrument with rigor that sets it up to produce valid scores. These ways all have some similarities, which I think are identified well in this article. They boil down to:

First, the "thing" being measured is defined. This is simple, yet critical, and often skipped. (Bad. No skipping.) In this article, the authors measure "thriving" which they define by what it is and what it is not. In fact, they have a nice background on how it has been defined previously. Even if we think that everyone knows what the construct is that we're measuring, we still define it. (Quick example: Try to measure "success." From whose perspective? Based on financial security? Based on happiness? Wait - how do we measure "happiness"? Exactly.)

Second, question items are developed in a meaningful and thoughtful way. Think theoretical or conceptual framework. Perhaps you conducted focus groups or in-depth interviews and then analyzed them for themes, which became the backbone of your instrument. Or, as in this article, you used interviews and a structured lit review. Note that "talked it over with your friend" does not appear here.

Third, you report on some measures of consistency/reliability and validity. A side note here about validity and reliability: my two-year-old sleeps with a stuffed monkey. He needs this monkey to survive. From what I can tell, the monkey does not feel the same way. This is sort of the deal with validity (the two-year-old) and reliability (the monkey). Validity needs reliability to exist, but reliability can exist just fine without validity. So to demonstrate validity, you must accumulate evidence. An accumulation of reliability metrics (see this article) and your theoretically and conceptually sound process for item development (see above) can help here.
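
If you'd like to see what one of those "reliability metrics" looks like in practice, here's a minimal sketch of Cronbach's alpha - a common internal-consistency statistic - computed from a respondents-by-items score matrix. This is my illustration, not the authors' analysis: the Python function, the numpy dependency, and the pilot numbers are all hypothetical.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items matrix of item scores."""
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical pilot data: 5 respondents x 4 Likert items (entirely made up)
    pilot = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
    ])
    print(f"alpha = {cronbach_alpha(pilot):.2f}")

Remember the monkey, though: a high alpha only tells you the items hang together (reliability), not that they measure the construct you defined in the first place (validity).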

Fourth, and Tony Artino wrote about this, instruments need to be piloted. In the highlighted article, a pilot instrument was designed for three different groups of respondents. Statistical analysis helped to guide item reduction. 
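
The article's statistics aren't reproduced here, but if you're wondering what "statistical analysis helped to guide item reduction" can look like, one common first screen - again my illustration, not necessarily their method - is the corrected item-total correlation, which flags items that don't travel with the rest of the scale:

    import numpy as np

    def corrected_item_total(scores: np.ndarray) -> np.ndarray:
        """Correlation of each item with the sum of all the OTHER items."""
        n_items = scores.shape[1]
        corrs = np.empty(n_items)
        for i in range(n_items):
            rest = scores.sum(axis=1) - scores[:, i]         # total score minus item i
            corrs[i] = np.corrcoef(scores[:, i], rest)[0, 1]
        return corrs

    # A common rule of thumb: items with r < 0.3 against the rest of the scale
    # are candidates for cutting in the next draft of the pilot instrument.

Run it on the same hypothetical pilot matrix as above and the low-correlating items become obvious at a glance.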

Even with their comprehensive approach to instrument design, the authors still end with a tool that they say needs "further psychometric evaluation and refinement." In other words, even after all this work, it's still not done! And, of course, that's the rub with developing an instrument from scratch - it's a lot tougher than using what already exists (thus the wisdom of not reinventing the wheel). And, if it's not tough, check to see if you missed something, or start a blog to explain it to us.

Bottom Line:

This article is a good overview of the basic processes of instrument design, presenting the foundation - from defining the construct, to a theoretical framework, to pilot testing. Sure, it's a lot to digest. But the impact of our clinical practice isn't apparent in physiological measures alone, and good instruments - new instruments - can help us uncover some of the most important ways that we, as caregivers, have true impact.