Friday, January 24, 2014

From the Field: Relating Education Theory to Milestones/EPAs for Resident ‘Master Learners’

Check out this "From the Field" post by Jack R. Scott, EdD, MPH, Assistant Dean at Winthrop University Hospital on Long Island (Stony Brook SOM Clinical Campus) and a partner in educational scholarship. Use it as a springboard to think about your practice, and how much you allow - gulp! - learning theory to influence your teaching. Enjoy!

Developing the Master Learner: Applying Learning Theory to the Learner, the Teacher, and the Learning Environment. Schumacher DJ, Englander R, Carraccio C. Acad Med. 2013; 88: 1635-45. Available online from the Baystate Health Sciences Library or from PubMed at your institution.


When we think of competency-based learning for medical students and residents, the concept of Self-Directed Learning (SDL) naturally comes to mind. After all, this is how they become ‘master learners’. These authors pose an appealing and appropriate correlation between theory and the inherent factors of SDL - namely, accuracy of self-assessment and self-efficacy - the factors that gauge one's progress toward mastery.

OK, before you stop here and dismiss education theory as banal, please consider that well-accepted constructs in adult learning are the cornerstone of medical education - collaborative and contextual learning, simulation practice and individualized learning plans, among others.

The article’s section on self-determination theory relates well to our specific expectation for residents’ success in Entrustable Professional Activities (EPAs). Furthermore, reliance on self-assessment (another prime adult learning principle) must distinguish reflection on action from the more critical and accurate reflection in action. So, kindly consider the practical applications inherent in education theory: they create credible teacher-learner relationships, supportive learning environments and, above all, reliable self-directed master learners who attain the explicit goals of our comprehensive resident training.

Bottom Line:

Read the article and share a meaningful discussion on achieving mastery in teaching and learning. Discover the meta-cognitive practice of thinking about our teaching practices as we develop each resident’s unique competencies.

Tuesday, January 14, 2014

Megalodon Meets Rejection

Transforming teaching into scholarship. Turner T, Palazzi D, Ward M, Lorin M. Clin Teach. 2012; 9: 363-367. Available online from the Baystate Health Sciences Library or from PubMed at your institution.


I got a paper rejected today. There are two possible reasons for this. A) The enormity of my awesomeness and unique perspective are too immense for this journal, and accepting the article would have been too overwhelming an experience for the editors to handle, the equivalent of putting Megalodon into an aquarium. Or, B) I am a loathsome mess of failure in academic medicine (and life) who's never had an interesting thought or perspective on anything ever, and the world is collectively sighing now that someone has finally made me aware.

Of course, it's possible there's a third option. Something in the middle: A 1/2) The feedback I got from the reviewers was good advice and, if I take it, I can put together a better manuscript that has a solid chance of being published. Not at this journal, mind you, but at a lesser, lower-impact journal whose editors are monkeys or people who write professional blogs.

And this is somewhat comforting. In fact, mentors guide us to meet rejection with some kind of perseverance. But the initial get-up-and-go required for putting together and submitting your clinical teaching as scholarly work is an immense animal (Megalodon, perhaps) to be tamed even before the publication decision. And, if the rejection comes, what are some other considerations? 

This article by Turner and colleagues is a useful, 5-page read (don't worry - there are at least two tables in there, only 9 references, and two photos of clinical teaching. Well, one photo of clinical teaching and one photo of a woman writing near a coffee, which must be clinical scholarship).

The authors bring up many things that should form the backdrop for a clinical educator seeking to disseminate their good work: Boyer and Glassick and the scholarship of teaching, working collaboratively, peer review, outlets for scholarship, and the educator's portfolio. This article is brief - like being offered one bite of an entire Carnival Cruise buffet - but it still may be enough to figure out how hungry you are. And having this sense of the possibilities for educational scholarship is helpful for keeping your projects going (see option A 1/2, above).

Bottom Line:

Take ten minutes out of your day to read this article and make sure it's nothing new for you. This snapshot of educational scholarship could help frame your perspective so that the immense project taking up space on your To Do List won't die there.

Monday, January 6, 2014

Instrument Construction, or "If It Was Easy, You Did It Wrong"

Thriving in long-term care facilities: instrument development, correspondence between proxy and residents' self-ratings and internal consistency in the Norwegian version. Bergland A, Kirkevold M, Sandman PO, Hofoss D, Vassbo T, Edvardsson D. J Adv Nurs. 2013; Early Online. Available online from the Baystate Health Sciences Library, or from PubMed at your institution.

In educational research, we measure things that Paul Visintainer might call "soft." These things are constructs (a term which might seem familiar because Tony Artino gave us some insight into writing surveys not too long ago). Constructs help us describe our learners and our patients. Well, this article offers a comprehensive look at developing instruments to measure constructs (no, no - stay with me!).

There are many ways to develop and validate your instrument (yes, I know, the instrument itself is not valid, it produces valid DATA. I know this. I preach this. But in the interest of shortening the sentence, I said "validate your instrument." As long as we're all on board that an instrument which produces valid scores for one group doesn't automatically produce valid scores for another, you'll please bear with me on the details).

So, there are many ways to develop an instrument with enough rigor to set it up to produce valid scores. These ways all have some similarities, which I think are identified well in this article. They boil down to:

First, the "thing" being measured is defined. This is simple, yet critical, and often skipped. (Bad. No skipping.) In this article, the authors measure "thriving," which they define by what it is and what it is not. In fact, they provide a nice background on how it has been defined previously. Even if we think that everyone knows what the construct is that we're measuring, we still define it. (Quick example: Try to measure "success." From whose perspective? Based on financial security? Based on happiness? Wait - how do we measure "happiness"? Exactly.)

Second, question items are developed in a meaningful and thoughtful way. Think theoretical or conceptual framework. Perhaps you conducted focus groups or in-depth interviews and then analyzed them for themes which became the backbone of your instrument. Or, like in this article, you used interviews and a structured lit review. Note that "talked it over with your friend" does not appear here.  

Third, you report on some measures of consistency/reliability and validity. Side note here about validity and reliability: my two-year-old sleeps with a stuffed monkey. He needs this monkey to survive. From what I can tell, the monkey does not feel the same way. This is sort of the deal with validity (the 2-year-old) and reliability (the monkey). Validity needs reliability to exist, but reliability can exist just fine without validity. So to demonstrate validity, you must accumulate evidence that your instrument's scores are valid. An accumulation of reliability metrics (see this article) and your theoretically and conceptually sound process for item development (see above) can help here.
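For the numerically inclined, here is a quick aside: a minimal sketch of what one common reliability metric, Cronbach's alpha, looks like in practice. This is purely my illustration - the items, response data and code below are made up, not taken from the article.

```python
# Toy illustration of Cronbach's alpha (internal consistency) on pilot data.
# The response matrix is fabricated: rows are respondents, columns are items (1-5 scale).
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 4, 4, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
], dtype=float)

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

A higher alpha means the items hang together; by itself it says nothing about whether they measure the right construct - which is exactly the 2-year-old/monkey point above.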

Fourth, and Tony Artino wrote about this, instruments need to be piloted. In the highlighted article, a pilot instrument was designed for three different groups of respondents. Statistical analysis helped to guide item reduction. 
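Since the article mentions statistically guided item reduction, here is one more made-up sketch (again mine, not the authors' actual analysis): corrected item-total correlations, a common rough screen for items that don't belong. The 0.3 cutoff is a frequently cited rule of thumb, not a law.

```python
# Toy illustration of item screening via corrected item-total correlations.
# Fabricated pilot data: rows are respondents, columns are items (item 4 is deliberately "off").
import numpy as np

scores = np.array([
    [5, 5, 5, 2],
    [4, 4, 5, 5],
    [3, 3, 3, 1],
    [2, 2, 1, 4],
    [4, 5, 4, 2],
], dtype=float)

for i in range(scores.shape[1]):
    item = scores[:, i]
    rest_total = np.delete(scores, i, axis=1).sum(axis=1)  # total of the *other* items
    r = np.corrcoef(item, rest_total)[0, 1]
    note = "  <- worth a closer look" if r < 0.3 else ""
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}{note}")
```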

Even with their comprehensive approach to instrument design in this article, the authors still end with a tool that they claim needs "further psychometric evaluation and refinement." In other words, even after all this work, it's still not done! And, of course, that's the rub with developing an instrument from scratch - it's a lot tougher than using what already exists (a strong argument against reinventing the wheel). And, if it's not tough, check to see if you missed something, or start a blog to explain it to us.

Bottom Line:

This article is a good view of the basic process of instrument design, presenting the foundation - from defining the construct, to a theoretical framework, to pilot testing. Sure, it's a lot to digest. But the impact of our clinical practice isn't apparent in physiological measures alone, and good instruments - new instruments - can help us uncover some of the most important ways that we, as caregivers, have true impact.