Wednesday, December 11, 2013

From the Field: Can You Write an Effective Questionnaire? A. Yes, B. Always, C. Read this Post!

Check out this "From the Field" post by Anthony Artino, Jr., PhD, Associate Professor at the Uniformed Services University of the Health Sciences in Bethesda, survey design connoisseur, and, most recently, guest blogger! Apply Dr. Artino's points to research, quality improvement, or your everyday opinion survey. Dig in!

You can't fix by analysis what you've spoiled by design. Rickards G, Magee C, Artino AR. JGME. 2012; 4(4): 407-410. Available online from the Baystate Health Sciences Library and at your institution. 

Tracing the steps of survey design: A graduate medical education research example. Magee C, Rickards G, Byars LA, Artino AR. JGME. 2013; 5(1): 1-5. Available online from the Baystate Health Sciences Library and at your institution. 

What do our respondents think we're asking? Using cognitive interviewing to improve medical education surveys.  Willis GB, Artino AR. JGME. 2013; 5(3): 353-356. Available online from the Baystate Health Sciences Library and at your institution. 

If you're anything like me, you've completed more questionnaires than you care to count. Whether it's an end-of-course evaluation of a workshop you attended or a satisfaction survey from a recent visit to the clinic, questionnaires are ubiquitous in education and health care. There's one problem though, and it's a problem you've surely encountered – many questionnaires are poorly designed, and often they fail to capture the very thing they're attempting to measure. Some common problems with questionnaires include confusing or biased language, baffling visual layout and design, and unclear instructions. Unfortunately, in the age of email, the Internet, and online tools such as SurveyMonkey, the number of survey requests only seems to grow with each passing day. 

Despite the plethora of bad questionnaires that exist in education and health care, there is a wealth of evidence-based knowledge regarding the “best practices” in survey design. Much of this knowledge is detailed in the highlighted articles and briefly summarized below as three principles:

1. You can’t fix by analysis what you’ve spoiled by design.  Even though this principle is true for all types of research and evaluation, it is especially true in questionnaire design for one simple reason – when creating a survey, we’re often trying to assess things that are traditionally hard to measure (so-called “fuzzy” or non-observable constructs). These fuzzy constructs include things like student anxiety, resident confidence, and faculty job satisfaction. As such, it’s critically important that survey designers take the time to carefully design and pretest their questionnaires prior to implementation.

One way to pretest a questionnaire is to have a group of experts review the items and then have a group of potential respondents complete the survey while you observe. Having experts review your draft ensures, among other things, that the content of your survey is relevant and clear; whereas having potential respondents complete your survey verifies that the way they interpret your items aligns with what you had in mind when you designed the questionnaire.

2. The questions guide the answers. People often underestimate the degree to which the precise wording of a question plays a critical role in determining the answers provided by respondents. Take, for example, the following two questions about health insurance: “Are you fairly treated by your health insurance company?” versus “Does your health insurance company resort to deception in order to cheat you of covered benefits?” These two questions would likely elicit very different responses, and it probably wouldn’t surprise you to find that an advocate for health insurance reform asked the second question. Clearly, words like “deception” and “cheat” are strong indications that the author of the question doesn’t have a high opinion of the health insurance industry. Thus, as this principle implies, the wording of a question largely determines the answers people provide.

And while this principle is true in everyday life, when it comes to questionnaires, the effect is even more pronounced; most surveys don’t give respondents the chance to provide feedback about misunderstandings and ambiguities. Therefore, when it comes to questionnaire design, small wording changes can often make big differences, which is another reason to pretest your survey before sending it out to 3,000 respondents.

3. Think of it as a conversation. At the end of the day, a questionnaire is really just a conversation between you (the skillful survey designer) and your respondents. As such, you should consider the implicit assumptions that underlie the conduct of conversations in everyday life. These conversational “rules” include the idea that speakers should try to be informative, truthful, relevant, and clear. If you break these rules as a survey designer, you shouldn’t be surprised (or upset) if your respondents, in turn, provide you with poor-quality answers.

An important implication of this principle is that you should ask questions when you want to learn something from your respondents, as opposed to asking them to agree or disagree with a list of statements. Asking people to rate a bunch of statements is not very conversational – when's the last time you went up to a friend in the hallway and asked her to "rate the following statements on a scale of 1 to 10"? At the end of the day, people are more familiar with, and more adept at, answering questions – not rating statements. So, as an informed survey designer, you should ask well-thought-out questions and pretest those questions on experts and potential respondents prior to implementing your survey.
  
Notwithstanding the temptation to think of survey design as "more art than science," there's actually quite a bit of scientific evidence to guide you through the survey design process. Following these evidence-based best practices will not only save you time and effort during data analysis and interpretation, but will also improve the chances that your survey actually measures what you intend it to measure.

Bottom Line:

A. Put some effort and thought into the development of your questionnaire and pretest your survey items before implementation. 
B. Use these articles as a way to develop good practice. 
C. All of the above!

Monday, November 4, 2013

Tokenistic or Authentic: What exactly do you mean by "Let's collaborate"?

Patients as educators: Interprofessional learning for patient-centered care. Towle A, Godolphin W. Med Teach. 2013; 35:219-225. Available online from the Baystate Health Sciences Library, or from PubMed at your institution. 

I write this post from the coffee station at the Association of American Medical Colleges (AAMC) annual meeting. The coffee is being refilled after spending so many days fueling the conversations of leaders, followers, educators, contributors, and stakeholders in American medical education. No doubt this coffee also fuels many introductions and handshakes, exchanged business cards, and emphatic, opportunistic offers to collaborate. 

Against that backdrop is the perspective outlined by Towle & Godolphin in this article. Their phrase "tokenistic..." keeps coming to mind. As in, "Professionals have difficulty letting go of their expert role, leading to tokenistic involvement rather than partnership which requires a reduction in the power difference between [insert your profession here] and [their profession here]." 

Consider the dynamics that underscore this sentence: Power! Collaboration! Expertise! Control! Interprofessional partnerships! If these weren't the makings of an article in Medical Teacher, they would certainly be so for a Lifetime Original Movie for clinician educators. 

Interprofessional education and practice are the obvious alignment of these constructs: do we know enough about the roles of our colleagues to relinquish power in decision-making appropriately, and to make collaborative decisions that depend on the expertise of multiple people? 

The not-so-obvious and more common application of these constructs might be in our everyday collaborations: the educational programs (as described in this article), the manuscript-writing partnerships, and perhaps the committee formations and policy revisions. When do we truly expect and ask for authentic collaboration, and when are we comfortable with tokenistic involvement? For ourselves and for our colleagues? Do we always know which is being given? 

Interprofessional collaboration is the authentic application of involvement from many professions to the care of the patient. But how are we trained to do this in our other professional roles, and how do we encourage and expect it of our colleagues? 

Bottom Line:

Read this article to prompt a discussion of meaningful collaboration, but apply the concept to other professional areas. Perhaps some reflection here might set a higher bar for communication with our patients, our learners, our colleagues, and ourselves.

Wednesday, October 23, 2013

From the Field: Is Your Research Powerful?

We're lucky enough to have this "From the Field" post from Paul Visintainer, PhD, Director of Epidemiology and Biostatistics here in Academic Affairs at Baystate. Grab a pencil, a notebook, and your most mathematically oriented friend and read this post as your own personal lesson in error, statistical power, and sample size. Because - of course - even educational research requires a power analysis!  

Statistics Review 4: Sample Size Calculations. Whitley E, Ball J. Critical Care. 2002; 6: 335-341. Available online from the Baystate Health Sciences Library, or from PubMed at your institution. 

I am always pleased when I see articles expounding the importance of sample size and statistical power in clinical research – particularly when they appear in clinical journals.  I was happy to see the Whitley and Ball article in Critical Care.  Although the article is several years old, the issues discussed are still – and will continue to be – relevant.  I just want to reinforce and expand on a couple of the concepts.

First, among some disciplines, there exists an idea – a myth, really – that sample size and statistical power are only relevant for randomized clinical trials.  I don’t know where this idea got started or how it continues, but it is completely untrue.  Any time a researcher wishes to describe a comparison with a p-value, he must also address statistical power.  This applies to all study types – randomized controlled trials, cohort studies, case-control studies, retrospective chart reviews (this last is actually not a study type, but rather a method of data collection), etc.  If the study uses a data analytic approach that generates a p-value, then statistical power should be addressed. 

The above issue becomes clear when one considers the reasoning underlying statistical analysis and sample size.  Suppose a researcher posits a question about some clinical effect -- e.g., Does an exposure cause disease? Does the treatment reduce morbidity?  Does a medication reduce pain?  Since he doesn't know the correct answer, he designs a study to answer the question.  (If he knew the right answer, he wouldn't have to conduct the study, right?)  He wants to make a "generalizable" statement (e.g., the effect is generally real for all patients).  However, he isn't studying all patients – he is only studying a sample of patients.   

So, his conclusion will have some error because he is basing his results on only one sample.  In designing his study (before he collects any data), there are two possible errors he could make in his conclusion. 
A) He could conclude that there is an effect in his sample, when one truly doesn’t exist in the population.  This is the α-error (“alpha” or Type 1) and this is what the p-value reflects. 
B) He could conclude from his sample that there is no effect, when one truly exists in the population.  This is the β-error ("beta" or Type 2) and this is what power reflects – actually, 1-β = power.

It is important to note that these errors are “conditional”.  That is, α-error is the probability of rejecting the null hypothesis if the null hypothesis is true. β-error is the probability of NOT finding an effect if the null hypothesis is false.  At the start of the study, the investigator doesn’t know the true status of the null hypothesis, so BOTH errors have to be addressed.  Among other things, this is precisely what a sample size calculation does.  It incorporates estimates of both errors into the computation of sample size.
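
To restate those definitions in symbols (standard statistical notation, summarizing the paragraph above rather than quoting the article):

  α = P(reject the null hypothesis | the null hypothesis is true)          (what the p-value reflects)
  β = P(fail to reject the null hypothesis | the null hypothesis is false)
  power = 1 - β = P(reject the null hypothesis | the null hypothesis is false)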

Once the study is conducted, only one of two outcomes will be observed in the sample:
A)  If the results reject the null hypothesis with a p < 0.05, then the investigator either has found a true effect or has made a Type 1 error.
B)  If the results fail to reject the null hypothesis, then the investigator either has demonstrated that there is truly no effect or has made a Type 2 error.

Notice that because the results are based on a sample and not the population, there will always be some error in the interpretation, regardless of what the p-value is.  Notice also that nothing in this reasoning applies only to randomized controlled trials.  The errors and uncertainty facing the investigator are present regardless of study type.

The second point about statistical power discussed by the authors is the effect size.  I think this is a fundamental issue because when sample size and statistical power are addressed, the investigator establishes the clinical context within which study results will be interpreted.  How does he do that with sample size?  Consider the factors that go into computing a sample size:  type 1 error, type 2 error, some measure of variability (e.g., variance or standard deviation) and the specified difference in the groups or the treatment/exposure effect. Once these four factors are specified, the sample size can be computed.   

It is the last component – the specified difference in the groups or treatment/exposure effect – where the investigator defines what is clinically important.  After all, isn't that the goal of clinical research?  To comment on clinically important effects?  It has been the eternal frustration of statisticians to have an investigator ask, "How many patients do I need to find a statistically significant result?"  Asking a question in this manner indicates that the investigator has not considered his study within a clinical context.  Similarly, a protocol that proposes a review of 100 charts without any sample size justification essentially suffers from the same limitation – the investigator has not begun to think of his study (or the subsequent results) within a clinical context.
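
As a rough illustration of how those ingredients fit together, here is a minimal sketch in Python. It uses the standard normal-approximation formula for comparing two means; the function name and the example numbers are hypothetical, and none of this replaces a conversation with your statistician.

    from scipy.stats import norm

    def n_per_group(alpha, power, sd, delta):
        """Approximate sample size per group for a two-sided, two-sample
        comparison of means: n = 2 * (z_{1-alpha/2} + z_{power})**2 * sd**2 / delta**2.
        alpha : Type 1 error rate (two-sided)
        power : desired power, i.e., 1 - beta
        sd    : assumed standard deviation of the outcome
        delta : smallest between-group difference considered clinically important
        """
        z_alpha = norm.ppf(1 - alpha / 2)  # e.g., 1.96 when alpha = 0.05
        z_beta = norm.ppf(power)           # e.g., 0.84 when power = 0.80
        return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

    # Hypothetical example: to detect a 5-point difference on an outcome with a
    # standard deviation of 10, at alpha = 0.05 and 80% power, you need roughly
    # 63 patients per group.
    print(round(n_per_group(alpha=0.05, power=0.80, sd=10, delta=5)))

Notice that the clinically important difference sits in the denominator, squared: halve the difference you care about and the required sample size roughly quadruples, which is exactly why the clinical-context conversation has to happen before the arithmetic.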

Luckily, at Baystate there are people in Academic Affairs who can help clinicians work through the issues of sample size and statistical power.  A discussion with a statistician is a great way to address sample size.  There are a lot of options and configurations to consider, and clinicians should be aware of how these options may affect their study. 


P.S.  (To answer the question, “How many charts should I review to find statistical significance?” Well, if clinical relevance is . . . irrelevant . . . , then I think it is safe to say if one reviews 10,000 charts, something will turn up statistically significant.  Better yet, review 20,000 charts just to be sure.)

Bottom Line:

Ask a statistician. 

IPE and VTE: An Educator's Portion of Alphabet Soup

Reduction of venous thromboembolism (VTE) in hospitalized patients: aligning continuing education with interprofessional team-based quality improvement in an academic medical center.  Pingleton SK, Carlton E, Wilkinson S, Beasley J, King T, Wittkopp C, Moncure M, Williamson T. Acad Med. 2013; 88(10):1454-1459. Available online from the Baystate Health Sciences Library, or from PubMed at your institution. 

I am an educator. Aside from observing some of my clinical colleagues, my only real clinical experience involves helping to restrain my young sons for flu shots. However, when my colleague suggested that this article - about an interprofessional effort to decrease the incidence of venous thromboembolism (VTE) - is a view of how clinicians might think of interprofessional education, I rolled up my sleeves and prepared to muscle through the clinical-ese to find the nuggets of educational insight.

To my delight, the clinical world's fondness for acronyms has once again eased the burden of a taxing vocabulary, as we read in this article about KU's VTE data, the KUH intranet, PICCs, and BPAs. Served up with this alphabet soup is the real gem of this article - the planning matrix, described as a way of mapping the "types of interventions ... on the learners' stages of acceptance..." Take a look at Table 1, and you'll see that this is a very neat and souped-up way of saying they took the time to design a curriculum. This thoughtful approach was also evident in how they treated their interprofessional group of learners - by considering the breakdown of responsibilities in decreasing VTE incidence. 

The article falls short in one basic way: its only reported outcome is the decrease in VTE incidence. Clinical education is designed to improve patient outcomes by changing provider behaviors, which in turn change the approach to patient care. As educators, we should be measuring the extent to which our efforts change behavior as well as the change in patient outcomes; doing so would offer a clearer picture of the link between educational efforts and VTE incidence. 

Overall, this article is a good view of the way that educational efforts - particularly interprofessional ones - are being designed to improve patient outcomes. The secret of their success? Having a clearly defined problem and a thoughtful, interprofessional curriculum designed to fix it. Now that's a recipe you'll want to steal. 

Bottom Line:

IPE is used to decrease VTE - view this as a window into the link between educators and patients. For a fun activity, apply the points from Kanter's editorial outlining a better process for writing about innovations to this article, and see how the authors successfully put the spotlight on the problem before their innovative solution takes center stage. 

Tuesday, September 3, 2013

Human Capital Theory, or Whether or Not We Show Up for the Party

Child care responsibilities and participation in continuing education and training: issues relating to motivation, funding and domestic roles.  Dowswell T, Bradshaw G, Hewison J. J Advanced Nursing. 2000; 32(2):445-453. Available online from the Baystate Health Sciences Library, or from PubMed at your institution.

Picture this: you've cleaned the house. The cheese tray is out. Cocktail napkins are staged, and your neighbor's teenage son programmed the perfect soundtrack on your iPhone and got it to play throughout your house. The only problem? The guests never show. And so it often goes with our professional development: the best-laid intentions are delivered to largely empty rooms. 

The problem is not you. Well, it might be you. This article by Dowswell, Bradshaw, and Hewison offers some insight into your guests'--er, colleagues'--motivations underlying their participation (or not) in your professional development opportunities. 

If you do not read this article (what!??), at least take away this: human capital theory underlies much of what constrains or increases our access to educational opportunities. Sure, you know that moving a staff meeting from 8am to 7am to accommodate a spectacular guest speaker now competes with potential child care coverage concerns and transportation issues. But what you may not have known is that your educational opportunity may be viewed by some as "training to do the job better...[or] training to get a better job." And this distinction may shape the perceived cost of attending.

Human capital theory, cultural capital, and habitus are concepts that suggest our life circumstances influence a great deal more of our motivations and intentions in our decisions than we might initially think. How can you help your colleagues navigate these decisions? Read this article for a quick dip into the literature around this topic. No one likes to see a cheese tray go to waste.  

Bottom Line:

Terms like human capital, cultural capital, and habitus are old news in the educational literature. Perhaps it's time they found a new home here in health education. We'll start by looking at professional development.

Wednesday, July 31, 2013

Happy New Year! About those resolutions...

Succeeding as a Clinician Educator: Useful tips and resources.  Castiglioni A, Aagaard E, Spencer A, Nicholson L, Karani R, Bates CK, Willett LL, Chheda SG. J Gen Intern Med. 2012; 28(1):136-140. Available online from the Baystate Health Sciences Library, or from PubMed at your institution.

Oh, August: A new crop of residents. Tired, sunburned faculty and staff. All reminders that another annual cycle is beginning, which in itself is a reminder that your annual cycle doesn't ever seem to end. August brings an academic's New Year's celebration, and so it is with that silver-lining perspective that I present to you this article, with its built-in list of resolutions that I will condense and paraphrase: 

Resolution 1: Take control of your academic destiny.

Castiglioni and colleagues present this article as a plan of attack for junior faculty. Their protagonist, Dr. Enthusiast, seems like a great guy. He seems to be doing everything that he can. But these authors have the gall to suggest that he could - and should! - do more. 

The authors present six action areas for Dr. Enthusiast: 1) set goals, 2) seek mentors, 3) find your niche, 4) network, 5) move your work to scholarship, and 6) seek funding. Why should you want to take action in these areas? The authors present literature on how critical such actions are for advancement. To this I add that there is no more frustrating feeling than sitting in a conference listening to someone present a project you implemented LAST YEAR. And yours was much better, I might add. 

We do great things at this institution and because of that, we attract great learners and great teachers. But we can be even better. We can put the enthusiasm of our teaching into our careers and the careers of our junior faculty. We can seek mentors and serve as mentors, and we can become the type of mentees that we want of our learners. We can network and we can help others to network. And we can all move our work to scholarship. 

So this year, I grant you permission to move "floss daily" further down the list; make it a priority to advance your academic career instead. Then, flash those pearly whites during your own conference presentation.

Bottom Line:

The New Year is a clean slate with the opportunity to set new resolutions. Capitalize on that opportunity by setting a SMART goal (or two), and keep this article handy as your outline for success. 

Wednesday, June 19, 2013

For Your Review: Mixed Bag of Mixed Methods?

Impact of interprofessional education on collaboration attitudes, skills, and behavior among primary care professionals.  Robben S, Perry M, van Nieuwenhuijzen L, van Achterberg T, Rikkert MO, Schers H, Heinen M, Melis R. J Continuing Ed Health Prof. 2012; 32(3): 196-204. Available online from the Baystate Health Sciences Library, or from PubMed at your institution.

Robben and colleagues offer a perfect platform for discussion with this article outlining a program evaluation of an interprofessional educational intervention in the Netherlands. As if the application of social cognitive theory and Kirkpatrick's levels of program evaluation outcomes weren't enough, we are also invited to enjoy the design and analysis of a mixed-methods approach to evaluation. 

So we're good, right? Not quite. Similar to the way that my computer's Spellcheck--er, Spell-check feature still questions me when I write "interprofessional" as a non-hyphenated word even though I do it on purpose, so too must we continue to question educational research even though it is published. 

I do not profess to say that this study is flawed (or that it isn't). But it does require exploration into key concepts. For example, the tools used for the quantitative portion of this study are noted to produce valid scores, but where's the evidence? Also, the qualitative data reportedly support many themes in the results, but no data (direct quotes) are provided. And I dare you to read this article without Googling "human movement scientist" and "Hawthorne effect."  All in all, this article is a stone on which to sharpen your critical-analysis teeth. Dig in.  

Bottom Line:

Excellent example of a mixed-methods program evaluation or novice term paper filled with fancy words but little substance? You decide. The interprofessional inter-professional nature of the content is just the cherry on top. 

Thursday, May 23, 2013

How Does Your Research Measure Up?

Association between funding and quality of published medical education research.  Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. JAMA. 2007; 298(9): 1002-1009. Available online from the Baystate Health Sciences Library, or from PubMed at your institution. 

In their quest to determine the quality of funded versus non-funded educational research, Reed and his colleagues could not find in the literature a tool to measure the quality of medical education research. So, they did what each of us would have done in the same situation - they developed and validated one. 

Thus, this article double dips as a call for funding in medical education research (yay!) and a tool that holds us researchers accountable for producing higher quality research (wait, what?). 

Those of us who are familiar with Reed and Cook in the literature have likely already dog-eared and highlighted this synopsis of rigorous quantitative research. The tool (medical education research study quality instrument, or MERSQI) and the description of its development boil down elements of rigorous research that should guide our work. 

In fact, what the MERSQI lacks in snappy name recognition, it makes up for in utility. Consider the emphasis on study design, data analysis, and validity. Admittedly, the authors omit "subjective" criteria from their MERSQI, such as relevance of the research question and appropriateness (and use!) of a conceptual framework. Also, the MERSQI should not be used with qualitative research. Additional side effects may include the urge to rely too heavily on a recipe that may not be appropriate for your research question. 

Ultimately, having access to a tool like this and knowing how to use it are the aspects that make research difficult (and great). 

Bottom Line:  

The medical education research study quality instrument (MERSQI) - developed to help the authors answer their research question - is the real meat of this article. Use it like guard rails on the highway to keep your research on the right path. 

Tuesday, May 7, 2013

From the Field: Put Your Troops in Groups

ABC of teaching and learning in medicine: Teaching small groups. Jaques D. BMJ. 2003; 326(1): 492-494. Available online. 

This guest post (a feature I'm calling "From the Field") was written by Lauri Meade, associate program director for internal medicine at Baystate, and her colleagues from the department of internal medicine: Reham Shaaban, Chris LaChance, Jasmine Padaam, Siva Natanasabapathy, Raquel Belforti, Michael Picchioni, and Christine Bryson.  
This group of medical educators offered their residents and faculty some great ways to spruce up small groups, and they have graciously allowed me to share their tips with you. Consider engaging your colleagues with these tricks during other faculty development opportunities, such as grand rounds or faculty meetings. Cheers to the medicine faculty - now dig in!

As teaching faculty, we are often teaching in small groups (e.g., attending rounds and noon lecture). Here are some great basic tips for small-group teaching:
  * conduct dialogue rather than give a lecture
  * get the learner talking more than the teacher
  * get the learners to talk to each other
  * give learners a way to prepare for the learning session
  * be wary of one learner dominating the discussion

In addition, we can also use innovative methods in small groups:
  * Show a TED Talk during rounds – such as this one from Sal Khan, an educator who spoke at the AAMC conference this year
  * Get everyone to 'race to the correct answer' on their iPhone
  * Use Google images to illustrate a point
  * Orchestrate abstract browsing: browse recent or topical literature together in the session. Read an abstract silently for 1 minute, then talk about it for 4 minutes. Repeat for 6 abstracts, then choose the full paper you want to read.
  * Use Twitter to engage the learner
  * Take a field trip (go as a group to a hospital area, such as radiology or the micro lab)

Homework: 
  We challenge you to go to the TED Talks on health care and learn in 8-12 minutes.
  Try splitting into pairs to answer a clinical question, then have the pairs teach the group.
  How do you help your resident engage the learner at attending rounds?

Bottom Line:

A group of internal medicine faculty from Baystate offer these tips to get your learners engaged in their small groups. A little technology and some creative teaching strategies go a long way. 

Wednesday, May 1, 2013

Reconsidering "Scholarship Reconsidered"

Developing scholarly projects in education: A primer for medical teachers. Beckman TJ, Cook, DA. Medical Teacher. 2007; 29: 210-218. Available online. 

What does it mean to be an educational scholar? In 1990, Ernest Boyer published "Scholarship Reconsidered" from the Carnegie Foundation for the Advancement of Teaching. This text began a conversation that changed the definition of faculty scholarship in higher education to recognize the contribution of educational scholarly projects. In 1997, Glassick's "Scholarship Assessed" continued the conversation by identifying six standards with which to evaluate high quality educational scholarship.

In this article, Beckman and Cook keep the conversation rolling. This primer outlines a framework for designing high-quality scholarly education projects such as curricula, review articles, or research. The authors' approach is based on the four domains presented by Boyer and the six standards by Glassick, but - as appropriate for an audience of busy clinicians - has been neatly condensed into only three steps. 

These three steps (Refine study question, Identify study design & methods, and Select outcomes) guide educational scholars along a path to high-quality scholarly work. The article is a meaty guide to developing rigorous research. From its excerpts from novel educational research textbooks (yes, these exist) to its quick yet comprehensive section on validity, this article should be a staple in any research curriculum or resource file. This article - along with the work of Boyer and Glassick - is a must-read for budding educators, researchers, and - at the very least - those who will ultimately promote them.

Bottom Line:

Beckman and Cook harness the power of Boyer and Glassick (some higher education household names) to deliver a comprehensive framework for building (and evaluating!) high quality educational scholarship in three perfectly-portioned steps. Print this article, grab your highlighter, and delve in.