Wednesday, 23 June 2010

Don't get sick in July?


June is almost over. If you work in an academic medical center, as I do, that can mean only one thing.

The new interns are coming, and existing residents will soon be advancing to the next level. The joy! The excitement! The trepidation! And it's not just the senior residents and the faculty feeling these emotions. It's the patients too. At least, it's the patients feeling the trepidation. The reason is the longstanding belief in academic medical centers, a belief that has diffused out of them and into "common wisdom," that you really, really don't want to get sick in July.

But is there any truth to this common wisdom, passed down from hoary emeritus faculty to professor to assistant professor to resident to medical student every year? Is there any truth to the belief, commonly held by the public, that care deteriorates in July? After all, this is something I've been taught as though it were fact ever since I first set trembling foot on the wards way back in 1986. So it must be true, right? Well, maybe. It turns out that a recent study published in the Journal of General Internal Medicine has tried once again to answer this question and has come to a rather disturbing conclusion.

Imagine, if you will, that you want to determine whether there really is a "July effect," that quality of care really does plummet precipitously as common wisdom claims. How would you approach it? Mortality rates? That's actually fairly hard, because mortality rates fluctuate according to the time of year. For example, trauma admissions tend to spike in the summer. Well do I remember during my residency the fear of the Fourth of July weekend, because it was usually the busiest trauma weekend of the year--and we had brand-new residents having to deal with it all. It was an attending's and senior resident's worst nightmare. In any case, if a hospital has an active trauma program, it would naturally be expected to have more deaths during the summer regardless of resident status, quite simply because there is more trauma. Complication rates? That might also be a useful thing to look at, but it's not as easy as it seems either. How about comparing morbidity and mortality rates between teaching hospitals and community hospitals throughout the year and testing whether mortality rates increase in academic hospitals relative to community hospitals? That won't work very well, either, mainly because there tends to be a huge difference in case mix and severity between academic institutions and community hospitals. Community hospitals tend to see more routine cases of lower severity than teaching hospitals do.

Yes, the problem in doing such studies is that it's not as straightforward as it seems. Choosing appropriate surrogate endpoints that indicate quality of care attributable to resident care is not easy. It's been tried in multiple studies, and the results have been conflicting. One reason is that existing quality metrics in medicine have not been sufficiently standardized and risk-adjusted to allow for reliable month-to-month comparisons on a large scale. In surgery, we are trying to develop such metrics in the form of the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP), but these measures don't always apply to nonsurgical specialties, and there are multiple competing measures of quality. It's true that we're getting much better at assessing quality than we used to be, but it's also true that we have a long way to go before we have a reliable, standardized, validated set of quality measures that can be applied over a large range of specialties.

That leaves investigators to pick and choose surrogates that suit their purposes, and that's exactly what the investigators of this most recent study, hailing from the University of Southern California and UCLA, have done. The surrogate that they chose is medication error-related deaths:

Inexperienced medical staff are often considered a possible source of medical errors.1-6 One way to examine the relationship between inexperience and medical error is to study changes in the number of medical errors in July, when thousands begin medical residencies and fellowships.1,7-11 This approach allows one to test the hypothesis that inexperienced residents are associated with increased medical errors1,8,9,11-15--the so-called "July Effect."

Previous attempts to detect the July Effect have mostly failed,1,8-17 perhaps because these studies examined small,8,10-13,15-17 non-geographically representative samples,8-17 spanning a limited period,11-16 although a study of anaesthesia trainees at one Australian hospital over a 5-year period did demonstrate an increase in the rate of undesirable events in February--the first month of their academic year.1 In contrast, our study examines a large, nationwide mortality dataset spanning 28 years. Unlike many other studies,18 we focus on fatal medication errors--an indicator of important medical mistakes. We use these errors to test the "New Resident Hypothesis"--the arrival of new medical residents in July is associated with increased fatal medication errors.

To test this hypothesis of the "July effect," the investigators examined the database of computerized United States death certificates from 1979 to 2006 containing the records of 62,338,584 deaths. The authors then looked for deaths for which a medication was listed as the primary cause of death. Their results are summarized below:

One thing that irritates me about this graph is that it does something I really, really hate in a graph. It cuts off the bottom, which, because the graph doesn't go to zero, makes the differences between the values seem a whole lot larger than they really are. That "July spike" plotted on this graph is an increase in the number of deaths due to medications over the expected number by maybe 7%, but it looks like a whole lot more. In fairness, though, the investigators analyzed: (1) only preventable adverse effects; (2) only medication errors (rather than combining several types of medical errors, such as medication and surgical errors); (3) only fatal medication errors; (4) only those medication errors coded as the primary cause of death (rather than medication errors coded as primary, secondary, and/or tertiary). Still, one always has to wonder about how the denominator is calculated; i.e., how the "expected" number of deaths for each month is calculated. Basically, the investigators used a simple least-squares regression analysis to estimate the "expected" number of deaths.
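To make the arithmetic concrete, here is a minimal sketch of how an "expected" monthly count can be derived from a least-squares trend and how a July observed-to-expected ratio then falls out. The numbers and variable names are invented for illustration; this is the general approach as I understand it, not the authors' actual code.

```python
# A minimal sketch, with made-up numbers, of estimating "expected" monthly
# deaths from a least-squares trend and computing a July observed/expected ratio.
import numpy as np

# Hypothetical monthly counts of deaths coded with a medication as the primary
# cause, flattened across two years (January of year 1 through December of year 2).
counts = np.array([
    310, 305, 322, 318, 315, 320, 345, 319, 312, 316, 308, 311,   # year 1
    325, 318, 330, 327, 324, 331, 358, 329, 322, 326, 319, 323,   # year 2
])
t = np.arange(len(counts))

# Simple least-squares linear trend: expected count = a + b * t
b, a = np.polyfit(t, counts, 1)
expected = a + b * t

# July is month index 6 (0-based) within each year.
july_idx = np.arange(6, len(counts), 12)
july_ratio = counts[july_idx] / expected[july_idx]
print("July observed/expected ratios:", np.round(july_ratio, 3))
# A ratio of about 1.07 would correspond to the roughly 7% excess visible in the figure.
```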

If this is where the investigators had stopped, I might not have been as annoyed by this study. Sure, it's questionable to assume that deaths due to medication errors are strongly correlated with new, inexperienced residents. After all, if there's one thing we're starting to appreciate more and more, it's that medication errors tend to be a system problem, rather than a problem of any single practitioner or group of practitioners. But the above graph does appear to show an anomaly in July.

Unfortunately, the investigators did something that always disturbs me when I see it in a paper. They faced a problem: death certificates didn't show whether the death occurred in a teaching hospital or not. So, in order to get at whether there was a correlation between a greater "July effect" and teaching hospitals, as would be expected, they looked at county-level data for hospital deaths due to medication errors. Then they determined whether each of these counties had at least one teaching hospital and estimated the percentage of the hospitals in each county that are teaching hospitals, the rationale being that the higher the proportion of teaching hospitals in a county, the larger the July effect should be. This is the graph they came up with:

Holy ecological fallacy, Batman! The investigators appear to be implying that a relationship found in group-level data applies to individual-level data; i.e., to individual hospitals. It almost reminds me of a Geier study. In any case, why didn't surgical errors increase if the "July effect" exists? Wouldn't this be expected? I mean, we surgeons are totally awesome and all, but we're only human, too. If the July effect exists, I have no reason to believe that we would be immune to it.
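To see concretely what that county-level aggregation amounts to (and why it invites the ecological fallacy), here is a rough sketch with invented data and column names. It illustrates the general style of analysis described above, not the authors' code: everything is aggregated at the county level, and nothing in it identifies which hospitals, let alone which residents, were involved in the errors.

```python
# A rough sketch, with invented data, of grouping counties by their estimated
# share of teaching hospitals and comparing the July observed/expected ratio
# across groups. The aggregation never touches individual hospitals.
import pandas as pd

counties = pd.DataFrame({
    "county":         ["A", "B", "C", "D", "E", "F"],
    "teaching_share": [0.00, 0.10, 0.25, 0.40, 0.60, 0.75],  # fraction of hospitals that are teaching hospitals
    "july_deaths":    [40, 55, 62, 38, 50, 45],               # observed fatal medication errors in July
    "expected_july":  [39.0, 52.5, 57.0, 34.5, 44.0, 39.5],   # from a trend model like the one sketched earlier
})

counties["july_ratio"] = counties["july_deaths"] / counties["expected_july"]
counties["group"] = pd.cut(
    counties["teaching_share"],
    bins=[-0.01, 0.2, 0.5, 1.0],
    labels=["few teaching hospitals", "some", "many"],
)
print(counties.groupby("group", observed=True)["july_ratio"].mean())
```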

The existence of a "July effect" is not implausible. After all, in late June and early July every year, we flood teaching hospitals with a new crop of young, eager, freshly minted doctors. I can feel the anticipation at my own institution right now. It's a veritable yearly rite that we go through in academia. Countering the likelihood of a "July effect" are the seasonally tightened anal sphincters of attendings and senior residents, which lead them to keep a tight rein on these new residents--which is as it should be. In any case, this particular study is mildly suggestive, but hardly strong evidence for the existence of the "July effect." Personally, I find the previous study on this issue that I blogged about three years ago to be far more convincing; its results suggested a much more complex interplay of factors.

In the end, I have some serious problems with this study, not the least of which is the assumption that medication errors are correlated so strongly with inexperienced residents when we now know that they are far more a systems issue than a problem of any individual physician or group of physicians. There are many steps in the chain from a medication order all the way down to actually administering the medication to the patient where something can go wrong. In fact, these days the vast majority of the effort that goes into preventing medication errors is expended on putting systems in place that catch these errors before the medication ever makes it to the patient: computerized ordering systems that question orders with incorrect doses or medications, systems where pharmacists and then nurses check and double-check the order, and systems where the actual medication order is checked against the medication to be given using computerized bar code scanning. It's really a huge stretch to conclude that fatal medication errors are a good surrogate marker for quality of care attributable to the resident staff, the pontifications and bloviations of the authors to justify their choice in the Introduction and Discussion sections of this study notwithstanding. The other problem is the pooling of county-level data into a heapin' helpin' of the ecological fallacy. Is there a July effect? I don't know. It wouldn't surprise me if there were. If the July effect does exist, however, this study is pretty thin gruel to support its existence and estimate its severity.

