Tuesday, September 18, 2012

ACOG's dysmenorrhea FAQs: Evidence of propaganda?

I have been looking up information on endometriosis for a friend of mine, and came upon this from the American College of Obstetricians and Gynecologists:

So I bit and started reading. About halfway through, I realized that this really reminds me of how they taught literature in my native USSR. The teaching consisted of stock interpretations of the great authors' works through the prism of Communist Party propaganda. In this interpretation all of the writers' messages railed against the monarchy, and all exhortations were for the purpose of freeing the proletariat. No teacher ever dared to disagree, and no student was expected to question.

Why, you ask, do these ACOG FAQs on dysmenorrhea remind me of my schooling in the old country? Well, glad you asked. Check out this gem, for example:
That's it. No follow-up questions? Good!

But really let's take it from the top. So, OK, there is the pelvic exam. I can deal with that because I am used to that as the default for anything going on "down there." Then there is the ultrasound exam. I guess I can deal with that too because there has been so much in the news about pelvic ultrasound, and that seems to be what is done to get a better look at what is down there. A laparoscopy? Wait, isn't that a surgical procedure? Yeah, they even say it's a surgery, and it's done to get a "look inside the pelvic region." Hmmm, this sounds pretty serious. How come they don't say anything here, in these FAQs, about what they are looking for, how good this surgery is at finding it, what the chances are that what they find is responsible for my dysmenorrhea, what the treatment is and how successful it is at alleviating my symptoms of dysmenorrhea, and whether or not there are alternative interventions?

(Does anyone really ask the patients what their FAQs are or are they generated by the clinicians based on what they think should be important to the patient? Or even worse, based on what they think they can give a perfunctory answer to? Just from reading these Qs and As I think it's the latter.)

You get my point. This formulation of information is beyond useless. It seems paternalistic in its "there there, dear, we will take care of everything" attitude. Perhaps I am out of touch. Perhaps women, patients in general, don't want to go beyond what their doctor tells them to do. But I happen to think that it is these FAQs that are out of touch. Granted, I am a "difficult" patient, as even a pelvic exam, let alone ultrasound and surgery, meets with questions about the evidence of its effectiveness. But even if you have only completed ePatient 101, you should know enough to ask about something as serious as a laparoscopy! How can anyone be expected to just acquiesce and, sighing, say "yes, I guess I have to have surgery"? This "FAQ" is completely absurd in its willful lack of useful information. And if you read the rest of the document, you will find many places where this is true as well.

I know that some of you will read this and click away saying "oh, there she goes again." But I think you need to rethink your apathy. After all, there are well over 200,000 deaths (and possibly even more than 400,000) annually in the US that happen unnecessarily just from contact with our "healthcare" system. If you can avoid the avoidable, is it not incumbent upon you to be fully informed? You may think that all these recommendations are evidence-based, and there is not a whole lot of wiggle room in how to proceed. Well, you are wrong if you think so, since the evidence, even when it is available, is rarely, if ever, unequivocal. And furthermore, in medicine no benefit comes without a risk. Are you sure you want your doctor to make these decisions for you? How is it that people who are not even willing to take wardrobe advice from their mothers wade so enthusiastically into these high-risk medical adventures with their eyes and ears closed?

I wrote Between the Lines to show just how imprecise and uncertain the science of clinical medicine is. But beyond that, I wanted to provide you with tools at least to ask the right questions. So, please, go and ask. And insist that you be included in the FAQ processes. Otherwise, we are just wasting terabytes on propaganda.            

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Friday, September 7, 2012

What does $750 billion in wasted spending look like?

Here is an infographic (I know) from the Institute of Medicine, which just released this report. According to it, we are wasting $750 billion annually on unnecessary healthcare costs, and here is the breakdown. Note the ~$250 billion on overdiagnosis and overtreatment. Now, what are we going to do about it?




If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, August 14, 2012

BTL reader question: How do you get to 2%?

I have started a FAQ page on the BTL book web site here, and I will cross-post the discussion here on the blog. This will give us an opportunity to have a more interactive discussion, if necessary, with additional comments and questions.

Here is the inaugural installment.

On August 13, 2012, this question came in via Twitter:




Well, here is the answer (and thank you for the question, Tia!).

First the problem. At the bottom of page 74 and going on to the top of page 75 I discuss the question posed in a 1978 New England Journal of Medicine paper by Casscells and colleagues to 60 physicians and physicians-in-training at Harvard Medical School. The problem went like this:
 
"If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"

The question clearly mimics a disease screening situation. The answer is simple yet elusive. Let us assume that 1,000 people are tested. Among them only 1 person has the actual disease. However, given that the false positive rate is 5%, we also know that out of the 1,000 people tested, 50 will have a false positive test. Assuming that the single person with the disease also has a positive test, we can expect 51 people to test positive. But since only 1 out of these 51 people with a positive test has the disease, the answer to the question above is 1/51=2%. This is a pretty shocking realization, given that a large plurality of the Harvard doctors and trainees chose 95% as their answer. 
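If you want to see the arithmetic laid out, here is a quick check in a few lines of Python, using the same rounded figure of 50 false positives as above:

```python
# Positive predictive value for the Casscells screening problem:
# prevalence 1/1000, false positive rate 5%, sensitivity assumed to be 100%.
tested = 1000
true_positives = 1                # the single person who actually has the disease
false_positives = 0.05 * tested   # ~50 healthy people who nonetheless test positive

ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.1%}")  # → 2.0%
```

Changing the prevalence or the false positive rate in this little calculation shows just how strongly the answer depends on how rare the disease is, which is exactly the intuition the Harvard respondents missed.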

So, be careful not to let your intuition override the data when making medical decisions!


If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Wednesday, July 25, 2012

Medicine as the trolley problem

Are you familiar with the trolley problem? It is an ethics dilemma first formulated by the great Philippa Foot as a part of a series of such dilemmas. Her formulation goes roughly like this. Imagine there is a tram hurtling down a track. If it keeps going straight, it will hit and kill 5 people who are working on that track. The conductor is able to throw a switch and divert the train to another part of the track, where 1 single worker will be killed by the trolley. The question is what should the conductor do? Most people when asked respond that yes, he should throw the switch and sacrifice 1 life to save 5. After all, the net benefit is n=4.

There are literally thousands of alternative formulations of this problem, but one of them from the philosopher Judith Jarvis Thomson merits special consideration. The problem starts out similarly, with 5 lives on a track in potential peril. The vantage point and the solution are quite different, though. Now there is a bridge over the rail track, and a very large man is looking at the tracks from the bridge. One way to stop the train is to throw a heavy object in its path, like this large man, for example. You are on the bridge standing behind the man. Would you be justified in pushing him off the bridge in front of the tram to meet his death in order to spare the 5 workers down the tracks? Most people when faced with this formulation say an emphatic "no." This is somewhat puzzling, since the net benefit is the same, n=4, as in the original Foot formulation.

Philosophy professors have puzzled over this difference for decades, and there are several potential explanations for why we respond differently to the two scenarios. One explanation has to do with the proximity of the operator (the conductor in the first case and the person doing the pushing in the second) to the sacrificial lamb -- in the first case one is sufficiently removed from the act of killing by merely redirecting the tram, whereas in the second the action is, well, more active, and the operator is actually pushing an innocent person to his death.

Though in some ways the scenarios seem to bear no practical distinction from one another, we see the morals and ethics of each differently. This difference in viewpoint is instructive for the field of medicine, where it has implications for how policy relates to the individual patient encounter. Here is what I mean.

Suppose you are a policy maker, and you recommend that every woman at age 40 start to receive an annual screening mammogram to reduce deaths from breast cancer. At the population level, if we screen 1,000 women for about 30 years, we will save approximately 8 of them from a breast cancer death. (Yes, it's 8, not 80, and not 800.) At the same time, among these 1,000 women, there will be over 2,000 false alarms, and over 150 of these will result in an unnecessary biopsy. Some of these biopsies will incur further complications, though currently we do not seem to have the data to quantify this risk. But what if even one of these biopsies were to lead to the death of, or another dire lasting complication in, a woman who turned out not to have cancer? And, by the way, the accounting is not all that different when applying the new USPSTF mammography screening recommendations. Well, then we have the trolley problem, don't we? We are potentially sacrificing 1 individual to save 8. And who does the sacrificing is where the variations of the trolley problem come in.
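To make the trade-off concrete, here is a back-of-the-envelope tally using the round numbers quoted above (they are approximations, not precise estimates):

```python
# Per 1,000 women screened annually for ~30 years (figures as quoted in the post)
deaths_averted = 8
false_alarms = 2000
unnecessary_biopsies = 150

# How much harm is "purchased" per breast cancer death averted?
print(false_alarms / deaths_averted)          # 250 false alarms per death averted
print(unnecessary_biopsies / deaths_averted)  # ~19 unnecessary biopsies per death averted
```

Seen this way, the question is not whether screening saves lives (it does, for about 8 in 1,000), but whether that benefit justifies the volume of downstream harm each saved life carries with it.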

Payers levy financial penalties on primary care physicians when they fail to comply with screening recommendations in their patient panels. The payer certainly sees this issue as the original formulation of the problem: Why not throw this financial switch to achieve net life savings? But for a clinician who deals with the individual patient this may be akin to pushing her over the bridge toward a potentially fatal event. Because we don't have a crystal ball, we cannot say which woman will die or incur a terrible complication. But the same population data that tell us about benefits must also give us pause when reflecting on the risks. Add the ubiquitous uncertainty (and lack of data) into this equation, and the implications are even more shocking. So, while making policy recommendations based on population data is sensible, policing uniform application of these recommendations to individual patients is fraught: clinicians and patients need to be cautious about individual decisions even when, in the population data, benefits outweigh risks.

On the surface risk-benefit equations for many interventions may appear favorable, leading to blanket policy recommendations to employ them on everyone who qualifies. In the office, the clinician, caught in a tug of war between mountains of new literature and the ever-shrinking appointment times, is hard-pressed to take the time to consider these recommendations in the context of the individual patient. And furthermore, financial incentives from payers act as a short-hand justification, a "nudge," for doing as recommended rather than for giving it thought. So, who must look out for the patient's interest? The patient, that's who. Who understands the patient's attitude toward the risks and the benefits? The patient, that's who. Who now has to be responsible for making the ultimate informed decision about which track to stand on? The patient, that's who.

For me the trolley problem gives clarity to the reservations that I walk around with every day. I have done a lot of soul searching about why it is that, even if the benefits seem to outweigh the risks, I am still more often than not skeptical about whether a particular intervention is right for me. And since every intervention in medicine has a real risk, though mostly quite low, of going terribly awry, my skepticism is justified. This is my approach to evaluating these risks and benefits, based on my values and my understanding of the data as it is today.

What's the answer to this ethical conundrum in medicine? I cannot see that policy makers will stop throwing the switch in the near future, and so as a society we will be forced to accept the tram's collateral damage. And while this may make sense in an area such as vaccination, where thousands of lives can be saved by sacrificing a very few by throwing the switch, in most everyday medical decisions the answer is far less clear-cut. Will doctors rebel against being forced to throw some patients on the tracks in order to save some marginally larger number of others? I don't think that they have the time or the energy or the incentive to do this, since the framing of the switch-throwing is through the rhetoric of "evidence." Right or wrong, doctors are shackled by the stigma of ignorance that comes with not following evidence-based guidelines, and this may act to perpetuate blind compliance. This leaves the patients, for some of whom the right thing will be just to get themselves off the tracks altogether, far away from the hurtling trolley until its brakes are fixed.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Friday, July 20, 2012

Early radical prostatectomy trial: Does it mean what you think it means?

Another study this week added to the controversy about early prostate cancer treatment. The press, as usual, stopped at citing the conclusion: Early prostatectomy does not reduce all-cause mortality. But the really interesting stuff is buried in the paper. Let's deconstruct.

This was a randomized controlled trial of early radical prostatectomy versus observation. The study was done mostly within the Veterans Affairs system and took 8 years to enroll a little over 700 men. This alone should give us pause. Figure 1 of the paper gives the breakdown of the enrollment process: 5,023 men were eligible for the study, yet 4,292 declined participation, leaving 731 (15% of those who were eligible) to participate. This is a problem, since there is no way of knowing whether these 731 men are actually representative of the 5,023 who were eligible. Perhaps there was something unusual about them that made them and their physicians agree to enroll in this trial. Perhaps they were generally sicker than those who declined and were apprehensive about the prospect of observation. Or perhaps it was the opposite, and they felt confident in either treatment. We can make up all kinds of stories about those who did and those who did not agree to participate, but the reality is that we just don't know. This creates a problem with the generalizability of the data, raising the question of exactly which patients these data actually apply to.

The next issue was what might be called "protocol violation," though I don't believe the investigators actually called it that. Here is what I mean. 364 men were randomized to the prostatectomy group, and of them only 281 actually underwent a prostatectomy, leaving nearly one-quarter of the group free of the main exposure of interest. Similarly, of the 367 men randomized to observation, 36 (10%) underwent a radical prostatectomy. We might call this inadvertent cross-over, which does tend to happen in RCTs, but needs to be minimized in order to get at the real answer. What this type of cross-over does is, as is pretty intuitively obvious, blend the groups' differences in exposure, resulting in a smaller difference in the outcome, if there is in fact a difference. So, when you don't get a difference, as happened in this trial, you don't know if it is because of these protocol violations or because these treatments are essentially equivalent.
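The dilution effect is easy to demonstrate with a toy calculation. The mortality figures below are hypothetical, chosen only to illustrate the point; the adherence fractions are the trial's actual ones:

```python
# Hypothetical "true" mortality under each treatment, for illustration only
true_mortality = {"surgery": 0.40, "observation": 0.50}

def arm_mortality(fraction_operated):
    """Observed mortality in a randomized arm that is a mix of operated
    and non-operated patients."""
    return (fraction_operated * true_mortality["surgery"]
            + (1 - fraction_operated) * true_mortality["observation"])

surgery_arm = arm_mortality(281 / 364)     # ~77% of the surgery arm was operated on
observation_arm = arm_mortality(36 / 367)  # ~10% of the observation arm crossed over

print(round(true_mortality["observation"] - true_mortality["surgery"], 3))  # 0.1 (true effect)
print(round(observation_arm - surgery_arm, 3))  # 0.067 (diluted effect actually observed)
```

Even with a genuine 10-percentage-point benefit baked in, the cross-over shrinks the arm-to-arm difference by about a third, which is exactly why a null result in a trial with this much non-adherence is so hard to interpret.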

And indeed, the study results indicated that there is really no difference between the two approaches in terms of the primary endpoint (all-cause mortality over a substantially long follow-up period was 47% in the prostatectomy and 50% in the control groups [hazard ratio 0.88, 95% confidence interval 0.71 to 1.08, p=0.22]). This means that the 12% relative difference in this outcome between the groups was more likely due to chance than to any benefit of the surgery. "But how can cancer surgery impact all-cause mortality?" you say. "It only claims to alter what happens to the cancer, no?" Well, yes that is true. However, can you really call a treatment like that successful if all it does is give you the opportunity to die of something else within the same period of time? I thought not. And anyway, looking at the prostate cancer mortality, there really was no difference there either: 5.8% attributable mortality in the surgery group compared to 8.4% in the observation group (hazard ratio 0.63, 95% confidence interval 0.36 to 1.09, p=0.09).

The editorial accompanying this study raised some very interesting points (thanks to Dr. Bradley Flansbaum for pointing me to it). He and I both puzzled over this one particularly unclear statement:
...only 15% of the deaths were attributed to prostate cancer or its treatment. Although overall mortality is an appealing end point, in this context, the majority of end points would be noninformative for the comparison of interest. The expectation of a 25% relative reduction in mortality when 85% of the events are noninformative implies an enormous treatment effect with respect to the informative end points.
Huh? What does "noninformative" mean in this context? After thinking about it quite a bit, I came to the conclusion that the editorialists are saying that, since prostate cancer caused such a small proportion of all deaths, one cannot expect this treatment to impact all-cause mortality (certainly not the 25% relative reduction that the investigators targeted), the majority of the causes being non-prostate cancer related. Yeah, well, but then see my statement above about the problematic aspects of disease-specific mortality as an outcome measure.

The editorial authors did have a valid point, though, when it came to evaluating the precision of the effects. Directionally, there certainly seemed to be a reduction in both all-cause and prostate cancer mortality in the group randomized to surgery. On the other hand, the confidence intervals both crossed unity (I have an in-depth discussion of this in the book). On the third hand (erp!) the portion of the 95% CI below 1.0 was far greater than that above 1.0. This may imply that with a study that could have achieved greater precision (that is, narrower confidence intervals) we might have gotten a statistical difference between the groups. But to get at higher precision we would have needed either 1) a larger sample size (which the investigators were unable to obtain even over an 8-year enrollment period), or 2) fewer treatment cross-overs (which is clearly a difficult proposition, even in the context of a RCT), or 3) both. On the other hand (the fourth?), the 3% absolute reduction in all-cause mortality amounts to the number needed to treat of roughly 33, which may be clinically acceptable.
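For the record, the number-needed-to-treat arithmetic is just the reciprocal of the absolute risk reduction:

```python
# NNT from the reported all-cause mortality (47% surgery vs. 50% observation)
arr = 0.50 - 0.47  # absolute risk reduction, 3 percentage points
nnt = 1 / arr
print(round(nnt))  # → 33
```

In other words, roughly 33 men would need to undergo radical prostatectomy for one additional man to be alive at the end of follow-up, if that 3-point difference were real rather than chance.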

So what does this study tell us? Not a whole lot, unfortunately. It throws an additional pinch of confusion into the cauldron already boiling over with contradiction and uncertainty. Will we ever get the definitive answer to the question raised in this work? I doubt it, given the obvious difficulties implementing this RCT.  
                  
If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, July 17, 2012

House appropriations bill to terminate AHRQ and prohibit funding patient-centered research

Update 7/18/12, 3:30 PM eastern:

The Hill has reported here that the bill has cleared the subcommittee. It will be going to the full committee next week.
The $150 billion bill cuts $6.3 billion from current levels of spending in the Labor, Health and Human Services and Education Departments and is part of Republican efforts to rein in government spending – an important message for the GOP on the campaign trail.
[...]
But other areas are slashed. The bill ends President Obama’s signature Race to the Top education initiative and cuts millions from advanced appropriations for the Corporation for Public Broadcasting, which funds NPR and PBS. The agency that monitors child labor abroad is cut by 68 percent and the agency that distributes Social Security payments gets cut by $764 million. It also would cut funding for Planned Parenthood if the organization continued to provide abortions.
(Hat tip to Michael Millenson for the above link) 


Yes, folks, you read that right: The House of Representatives has drafted an appropriations bill that will dissolve the AHRQ and prohibit any funding for patient-centered outcomes research (PCOR). The AHRQ is an agency that spearheads and funds healthcare safety and quality research, as well as ways to rein in the costs while expanding access. If it is eliminated, there will be no one to focus on these critical issues. This bill is truly anti-patient and the reps must be informed that they have gone too far.

Here are the names of the Appropriations Committee members, with the subcommittee members responsible for this bill in bold (via STFM):

Democratic Members

  • Norman D. Dicks, Washington
  • Marcy Kaptur, Ohio
  • Peter J. Visclosky, Indiana
  • Nita M. Lowey, New York
  • José E. Serrano, New York
  • Rosa L. DeLauro, Connecticut
  • James P. Moran, Virginia
  • John W. Olver, Massachusetts
  • Ed Pastor, Arizona
  • David E. Price, North Carolina
  • Maurice D. Hinchey, New York
  • Lucille Roybal-Allard, California
  • Sam Farr, California
  • Jesse L. Jackson, Jr., Illinois
  • Chaka Fattah, Pennsylvania
  • Steven R. Rothman, New Jersey
  • Sanford D. Bishop, Jr., Georgia
  • Barbara Lee, California
  • Adam B. Schiff, California
  • Michael M. Honda, California
  • Betty McCollum, Minnesota

Republican Members

  • Harold Rogers, Kentucky, Chairman
  • C.W. Bill Young, Florida
  • Jerry Lewis, California
  • Frank R. Wolf, Virginia
  • Jack Kingston, Georgia
  • Rodney P. Frelinghuysen, New Jersey
  • Tom Latham, Iowa
  • Robert B. Aderholt, Alabama
  • Jo Ann Emerson, Missouri
  • Kay Granger, Texas
  • Michael K. Simpson, Idaho
  • John Abney Culberson, Texas
  • Ander Crenshaw, Florida
  • Denny Rehberg, Montana
  • John R. Carter, Texas
  • Rodney Alexander, Louisiana
  • Ken Calvert, California
  • Jo Bonner, Alabama
  • Steven C. LaTourette, Ohio
  • Tom Cole, Oklahoma
  • Jeff Flake, Arizona
  • Mario Diaz-Balart, Florida
  • Charles W. Dent, Pennsylvania
  • Steve Austria, Ohio
  • Cynthia M. Lummis, Wyoming
  • Tom Graves, Georgia
  • Kevin Yoder, Kansas
  • Steve Womack, Arkansas
  • Alan Nunnelee, Mississippi
 Call yours at 202-225-3121!

Here are some pertinent links, courtesy of Kenny Lin, MD, and others:
-The House press release (note they brag about defunding ObamaCare and "protecting" life in the same breath)
-The draft of the bill (see page 90)
-AcademyHealth announcement (where I learned about the PCOR prohibition)
-Statement from Mary Wooley, the President of Research!America about why this is a stupid move
-The Incidental Economist blog is compiling a list of useful projects funded by the AHRQ here

So please please please call your reps to stop this insanity!


If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, July 10, 2012

DHHS: Does this lie make me look stupid?

Update, July 12, 4:30 PM Eastern:
Just got this extra lame reply from healthfinder:

Dear Ms. Zilberberg,

Thank you for contacting healthfinder.gov. healthfinder is a government Web site featuring prevention and wellness information and tools to help you and those you care about stay healthy. At healthfinder.gov, you will find:

  • interactive tools like menu planners and health calculators
  • online checkups
  • printable information that you can share with a family member or take to the doctor.

healthfinder.gov is coordinated by the Office of Disease Prevention and Health Promotion (ODPHP), U.S. Department of Health and Human Services and the National Health Information Center (NHIC). NHIC links people to organizations that provide reliable health information. All of healthfinder.gov’s topics and tools go through subject matter expert reviews. As a result of these reviews, sentences and wording sometimes get updated and/or changed. This particular topic has already been reviewed, and the content team will be rewording the language; the word “best” will be removed from that sentence. This change will be reflected on the site in the next scheduled healthfinder.gov update.

Sincerely,
healthfinder.gov Team
National Health Information Center

HOW ABOUT INCLUDING A DISCUSSION OF SAFE SEX?!!!!!!! Idiotic.




Update July 11, 10:50 AM Eastern
I have just sent the following e-mail to healthfinder.gov at the address healthfinder@nhic.org. I urge everyone who reads this to send them the same or a similar message. And if you do, please, leave a comment below to let everyone know.
Hello, 
I wanted to let you know that the information you posted on this web page on Pap testing is erroneous and misleading. Telling women that the "best" way to prevent cervical cancer is through a regular Pap test is not supported by evidence. The "best" way is to prevent HPV infection by engaging in safe sexual intercourse. As a public health communicator you are doing a tremendous disservice to the public.  
I urge you to change this message to reflect reality. 
Thank you. 
Marya Zilberberg, MD, MPH, FCCP 

There is pounding in my temples, my back muscles are in a spasm, and I might even be turning green and busting out of my clothes. What caused all this? This innocent-looking tweet from the Department of Health and Human Services:


I had to do a double take. My blood started to boil almost immediately. But I persisted, clicked on the link, and saw this:


The first sentence really says "The best way to prevent cervical cancer is to get regular Pap tests." Jaw, meet floor. What does the word "prevent" really mean? I went to The Free Dictionary for enlightenment:



Just as I had suspected: to avert, to keep from happening. And how does a Pap test keep the cancer away? It finds "abnormal cells before they turn into cancer." And where do abnormal cells come from? God, right? Well, no, they are mostly associated with an HPV infection, which comes from exposing yourself to unprotected sexual intercourse, usually with someone whose HPV status you don't know. You see where I am going with this? The message here is that there is nothing more effective at preventing cervical cancer than having a Pap test to detect early changes and lop out the misbehaving piece of your cervix. Are they serious? Is this really the "best way"? Let's examine the meaning of "best":


   
I guess beauty (and value) are in the eye of the beholder. Does subjecting yourself to a surgical procedure that may leave your cervix unable to help your uterus maintain a pregnancy qualify as "surpassing all others in excellence" or as "most desirable"? Not in my book, not when a little advance planning and a nickel for a condom could keep that horse from leaving the barn in the first place. True prevention does not take place in a doctor's office, and it is a mistake to equate screening with prevention.

Come on, DHHS, who writes your stuff? Fire them! You are risking your credibility. What's next? "Bulimia is the best way to prevent obesity"?    

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Monday, July 2, 2012

Drugs and devices: expensive. Hubris: priceless.

This morning I was listening to Morning Edition on NPR, and heard a story about the tax on medical devices that is written into the healthcare law. As you can imagine, there is opposition to such a tax by the manufacturers, as they are concerned about the usual, "stifling innovation" (yawn). It's hard to be amused by anything to do with healthcare these days, but here is a part of the conversation that had me in LOLZ (my 14-yo's expression):
ARNOLD: Okay. So here's how this new tax works. When a medical device gets sold, there will be a 2.3 percent sales or excise tax. Now, people who support this tax say that the medical device makers are exaggerating about the impact. Paul Van de Water is an economist with the left-leaning Center on Budget and Policy Priorities. He says that this tax is basically the same as a sales tax that you pay at the grocery store.
PAUL VAN DE WATER: The grocery store is collecting the tax. The grocery store is the institution that sends the tax to the state government, just the way the medical device manufacturer is going to write the check to the Treasury.
ARNOLD: Van de Water says that the tax doesn't really target the medical device makers that much. They'll just pass most of the cost along to their customers, who are mostly big hospitals, the same way a grocery store charges their customers. But the industry disagrees. David Nexon is with the medical device trade group called AdvaMed.
DAVID NEXON: There's a difference between a tax that, you know, an individual consumer pays as opposed to one that you're negotiating a price with a large, sophisticated buyer.
ARNOLD: In other words, Nexon says a hospital chain will push back and resist paying anything extra.
NEXON: In this very competitive market, it's extremely difficult for our members to raise prices.
Hah! Is he saying what I think he is saying? That because individual consumers are too dumb to understand about externalizing additional expenses, such as taxes, it is easier to put one over on them than on the savvy hospitals? Could he possibly mean that "large, sophisticated buyers," unlike ordinary consumers, would never stand for the information asymmetry his members thrive on? That individual consumers just don't have the power that hospitals do to push back against potentially predatory pricing?

The last time I heard or read anything this blatant was in this New York Times piece from December 2009. This is a company executive talking about the rationale for the company's cancer drug's disproportionately steep price:
Mr. Caruso also said the price of Folotyn was not out of line with that of other drugs for rare cancers. Patients, moreover, are likely to use the drug for only a couple of months because the tumor worsens so quickly, he said. So the total cost of using Folotyn will be less than for many other drugs with lower monthly prices.
Wow, do these people get paid to advance their organizations' agendas? For my money they are not doing such a hot job at anything other than confirming society's worst views of them. Are they too stupid to realize that, even if you think things like this, you shouldn't say them out loud? How embarrassing.

Bottom line? Their drugs and devices: expensive. Their hubris: priceless.


If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Friday, June 29, 2012

Molecular diagnostics: Making the uncertainties more certain?

Scott Hensley over at NPR's Shots blog posted a story about a recently approved molecular diagnostic test that can rapidly identify several gram-positive bacteria that cause bloodstream infections. This is indeed important, since conventional microbiologic techniques rely on bacterial growth, which can take 2 to 3 days -- too long to wait to identify the bug causing a serious infection. What doctors have done to date is make a best guess, based on several factors (the type of patient, the source of the infection, the patterns of bacterial resistance at their site), to tailor empiric antibiotic coverage. The sicker the patient, the broader the coverage, until the culture results come back, at which point the doctor is meant to alter treatment accordingly, either narrowing or broadening the spectrum. The pitfalls of this workflow are obvious -- too many points where error can enter the equation. So on the surface the new tests are a great advance. And they actually are, but they are not free of problems, and we need to be very explicit in confronting them.

Each diagnostic test can be evaluated on its sensitivity (how well it identifies the problem when the problem exists), its specificity (how rarely it identifies a problem when the problem does NOT exist), its positive predictive value (what proportion of all positive tests represents a true problem) and its negative predictive value (what proportion of all negative tests represents a true absence of the problem). Sensitivity and specificity are intrinsic properties of the test and can be altered only by how the test is performed. Positive and negative predictive values depend not only on the test and how it is done, but also on the population being tested.
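These definitions translate directly into arithmetic. Here is a minimal sketch (in Python, my own illustration -- nothing from the test's maker) that turns sensitivity, specificity and pre-test probability into the two predictive values:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied to a population
    with the given pre-test probability (prevalence) of disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
    npv = true_neg / (true_neg + false_neg)   # P(no disease | negative test)
    return ppv, npv

# A 99%-sensitive, 99%-specific test in a 5%-prevalence population:
ppv, npv = predictive_values(0.99, 0.99, 0.05)
print(f"PPV {ppv:.1%}, NPV {npv:.2%}")  # PPV 83.9%, NPV 99.95%
```

Notice that even a near-perfect test leaves the value of a positive result hostage to the prevalence, which is exactly where this post is headed.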

Let's take Nanosphere's test in Scott's story. If you trawl the company's web site, you will find that the sensitivity and specificity of this technology are close to 100%, if not 100%, with conventional microbiologic culture as the "gold standard" for comparison. And perhaps this is really the case in the very specialized hands that were testing the diagnostic. If these characteristics remain at 100%, disregard the rest of this post, please. However, the odds that they will remain at 100% in the wild of clinical practice are slim. But I am willing to give them 99% on each of these characteristics nevertheless.

OK, so now we have a near-perfect test that is available for anyone to use. Imagine that you are an ED doc at the beginning of your shift. An ambulance pulls up and rolls a septic patient into an empty bay. The astute ED nurses rush in to settle the patient and, as part of the protocol, take a sample of blood for identifying the pathogen that is making your patient sick. You quickly start the patient on broad-spectrum antibiotics and walk away to take care of the next patient, who has just rolled in with a heart attack. A few hours later, the septic patient, who is still in the ED because there are no ICU beds for him yet, is pretty stable, and you get the lab result back: he has MRSA sepsis. You pat yourself on the back, because one of the antibiotics you ordered was vancomycin, which should cover this bug quite adequately. You had also put him on ceftazidime to cover any potential gram-negative critters that might be lurking within as well. Now that you have the data, though, you can stop the ceftaz and just continue the vanc. The patient finally gets a bed upstairs, your shift is over, and you go home with a sense of accomplishment.

The next morning you come in refreshed with your double-venti iced macchiato in your hand, sit at the computer and check on the septic patient. You are shocked to find out that last night he decompensated, went into shock and is now requiring breathing assistance and 3 vasopressors to maintain his blood pressure. You scratch your head wondering what happened. Then you come upon this crazy blog post that tells you.

Here is what happened. What you (and these tests) did not take into account is the likelihood of MRSA being the real problem rather than just a decoy, a false positive. Let's run some numbers. The literature tells us that the likelihood of MRSA causing sepsis is on the order of 5%. Let's create a 2x2 table to figure out what this means for the value of a positive test, shall we?


             MRSA present   MRSA absent    Total
Test +            495            95          590
Test -              5         9,405        9,410
Total             500         9,500       10,000

What this says is the following. We have 10,000 patients roll into our ED with sepsis (in reality there are about 1/2 million to 1 million sepsis cases in the US annually), and we test them all with this great new test that has 99% sensitivity and 99% specificity. Of these 10,000, five hundred (thanks, Brad, for noticing this error!) are expected to have MRSA. Given this situation, we are likely to get 590 positive tests, of which 95, or 16%, will be false positives. Face-palm: you drop your head on the desk, realizing that Mr. Sepsis from yesterday was probably one of these 16 per 100 false positives, and MRSA is probably not the cause of his infection.
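If you would rather check the table than trust it, the expected cell counts are a four-line calculation. A sketch in Python (the code is mine; the inputs -- 10,000 patients, 5% prevalence, a 99%/99% test -- are from the scenario above):

```python
def two_by_two(n, prevalence, sensitivity, specificity):
    """Expected 2x2 cell counts (TP, FP, FN, TN) for n patients tested."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sensitivity * diseased        # true positives
    fn = diseased - tp                 # missed cases
    fp = (1 - specificity) * healthy   # false alarms
    tn = healthy - fp                  # true negatives
    return tp, fp, fn, tn

tp, fp, fn, tn = two_by_two(10_000, 0.05, 0.99, 0.99)
print(round(tp), round(fp), round(tp + fp))            # 495 95 590
print(f"{fp / (tp + fp):.0%} of positives are false")  # 16% of positives are false
```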

You begin to wonder: what if your lab really did not achieve a sensitivity and specificity of 99%, but more like 98%? Still pretty generous, but what if? You start writing madly on a napkin you grabbed at Starbucks, and your jaw drops when you see your 2x2:


             MRSA present   MRSA absent    Total
Test +            490           190          680
Test -             10         9,310        9,320
Total             500         9,500       10,000

Wow, you think, the false positive rate is now nearly 30% (190/680)! You can't believe that you could be jeopardizing your patients' lives 3 times out of 10 because you are under the mistaken impression that they have MRSA sepsis. This is unacceptable. But can you really trust yourself with these calculations? You have to do one more thing to convince yourself. What if your lab only gets 97% specificity and sensitivity? What then? You choke when you see the numbers:




             MRSA present   MRSA absent    Total
Test +            485           285          770
Test -             15         9,215        9,230
Total             500         9,500       10,000


It's an OMG moment -- nearly 40% (285/770) would be treated for MRSA when they potentially have something else.
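All three napkin tables collapse into one small loop. A sketch (in Python, mine; the assumptions, as above, are that sensitivity equals specificity and that MRSA prevalence among sepsis cases is 5%):

```python
def false_positive_share(sensitivity, specificity, prevalence):
    """Among all positive tests, the expected fraction that are false."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return fp / (tp + fp)

for accuracy in (0.99, 0.98, 0.97):  # sensitivity = specificity
    share = false_positive_share(accuracy, accuracy, 0.05)
    print(f"{accuracy:.0%} test: {share:.0%} of positive results are false")
# 99% test: 16% of positive results are false
# 98% test: 28% of positive results are false
# 97% test: 37% of positive results are false
```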

But you, my dear reader, realize that in the real world docs are not that likely to remove gram-negative coverage if MRSA shows up as the culprit pathogen. Why should you think otherwise, when there is so much evidence that people are not that great about de-escalating antimicrobial coverage in response to culture data? But then I have to ask you what's the use of this new test if no one will act on it anyway? In other words, how is it expected to help curb the rise of resistance? In fact, given the false positive MRSA rates we see above, might there not even be a paradoxical increase in the proliferation of resistance?

The point is this: we are about to see many new molecular diagnostic technologies on the market with really, really high sensitivity and specificity. The fly in this ointment, of course, is the pre-test probability of the bug causing the problem. Look how, in a very low-risk group (5% MRSA prevalence), even a near-perfect test's positive predictive value degrades dramatically. Do feel free to check my math.
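To put numbers on how hard the pre-test probability drives the value of a positive result, hold the test fixed at 99%/99% and vary the prevalence. The prevalences other than 5% below are mine, purely for illustration:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

for prev in (0.01, 0.05, 0.20, 0.50):
    print(f"prevalence {prev:.0%}: PPV {ppv(0.99, 0.99, prev):.1%}")
# prevalence 1%: PPV 50.0%
# prevalence 5%: PPV 83.9%
# prevalence 20%: PPV 96.1%
# prevalence 50%: PPV 99.0%
```

At 1% prevalence, a positive result from a 99%-accurate test is literally a coin flip.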

So you trudge into the hospital the next day for your shift and check on Mr. Sepsis one more time. Sure enough, his conventional blood culture grew out E. coli, a gram-negative bug. You notice that he is turning around, though, ceftazidime having been restarted by the astute intensivist (well, I am a bit biased here, of course). All is well in the world once again. Except you hear an ambulance pull up and the nurse talking on the phone to the EMTs -- it's another sepsis on your hands. What are you going to do now?
              


Thursday, June 28, 2012

ACA: The reports of its death were greatly exaggerated

This is a big day for President Obama's signature legislation, the Affordable Care Act. The Supreme Court upheld its constitutionality, and the punditdom thinks that further challenges are unlikely. On the other hand, if Romney takes the White House in the next election... Well, you can guess what will happen then.

It has been interesting to watch the run-up to this decision. Most recently I have been amused by surveys finding that, on the one hand, many Americans are in favor of the pre-existing condition provision (this part of the bill forbids insurance companies from discriminating against people with prior health conditions), as well as the provision that allows young adults to stay on their parents' insurance policies up to age 26. On the other hand, reportedly the majority of Americans are against the healthcare law, and most also oppose the individual mandate provision (this is the part where everyone has to buy insurance or pay a tax). Given this imbalance in public opinion, a more pertinent survey would have assessed how well people understand these provisions in the first place. And that would have had to establish how well the public gets our whole healthcare "system."

To start from the beginning, any healthcare system can be judged on three criteria:
1. How accessible is it?
2. Is it of adequate quality?
3. How expensive is it?

The answer to the first question provides one of the rationales for the individual mandate. Currently there are about 50 million people without health insurance in the US, and, hence, without adequate access to the system. Many of these people are the young and the healthy who gamble on staying young and healthy. And many are consigned to relying on expensive emergency care when this gamble fails. Some of them go bankrupt trying to pay for it, while others become "safety net" cases, where the institution that cares for them swallows the costs. These institutions do get some public dollars for providing safety net care, but not nearly enough to break even. Since many of the 50 million don't buy health insurance because they cannot afford it, the healthcare bill provides a way to create more affordable insurance products.

The answer to the second question is not related directly to the individual mandate. Since much of this blog is devoted to the issues of healthcare-associated harm, I do not wish to belabor this point here. Suffice it to say that the bill does try to address this catastrophic situation, though it remains to be seen if it will succeed.

The third question is the crux of the story. Many have said that the escalation of healthcare costs is unsustainable, and I subscribe to this notion: I am not sure how much more than $2.6 trillion/year we want to pay for this insatiable beast. Yet judging by the near-revolt that the "death panels" rhetoric caused, the citizenry is not interested in being thoughtful about which services make sense. The vehement knee-jerk to the "R" word (rationing) shuts down the discussion before it even starts. So, OK, how do we pay this ever-increasing bill? Moreover, since we are all happy with the government mandate for all insurance to cover pre-existing conditions, how do we propose to pay for this additional coverage? Short of printing money (not generally a good idea) or creating a single-payer system that regulates these expenditures, the only way is to broaden the pool of revenue. The way the ACA proposes to broaden this pool is through the very individual mandate that is anathema to our American way of life. But without it, there is no broadening of coverage, and there is no paying for every intervention that we seem to feel entitled to.

I doubt very much that the ACA will substantively contain healthcare costs. I even doubt that it will solve the quality problems, but I am willing to wait and see on that. This bill is but a band-aid on an arterial bleed. However, I do believe that upholding this legislation allows us to take the first steps toward a reasonable national dialog about the kind of healthcare system we need. This dialog will not be helped by stupid surveys that reinforce our willful ignorance. We have the opportunity to move this conversation to a higher level, where people begin to understand more deeply the issues we are up against. Let's take it.


Tuesday, June 26, 2012

Peeling the cabbage of "works" in treatment interventions

What exactly does it mean when we say that a treatment works? Do we mean the same thing for all treatments? Are there different ways of assessing whether and how well a treatment works? I am sure you've guessed that I wouldn't be asking this question if the answer were simple. And indeed, the answer is "it depends."

What I am talking about is examining outcomes. I did a post a couple of years ago here, where I use the following quote from a Pharma scientist:
"The vast majority of drugs - more than 90 per cent - only work in 30 or 50 per cent of the people," Dr Roses said. "I wouldn't say that most drugs don't work. I would say that most drugs work in 30 to 50 per cent of people. Drugs out there on the market work, but they don't work in everybody."
Here is that word "work" again. What does it mean? Well, let's take such a common condition as heart disease. What does heart disease do to a person? It can do many things, including give him/her such symptoms as chest pain, shortness of breath, dizziness and palpitations, to name a few. These symptoms have at least two sets of implications: 1) they are bothersome to the individual, and in this way may impair his/her enjoyment of life, and 2) they may signal either a present or a future risk of a heart attack. Why are heart attacks important? Well, one may kill the person who is having it, or one (or several) may weaken the heart to the point of substantial disability and thus a deterioration in the quality of life. So there certainly seems to be a good rationale to prevent heart disease either from happening in the first place or from worsening once it's already established.


Now, what's available to us to prevent heart disease? Well, some think that lowering one's cholesterol is a good thing. OK, let's go with that. What is the sign that statins (cholesterol-lowering drugs) "work"? What would it look like if it were simply about lowering the cholesterol? Say your total cholesterol is 240. You go on a statin and in 6 months your total cholesterol is 238. Your cholesterol was lowered; it worked! Well, yes, but if you are asking what this 2-point drop really accomplishes, you are beginning to understand the meaning of "work." So, just intuitively, we can say that there needs to be a certain, perhaps "clinically significant," drop in the total cholesterol in order for us to say that the drug "worked."


Great! Now we are sidling up to the real issue: What constitutes a "clinically significant" drop in cholesterol? Is it some arbitrary number that looks big enough? Probably not. How about a drop that correlates with a drop in the risk of the actual condition we are trying to impact, heart disease? Say, a 40-point drop, or getting to below 200, may be the right threshold for the "works" judgment. Ah, but there is yet another question to ask: How often does this type of drop lead to a reduction in heart disease? Is it always (not likely), the majority of the time (rarely), or at least some of the time (most likely in clinical medicine)? And what portion of that time do we consider satisfactory -- 60%? 40%? 20%? 2%?


Let me bring just one more layer into this discussion. Many people walk around with heart disease and don't know that they have it. Some of these people are destined to have a heart attack and/or die from it. Many others are likely to die from something else before they ever experience any symptoms or signs of their heart disease. This raises the question of whether the statins' ability to reduce cholesterol, and hence the risk of heart disease, is enough to say that the drugs "work." Perhaps "work" means that by lowering cholesterol (say, in the majority of those who take them) they reduce the risk of heart disease in some proportion of those who are at risk for it, and among that proportion whose risk is reduced they also reduce the risk of a heart attack in a few, and of death in even fewer.
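This "dwindle" is just a chain of conditional probabilities multiplied together. The numbers in the sketch below are entirely made up for illustration (the post supplies none); only the shape of the arithmetic matters:

```python
# Hypothetical conditional probabilities -- illustrative assumptions, not data.
p_lowers_cholesterol  = 0.7  # the drug "works" on the surrogate outcome
p_less_heart_disease  = 0.3  # ...given cholesterol was meaningfully lowered
p_fewer_heart_attacks = 0.2  # ...given heart-disease risk actually fell
p_fewer_deaths        = 0.5  # ...given heart attacks were averted

benefit = (p_lowers_cholesterol * p_less_heart_disease
           * p_fewer_heart_attacks * p_fewer_deaths)
print(f"Chance the drug 'works' all the way to mortality: {benefit:.1%}")
# Chance the drug 'works' all the way to mortality: 2.1%
```

Each layer of "works" multiplies in another fraction, so even generous-looking numbers at each step dwindle to a small chance of the outcome we actually care about.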


So, to sum up, "works" is a loaded term. For the case we are discussing, there is what I call a "dwindle" effect, where the main outcome, cholesterol lowering, is likely to show a somewhat robust result. On the other hand, this (surrogate) outcome itself is not all that interesting when divorced from what we really care about -- symptoms, heart attacks and death. And I haven't even gone into the side of the equation where the patient gets to decide what "work" means for him/herself. The layers of the possible "works" are a cabbage that we all need to peel when discussing treatment plans with our clinicians and when reading news about new technologies.                     
 
