Internal Medicine World Report
An Interview with Robert M. Wachter, MD
Dr Wachter is Professor and Associate Chairman, Department of Medicine, University of California, San Francisco, and Chief of the Medical Service, UCSF Medical Center.
Dr Wachter has been a major force in the hospitalist movement and the Society of Hospital Medicine (SHM) and has written extensively on a variety of topics in medicine. He coauthored the bestselling book on patient safety, Internal Bleeding, and is a recipient of the John M. Eisenberg Award, the nation's top honor in safety and quality.
His most recent article, just published in JAMA (2006;295:2780-2783), focuses on the consequences of quality measurement and information technology. IMWR asked Dr Wachter to discuss his views on quality measurement and the implications of this new trend for the future of clinical practice.
What do you mean by quality measurement, and how would it affect the practice of medicine?
Quality measurement and information technology are absolutely consuming issues today, but allow me to focus on quality measurement for the moment. In my JAMA article on quality and unforeseen consequences, I was trying to make the point that with any new trend, especially any trend that is as complex and as important as this, there are always going to be unforeseen consequences, and it is therefore worth trying to think about them to make sure that we do not misstep.
I am a big believer in quality measurement. I think it is absolutely appropriate to measure performance, it is important, and the way it has been structured so far represents a reasonable start. To me, these early measures are our training wheels: starting with things like Pneumovax administration and time to antibiotics is reasonable, because we are not yet good enough at trying to answer the question, "Did you give high-quality care to a really complicated patient?" It is just too difficult.
I am on clinical service right now at my hospital, and my team admitted 8 patients the other day. Each one had 5 to 10 different problems, all tremendously complex, and I sat down with my team of residents and students today and asked, "How could anyone possibly figure out whether we did a good job?" And the answer is: there is no way. The science of sorting that out is too hard, and therefore the quality measurement movement had to start somewhere.
Although these early efforts represent a reasonable starting point, understanding the consequences is important. For example, we need to recognize that there will inevitably be playing to the test: if this is what you are being measured on, this is what you will focus on. To some extent that is appropriate, but if we're not careful, it can be a distraction from other unmeasured things that may be even more important.
So right now, in these early days of the quality revolution, everybody is doing what they can to try and do better. We-both doctors and institutions-are also beginning to gather the skills to figure out the answers to some very complex questions: How do you measure quality? How do you improve quality? What is the right set of carrots and sticks? We have got to figure out these things, and we might as well start out on relatively straightforward things, such as giving vaccinations and counseling people about smoking, before we move on to much more complex things like the management of the inpatient or outpatient with multiple overlapping illnesses.
Can you explain the difference between process and outcome measurement, and why most of today's measures seem to be process measures?
The present state of the science in quality measurement is not ready for prime time when it comes to outcome measurement. We do not know enough about case-mix adjustment to allow us to just look at how many patients lived or died when I took care of them versus my colleague, or how many patients lived or died if they were in my hospital versus in the hospital across town-because if my patients are sicker, we cannot adequately adjust for that, and therefore it is not a fair comparison.
Accordingly, most of today's quality measurement looks at, "Did I do the right things?" rather than "How did the patients do?" because the former does not depend so much on case-mix adjustment. Also, it is pretty easy to measure most processes: Did you do the right thing-yes or no? That list of the right things comes from smart scientists looking at the evidence and declaring that there is a link between these processes and meaningful outcomes, such as mortality.
Does quality measurement limit the art of medicine?
Of course, but much of the art of medicine was wrong. We do not want people practicing the "art of medicine" when there is one demonstrably right way of doing something. And every bit of evidence we have from studies by Jack Wennberg, MD, PhD, director, Center for Evaluative Clinical Sciences, Dartmouth Medical School, and others says that the art of medicine is often applied inappropriately, because when you look at the way patients are cared for from community to community or from hospital to hospital, it is all over the map. With all this variation, it cannot all be right. So there is a place for cookbooks, there is a place for checklists, there is a place for "there is only one right way to do this." When the evidence is clear that the right drug to give in this circumstance is drug x, there should not be "art of medicine." You should just do the right thing.
The challenge is that the amount of medicine we understand to that degree of precision is certainly no more than 50%, which means that the rest of it does need to be art, and even in those areas where there is evidence, sometimes patients do not neatly fit the template. And so, the big question is: How do we create a system in which physicians are predictably doing the things that we know are right, every time? To me, this is not art; it is system. And an equally important question is: How do we, at the same time, preserve physicians' art and their ability to improvise when the patient does not fit the guideline, or when we do not have the science? There are a lot of patients who are like that.
As I said in the article on hospitalists (see page 1), I admitted 8 patients to our hospital the other day, and none of them neatly fit any guideline, because they each had 5 different things wrong, and all those things interact with each other. That is the tension, and I believe that is a major challenge for those of us in the training business, because you train people a certain way if you are training them to be artists, and you train them another way if you are training them to be system-players who follow rules and regulations. We need to somehow find the middle ground, where our trainees are happy doing both. Unfortunately, right now we have no idea how to do that.
I would much rather have a system that guided me to do the right thing without me having to remember everything but allowed me the autonomy to make my own call when there was not good enough evidence. I think that is achievable, and it is a system that will be better for patients and ultimately better for doctors.
It is very exciting to be an educator as we try to hit that sweet spot: figuring out how we train people to be leaders and active participants in systems that are set up for improvement, while preserving their ability to think on their feet and apply the art of medicine. To me, this is interesting, it is engaging, it is challenging, and the end result will be higher quality, safer, and more efficient care.