Of scientific physicians and evidence-based medicine

In regard to Glatstein, IJROBP 2001;49:619–621 

Huib M. Vriesendorp M.D., Ph.D.
Marshfield Clinic, Marshfield, WI, USA

Quand on n’a pas de caractère, il vaut bien prendre une méthode.

[A lack of character is best compensated for by a method.] Albert Camus, La Chute [The Fall].

To the Editor: Obviously, Dr. Glatstein has character. He enjoys the Socratic method. Through a series of questions in his recent editorial in this journal, entitled “Of Scientific Physicians and Evidence-Based Medicine” (really more a column than an editorial), Glatstein demystifies a number of ponderous buzzwords used often, and sometimes inappropriately at that, in our profession [1]. Due to the limitations of the format he is using, Dr. Glatstein proceeds to answer his own questions, and invites [provokes] others, for the sake of discussion, to provide different answers. I take on his challenge gladly, but run into some difficulty, because I wholeheartedly agree with almost all of his answers to his own questions.

I agree with Dr. Glatstein that no method should be used on patients with progressive malignancies. In his own words: “[The scientific physician will] look directly at the patient and deal with him and his family openly what to do next when the recommended treatments have not been successful. He has to go where the evidence is not.” Instead of applying the “scientific” method, the physician needs to apply his creativity and “the art of medicine.” Humility, compassion, intuition, and up-to-date medical knowledge and experience are important ingredients of this art, and present in abundance for Dr. Glatstein, with his liberating sense of humor as an added bonus.

When a cancer patient is in trouble, and decisions for new treatments need to be made quickly, a scientific physician does well to consult one of his physician/scientist colleagues. Personally, I do not like to hear a physician/scientist suggest that I stop worrying and just use a new experimental treatment codified for use on a large, national scale in a very detailed protocol (just as bulky and as difficult to read cover-to-cover as a law book). However, according to Glatstein, one needs to read the whole article, not only the abstract. The scientific physician will discover in his reading of such protocol books that creativity has frequently been replaced by method. For example, most studies of patients with solid tumors compare different treatment options in different study “arms”; all arms are toxic, and long-term survival is anticipated to be below 50% in each. Mechanistic information (why a treatment works or fails) is usually not obtained. Other concerns that decrease the interest of scientific physicians in participating in large national studies are:

  1. Sponsors of such protocols (pharmaceutical firms, national collaborative groups) and regulatory federal agencies have a tendency to find fault with the execution of their protocols. Scientific physicians are subjected to disciplinary measures for creative behavior (in the words of the regulatory agencies, “protocol violations”) whenever they deviate from protocol in the interest of their patient. A mean-spirited group within the FDA, proudly calling themselves “The Enforcers,” happily reports to the media on an almost monthly basis that it has been able to close down a large and prestigious clinical research institute or university because of protocol violations.
  2. Physicians or their institutions receive payment for study participation. This creates a conflict of interest for the scientific physician. Obviously, the interest of the patient should prevail over the financial interests of physicians or institutions.
  3. The physician/patient relationship suffers when the scientific physician has to obtain informed consent from the patient on issues for which the patient has no affinity, and from which he or she wants to be protected by the physician, for example, randomization procedures or painful diagnostic procedures that benefit the study but not the patient. Patients want their physician to be in charge of their care, not a remote “Big Brother” who is sponsoring the protocol study.
  4. The time it takes to complete the analysis of the protocol study is usually 5 years or longer. The best possible answer at that time is that study arm A is less toxic or slightly more effective than arm B. The majority of the patients in both arms die.

My preferred response from physician/scientists, when I search for help for one of my patients, is their enthusiastic description of something new and small that they are working on themselves. They want to know whether this new therapy works, and they include a mechanistic analysis in their study. Study duration and analysis take 2–3 years. There are no financial incentives. The physician/scientist and the scientific physician interact on such a study as a translational scientist and a patient advocate, respectively, in a small format in the same city or state. The interactions are usually exciting and also provide peer review. Patients need to be protected from the “mad” scientist, who blindly believes in the superiority of the new proposed therapy. The incidence of mad scientists is greatly exaggerated by regulatory agencies; the shoe is on the other foot. Most unethical clinical research since World War II was initiated and maintained by large governmental organizations. Dr. Glatstein was a member of the Presidential Advisory Committee on Human Radiation Experiments, whose report provides thorough historical information on this gruesome subject [2].

After listening to the recommendations of their physician/scientist colleagues, scientific physicians are usually inclined to present a treatment to their patient that, in their evaluation, has the highest possible therapeutic ratio. The scientific physician, as a patient advocate, obtains informed consent from the patient using the Belmont principles, buzzwords underlined:

  1. Respect for the patient.
  2. Treatment should have a reasonable chance for beneficence. The patient cannot be put in harm’s way for the sake of other patients. [This, in my opinion, excludes most Phase 1 chemotherapy studies.]
  3. All patients must have the same access to studies (justice).

In some countries (e.g., France, Agence de Médicaments), regulatory agencies allow new treatments to proceed if the physician and the patient sign a statement describing the specific new treatment they want to try. In the United States, there is obviously much more regulation going on at the local, state, and federal levels. Unfortunately, mandatory local regulation through Institutional Review Boards (a term George Orwell would have predicted) has become “dose limiting.” In a recent publication, Institutional Review Boards: A Crisis of Confidence, Levine complains about the crushing workload and the tendency of governmental regulatory agencies to compel IRBs to be “agents of the government (and) enforce compliance with sometimes faulty interpretations of regulations” [3].

The National Institutes of Health have started an initiative to reduce regulatory burden (http://www.nih.gov/grants/policy/regulatoryburden/humansubjectsprotection.htm), as it is clear that, for most clinical research studies, more than 60 often mutually conflicting rules and regulations from different organizations apply. Every investigator will transgress at least one such regulation for each patient treated and can be found to be at fault on review by regulatory agencies.

I disagree with Dr. Glatstein (finally!) on the need for more high-quality randomized studies in clinical oncology. Patients with advanced-stage solid tumors have cure and survival rates of 50% or less, notwithstanding toxic multimodality therapies. The design of a discriminating randomized study is difficult under those circumstances. Dr. Glatstein might ask: How many arms in the study? Which sequence of treatment modalities? What therapy intensity for each modality? And so on and so forth. Expected differences in survival or response rates between study arms will be small for most studies, for example, 10–20%. Larger anticipated differences should probably not be tested in human patients, for obvious ethical reasons. Small differences, handed to eager statisticians, will translate into recommendations for high numbers of patients per treatment arm. This, in turn, will indicate the need for a large budget and the use of a collaborative group to secure timely patient accrual. Committees will be asked to design the study. Often this will have the same effect as asking a committee to design a horse: After lengthy deliberations, the committee will proudly present a camel. Very few randomized studies in patients with advanced solid tumors have identified treatment advances. Important advances have been documented in large randomized studies of patients with early breast cancer. For patients with advanced cervix cancer or non-small-cell lung cancer, large randomized studies and meta-analyses have provided answers that were already available from small single-arm studies, or should have been available if such studies had been properly done [4, 5].
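To make the arithmetic behind that claim concrete, the following is a minimal sketch of the standard two-proportion sample-size approximation. The specific inputs (40% vs. 50% survival, a two-sided 5% significance level, 80% power) are illustrative assumptions of mine, not figures from any particular protocol; they simply show how a 10% absolute difference between arms already demands several hundred patients per arm.

```python
# A minimal sketch of the usual two-proportion sample-size approximation.
# The survival rates, alpha, and power below are illustrative assumptions,
# not data from any specific trial or protocol.
import math
from statistics import NormalDist

def patients_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of patients needed per arm to detect p1 vs. p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z.inv_cdf(power)            # value corresponding to the desired power
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance_sum / (p1 - p2) ** 2
    return math.ceil(n)

print(patients_per_arm(0.40, 0.50))  # 10% absolute difference: roughly 385 patients per arm
print(patients_per_arm(0.40, 0.60))  # 20% absolute difference: still roughly 95 per arm
```

Multiply the per-arm number by the number of arms, and the budget, accrual, and committee problems described above follow directly.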

We are all aware that resources for clinical research are limited. The budget needed for one large randomized trial would suffice for 5–10 single-arm, single-modality studies in patients with incurable solid tumors. I contend that, with appropriate prior studies in translational, preclinical models, such studies have a much higher creativity quotient and a much higher chance of putting useful therapies in the hands of patients and the scientific physician more rapidly. If real therapeutic advances are made, small numbers of patients will demonstrate this, without any need for large randomized studies comparing the new treatment with the old, inferior one. I do not believe penicillin treatment was ever randomized against a placebo arm. Human patients should not be subjected to all kinds of procedures that are totally acceptable in the animal laboratory, such as large numbers, low p-values, and randomization to less effective treatment arms.

Not all scientific physicians will be as nice to listen to and talk to as Dr. Glatstein. Still, I firmly believe that creators of new ideas, e.g., physician/scientists, would rather interact with scientific physicians than with large pharmaceutical firms or punitive regulatory agencies. Discussions between physicians, with the right mix of scientific creativity and patient advocacy, continue to hold great promise for solving important clinical oncology problems in the near future. In contrast, the undiscriminating application of buzzwords such as evidence-based medicine, Kaplan-Meier plots, meta-analysis, p-value less than 0.05, and small beta errors is mainly a costly, uninspiring, and ultimately unproductive method.

We will all enjoy our important patient work more thoroughly if we know that Eli Socrates Glatstein is always lending us his critical ear and is ready to ask at least one more penetrating question any time.

References

  1. E. Glatstein, Of scientific physicians and evidence-based medicine. Int J Radiat Oncol Biol Phys 49 (2001), pp. 619–621.
  2. The human radiation experiments: Final report of the President’s Advisory Committee. Oxford University Press, New York/Oxford (1996).
  3. R. Levine, Institutional review boards: A crisis of confidence. Ann Intern Med 134 (2001), pp. 161–163.
  4. P. Eifel, Chemoradiation for carcinoma of the cervix: Advances and opportunities. Radiat Res 154 (2000), pp. 229–236.
  5. D. Carney and H. Hansen.