Thursday, July 30, 2009

On the Struggles of Cancer Centers
Vincent T. DeVita Jr. and Elizabeth DeVita-Raeburn
My daughter, Elizabeth, and I are writing a book about the War on Cancer. So I’ve been doing a lot of reflecting on how things have played out since the Cancer Act was passed in 1971. Cancer Centers were a major theme of the Cancer Act.
Before it was passed, we had three freestanding cancer centers. Now we have 63, the vast majority of them “matrix centers,” meaning they are integrated within the structure of university medical schools and hospitals. The matrix set-up helped us get new cancer centers up and running across the country much faster, and at less cost, than building new freestanding ones.
The downside, however, is that, historically, “matrix” centers have often run afoul of the departmental structure of medical schools. They are often centers in name only. More energy is often expended on turf struggles between departments of medicine and cancer centers over control of the sections of Medical Oncology than on innovations in cancer care. This is a recurring theme that has dogged NCI cancer centers since their inception in 1972.
Was there a better model? Possibly. Columbia College of Physicians and Surgeons (CP&S) had the first university-based freestanding cancer hospital. But even it was dogged by some of the problems that affect the matrix model.
In November of 2007, while researching the upcoming book, Elizabeth and I interviewed Dr. Alfred Gellhorn, the first Director of the Francis Delafield Cancer Hospital.
Gellhorn was trained as a general surgeon but retrained himself in Internal Medicine. When we interviewed him, he was involved in his fourth successful career since leaving the Delafield.
Here’s an excerpt of our conversation:
“Dr. Gellhorn, how did the Delafield get started?”
“It started in 1952. A CP&S breast surgeon by the name of Cushman Haagensen persuaded the city to build a cancer hospital. Haagensen could never find anybody in Internal Medicine who was interested in cancer because the Chairman of Medicine, Robert Loeb, felt that this was a disease that a good internist would have nothing to do with, because you couldn’t cure it.
“I had absolutely unbounded admiration for Bob Loeb. He was a magnificent clinician…and a wonderful teacher. I think he just had a blind spot when it came to cancer. I think that the reason Medical Oncology got such a slow start had to do with the attitude of people in Internal Medicine.
“Loeb would not consider rotating CP&S house staff through a cancer hospital, but we were eventually able to get approval for our own residency program. I was able to recruit some wonderful colleagues: Paul Marks, Elliott Osserman, John Ultmann, Bernie Weinstein, and Helen Ranney, among others. And my first chief resident was Jim Holland.
“I was then the Director of the Institute of Cancer Research. The Delafield was its clinical arm, where we also had labs.
“Loeb tolerated me. But he was fond of saying, rather openly, ‘Alfred, you’re a part of the lunatic fringe.’
“And then Loeb retired and an even more pessimistic and indifferent clinician came in as Chair of Medicine, Stanley Bradley. Stan Bradley sent two guys who had no connection to cancer to replace me in ’68, and the Delafield was closed shortly thereafter.”
Despite the obvious need for, and the success of, the Delafield, two Chairs of Medicine closed it. Their actions set the template for most NCI matrix centers today. The Delafield was a good model, just not freestanding enough.
CP&S received its NCI designation in 1974. Since then, like a lot of matrix centers, it has struggled to find its Cancer Center identity. I am sure that those who knew the Delafield have often wished they still had it.
Dr. Gellhorn died peacefully in March 2008 at the age of 94. He was a great mentor and a pioneer in the cancer field. Not many people, having been told they were part of the “lunatic fringe” by one of the greats in medicine, would have had the fortitude to continue. He did. And for that, and for all the many luminaries he trained, we all owe him a debt of gratitude. Thanks, Alfred.
The Co-Directors of the Center for Management Research in Healthcare at Vanderbilt, David Dilts and Alan Sandler, reported an alarming study to the National Cancer Institute (NCI). The study, covered by The Cancer Letter, documented that cancer trials take an average of 800 days to start. The clinical trials program in the US is broken and apparently nobody has noticed: 800 days! Can you imagine that? The response of the NCI Director was a vow to cut the time to approval in half. At 400 days the system would still be broken. Dilts and Sandler reported that for cooperative-group studies, a protocol takes 800 days from conception to activation, with an additional 200 days at a cancer center. Even an investigator-initiated trial at a cancer center takes a median of 116 to 252 days, and if you make amendments to a study it goes back to the end of the line. Furthermore, the study by Dilts and Sandler was conducted in two cooperative groups and at four centers selected by the NCI because they scored better than most (in peer review) on the function of their clinical trials programs. The solutions offered by the authors of the study to rectify this situation were not of much help (e.g. start fewer studies, stop tweaking studies).
We live in an age where we are blessed with bounteous information about critical steps in the pathways that cancer cells use to outstrip the growth of normal cells, and about how their cell-death pathways are broken. Although we have the specific agents and the opportunity to capitalize on the whole concept of ‘targeted therapy’, we find ourselves unable to use these advances. If one looks at past successes in treatment, they were characterized by the ability to test an idea rapidly, and to tweak studies to allow swift adjustment of protocols in response to events happening in real time. I have not examined the protocols in question directly, but I would not hesitate to suggest that a study that takes 800 days to activate is hopelessly outdated the day it starts. Furthermore, if investigators are encouraged to avoid tweaking, then the talents of our best investigators are either not being used or are being wasted.
At my own institution I sit on many committees involved in protocol review, and I can say with authority that we are hopelessly over-regulated. Although cancer centers are not perfect and have many problems of their own, they are still where the knowledge base is. The requisite talent to know the right way to design and modify an ongoing study does not reside on remote review committees at the NCI or the FDA, yet those are the places where the delays are greatest. Too many cooks are spoiling the broth.
A step towards fixing this problem would be to delegate the entire review and approval process, at least for phase I and II studies, to NCI-designated cancer centers with NCI-approved clinical trials programs, with the NCI and FDA retaining only audit responsibilities. This was the role envisioned for cancer centers by those who framed the National Cancer Act of 1971. Most of these regulations have been imposed on us by the US Congress in the name of patient safety. Review boards set up and codified in regulations to protect patients are doing just the opposite. They prevent patients from having access to the fruits of the war on cancer and the best we have to offer.
We have met the enemy and he is us!

I would like to return to the subject of overregulation and how it is sometimes of our own doing. My editorial on the broken clinical trials program generated a fair amount of discussion, including at my own institution. In an article written about the editorial, Dr. Alan Sandler provided a startling additional piece of information: at one institution, 87 steps and 29 signatures were required to launch a trial (Helwick, C. Oncol. News Int. 18, [2009]). There are no regulations that require so many signatures. Since my editorial and the article were circulated to all relevant committee members at Yale, I was invited to attend a discussion of the problem at a Cancer Committee meeting. It was a lively group, and they had a good discussion aimed at improving the protocol review and approval process at Yale, dearly hoping we didn’t require 29 signatures for approval (we didn’t). Near the end of the meeting, someone asked, “Why do we have to submit cooperative group protocols to our Human Investigation Committee (HIC) when they have been reviewed by NCI’s Central Institutional Review Board and many other committees, and are essentially unchangeable by the time they arrive at our place anyhow?”
A member of our HIC rose to answer and said, “You know, we always wondered why you sent them to us to review, because we don’t really have to review them.” A hush fell over the room. People looked at one another. Investigators had assumed these protocols needed HIC review and routinely sent them into the process. HIC members thought they didn’t, but took the submissions as a request for a review. Neither one had said anything to the other. Both were doing a lot of unnecessary work.
In my April editorial on the use of approved drugs in non-approved ways, I highlighted another problem that occurs in our highly regulated environment (DeVita Jr, V.T. Nat. Rev. Clin. Oncol. 6, 181 [2009]): the assumption that everything we do needs FDA input and approval. In that case, investigators were intending to ask the FDA for approval of a new use for drugs that have been around for many years and are safe. If you ask the FDA to define a strategy for you, they will, and they may define a complicated set of steps. But you may not even need to ask. They actually have better things to do.
I came across a small article in Journal Watch General Medicine by Thomas L. Schwenk entitled “A critique of clinical practice guidelines” (Schwenk, T.L. Journal Watch General Medicine 1, 1 [2009]). I gravitated to it because I dislike practice guidelines. In my view, they are too restrictive in rapidly moving fields. Schwenk was commenting on guidelines in the cardiovascular field, not cancer, but the general principles are the same. He cited a study of the evidence underlying guidelines developed by committees appointed by the American Heart Association and the American College of Cardiology (Tricoci, P. et al. JAMA 301, 831–841 [2009]). In short, since 1984 there have been 7,196 recommendations on 22 topics for 56 clinical practice guidelines. The number of recommendations has increased by almost 50%, and the levels of evidence for guidelines have decreased with every iteration, so that the majority of recommendations are now made on the basis of opinion and case studies. Schwenk says, “Clinical practice guidelines, once spare and elegant in their creation, dissemination and application, have become commonplace, tedious and of questionable clinical relevance.” The recommendation to correct this was to replace all members of the guideline committees with each iteration, especially the leadership.
These are all examples of things we do to ourselves. They brought to mind my favorite cartoon of all time, “Pogo.” In one famous strip, Pogo was commenting on the politics of the 1970s in the US. He is standing on a platform in front of an inclined mirror, looking at his image, and appears startled; his hat is seen flying off. He says, “We have met the enemy and he is us!”
In the three examples above, the enemy is us. That’s the bad news. The good news is that anything put together by us can also be disassembled by us, if we are willing to ask, “Is this really necessary?”
The Most Important News at ASCO 2009

I am often asked my opinion of the most important presentation at the annual ASCO meetings. I gave up long ago trying to explain that it is impossible to read the thousands of abstracts and rate them accordingly, but like everyone else I do keep my eyes peeled to note something of special interest. In most years the pickings are actually fairly slim; breakthrough results rarely make it to ASCO. However, this year it was easy. Buried amongst all the other news tidbits emanating from the 2009 ASCO meeting in Orlando, Florida, was the description of a momentous event. I read about it in the Wall Street Journal (Winslow, R. AstraZeneca, Merck to Test Cancer Drugs in ‘Cocktail’. Wall Street Journal, 1 June 2009). The two drugs in question, Merck’s Akt inhibitor MK-2206 and AstraZeneca’s Mek inhibitor AZD6244, are years away from any possible approval. This is what made the announcement so unusual. They would be tested together in the clinic by their respective companies to try to verify preclinical evidence of synergy.
Why is this announcement so important? First, it is axiomatic that systemic cancer cannot be cured by single agents. Virtually all past curative treatments consist of combinations of agents and, for targeted therapies, the redundancy in the signaling systems at the disposal of the cancer cells suggests the same will be true. Are there exceptions? Yes. But they are rare. Mother Nature has sent us a message and we should listen to it. So why persist in testing cancer drugs one at a time?
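The arithmetic of drug resistance makes the point concrete. What follows is a back-of-the-envelope sketch in the spirit of the Goldie–Coldman resistance argument; it is my illustration, not a calculation from the ASCO report, and the symbols and numbers ($N$ cells, resistance probabilities $p_A$, $p_B$, assumed rare and independent) are hypothetical:

% Illustrative only: N = tumor cell number; p_A, p_B = per-cell probabilities
% of resistance to drugs A and B (assumed independent); values are hypothetical.
\[
  E[\text{cells resistant to A}] \approx N\,p_A, \qquad
  E[\text{cells resistant to both A and B}] \approx N\,p_A\,p_B .
\]
\[
  N = 10^{9},\; p_A = p_B = 10^{-6} \;\Longrightarrow\;
  N\,p_A = 10^{3}, \qquad N\,p_A\,p_B = 10^{-3} .
\]

On these hypothetical numbers, a single agent leaves roughly a thousand resistant cells behind in a clinically detectable tumor, while the chance that even one cell resists both drugs at once is small. That is the quantitative case for testing combinations early.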
Second, we are faced with an interesting problem in the development of targeted therapies: the more specific the agent is for its target, the less toxic it is, but also the less therapeutic effect it has on its own. We got what we wished for: clean hits on the target. However, we are left with very little on which to base approval of individual targeted agents for clinical use, except that these newer agents are safer than older cytotoxic chemotherapy. Pharmaceutical companies face the dilemma that, after investing millions of dollars in developing an agent that blocks a theoretically important pathway, they either have to ditch it, on the basis of the usual standards of efficacy for drug approval, or spend another decade testing it in combination with other promising agents. Furthermore, patients with cancer have to wait with them. In fact, this is what has happened. Most of the new and useful targeted treatments would never have been marketed if they had not been tested in combination with another agent, usually a cytotoxic drug.
Third, most biopharmaceutical companies do not have the resources to develop multiple targeted therapies at the same time. They do believe, however, that it is in the best interests of their shareholders to focus on getting their drug approved, by itself, above all else. This usually does not involve testing it jointly with a drug from another company before it is approved.
So, we are at a time when we have identified many important biological pathways that cancer cells use to divide and survive, and we easily have the capacity to develop inhibitors of these pathways, but our old inflexible approach does not allow us to use our imagination and test these agents early on in ways that the best scientific minds think might achieve synergy. That is why the announcement by Merck and AstraZeneca is so important. It establishes a new paradigm. We need to push competitive interests back even further. I would suggest that the next step is to do the same thing as each drug actually reaches the clinic: combination phase I trials, if you will. The problems this approach will create for pharmaceutical companies trying to sort out relative value for their shareholders, and for regulatory agencies worried about safety, should not be underestimated, but they too can be solved. It is truly the way of the future for cancer treatment.

Wednesday, April 2, 2008

Trials and Tribulations

It may be apparent from my most recent postings that, in my opinion, cooperative oncology groups around the world rarely use the inductive reasoning process. In the midst of a molecular revolution, the main instrument for translating research findings from the laboratory to the clinic is therefore not functioning as efficiently as it should be. When designed properly, clinical trials can address fundamental questions. The design of a good therapeutic experiment should be driven by knowledge of the biology and natural history of the disease. For example, if you want to design studies to treat Hodgkin's disease, you need to learn to think like a Reed-Sternberg cell.
I envisage a group of cancer cells sitting in a corner laughing hysterically at some of our approaches to experimental design: surgeons willing to study only the operation they were trained to do; medical oncologists always giving chemotherapy on days 1 and 8 because that’s when we have clinics; radiation oncologists giving treatments only 5 days a week because it is difficult to work at weekends. Often none of the above are willing even to participate in a trial unless they are assured of being part of the outcome. Cancer cells don’t think that way.

When involved in therapeutic research in oncology, you should always have in mind the five most important questions that can be addressed in clinical trials relevant to the field. It is also important to know, and be able to use, the most appropriate human model system to address these questions. Of course, there are no clinical cooperative groups organized to do this, but there should be. Most groups specialize in studying a particular disease, or a particular stage of a disease. They should always have in mind the five most important questions that can be addressed in a clinical trial in that disease, or that stage of the disease. And the questions should not be limited to those facing a particular specialty.
How do you discern the most important questions of the day? Any individual or group responsible for a substantial clinical trials program ought to participate periodically in state-of-the-art exercises, where experts are asked to define the major questions. And these experts should not be limited to one’s own group. The end result would be questions that apply to anyone, anywhere, studying the disease in question. Each group would then be unique only in the resources it can bring to bear on addressing a question. If we did this, there would be fewer individual-group studies and more intergroup studies.
The larger the group involved, the less the tendency to go through this kind of exercise. Groups become attached to their own specialties or their own studies. The focus is not on the best way to address important therapeutic questions but on the best way to prove their approach is superior to someone else’s.
And there is a tendency to do studies that address less important questions so that a clinical trial is available for each disease, or stage of a disease, that falls within the purview of a group. This is useful as a measure of the activity of a group when we review each other’s grants. So while we have a myriad of regulatory agencies and committees enforcing burdensome regulations, ostensibly to protect patients, we spend precious little effort assuring that a clinical trial is really addressing a fundamental question. If you want to really protect the interests of cancer patients, then for every trial you are asked to review, ask yourself: Where is the hypothesis? And will the proposed study test it, or merely try to confirm it?

Tuesday, February 26, 2008

More on Strong Inference

I want to revisit the subject of my previous editorial, “Strong inference.” It’s an important concept. That editorial focused on a specific set of clinical studies on the treatment of early-stage Hodgkin’s disease, but the failure to use strong inference is generic in the design of clinical trials because of the nature of the participants; that is, human subjects. Asking fundamental questions is not easy in clinical trials.
John Platt, who coined the term strong inference, said it consists of applying three steps: the development of alternative hypotheses; the design of crucial experiments devised to exclude one or more of those hypotheses; and the execution of careful studies to obtain an easily interpretable result. This process results in the minimum number of steps to solve a problem.
The studies on Hodgkin’s disease were not designed to exclude the hypothesis that an adequately delivered, standard chemotherapy program, not a favorite untested alternative, could perform as well alone as when combined with radiotherapy. These studies spanned four decades without answering definitively the question of the respective places of radiotherapy and chemotherapy in early-stage disease. The same is true of the testing of local and systemic treatments for localized breast cancer. All the necessary data and tools to test the alternative hypotheses, that radical mastectomy was either too much for small tumors or too little for large tumors, were in place by the 1960s. The major reason for failure was heretofore unappreciated micrometastases. But definitive clinical trials were not completed until the 1980s, because studies not designed to exclude a hypothesis are often repetitive. “We measure, we define, we compute, we analyze, but we do not exclude,” Platt said.
By training, clinicians cannot alter their methods rapidly and they tend to be men and women of one method. Disproving a therapeutic hypothesis might also result in the shift of the major part of the management of a disease from one specialty to another, which is generally not well received in medicine; therefore, there is a tendency for specialty competition to dominate the design of clinical experiments. Management shifts eventually happened in the examples cited above but they took too long.
We need to increase the use of strong inference in the design of all of our clinical studies. Hypotheses need to be clearly visible, and the experiments designed to exclude them rather than support them. This will redirect us to a problem orientation rather than a method orientation. But it requires investigators to be willing, repeatedly, to put aside their last methods and adopt new ones. Investigators should also be willing to design studies that may exclude their specialty from the management of the disease. When a fact fails to fit a hypothesis, we should retain the fact and discard the hypothesis.
As we enter the arena of molecularly targeted therapy, we will, in my view, see a shift from doing large studies looking for small differences to doing small studies looking for large differences. We may also need to introduce these new treatments at earlier stages of disease, where they will necessarily compete with established treatments. The design of such trials will be daunting, but it is important to capture the clinical value of the many new advances printed in every issue of this journal. The use of strong inference will guide us well. It is applicable to all research, both in the laboratory and in the clinic, and it is what really distinguishes good from bad research, regardless of the size of the particle under study. Try it, you’ll like it.

Strong Inference and Inductive Reasoning

In November, Fermé et al. (1) published an article on the treatment of early-stage Hodgkin’s disease that said the following: “Our study showed that a combination of chemotherapy and radiotherapy should now be the standard of care for all patients with localized stage, supradiaphragmatic Hodgkin’s disease” and added, somewhat amazingly, that “the remaining question now under investigation is whether early-stage Hodgkin’s disease can be cured by chemotherapy alone”.
I have to confess I was a bit stunned by this backward statement, and decided to editorialize about it in Nature Clinical Practice Oncology, where I serve as Editor-in-Chief. This, the initial editorial, will appear in the April issue. The one that follows will appear in the May issue.
In 1970, the cure of advanced Hodgkin’s disease with combination chemotherapy was reported (2), an observation that has been amply confirmed since then. The complete remissions in that study have remained durable over four decades. In the 1960s, Howard Skipper promulgated “the inverse rule,” which stated that there is an invariable inverse relationship between the body burden of cancer cells and curability by chemotherapy in all experimental systems studied. In 1991, we published a randomized trial comparing MOPP chemotherapy to standard radiotherapy in poor-prognosis early-stage Hodgkin’s disease (3,4). Even after 25 years’ follow-up, chemotherapy won hands down, as it has in all studies since, when compared with radiotherapy alone (5). The collective response and survival data from the chemotherapy arm of our 1991 study, and others like it that used standard and adequate chemotherapy, suggest the inverse rule is operative in human systems. No one has shown a survival advantage of adding radiotherapy to chemotherapy in early-stage disease compared with standard and adequate chemotherapy alone. Despite this, almost no large clinical trial has tested a standard chemotherapy program alone against combined-modality regimens. This is important because we know that radiotherapy is associated with very substantial late carcinogenic effects, especially in the breast.
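The logic of Skipper’s inverse rule is easy to see with the log-kill arithmetic behind it. What follows is a minimal sketch with hypothetical numbers of my own choosing, assuming each cycle of chemotherapy kills a fixed fraction of the remaining cells:

% Illustrative log-kill sketch: N_0 = initial cell burden, f = fraction of
% cells killed per cycle, k = number of cycles; the values are hypothetical.
\[
  N_k = N_0\,(1-f)^k, \qquad
  \text{cure requires } N_k < 1
  \;\Longrightarrow\; k > \frac{\log N_0}{\log\!\bigl(1/(1-f)\bigr)} .
\]
\[
  f = 0.99:\quad N_0 = 10^{12} \Rightarrow k > 6, \qquad
  N_0 = 10^{9} \Rightarrow k > 4.5 .
\]

The smaller the burden, the fewer cycles it takes to get below one surviving cell; that is the inverse relationship Skipper described.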
I used the words “adequate chemotherapy” repeatedly because one has to question the quality of the chemotherapy administered in the Fermé study, although no data are supplied in the paper to judge this. It’s as if, as long as the acronym is familiar, just saying it was used is sufficient to assure the quality of administration. Administered in 92 different institutions, the chemotherapy was capable of achieving a complete remission in only 64% of patients with early-stage disease, a substantially lower complete remission rate than everyone is achieving in patients with advanced disease, who have a much larger tumor burden. When inadequate chemotherapy is used, radiotherapy always improves the outcome.
In 1964, John Platt coined the term “strong inference” to describe a particular approach to research using an inductive reasoning process (6). The process involves devising alternative hypotheses and designing experiments that will exclude one or more of them. “How many of us,” he said, “focus on experiments to exclude a hypothesis? We measure, we define, we compute, and we analyze, but we do not exclude.” If you examine rapidly moving fields, you will find that inductive reasoning is the backbone of the design of experiments. In the laboratory, a hypothesis can be excluded in a week. Clinical investigators, because of the nature of their experimental subjects, have a special burden but also a special responsibility not to waste resources on experiments that take 5 years or more to complete and are not designed to exclude a hypothesis.
Strong inference and the inductive reasoning process have not been apparent in these large clinical studies of Hodgkin’s disease. The alternative hypothesis, that standard and adequate chemotherapy alone in early-stage Hodgkin’s disease would be equivalent or superior to combined-modality therapy, has not really been tested, let alone excluded. We already know we can cure it. The reason for this is addressed in another posting to follow.

References
1. Fermé C et al. (2007) Chemotherapy plus involved-field radiation in early-stage Hodgkin's disease. N Engl J Med 357: 1916–1927
2. DeVita VT Jr et al. (1970) Combination chemotherapy in the treatment of advanced Hodgkin's disease. Ann Intern Med 73: 881–895
3. DeVita VT Jr et al. (1980) Curability of advanced Hodgkin's disease with chemotherapy. Long-term follow-up of MOPP-treated patients at the National Cancer Institute. Ann Intern Med 92: 587–595
4. Longo DL et al. (1991) Radiation therapy vs. combination chemotherapy in the treatment of early stage Hodgkin's disease: seven-year results of a prospective randomized trial. J Clin Oncol 9: 906–917
5. Longo DL et al. (2006) A prospective trial of radiation alone vs combination chemotherapy alone for early-stage Hodgkin's disease: implications of 25-year follow-up to current combined modality therapy [abstract #98]. Proc Am Soc Hematol 108 (11)
6. Platt JR (1964) Strong inference: certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 146: 347–353