In this part I will explore the development of sophisticated clinical trial designs in the later part of the 20th century, and also look at some watershed public health tragedies that, unfortunately, were the catalyst for a host of new regulations relating to medical research.
The 1940s: a focus on medical ethics
We take up the story in the 1940s, when a landmark trial was being designed by the UK Medical Research Council (MRC) to investigate the effectiveness of streptomycin treatment for pulmonary tuberculosis. Alongside the MRC, another group of researchers was investigating the same drug in the United States. The US group had ample stores of the treatment drug, streptomycin, and had no reason to amend standard clinical trial practice at that time. In post-war Britain, however, Bradford Hill (the head statistician for the MRC trial) didn't have the same plentiful access to streptomycin and couldn't treat all his patients with the drug. He decided to split the trial into two groups and randomly assign patients to a treatment or control group. This unwittingly eliminated a now well-known form of treatment "bias" in which clinicians select the healthier patients for experimental treatments, leaving sicker patients in the control/standard therapy group. Hill's new design was the first true randomised controlled clinical trial. It was not, however, "double blinded" – this is another way of ensuring the objectivity of a trial by negating the power of "suggestion". In double blind clinical trials neither the patients nor the researchers know whether patients are receiving the new treatment or a placebo. In this case the lack of double blinding in Hill's study made little difference, as the study showed conclusively that streptomycin treatment worked. The results were published in 1948, and Hill's use of concurrent, randomised controls was highly praised as heralding "a new era of medicine".
Just prior to Hill's trial, a huge step forward in medical ethics took place in Germany in 1947. The Nuremberg Code was formulated in response to the Nazi medical atrocities of World War II. The Code was based on a memorandum by Dr. Andrew Ivy and described ten research ethics principles for human experimentation. These principles stated that the "voluntary consent of the human subject is absolutely essential", and also included others such as the patient's right to end the experiment at any time, and the requirement that all safety precautions be taken to limit pain and suffering. Following on from these new regulations, the World Medical Association formally rearticulated these principles in 1964. This became known as the Declaration of Helsinki, and it is still looked upon today as the foundation of modern medical ethics.
Mid-century: high profile cases lead to change
Unfortunately, in the decades that followed it took several high-profile public health disasters for researchers to fully comprehend and enforce the ideas of medical ethics. One of the first and most well-known tragedies surrounded the use of thalidomide.
Thalidomide was a drug widely used to treat nausea in pregnant women in the late 1950s and early 1960s. It became evident in the 1960s that thalidomide had not been properly assessed before being brought to market, and that treatment with thalidomide could cause severe birth defects in children. Thalidomide use in pregnant women was banned in most countries at that time, but thalidomide did go on to be a useful treatment for leprosy and, later, multiple myeloma.
The thalidomide tragedy marked a turning point in toxicity testing, as it prompted the United States and international regulatory agencies to develop systematic toxicity testing protocols. The subsequent study of thalidomide and its effects on developmental biology led to important discoveries in the biochemical pathways involved in limb development. Recent research on thalidomide's mechanisms of action is leading to a better understanding of its molecular targets; with an improved understanding of these targets, safer drugs may be designed.
Another very public example of research ethics gone awry was the notorious Tuskegee study. Officially titled the "Tuskegee Study of Untreated Syphilis in the Negro Male", this infamous clinical study was conducted between 1932 and 1972 by the U.S. Public Health Service. It involved studying the natural progression of untreated syphilis in rural African-American men in Alabama under the guise of providing free health care from the government.
This 'study', which lasted 40 years, was extremely controversial: the researchers knowingly kept it going for many years after penicillin had been validated as an effective treatment for the very syphilis the study was monitoring. Local officials turned whistle-blowers and leaked the information to the press in the 1970s. This led to changes in US law concerning the treatment of participants in studies and the ethical standards to be followed by all researchers.
Blood cancer research and clinical trials
In our own medical arena of blood cancer, and more specifically childhood leukaemia, the years from 1965 to 1975 were very important in terms of clinical trial development. The first cooperative clinical trials for childhood leukaemia were established in the 1950s in the US and were supported by the National Cancer Institute. Initially all children were treated with the same set of available medical interventions. It was observed, however, that children's long-term outcomes often correlated with characteristics that were present at diagnosis. This gave clinicians an opportunity to tailor treatment plans according to certain diagnostic clinical features. Several important studies from 1969 to 1972 (Modan et al., Zippin et al.) explored and validated these links, paving the way for the strategy of risk adaptation within clinical trials (and later in general treatment). Despite generally poor outcomes with the basic treatments of that time, this period was pivotal in the development of clinical trials for infant ALL. Foundations were being laid for future therapy: the need for infant-specific ALL protocols, the categorisation of patients along prognostic lines to allow for risk stratification, and a more unified approach between study groups to overcome the limitations of low incidence.
An international standard
Moving into the 1990s, the next great leap forward in clinical trial safety took place in 1996 in Brussels. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) (snappy name!) issued guidelines for Good Clinical Practice (GCP). ICH-GCP is a harmonised international standard that "protects the rights, safety and welfare of human subjects, minimises human exposure to investigational products, improves quality of data, speeds up marketing of new drugs and decreases the cost to sponsors and to the public." Today, clinical studies worldwide must adhere to ICH-GCP if the results are to be published or the data considered by any major national drug authorisation board.
So that’s that, a potted history of clinical trials and related regulations in the 20th century. In the next and final part of the trilogy we will bring the story right up to date and explore some innovative clinical trial designs (basket, umbrella, adaptive) that are being used to test new drugs for blood cancer today. I’ll also give an update on our very own TAP programme and some of the innovative approaches to trials that we support.
Till next time!