Clinical trials are the foundation of modern medical advancements, yet very few people know their history and how they have evolved over time. These studies are essential for the development of new treatments, medications, and therapies that improve the quality of life for millions of people worldwide. Over the centuries, the way clinical trials have been conducted has changed drastically, transitioning from rudimentary methods to sophisticated, highly regulated controlled studies.
Today, clinical trials are highly regulated and safe processes designed to ensure the efficacy and safety of treatments before they are approved and widely used. However, this has not always been the case. For much of history, medical research relied on observation and experimentation without strict regulations, leading to significant discoveries but also posing substantial risks to participants.
From the earliest attempts to test natural remedies in ancient times to modern studies that use cutting-edge technology and rigorous methodologies, the history of clinical trials is a story of learning, ethics, and scientific progress. Every discovery, regulation, and innovation has contributed to the development of safer and more effective treatments, benefiting present and future generations.
The First Clinical Trials in History
The First Recorded Clinical Trial (562 B.C.)
The Book of Daniel in the Bible: During his reign in Babylon, King Nebuchadnezzar ordered his people to eat only meat and drink only wine, believing this diet would keep them in optimal physical condition. However, several young nobles preferred a diet of vegetables and opposed the king’s decree. As a compromise, the king allowed these rebels to follow a diet of legumes and water, but only for ten days.
At the end of the experiment, the vegetarians appeared healthier and better nourished than those who followed the king’s diet. Seeing this, the king permitted them to continue with their preferred diet. Though rudimentary, this can be considered the first documented clinical trial, demonstrating how observation and comparative methods could be used to assess the effectiveness of an intervention.
The “Canon of Medicine” by Avicenna (1025 A.D.)

This medical encyclopedia, written by the Persian polymath Ibn Sina (Avicenna), covered a wide range of medical and scientific topics. It suggested that clinical trials should test a remedy in its natural state on an uncomplicated disease. Avicenna also recommended studying two opposing cases, analyzing the time of action, and verifying the reproducibility of effects.
These principles resemble a modern approach to clinical trials. However, historical records do not confirm whether these guidelines were systematically applied in medical practice at the time.
1537 – The Famous Surgeon Ambroise Paré

The French surgeon Ambroise Paré was responsible for treating wounded soldiers on the battlefield. At that time, the conventional treatment for gunshot wounds was to cauterize injuries with boiling oil. However, during one battle, the number of injured soldiers was overwhelming, and Paré ran out of oil. This forced him to improvise with an alternative treatment:
“In the end, I ran out of oil and was forced to apply a digestive ointment made of egg yolk, rose oil, and turpentine instead.”
That night, Paré was unable to sleep, fearing he had failed his patients. However, the next morning, he was astonished by what he found:
“I woke up early to check on them and, beyond my expectations, those to whom I had applied the digestive ointment felt little pain, their wounds were neither swollen nor inflamed, and they had slept through the night.”
In contrast, the soldiers treated with boiling oil suffered from fever, extreme pain, and severe wound inflammation. This experience led Paré to an important realization:
“I then decided never again to cruelly burn the poor wounded by gunshots.”
This moment marked a significant shift in surgical practices and demonstrated the importance of careful observation, comparative treatment, and humane approaches in medical care.
1747 – James Lind’s Scurvy Trial (18th Century)
In 1747, Scottish physician James Lind conducted what is considered the first modern clinical trial.
While working as a surgeon on a British naval ship, Lind was horrified by the high mortality rate caused by scurvy among sailors. Determined to find a cure, he designed a comparative experiment. In his 1753 book A Treatise on the Scurvy, Lind detailed how he selected 12 sailors “as similar as possible to one another” and divided them into six pairs. All were given the same basic diet, but each pair received a different supplement: cider, elixir of vitriol (dilute sulfuric acid), vinegar, seawater, oranges and lemons, or an electuary (a medicinal paste) recommended by a hospital surgeon. The two most severely ill patients were the pair assigned seawater.
Lind recorded the results:
“The most sudden and visible positive effects were observed in those who consumed oranges and lemons. One of them was fit for work within six days. The other had the most significant recovery among all in the trial. After citrus fruits, I thought cider had the best effects.”
This experiment demonstrated the effectiveness of citrus fruit against scurvy, later understood to be due to its vitamin C content, and laid the foundation for controlled clinical trials, in which different groups receive distinct interventions and their outcomes are systematically compared.

Portrait of James Lind. Source: Wellcome Library, London. Stipple engraving by J. Wright after Sir G. Chalmers, 1783.
1780 Gilbert Blane and the First Large-Scale Clinical Trial

In the 1700s, scurvy was alarmingly prevalent among sailors, leading to the failure of numerous naval operations. This disease, combined with fevers and other illnesses caused by the unsanitary conditions on ships, posed a severe threat to naval forces.
In October 1781, Scottish physician Gilbert Blane conducted a large-scale study and demonstrated that one in seven sailors in the West Indies fleet died from scurvy annually. Based on his observations, he recommended preventive measures such as the supply of wine, fresh fruit, and other provisions to help combat the disease. Additionally, he advocated for stricter health regulations and improved discipline in naval hygiene practices.
By July 1782, Blane reported remarkable results: the annual mortality rate among sailors had dropped to one in twenty, roughly from 14% to 5% per year. His findings contributed significantly to the widespread adoption of citrus fruit rations in the British Navy, a practice that ultimately helped eradicate scurvy from the fleet.
1816 Alexander Hamilton: The Path to Randomized Clinical Trials
In 1816, British Army surgeon Alexander Lesassier Hamilton reported a controlled trial conducted during the Peninsular War (also known as the Spanish War of Independence) to assess the (harmful) effects of bloodletting for fever. According to Hamilton, regarding the sick, it was arranged “that each of us was responsible for one-third of the total. The sick were received indiscriminately and were attended to, as much as possible, with the same care and provided with the same comforts. Neither Mr. Anderson nor I used the lancet even once. He lost two, I lost four cases, while from the other third (treated with bloodletting), 35 patients died.”
1863 Austin Flint: The Introduction of the Placebo
The word “placebo” first appeared in medical literature in the early 19th century. Hooper’s Medical Dictionary of 1811 defined it as “an epithet given to any medicine more to please than to benefit the patient.” American physician Austin Flint planned the first clinical study comparing a dummy remedy with an active treatment, which he detailed in his book A Treatise on the Principles and Practice of Medicine, published in 1886. Flint treated 13 patients suffering from rheumatism with an herbal extract in place of an established remedy. He later reported: “This was administered regularly and became well known in my wards as the ‘placebo remedy’ for rheumatism. The favorable course of the cases was such that it secured the full confidence of the patients.”
The Advancement of Clinical Trials in the 19th Century
The 19th century brought a greater interest in scientific experimentation and the application of the experimental method in medicine. Treatments began to be tested more systematically, although still without strict regulations.
Louis Pasteur and Vaccine Trials
One of the most important advancements during this period was the work of Louis Pasteur, who conducted experimental trials to develop vaccines. Pasteur, a French chemist and microbiologist, revolutionized medicine with his discoveries about microorganisms and their role in diseases. His germ theory of disease was fundamental to the development of preventive treatments, such as vaccines.
In 1885, he successfully tested the first rabies vaccine on a boy named Joseph Meister, who had been bitten by a rabid dog. At the time, rabies was a fatal disease with no effective treatment. Pasteur and his team had worked on developing a vaccine by weakening the rabies virus through an attenuation process in rabbit nerve tissue.
When Meister was brought to Pasteur, the child was at high risk of developing the disease. Although Pasteur was not a medical doctor, he decided to administer his experimental vaccine in a series of injections, hoping it would save the boy’s life. To the team’s surprise and joy, Joseph Meister survived without developing rabies symptoms, proving the vaccine’s effectiveness.
This was a key milestone in the history of clinical research and immunology, confirming that vaccines could prevent deadly diseases. Furthermore, this discovery laid the groundwork for the development of future vaccines and the expansion of medical research in microbiology. Thanks to Pasteur’s work, vaccination became a crucial method for preventing infectious diseases, saving millions of lives to this day.
1932 The Tuskegee Syphilis Experiment
One of the most infamous examples of unethical experimentation on humans was the Tuskegee Syphilis Study, conducted by the U.S. Public Health Service in Tuskegee, Alabama. This clinical study on syphilis lasted 40 years and was carried out without the informed consent of the participants: 600 African American men (399 infected and 201 healthy men as a control group). They were falsely told they were being treated for “bad blood,” a local term used to describe various ailments, including syphilis, anemia, and fatigue.
The sick participants did not receive proper treatment to cure their disease, even after penicillin was discovered as an effective cure for syphilis. The study’s goal was to compare the progression of untreated syphilis patients with that of healthy men.
This case led to the creation of the Belmont Report (1978), titled Ethical Principles and Guidelines for the Protection of Human Subjects of Research, commissioned by the U.S. government. Written by a panel of experts, the report established three ethical principles:
- Respect for persons
- Beneficence
- Justice
Tuskegee Hospital Patients
1943 First Double-Blind Controlled Trial: Patulin for the Common Cold
The United Kingdom’s Medical Research Council (MRC) conducted a trial between 1943 and 1944 to investigate the use of patulin, an extract of the mould Penicillium patulum, as a treatment for the common cold. This was the first double-blind comparative trial with concurrent controls in the general population. It was also one of the last trials to use non-random or quasi-random subject assignment.
1946 Austin Bradford Hill: First Randomized Curative Trial of Streptomycin
The concept of randomization had been introduced into experimental design in 1923 by the statistician Ronald Fisher, working on agricultural research. However, the first randomized controlled curative trial, which tested streptomycin for pulmonary tuberculosis, was conducted in 1946 by the UK’s Medical Research Council (MRC), with English epidemiologist Austin Bradford Hill as the lead statistician.

This trial set a meticulous standard for design and execution, using systematic criteria for enrollment and data collection, unlike many contemporary studies. A key advantage of randomization over simple alternation was the concealment of treatment allocation at the time of patient enrollment. Another significant feature was the use of objective measures, such as expert interpretation of X-rays by specialists who were blinded to the patients’ treatment assignments.
The English epidemiologist Austin Bradford Hill.
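The key property Hill exploited can be illustrated with a short sketch. The Python example below is purely illustrative, with hypothetical patient identifiers and treatment labels rather than the MRC trial’s actual procedure: it contrasts simple alternation, where the next assignment is predictable, with allocation drawn from a concealed, pre-generated random list.

```python
import random

# Illustrative sketch only: hypothetical patient IDs and treatment labels,
# not the MRC streptomycin trial's actual allocation procedure.

def alternate_allocation(patients):
    """Simple alternation: the next assignment is predictable, so whoever
    enrols patients can foresee it and (even unconsciously) influence who
    receives which treatment."""
    return {p: ("streptomycin" if i % 2 == 0 else "bed rest only")
            for i, p in enumerate(patients)}

def concealed_random_allocation(patients, seed=None):
    """Randomized allocation drawn from a pre-generated list (think sealed
    envelopes): the assignment cannot be predicted at the moment of enrolment."""
    rng = random.Random(seed)
    allocation_list = [rng.choice(["streptomycin", "bed rest only"])
                       for _ in patients]
    return dict(zip(patients, allocation_list))

if __name__ == "__main__":
    patients = [f"patient_{i:02d}" for i in range(1, 9)]
    print(alternate_allocation(patients))             # predictable pattern
    print(concealed_random_allocation(patients, 42))  # unpredictable sequence
```

The point is the property the sketch demonstrates: with a concealed random sequence, knowing previous assignments reveals nothing about the next one, so clinicians cannot steer particular patients toward a particular treatment.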
1960 The Thalidomide Tragedy
Thalidomide, marketed in Germany from 1957 as a sedative and a remedy for nausea, became permanently linked with birth defects when pregnant women who took the drug gave birth to children with severe malformations.
In November 1960, the U.S. Food and Drug Administration (FDA), through its reviewer Frances Kelsey, declined to approve the application to market thalidomide in the United States, citing insufficient evidence of its safety. In 1961, the association between thalidomide and severe embryopathy was identified, leading to the drug’s withdrawal from the market in Germany, the United Kingdom, Australia, and other countries.

Thalidomide was marketed in Germany from 1957 as a sedative and nausea remedy for pregnant women.
1964 Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects
The World Medical Association (WMA) developed this declaration as a statement of ethical principles to guide physicians and other participants in medical research involving human subjects. It was the first document to require that biomedical research on humans adhere to generally accepted scientific principles.
1996 International Conference on Harmonization Good Clinical Practice (ICH-GCP)
The International Conference on Harmonization’s Good Clinical Practice guideline (ICH-GCP) is a set of ethical and scientific quality standards for the design, conduct, recording, and reporting of clinical trials involving human subjects.
Finalized in 1996, the guideline was developed by representatives of regulatory agencies and industry from Europe, Japan, and the U.S., together with the World Health Organization (WHO), Canada, the Nordic countries, and Australia. Its objective was to harmonize regulatory requirements for clinical trials, establishing fundamental principles such as informed consent, participant safety and well-being, data quality, and the integrity of results. It also provides detailed guidance on specific aspects of trial planning, execution, and reporting.
Evolution of Clinical Trials in the 20th Century
The Case of Unregulated Studies
Before the 20th century, clinical trials lacked strict regulations, leading to dangerous and unethical practices. Medications and treatments were tested without standardized protocols, putting patients’ health at risk. A tragic case that marked a turning point in pharmaceutical regulation was the elixir sulfanilamide disaster in the U.S. in 1937.
Sulfanilamide was an effective antibacterial agent in powder or tablet form. However, in an attempt to make it more accessible and appealing, a pharmaceutical company created a liquid version using diethylene glycol as a solvent—a highly toxic compound. Without conducting safety tests, the medication was distributed nationwide, leading to tragedy: over 100 people, including children, died from acute kidney failure.
This event was pivotal in shaping pharmaceutical regulations. In response, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act of 1938, requiring companies to test the safety of their products before marketing them. This law marked a milestone in the regulation of the pharmaceutical industry.
The Nuremberg Code and the Declaration of Helsinki
After World War II, the world became aware of the inhumane experiments conducted on prisoners by the Nazis. These studies, carried out without the subjects’ consent, included disease exposure, toxic substance testing, and extreme experiments in conditions of hypothermia and deprivation.
In response, the Nuremberg Code was established in 1947, laying down fundamental ethical principles for research involving human subjects. Among its key principles were voluntary informed consent, potential benefit to participants, and the avoidance of unnecessary suffering. This code became the foundation of modern bioethics and set the stage for future regulations.
Later, in 1964, the Declaration of Helsinki refined these principles, becoming the international ethical guide for human research. Drafted by the World Medical Association (WMA), the declaration placed greater emphasis on participant protection, ethical review by independent committees, and transparency in research. It also established the importance of ongoing review of clinical trials to ensure that risks remain lower than the potential benefits.
These milestones in clinical trial regulation marked a radical shift in medical research, ensuring that modern studies are safer, more ethical, and rigorously controlled, benefiting both participants and the advancement of medical science.
Modern Clinical Trials
Today, clinical trials follow a rigorous process divided into several phases, designed to ensure the safety and effectiveness of new treatments before they reach the public. Each phase has a specific purpose, from evaluating the initial safety of a drug to assessing its effectiveness in large populations. Thanks to this structured process, researchers can identify potential risks, optimize dosages, and ensure that the benefits of a treatment outweigh any possible adverse effects.
Clinical Trials Today: Safer and More Efficient Than Ever
Throughout history, clinical trials have evolved significantly. What were once rudimentary tests, often with little oversight and questionable ethics, have now become highly regulated and safe processes. Thanks to advancements in science, technology, and international regulations, modern clinical trials are designed to prioritize participant safety and ensure reliable results.
Regulation and Oversight: A Strictly Supervised Process
For a drug or treatment to be approved, it must undergo an extensive evaluation process supervised by regulatory agencies. In the United States, the Food and Drug Administration (FDA) is responsible for reviewing each new treatment, while in Europe, this role falls to the European Medicines Agency (EMA). These institutions enforce strict standards to ensure that every clinical trial meets criteria for safety, effectiveness, and ethics.
Additionally, before a study can begin, it must be reviewed and approved by an Institutional Review Board (IRB). These committees ensure that participants’ rights are respected, that they are clearly informed about the study, and that their well-being is protected at all times.
The Technological Revolution in Clinical Trials
Technological advancements have completely transformed the way clinical trials are designed and conducted. Thanks to tools like artificial intelligence (AI) and big data, researchers can analyze vast amounts of information in real time, identifying patterns and generating predictions that previously took years to obtain.
- Artificial Intelligence (AI): Enables the analysis of patient data and prediction of which treatments may be most effective, optimizing participant selection and accelerating the research process.
- Big Data: By collecting information from millions of patients worldwide, scientists can identify trends and improve the accuracy of studies.
- Decentralized Trials: Thanks to telemedicine and wearable devices, participants can now be monitored without having to visit a hospital or clinic in person, making trials more accessible and efficient.
Enhanced Safety for Participants
One of the greatest achievements of modern clinical trials is their focus on patient safety. Unlike the past, when many trials were conducted without proper consent or strict oversight, today’s studies include multiple mechanisms to ensure that participants are informed and protected at all times.