If you are trying to make sensible decisions about your health, you should base them on solid scientific evidence. That is the idea behind Evidence-Based Medicine (see also NIH).
I find the Wikipedia definition a little easier to follow than the one from NIH, though they are really saying the same thing. Here is the Wikipedia definition:
“Evidence-based medicine (EBM) is an approach to medical practice intended to optimize decision-making by emphasizing the use of evidence from well-designed and well-conducted research. Although all medicine based on science has some degree of empirical support, EBM goes further, classifying evidence by its epistemologic strength and requiring that only the strongest types (coming from meta-analyses, systematic reviews, and randomized controlled trials) can yield strong recommendations; weaker types (such as from case-control studies) can yield only weak recommendations. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients. Use of the term rapidly expanded to include a previously described approach that emphasized the use of evidence in the design of guidelines and policies that apply to groups of patients and populations (“evidence-based practice policies”). It has subsequently spread to describe an approach to decision-making that is used at virtually every level of health care as well as other fields (evidence-based practice).
“Whether applied to medical education, decisions about individuals, guidelines and policies applied to populations, or administration of health services in general, evidence-based medicine advocates that to the greatest extent possible, decisions and policies should be based on evidence, not just the beliefs of practitioners, experts, or administrators. It thus tries to assure that a clinician’s opinion, which may be limited by knowledge gaps or biases, is supplemented with all available knowledge from the scientific literature so that best practice can be determined and applied. It promotes the use of formal, explicit methods to analyze evidence and makes it available to decision makers. It promotes programs to teach the methods to medical students, practitioners, and policy makers.”
Now, it will take some time (and probably more than one article) to unpack all of this, but it is important for understanding a rational approach to health. To start with, let’s look at one passage near the end: “It thus tries to assure that a clinician’s opinion, which may be limited by knowledge gaps or biases, is supplemented with all available knowledge from the scientific literature so that best practice can be determined and applied. It promotes the use of formal, explicit methods to analyze evidence and makes it available to decision makers.” This means that anyone who is concerned, but most particularly physicians, needs to understand what the studies show, and be able to evaluate those studies. And that is why we have taken some time already, and will take more, to get at how studies are done, and the strengths and weaknesses of each different approach.
Studies are not all equal
You may read a news story, or hear a story on television, that says something like “A new study just came out that may change how you eat!” Or maybe it will be about new hope for cancer sufferers. What you generally will not hear is any discussion of the quality of the study, which is a guide to how much you should believe it. Of course, as we have already pointed out, until a study has been replicated and validated by additional research you should adopt a wait-and-see attitude. And there is the rule advanced by Carl Sagan that says “Extraordinary claims require extraordinary evidence.” So what are the characteristics of good studies?
Randomized Controlled Trials
A good study is one that is randomized and controlled, which aims to eliminate any potential bias. You start by defining the population of interest. If you are testing a drug to fight malaria, your population would be all people who have malaria. Since the major incidence of malaria is in Africa, you would probably go there to do the study and select participants from the population there. The random part comes in how the participants are selected from this population. The statisticians’ definition of a random sample is one where every member of the population being studied has an equal chance of being selected. Bias would occur if they were not equally likely to be selected. So if your sample only had men, you would not have a randomized sample. This was a problem with a number of drug trials many decades back. We still have questions about adults vs. children with a number of drugs that were never tested on children. Of course, the key is to define the population properly. If you are testing a treatment for ovarian cancer, a sample of all women is perfectly proper.
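The statisticians’ definition above — every member of the population has an equal chance of being selected — can be sketched in a few lines of Python. The patient IDs, population size, and sample size here are invented purely for illustration:

```python
import random

def draw_random_sample(population, sample_size, seed=None):
    """Simple random sample: every member of the population has an
    equal chance of being selected, with no duplicates."""
    rng = random.Random(seed)          # seeded so the draw is reproducible
    return rng.sample(population, sample_size)

# Hypothetical screened population of 10,000 patients, identified by ID.
population = [f"patient-{i}" for i in range(10_000)]
participants = draw_random_sample(population, 200, seed=42)
```

Documenting the randomization procedure (including the seed or its equivalent) is what lets reviewers later verify that participants were not cherry-picked.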
Controlled means you have a test group and a control group that the test group is compared to. The allocation of participants to these groups should be perfectly random as well. If the study is a placebo-controlled study, the control group will receive no active treatment, but in every respect should be treated precisely the same. If the test group gets a pill, the control group will get a pill as well (usually something harmless like a sugar pill). If the test group gets a shot, the control group will also get a shot (usually a harmless saline solution). You do this because of the well-documented placebo effect, which shows that people getting sugar pills and saline shots show signs of improvement. Even more interesting is the finding that even when people know they are getting a placebo, they tend to improve! The purpose of the placebo-controlled study is to make sure that the treatment is really doing something and not just making people feel better because they are getting care.
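The random allocation step can be sketched the same way: shuffle the enrolled participants, then split them into a test group and a control group. Again, the participant IDs here are hypothetical:

```python
import random

def allocate_groups(participants, seed=None):
    """Randomly split participants into a treatment group and a
    control group of (nearly) equal size."""
    rng = random.Random(seed)
    shuffled = participants[:]         # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"patient-{i}" for i in range(200)]
treatment, control = allocate_groups(participants, seed=1)
```

Because the split is random, any difference in outcomes between the two groups should, on average, reflect the treatment rather than who happened to end up in which group.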
The other type is the positive-control study. This is when there is already a recognized standard treatment and the study is about an alternative. Again, you need to randomly assign people to the test group and the control group to eliminate bias, and you are testing to see if there is a significant improvement using the new treatment compared to the existing treatment. If the new treatment is no better than what you already have, there may not be any point in using it. And if the new one is subject to patent, and the old one is available as a generic, there is a significant cost to using the new one.
If you really want a gold standard for a study, you combine the randomized controlled methodology with something called double-blind. In pretty much every randomized controlled trial, the participants do not know whether they are in the group receiving the treatment, or in the control group. If you stop there, it is single blind. But it has long been known that researchers are human, and may have a tendency to see results where there are none. There is the famous example of the N-rays, reported in 1903, which turned out to be purely imaginary but which respected researchers reported seeing evidence for.
So a double-blind study solves that problem by carefully keeping the researchers in the dark about which person is in which group. To do this you might assign every participant a number, and have one person prepare the pills, or shots, or whatever, and label them with the appropriate number. Then a different researcher would administer the treatment and record their observations. All they would know is the number assigned to each person. So if they find improvement in a certain group of numbers, and that group proves to be the treatment group, you can have more confidence that this is a legitimate result. If you combine all these methods into a randomized, controlled, double-blind study, you have one of the highest levels of validity that you can have in medical research.
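One way to picture the numbering scheme: a single unblinded person prepares a sealed code table mapping participant IDs to trial arms, the blinded researchers record outcomes against IDs only, and the code is opened only when it is time to analyze the data. Everything here (IDs, stand-in outcome scores) is hypothetical:

```python
import random

rng = random.Random(7)

# The unblinded preparer builds the sealed code table.
participants = [f"patient-{i}" for i in range(6)]
rng.shuffle(participants)
sealed_code = {pid: ("treatment" if i < 3 else "placebo")
               for i, pid in enumerate(participants)}

# The blinded researcher records outcomes keyed by ID only;
# random floats stand in for real clinical measurements.
observations = {pid: rng.random() for pid in participants}

def unblind(sealed_code, observations):
    """After data collection, open the code and group outcomes by arm."""
    groups = {"treatment": [], "placebo": []}
    for pid, score in observations.items():
        groups[sealed_code[pid]].append(score)
    return groups

results = unblind(sealed_code, observations)
```

Until `unblind` is called, nobody recording observations knows which IDs received the active treatment, which is exactly what protects against the N-rays style of wishful seeing.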
Note: Stopping a trial
Of course, there are occasions when a trial is stopped before it is finished, usually because compelling evidence surfaces partway through. In the worst case, you may find the treatment causes harm, and you stop the study so that no further harm is done to anyone. In the best case, you may see convincing evidence early on that the treatment is delivering benefit, and stop the study so the new treatment can be offered to everyone.