In a recent study of heart medications (REDUCE-IT), mineral oil was used as a “placebo”. Mineral oil is not a biological substance, and the body rejects it with nausea and diarrhea. Some of the adverse effects of mineral oil are listed here. There is no reason to use mineral oil as a “placebo” instead of (for example) olive oil except that it may make the test drug look better in comparison.
Placebos are properly used in drug trials to separate the biochemical benefit of a drug from the psychological benefit of seeing a caregiver and feeling confident that help is on the way. There is no justification for placebos in safety trials, which seek early detection of harmful side-effects that might occur in a minority of patients. The use of placebos in safety trials, particularly of vaccines, is a scam that has been augmented by a statistical sleight-of-hand, reversing the burden of proof.
The justification for placebo controls in drug trials is that mental expectation can have powerful healing effects. Drug companies should be held to a high standard — they must prove with 95% statistical confidence that it is their product that offers a benefit, not merely the relief afforded by the passage of time or by the attention of a confidence-inspiring professional.
When the placebo protocol was first introduced, drug manufacturers hated it. They found that, particularly for psychological medications, the placebo effect was so powerful that their drugs were handicapped in the statistics, and they opined that this was unfair. The mind-over-matter effect is indeed one powerful healer. Time is another. People tend to see a professional when their pain or depression or indigestion or anxiety becomes unbearable, and because they come for treatment at a low point in their wellbeing, chances are that six weeks later they will be feeling better, no matter what. (The term for this phenomenon is regression to the mean.) In trials of antidepressants, typically a third of patients feel a lot better six weeks after they begin any treatment, and most drugs don’t help much more than a third of depressed patients.
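To make regression to the mean concrete, here is a toy simulation in Python. The numbers are invented purely for illustration: each person’s wellbeing fluctuates around a personal set point, and people enroll only when they dip unusually low.

```python
import random

# Toy model of regression to the mean. All numbers are invented for
# illustration. Wellbeing fluctuates week to week around a personal
# set point; people seek treatment only when they feel unusually bad.

random.seed(0)
ENROLL_THRESHOLD = -1.5    # enroll only when wellbeing dips this low

enrolled = improved = 0
for _ in range(100_000):
    set_point = random.gauss(0, 1)            # personal baseline
    today = set_point + random.gauss(0, 1)    # today's fluctuation
    if today < ENROLL_THRESHOLD:              # bad enough to seek help
        enrolled += 1
        six_weeks = set_point + random.gauss(0, 1)   # later, untreated
        if six_weeks > today:
            improved += 1

print(f"{improved / enrolled:.0%} of untreated enrollees improved")
```

In runs of this toy model, something like four out of five untreated enrollees “improve” by six weeks, simply because they entered at a low point.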
But profit is a powerful motivator, and pharmaceutical researchers soon found ways to turn the requirement of placebos to their advantage. In testing a new drug for safety, the manufacturer must look for harmful side effects that show up in some small percentage of people who take the drug. You can identify bad reactions simply by comparing the rate of adverse symptoms among people who take the drug with the rate in the general population — no placebo is necessary. It is helpful to have two matched populations, a test group that receives the treatment and a control group that does not; but there is no need to give the control group some other substance, because the idea of a “placebo effect” does not apply.
(There is such a thing as a “nocebo effect”, in which people who expect that they are going to be sick develop psychosomatic symptoms. It’s not clear that this situation applies to drug safety trials; but even if it does, I would argue that the nocebo effect is part of the downside of giving a drug, and the safety standard should be calibrated to include psychological as well as chemical harm from a treatment.)
The object of an efficacy trial is to determine how many people get better, how much better, and how quickly. The body has a natural ability to recover from disease, so it’s important to separate the benefits of the drug from whatever the body is doing on its own.
The object of a safety trial is to determine what proportion of people suffer detrimental side effects, and to characterize those side effects. It’s fair to measure responses of the test group against an untreated control population, because there is always a background rate of conditions that could be erroneously attributed to the test drug. When the product being tested treats a disease, it is fair to measure that background rate in a population that has the disease, since the disease itself can produce conditions that mimic side effects.
Vaccines are a special case, because they are given to healthy people. This means that only a small proportion of recipients will (eventually) benefit from the vaccine, so a small number of people harmed becomes a significant counterweight.
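Back-of-envelope arithmetic shows why. The rates below are hypothetical, chosen only to illustrate the asymmetry:

```python
# Hypothetical rates, for illustration only.
disease_incidence = 1 / 1_000    # chance an unvaccinated person gets the disease
vaccine_efficacy  = 0.90         # fraction of those cases the vaccine prevents
serious_harm_rate = 1 / 10_000   # rate of serious side effects in recipients

recipients = 1_000_000
cases_prevented = recipients * disease_incidence * vaccine_efficacy   # 900
people_harmed   = recipients * serious_harm_rate                      # 100

print(f"per million recipients: {cases_prevented:.0f} cases prevented, "
      f"{people_harmed:.0f} people seriously harmed")
```

With these made-up numbers, one person is seriously harmed for every nine who benefit; a harm rate that would be a rounding error in a drug given to sick patients becomes a significant counterweight when everyone injected is healthy.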
As control for healthy recipients, it is the background rate of headaches or sore arms or fatigue or nausea in the healthy population that should be subtracted. Childhood vaccines are a more special case because children are not able to weigh long-term benefits against immediate harms, and adults must be especially conscious of protecting them. Vaccines given to infants are a very special case because their immune systems, along with the rest of their bodies, are in an intense stage of development, and anything that modifies that development is likely to have lifelong effects, for good and for ill. Anything that we do to a one-day-old infant may have unexpected effects — even a saline injection. Anything that is proposed to be administered to healthy infants should pass the most rigorous long-term safety studies, assessing the effects on the growing child and the future adult.
Christine Stabell-Benn is my teacher when it comes to long-term effects of vaccines, both beneficial and adverse. There are large effects of vaccination on the immune system in general — not just the specific disease targeted by the vaccine — and these effects can be helpful or harmful. The age and condition of the patient are important variables, and the kind of vaccine can make a difference between long-term harm and long-term health. If there is one lesson to draw from Stabell-Benn, it is that every vaccine must be individually tested for its long-term effects on exactly the population that is proposed to receive it.
Reading through abstracts of vaccine safety reviews, I usually come across phrases like “no evidence of increased risk [ref]” or “there were no significant differences between cases and controls [ref]”. “No significant difference” is a term of art that means that the study in question cannot prove with 95% confidence that the vaccine is producing more side effects than the placebo. The implication here is that “no news is good news” — in other words, the burden of proof is on those who claim the vaccines are dangerous. Once this is stated explicitly, it becomes obvious that the logic is backward. Safety should be exhaustively quantified and all risks taken into account before a medication is administered to newborn babies. Some might claim that the same proof of safety should be required of every product given to healthy adults as well.
In practice, most vaccines are safety-tested neither against a null control nor against an inert (saline) placebo, but against a different vaccine that serves as the “placebo”. Then, if the new vaccine is not provably more dangerous than the old vaccine, it is considered “safe”. In other words, if statistics suggest with 94% certainty (that is, p = 0.06, just short of the p < 0.05 threshold) that the new vaccine is more dangerous than the old, the researcher can honestly report that there is “no significant difference” in their safety. The new vaccine is then added to the list of approved vaccines, and it is fair game to use it as a “placebo” in a study of next year’s vaccine.
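A minimal sketch, with hypothetical trial numbers, shows how a real excess of harm can hide inside “no significant difference”:

```python
import math

# Hypothetical trial: the new vaccine has a 50% higher adverse-event
# rate than the comparator, yet with 10,000 subjects per arm the
# difference does not reach p < 0.05.

def one_sided_p(x1, n1, x2, n2):
    """One-sided p-value that group 1's event rate exceeds group 2's
    (standard pooled two-proportion z-test)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # 1 - Phi(z)

# New vaccine: 30 serious events per 10,000. Old vaccine used as the
# "placebo": 20 per 10,000.
print(f"p = {one_sided_p(30, 10_000, 20, 10_000):.3f}")   # ~0.078
```

A 50% higher event rate, and yet p ≈ 0.08: the abstract can honestly report “no significant difference”, and the new product joins the roster of approved comparators.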
Here’s what the WHO has to say on the subject:
Control Vaccines: In place of a placebo, a vaccine against a disease that is not the focus of the trial is given to participants who do not receive the trial vaccine. Typically the control vaccine is a licensed vaccine for which efficacy has been demonstrated and the safety profile is well characterized. The motivation for using active rather than inert “placebos” is to fulfill the ethical duty of beneficence and, sometimes, to avoid giving an injection with an inert substance. A methodological disadvantage, however, is that trials using these types of placebos provide a less perfect control. It may be difficult or impossible to assess fully the safety and reactogenicity of the trial vaccine, although its efficacy can usually be assessed satisfactorily. Such trials may also be less acceptable to regulators. Some regulators and/or public health authorities may prefer data from a placebo-controlled trial on which to make decisions whether or not to approve or adopt a vaccine. — WHO
When testing a new treatment drug, it may be unethical to deprive half the patients in the study of treatment. But vaccine recipients don’t have any disease, and there is no ethical imperative to recruit 30,000 additional subjects and give them a vaccine against tetanus just because you’re testing a vaccine for measles on 30,000 others.
Adverse events from every drug should be evaluated in comparison to what the patient would experience if he did not receive any treatment. Placebos are unnecessary, and use of toxic placebos is clearly a sham, a ruse to hide adverse events.
In addition, the burden of proof has been reversed in these safety trials. The proper use of the 95% confidence test would be to demand 95% confidence that the drug being tested is not causing harm. But in practice, the convention is to invert the burden of proof. The new vaccine is considered “safe” unless there is 95% statistical certainty that it is worse than whatever vaccine was used as a “control”.
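What would the proper burden of proof look like? One conventional tool is an upper confidence bound on the excess risk, which must fall below a pre-declared harm margin before the product is called safe. Here is a sketch reusing the hypothetical 30-versus-20 trial from above; the margin, too, is an assumption chosen for illustration.

```python
import math

# Reversed burden of proof on the same hypothetical trial: instead of
# asking "can we prove harm?", ask "can we rule harm out?" via a
# one-sided 95% upper confidence bound on the excess risk.

x1, n1 = 30, 10_000    # new vaccine: serious events, subjects
x2, n2 = 20, 10_000    # comparator
p1, p2 = x1 / n1, x2 / n2

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
upper_95 = (p1 - p2) + 1.645 * se    # one-sided 95% upper bound

HARM_MARGIN = 5 / 10_000   # hypothetical: tolerate at most 5 excess events/10,000

print(f"excess risk could be as high as {upper_95 * 10_000:.0f} per 10,000")
print("demonstrated safe" if upper_95 < HARM_MARGIN
      else "not demonstrated safe: excess harm cannot be ruled out")
```

The same data that passed as “no significant difference” cannot rule out an excess of roughly 22 serious events per 10,000; under a reversed burden of proof, this trial demonstrates nothing about safety.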
The bottom line
What I’ve described is a ratchet mechanism, by which more and more dangerous vaccines can be approved year by year, because they are being compared to the previous vaccine which is now assumed “safe”. Last fall, the process reached its final reductio ad absurdum (or maybe reductio ad tragoediam) when the bivalent COVID vaccine was approved without any human safety data being submitted to the FDA.
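The ratchet can be made explicit in a toy model (all numbers hypothetical): each year’s vaccine adds a small excess of adverse events over last year’s comparator, too small for a 10,000-per-arm trial to detect, and then becomes the comparator for the following year.

```python
import math

# Toy model of the approval ratchet. Hypothetical numbers throughout;
# expected event counts stand in for a single trial's random outcome.

def significant_increase(x1, n1, x2, n2, alpha=0.05):
    """Crude one-sided pooled two-proportion z-test (as sketched earlier)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2))) < alpha

N = 10_000
rate = 20 / N     # year-0 comparator's true adverse-event rate
STEP = 8 / N      # each new vaccine adds 8 events per 10,000

for year in range(1, 11):
    new_rate = rate + STEP
    if significant_increase(round(new_rate * N), N, round(rate * N), N):
        print(f"year {year}: excess harm detected, approval denied")
        break
    rate = new_rate   # "no significant difference": approved, and the
                      # new vaccine becomes next year's comparator

print(f"after ten rounds the true rate is {rate * N:.0f} per 10,000")
```

No single step ever reaches significance, yet after ten rounds the adverse-event rate has quintupled from 20 to 100 per 10,000, with every step along the way honestly reported as “safe”.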
We don’t know what we need to know about the safety of a great many pharmaceutical products, and vaccines given to newborn infants are in a risk class all by themselves.