The recent ACCORD fiasco gave a good example of how science can be used to twist people’s thinking and public policy. If we want to make good decisions in a world full of people trying to deceive us, we can benefit from learning how science is supposed to work. That way, we can recognize when it is being misused.
The scientific method
The scientific method was one of the great advances of human history. Instead of just thinking about the world and guessing what makes it tick, science insists on collecting information (data) through observation and experiment. Then you formulate a tentative explanation (called a “hypothesis”): what produced the results that you observe? And you test that hypothesis with more experiments, to see if it holds up, and modify it as necessary.
These steps must be repeatable by others. Almost any result can come up one time because of experimental error or a freak occurrence. If you don’t write everything down and share it with others, so that they can confirm your results, it’s not really science.
There are many kinds of experiments. The one most used in the health-care field is called the “randomized controlled trial,” or RCT. In an RCT, people are assigned at random to one group out of two or more. These groups should be as similar as possible and should receive the same treatment except for one factor, often a drug. Everything else is “controlled,” meaning it is kept the same across the groups. So whatever differences you see between the groups are most likely due to that one experimental factor.
Health experts and policy makers have come to see the RCT as the “gold standard.” If an RCT shows some benefit for a drug, it must be good. If a psychosocial treatment, like diabetes self-management education, doesn’t have RCTs to document it, we shouldn’t use it or pay for it.
What’s wrong with the experimental model
Proving things with experiments is a good idea. The scientific method has overcome a lot of superstition and false beliefs. But experiments and science are misused in many ways and are used to spread all kinds of misinformation. Here are some reasons why scientific “evidence” doesn’t always hold up:
- RCTs don’t work well when there is more than one intervention. If you’re studying the benefits of peer mentoring, as our blogger Eric wrote about (“P2P Diabetes”), it’s hard to randomize, since you can’t force someone to accept a peer mentor. It’s hard to control for everything, because not all peers will be the same. And it’s hard to say which one, particular piece of the mentoring helped. So your peer study will be ignored, if it can get done at all.
- Bigger studies have more statistical power, so they get more attention. The more people you enroll in your study, the more impressive your results are likely to be. A relatively small benefit looks convincing if thousands of people experienced it. Drug companies have deep pockets to run huge studies. If Eric’s doctor launched a peer-mentoring study, it would be run on a shoestring, probably by staff volunteering after their other duties are done. It would have to be much smaller and therefore have less impressive statistical results.
- Studies don’t consider individual variation—if one drug works for 40% of the people, another for 30%, and placebo for 20%, the 40% drug will be recommended for everyone. In reality, the other drug, or no drug at all, may well work best for people like the ones who benefited from those treatments.
- Scientists aren’t always honest or wise. They frequently misinterpret results. In the ACCORD trial, study directors concluded that normal blood glucose levels led to higher death rates in people with Type 2 diabetes. And media reported their conclusions as fact. These mistakes often seem to reflect the interests of those who paid for the study.
Cardiologists studied whether providing diabetes information to people with diabetes and heart disease would improve cardiac outcomes. They found no helpful effect. They publicized their results in The American Journal of Cardiology as indicating that all forms of diabetes education were a waste of time, and the money should be spent on statin drugs instead. But we already knew that diabetes information alone doesn’t help in most cases. It’s the skills and support for changing behavior that make the difference, as has been shown in many studies.
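To see how study size alone can drive “impressive” results, here is a rough Python sketch using a standard two-proportion z-test. The 40%-versus-35% response rates are made-up numbers for illustration: the very same five-point difference between two treatments looks statistically unconvincing in a study with 100 people per group, but overwhelming with 10,000 per group.

```python
import math

def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the assumption of no real difference
    p = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same 40% vs. 35% response rates, at two very different study sizes:
small = two_proportion_p_value(40, 100, 35, 100)          # 100 per group
large = two_proportion_p_value(4000, 10000, 3500, 10000)  # 10,000 per group

print(f"small study p-value: {small:.3f}")  # well above 0.05 — "no effect"
print(f"large study p-value: {large:.2g}")  # far below 0.05 — "proven"
```

The effect is identical in both cases; only the budget to enroll more people changed. That is one reason deep-pocketed sponsors can generate “significant” findings that a shoestring study of the same treatment never could.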
My take-home message is this: The scientific method is wonderful. But modern science is often a rigged game, not an honest search for truth. Pay attention to scientific reports, but take them with a grain of salt, and investigate for yourself.
What do you think? Please let us know by commenting here.