We’ve all seen the reports and advertisements… alcohol and sunshine are completely bad for us, calcium makes strong bones, Nutella is part of a healthy breakfast…
Research isn’t black and white – there is a lot of grey that is subject to interpretation. Even numbers can be manipulated to make the results of a study sound more dramatic. And sometimes, some numbers and facts (and even entire studies) are ignored when they don’t support the idea a researcher wants to prove. Of course, this isn’t good science but, unfortunately, we are starting to see this more and more…
Some years ago, I presented several lectures at an event sponsored by Dalhousie University. One presentation was provided to me by a pharmaceutical company co-sponsoring the event. A Dal professor was in the audience and, afterward, politely pointed out the bias that was in the data, with a graph stretched out to make the curved lines look more separated – making the drug’s effect look greater than it actually was. I was quite embarrassed but, in the end, thanked him for educating us in what to look for.
So, here are some “red flags” you can watch for when you are reading about a study in the media:
Observational study (versus an Interventional study) – An observational study looks at what factors or events tend to occur together but provides no conclusion about a cause. An interventional study, in contrast, uses two groups that are well matched, changes the one factor they want to test, and then measures whether this made a difference at the end of the study. An observational study can give us an idea of what could be tested, but an interventional study is needed to prove a cause.
The words “is associated with” – This simply indicates that the factor they are talking about occurred along with a particular disease or condition in an observational study. An association does not demonstrate a cause. For example, yellow fingers (from smoking) could be said to be associated with heart disease; however, we know it is smoking that causes heart disease, not yellow coloured fingers. Scrubbing the colour off the fingers would do nothing to prevent heart disease. So, humour me while I say this again… if something is associated with a disease, it doesn’t necessarily mean it causes the disease.
“Relative Risk Reduction (RRR or RR)” – This method of presenting results compares the outcomes from 2 groups by using a percentage difference rather than subtracting the actual numbers, and can make differences appear much larger than they actually are. An example explains it more clearly: if you had 2 groups of 100,000 people and the results were 2 cases in the untreated group but only 1 in the treated group, this would be a 50% relative risk reduction. However, in actual numbers, called the “Absolute Risk Reduction”, where the results are simply subtracted, the difference is only 0.001%, a much less impressive figure. RRR is often used to make results appear more significant than they actually are, especially when the difference is very small. All too often, it is not specified that the results are RRR, creating misunderstanding of the study conclusion, even among doctors.
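The arithmetic behind this can be sketched in a few lines of Python. This is a toy illustration using the hypothetical numbers from the paragraph above (2 versus 1 cases per 100,000), not data from any real study:

```python
# Toy sketch: relative vs. absolute risk reduction.
# Numbers are the hypothetical example from the text, not real data.

def risk_reductions(events_control, events_treated, group_size):
    """Return (relative, absolute) risk reduction as fractions."""
    risk_control = events_control / group_size
    risk_treated = events_treated / group_size
    arr = risk_control - risk_treated   # absolute: simple subtraction
    rrr = arr / risk_control            # relative: difference as a percentage of baseline risk
    return rrr, arr

rrr, arr = risk_reductions(2, 1, 100_000)
print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Absolute risk reduction: {arr:.3%}")   # 0.001%
```

The same tiny difference in raw numbers yields a headline-friendly “50%” when expressed relatively, which is exactly why the absolute figure should always be reported alongside it.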
The time over which the results were gathered is not specified – Was the effect observed after a short period of treatment, or did the study participants take the drug for years before it made any difference? We see this often when the “Number Needed to Treat (NNT)” presentation of data is used. This number tells you how many people they needed to treat to make a difference for 1 person, so the lower the number, the fewer they needed to treat to see an effect. However, did you need to treat these people for 10 years to make a difference for one, or only for a month? The time frame makes a big difference and needs to be specified.
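NNT is simply the reciprocal of the absolute risk reduction over the study's time frame, which a short sketch makes concrete (again using the hypothetical 2-versus-1-per-100,000 example, not real study data):

```python
# Toy sketch: number needed to treat (NNT) is 1 divided by the
# absolute risk reduction, and is only meaningful alongside the
# treatment duration. Hypothetical numbers from the text.

def number_needed_to_treat(risk_control, risk_treated):
    """How many people must be treated for one person to benefit."""
    arr = risk_control - risk_treated
    return 1 / arr

# 2 vs. 1 events per 100,000 people
nnt = number_needed_to_treat(2 / 100_000, 1 / 100_000)
print(f"NNT: {nnt:,.0f}")  # 100,000
```

An NNT of 100,000 over one month and the same NNT over ten years describe very different drugs, which is why the number is meaningless without its time frame.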
The study only uses deaths from one disease, not total mortality – The factor being studied could improve one disease state while worsening another. If total mortality is not mentioned, it is likely that the drug did not increase the overall lifespan of the study participants. Overall health and quality of life are what is important… we want to live longer healthier lives, not just change our cause of death. An example would be looking only at decreasing heart disease, while ignoring a drug’s effect on increasing death from other causes. New, less publicized reports of the well-known JUPITER study have pointed out that the cholesterol drug being tested did not change overall survival, yet the study results touted a significant reduction in deaths from heart disease.
Who paid for the study – Any study that has been sponsored by a party with an agenda runs the risk of being biased. Negative results are more likely to be ignored or never published if money is on the line! Unfortunately, much of our drug research is done by manufacturers of medicines, and this has led to incorrect results, whether intentional or not. Examples are Provera used in hormone replacement therapy, and the arthritis drugs Vioxx and Bextra. Although these drugs were tested thoroughly and used for years by millions of people, they were eventually found in balanced independent studies to cause more harm than benefit.
Small numbers of participants – It is easier for a false result to be drawn when there are fewer people in a study. Using larger groups, and ensuring that these groups are as similar as possible before changing one factor for testing, is more likely to give an accurate result. Of course, a very large study done over several years that quotes only relative risk reduction for one condition should still raise a red flag, even though it is a large study.
Multifactorial testing or making several changes at once – No conclusion can be drawn regarding one particular drug if several changes were made for the test. This was done with calcium supplement testing, where participants were given calcium and vitamin D and were told to exercise more, and it was concluded that calcium supplements strengthened bone. It has since been demonstrated that dramatically increasing calcium intake by using supplements does not reduce bone fractures.
Using a “surrogate end point” – Measuring something other than the actual beneficial outcome you are trying to achieve. For example, simply measuring bone density does not give a true picture of how strong bone is… many years ago, fluoride was given to create more dense bone, and it certainly did this well. However, the bone created was brittle and broke easily, somewhat like a piece of chalk. The goal of treating osteoporosis is to prevent bone fractures, not simply make more dense bone. Unfortunately, it is difficult to measure actual bone strength so our medical system continues to measure bone density as a gauge of bone health. However, studies for osteoporosis prevention now must demonstrate fewer bone fractures, not just more dense bone.
This week I received a pharmacy flyer with a 2-page article on the dangers of alcohol consumption. While it is factually correct, I’m sure, and offers some good advice about avoiding drinking in excess, there is no discussion of study results showing benefits of light to moderate drinking over total abstinence. Only in a sidebar is there mention that “you may have heard that drinking a glass of red wine is good for your heart” and that this “may be true”. In actual fact, no study has found that red wine reduces heart disease more than any other type of alcohol; this has simply been proposed as a possible reason why the French (in France) have low rates of heart disease in spite of their high-fat diet.
As for the dangers of sunshine, we know that we need some sun exposure in order to make the vitamin D we need, but no studies have been done to determine how much is too much, and the safe amount likely varies widely, depending on the angle of the sun, skin characteristics and other factors. The “5 servings a day” recommendation for vegetables and fruit consumption came out of thin air, and alcohol “limits” vary widely, since there is no real science to support the limits. The benefit versus risk of statin cholesterol medications is being questioned in many areas of the world (as discussed in last week’s blog) and, well, Nutella really doesn’t add a lot of nutrition to a breakfast even if it does contain ground hazelnuts along with the sugar and yummy chocolate! But, that last one was just an advertisement, not a “study”, so not many of us were likely taken in by that…
I still remember how dejected a friend was years ago, telling us he couldn't have gravy on his Thanksgiving turkey because his triglycerides were elevated… and now we realize how inaccurate these measurements really were and how little effect triglycerides have on heart health. I think we all, especially health professionals, need to question how good the evidence is behind scientific “facts” that are presented to us. “Everything in moderation” may just be the best approach to life, especially when it comes to depriving yourself of what you enjoy most based on questionable reports.
If you are interested in reading further on this issue, I can suggest “Doctoring Data” by Scottish physician, Dr. Malcolm Kendrick.