Why is it that time and time again we read about studies saying a particular vitamin is great for preventing this and that, or that a new drug could revolutionize the treatment of such and such, only to hear a few years later that those studies got it all wrong – that the vitamin might actually increase the risk of cancer, while the drug is causing side effects left and right?

Why are so many medical studies later contradicted by other studies – or even retracted altogether? And what can we as patients do about it?

Dr. John Ioannidis, an adjunct professor at Tufts University School of Medicine, once opined that most published research findings are false. You read that right. Ioannidis reviewed a number of archived medical journals and was struck by how many findings were refuted by later research. He found that many studies were conducted improperly, using poor study design and questionable data analysis.

Some studies become so bogged down in errors that they are later retracted. Just last week, for example, Science was compelled to retract a watershed paper that had suggested chronic fatigue syndrome might be caused by a virus called XMRV.

And the number of retractions is on the rise. Earlier this year, The Wall Street Journal calculated that the number of retractions of scientific studies has risen more than 15-fold since 2001. That's a lot of retractions.

There are a number of ways medical studies can go off the rails – countless ways, in fact. Here are just a few:

Confirmation bias

This is the tendency to give more attention to data that support our beliefs and to ignore data that contradict them. When researchers have spent their whole careers studying a certain topic, they can become biased: they might cherry-pick data and (consciously or not) set up their studies in ways that will produce results to back up what they have already come to believe.

A great example of how easy it can be to cherry-pick data came in a satirical study published last year in the Canadian Medical Association Journal. University of Calgary medical resident Ken Myers wrote a review article in which he contended that smoking can help marathoners run faster.

It sounds ridiculous, but it actually wasn't hard for him to gather evidence to back his hypothesis. He found a number of studies showing that smoking boosts lung volume and hemoglobin and helps with weight loss. Since all three are important to running performance, he was able to conclude that smoking helps with running.

Of course, the conclusion Myers drew is wrong; any of us knows that. But in other areas of research, it may not be as easy to tell when researchers have strayed from the mark.

(And if you're interested: the technical reason why smoking boosts lung volume and hemoglobin relates to problems with oxygen delivery in the lungs of smokers, not increased fitness.)

Not accounting for confounders

Confounders can pop up in all kinds of studies, but they are a particular risk in observational studies, in which researchers follow a group of people over a number of years to see how a particular treatment or behaviour affects them – for example, whether taking a daily multivitamin can reduce the risk of cancer.

When researchers get their data back at the end of the study, they try to eliminate any factors that might have influenced their findings; the usual ones are age, gender and smoking status. The problem is that it can be difficult to control for all factors – especially if there are unknown factors influencing the results.

A study that notices that coffee drinkers have much higher lung cancer rates might conclude that it's the coffee that's causing the cancer and miss the confounding factor that people who drink a lot of coffee also tend to smoke a lot of cigarettes.
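For readers who like to see how that plays out in numbers, here is a rough sketch – all of the figures are invented purely for illustration – of how a hidden factor like smoking can make coffee look harmful even when coffee is doing nothing at all:

```python
# A minimal, hypothetical simulation of confounding: coffee has no effect on
# cancer risk here, but because heavy coffee drinkers are more likely to
# smoke, a naive comparison makes coffee look dangerous. All numbers are
# made up for illustration only.
import random

random.seed(1)
n = 100_000
cancer_by_coffee = {True: [0, 0], False: [0, 0]}  # coffee drinker? -> [cases, people]

for _ in range(n):
    smoker = random.random() < 0.25                      # 25% of people smoke
    # smokers are far more likely to be heavy coffee drinkers (the confounder)
    coffee = random.random() < (0.7 if smoker else 0.3)
    # cancer risk depends only on smoking, not on coffee
    cancer = random.random() < (0.05 if smoker else 0.005)
    cancer_by_coffee[coffee][0] += cancer
    cancer_by_coffee[coffee][1] += 1

for coffee, (cases, people) in cancer_by_coffee.items():
    print(f"coffee drinker={coffee}: cancer rate = {cases / people:.3%}")
# The coffee drinkers show a much higher cancer rate even though coffee played
# no causal role; adjusting for smoking status would make the gap disappear.
```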

A real-life example of this is the hormone replacement therapy (HRT) debacle that came out of the Nurses' Health Study, an ongoing observational study that began in 1976. Researchers in that study noticed that nurses who used HRT after menopause had a much lower risk of heart disease. So doctors soon began prescribing HRT regularly to postmenopausal women.

But when researchers tested HRT through the Women's Health Initiative using randomized controlled trials – a more controlled, rigorous form of study than an observational study – they discovered that HRT can actually have the opposite effect, raising the risk of breast cancer, heart attacks and strokes.

Why did the first study produce such different results? It's not altogether clear, but one theory is that the kind of women who went to their doctors seeking HRT in the late '70s and early '80s were also the type who were health-conscious and took a number of lifestyle measures to reduce their heart disease risk. The researchers did not know to control for these factors, which led them to draw inaccurate conclusions.

Conflicts of interest

A study published a couple of years ago in the journal Cancer found that nearly one-third of the cancer research studies published in big-name journals disclosed a conflict of interest. In many cases, pharmaceutical companies and other industry sources had funded the studies, while others had an author who was also earning an income from "industry."

That in itself doesn't necessarily mean there was a problem – after all, the pharmaceutical and biotechnology industries fund a lot of important medical research, and if science relied solely on government and university-led studies, a lot of research might never happen. But the authors of the Cancer study also found that studies with reported conflicts of interest were more likely to report positive findings.

That led study author Dr. Reshma Jagsi, assistant professor of radiation oncology at the University of Michigan Medical School, to wonder whether these conflicts of interest were influencing results.

"A serious concern is individuals with conflicts of interest will either consciously or unconsciously be biased in their analyses," she said .

"As researchers, we have an obligation to treat the data objectively and in an unbiased fashion. There may be some relationships that compromise a researcher's ability to do that."

Publication bias

Publication bias refers to the practice of publishing only the studies that look favourably on a treatment and refusing to publish studies that either cast a negative light on it, or show it has no effect at all.

One of the best-known examples of this came when the makers of the antidepressant Paxil were accused of suppressing results from four studies showing that not only was the drug ineffective in teens, it might also increase their risk of suicide.

Dr. Erick Turner, a psychiatrist with the Portland VA Medical Center in Oregon, caused a stir in the psychiatry world a few years back when he released a study in the New England Journal of Medicine showing that the vast majority of clinical trials that found antidepressants to be ineffective were either never published or were presented as positive findings.

His team looked at 74 studies done on 12 widely prescribed antidepressants. Of the 36 studies with negative or questionable results for the drugs, only three were ever published. Meanwhile, of the 38 studies that produced positive results, all but one were published.
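For those who want to check the arithmetic, a rough calculation using only the counts cited above shows how different the published record looked from the full record (the percentages are approximate and purely illustrative):

```python
# Rough arithmetic based on the counts cited above: 74 trials in total,
# 38 with positive results (37 of them published) and 36 with negative or
# questionable results (3 of them published).
positive_trials, negative_trials = 38, 36
published_positive, published_negative = 37, 3

share_positive_overall = positive_trials / (positive_trials + negative_trials)
share_positive_published = published_positive / (published_positive + published_negative)

print(f"Positive share of all 74 trials:        {share_positive_overall:.0%}")   # roughly 51%
print(f"Positive share of the published trials: {share_positive_published:.0%}")  # roughly 92%
```

In other words, a doctor reading only the journals would see a body of evidence that looked overwhelmingly positive, even though barely half of the trials actually favoured the drugs.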

Turner's study was not able to conclude whether the antidepressant makers chose not to submit the negative studies to medical journals, or whether they did and their papers were rejected, or both. But the net result was that patients and their doctors were misled and given the impression that the drugs were more effective than the research had shown.

"Selective publication can lead doctors and patients to believe drugs are more effective than they really are, which can influence prescribing decisions," Turner said at the time.

While drug companies have to register their clinical trials with the FDA in advance, there is no obligation to report the results. Turner said that should change.

"Doctors and patients must have access to evidence that is complete and unbiased when they are weighing the risks and benefits of treatment," he said.

Rarely a final word

There are many reasons why even badly conducted studies get played up in the news: researchers who are hungry for tenure might overhype their study's conclusions, while their research institute might do the same in a quest for more donor funds. Medical journals profit from studies that suggest breakthroughs, and of course, media outlets like splashy headlines.

Finally, readers and patients share some of the blame too. We all want science to tell us how to prevent cancer, live longer and avoid disease – and we want clear-cut answers. We look to scientists to give us the final word.

The problem is, there is rarely a final word. Most studies come with long explanations from the authors on the weaknesses of their own research and on the questions future studies could answer. The research on any topic doesn't end with one paper – or even a multitude of papers; it's an ongoing process, and one that often involves contradictory findings, as sports medicine researcher and science blogger Travis Saunders suggested recently on the blog he co-authors, Obesity Panacea.

"That's the way science works. You look at slightly different populations, different measures, etc, and suddenly things change. Everyone knows that, and yet it's not the way that science is typically portrayed by newspapers and other news agencies," he wrote.

True eureka moments are few and far between in science. Every medical study should be viewed in the context of the other studies that have gone before it – and with a healthy degree of skepticism.