(Too) Common Research Mistakes

 

    Science is committed to the search for truth. The scientific method is built on principles of test and retest: remaining constantly open to being wrong and seeking accurate results, no matter the cost. Modern science has contributed more to the world than anyone could have imagined, and yet some experts argue that most published scientific findings are false.

    The most common reason for error in scientific studies is, of course, honest mistake. Unfortunately, in this era of competition and rewards for “exciting” results, there is enough temptation that many scientists consciously or subconsciously alter their results or let something “inconvenient” slide. There is too much reward for eye-catching results that sell journal subscriptions and too little for unexciting results that simply build our “human knowledge colossus.” The “Human Colossus” is a term coined by blogger Tim Urban to describe the collective knowledge of the human race as a gigantic brain, of which each individual is a single cell. As we build our Human Colossus, we find truth.

    Studies that contribute to the collective knowledge of the human race often don’t seem exciting on their own. And unfortunately, careers can be made or lost based on how exciting someone’s research results appear. Replicating studies (reproducibility is one of the cornerstones of good science) is also under-rewarded, and studies that find no relationship or difference between tested variables, or “negative results,” are often rejected from publication.

    What follows is a simple example of using poor statistics and nearly falling prey to bias. I coauthored a research project in college about blood sugar: whether the FDA-recommended daily doses of soluble fiber or exercise lowered a person’s blood sugar levels after a meal… we found nothing. Or, more accurately, we found that the amount of sugar we fed participants probably outweighed any effect that the soluble fiber or exercise may have had.

    Through this process, however, I learned several incredibly important lessons: that good research is difficult, that study flaws are sometimes only visible after huge amounts of time and effort have been spent, and that I myself am not above the temptation to misrepresent results when the situation presents itself. Our graphs, for example, initially seemed to show a significant effect of fiber and exercise on total blood sugar; the difference disappeared, however, when we added more subjects, included a control, and adjusted for starting blood sugar levels in a follow-up project. The adjusted graph is more accurate but less impressive (a rough sketch of how such an adjustment can erase an apparent effect follows the figure):

[Figure: blood sugar results, before and after adjusting for starting levels]

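    To make that concrete, here is a minimal sketch, in Python, of how a baseline imbalance between groups can masquerade as a treatment effect until it is adjusted for. Every number, group size, and variable name below is invented for illustration; this is not our study’s actual data or analysis.

# Hypothetical illustration: an apparent treatment effect that vanishes
# once post-meal blood sugar is adjusted for starting (baseline) levels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(42)
n = 20  # small, pilot-sized groups (assumed for illustration)

# Suppose the fiber group happened to start with lower fasting glucose.
baseline_fiber = rng.normal(85, 8, n)
baseline_control = rng.normal(95, 8, n)

# Post-meal glucose tracks baseline; we simulate NO true treatment effect.
post_fiber = baseline_fiber + rng.normal(40, 10, n)
post_control = baseline_control + rng.normal(40, 10, n)

# Naive comparison: can look "significant" purely because of the baseline gap.
t_stat, p_naive = stats.ttest_ind(post_fiber, post_control)
print(f"naive t-test p-value = {p_naive:.3f}")

# ANCOVA-style adjustment: regress post-meal glucose on treatment + baseline.
df = pd.DataFrame({
    "post": np.concatenate([post_fiber, post_control]),
    "baseline": np.concatenate([baseline_fiber, baseline_control]),
    "fiber": [1] * n + [0] * n,  # 1 = fiber group, 0 = control
})
model = smf.ols("post ~ fiber + baseline", data=df).fit()
print(f"adjusted p-value for fiber = {model.pvalues['fiber']:.3f}")

    Run repeatedly with different seeds, the naive comparison flags a “significant” difference far more often than the adjusted model does, even though no true effect was simulated; adding a control group and more subjects works against the same kind of illusion.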
    My research project did contribute infinitesimally to humanity’s knowledge, despite the “negative” result. I learned that the minimum American Heart Association-recommended daily dose of fiber or exercise might have no discernible moderating effect on blood sugar if someone has a high-sugar meal. This means that ingesting 30 grams of fiber, or walking for 30 minutes, may not help someone control their blood sugar if they are not also limiting their sugar intake. With further research, humanity could discover a dosage of fiber or exercise that would be effective, or more individuals could be convinced to eat less sugar. Theoretically, people could eventually live healthier lives because of science and, ever so slightly, because of me.

    If I had misrepresented my results, I might have convinced a few people to take fiber pills instead of exercising. I might have been sponsored by a dietary fiber company and advanced my own career, but I would have made the world a less healthy place. Valuing the pursuit of truth over the pursuit of results is always the better choice when you factor the rest of humanity into the equation.

    Aside from the temptation to misrepresent results and a reward system that limits publication of “negative results” and fails to reward replication, the modern practice of the scientific method has several other problems: outdated statistics, lack of funding, inability to control variables, scarce publishing opportunities, no requirement to publish method details, under-enforcement of requirements to publish raw data, high costs, low transparency, etc.

    Thankfully, a few organizations and individuals are committed to addressing some of these issues. Modern data-mining can help catch faulty statistics, organizations like The Center for Open Science in Charlottesville, Va., can improve transparency, and people like Dr. John Ioannidis, with his ground-breaking article arguing that most published research findings are false, can challenge the scientific community to refine its process. We all must be more skeptical. Without significant widespread reform, however, the scientific community risks losing the commitment to truth upon which it was built.