The unlucky number seven: A rather painful critique of my internship project


For my internship project I was investigating the effects of deep brain stimulation (DBS) as a treatment for Parkinson’s disease. This included improvements in motor function and quality of life, but also cognitive decline. I ended up dealing with a rather large and complex data set, which, after all the computed variables had been created, contained 233 variables for 281 participants. To make things more complex, the data had been combined from three different sources, which meant there were inconsistencies in coding and in the tests administered across the studies. I tried to be as stringent as possible with the initial data checking and handling and with the subsequent analyses. However, I was still left feeling unconfident about my findings. Unfortunately, I had good reason for this uneasy feeling: while checking, I found two rather large mistakes. Fortunately, I still had time to improve my project before the final version was handed in.

The Suspect P-Value

I cleaned up the final version of my data set and syntax and decided to rerun all the analyses to make sure I had consistent results. It was all going rather well until I moved on to the cognitive variables. In my write-up I had already found a wrongly reported P-value for the Mattis Dementia Rating Scale (MDRS). I had reported it as significant, when in fact the P-value was 0.017, non-significant for my alpha of 0.01. When I reran the analysis it turned out to be even worse: the P-value was actually 0.022. I knew I had double-checked the analysis, so I was rather baffled! I found the earlier version of the data set which gave me the original 0.017 P-value and began my search. The data seemed identical, until I checked my IQ covariate. I had missed a missing-value code of 777 (inability to complete) in the IQ covariate, the very covariate I had used in all my cognitive analyses! I reran all my analyses and changed my report. Most of my analyses were not greatly affected by this mistake; however, I did lose significance on one comparison, which went from P = .009 to P = .025!
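A screening step like the one that would have caught this can be sketched in a few lines. The sketch below is hypothetical, not my actual syntax (the analysis was run in statistical software, and the column names and the full list of sentinel codes here are assumptions): it counts sentinel missing-value codes such as 777 per column before any modelling, then recodes them to true missing values.

```python
import numpy as np
import pandas as pd

# Hypothetical sentinel codes for missingness (777 = inability to complete)
SENTINELS = [777, 888, 999]

def report_sentinels(df: pd.DataFrame, codes=SENTINELS) -> pd.Series:
    """Count sentinel occurrences per numeric column, so none slip through."""
    num = df.select_dtypes(include="number")
    counts = num.isin(codes).sum()
    return counts[counts > 0]

def recode_sentinels(df: pd.DataFrame, codes=SENTINELS) -> pd.DataFrame:
    """Replace sentinel codes with NaN in numeric columns."""
    out = df.copy()
    num_cols = out.select_dtypes(include="number").columns
    out[num_cols] = out[num_cols].replace(codes, np.nan)
    return out

# Toy data with a hypothetical IQ column containing one 777 code
data = pd.DataFrame({"IQ": [104, 98, 777, 112], "age": [61, 58, 63, 70]})
print(report_sentinels(data))          # flags 1 sentinel in IQ
clean = recode_sentinels(data)
print(clean["IQ"].mean())              # mean over the 3 valid scores only
```

Running the report before every analysis, rather than once at import, is what would have caught my mistake: the 777 only mattered once IQ entered the models as a covariate.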

The Questionable Predictor

I also had to question my inclusion of the MDRS as a significant baseline predictor of cognitive decline following deep brain stimulation. The MDRS was a very desirable predictor: it was a measure of global cognition already used by the neuropsychologists and doctors to set a cut-off point for undergoing DBS. It was originally significant, but once converted into a T-score to correct for age and education, it was pushed out of the model by years of education and IQ. As IQ and education can be seen to conceptually overlap, I had built one model excluding education. Once again the MDRS was significant and IQ was no longer in the model. I had kept this model as it was more parsimonious and practical, since the IQ measure would not be as readily available as the MDRS. However, I was feeling a little uncertain about my decision. Once I had rerun my analysis with the corrected IQ, I found that the predictors retained in the original model were now medication dose and education. I decided to retain this model, as doing so reflects more consistent and improved research practice.
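The conceptual overlap between IQ and education that drove my modelling decision can be quantified rather than eyeballed. A common diagnostic is the variance inflation factor (VIF): regress each covariate on the others and compute 1 / (1 − R²). The sketch below uses simulated data, not my actual cohort; the covariate names and the strength of the IQ–education relationship are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def vif(df: pd.DataFrame) -> pd.Series:
    """VIF per column: 1 / (1 - R^2) from regressing that column on all
    the others (with an intercept), via ordinary least squares."""
    X = df.to_numpy(dtype=float)
    n, k = X.shape
    out = {}
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out[df.columns[j]] = 1.0 / (1.0 - r2)
    return pd.Series(out)

# Simulated baseline covariates: IQ built to overlap with education,
# medication dose (LEDD) kept independent of both
rng = np.random.default_rng(0)
edu = rng.normal(12, 3, 200)
iq = 70 + 3 * edu + rng.normal(0, 4, 200)
ledd = rng.normal(600, 150, 200)
covs = pd.DataFrame({"education": edu, "IQ": iq, "LEDD": ledd})
print(vif(covs).round(1))   # education and IQ inflated, LEDD near 1
```

A VIF well above 1 for both IQ and education, with the medication-dose covariate near 1, is the numeric version of the overlap argument: the two predictors carry much of the same information, so which one survives model selection can hinge on small changes in the data, exactly as happened when I corrected the IQ coding.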

I feel my mistakes stemmed from a combination of long research hours with a large data set and prior expectations clouding my judgement. Overall, I learnt a lot from doing this critical review of my own work. It has made me very aware of my own fallibility, despite having good intentions. I am glad that I have managed to locate and fix these problems, both for this project and to improve my research practice for future projects.
