The use of large registry-based datasets in spinal research has exploded in recent years. Although such research is often criticised for potentially limited clinical relevance, little work has examined its academic impact.
A team from Harvard Medical School, Boston, USA, aimed to empirically evaluate research using the US National Surgical Quality Improvement Program (NSQIP) as a dataset.
The systematic review, published in The Spine Journal, identified 1,525 spine- and orthopaedics-related articles using the NSQIP through a search of PubMed, Embase, Web of Science and Scopus. The articles were published from January 2007 to July 2015.
One hundred and fourteen articles were included in the review; each was catalogued according to the impact factor of the journal in which it was published and the number of times it had been cited.
The average impact factor of journals included in the study was 2.75, with a range of 0 to 5.28. “The range of citations was also quite broad, with the lower 37% of publications overall having zero citations,” the authors wrote in the systematic review. “Only a small percentage (23%) of publications [with a spine surgical focus] were actually cited more than nine times in the literature.”
Upon further analysis, the researchers found that 85% of the articles evaluated were published in journals with a relatively low impact. Those with the highest number of citations—which the study authors used as an indicator of quality—appeared in the journals with the highest impact. The researchers therefore surmised that “although quality spine surgical research…is clearly achievable with the NSQIP registry, most of the research carried out using this dataset appears to have had minimal academic impact.” The authors argued that their findings “are likely translatable to work conducted with similar datasets,” given the extent to which the NSQIP has been used by researchers from across the USA.
The authors note a number of limitations of their work, including the difficulty of objectively and comprehensively measuring the impact of a study.
Noting that large datasets can indeed be useful for research, the authors ultimately caution those engaging in such investigation. “The findings of our study demonstrate that there is a need for more discriminant use of the data,” they write. In conclusion, the authors advocate the importance of working to maximise engagement with pieces of research by following available guidelines, advising that researchers “use best practices when using ‘Big Data’ along the lines of the NSQIP in [orthopaedic] research.”