Natural language processing (NLP) and computer vision methods have the potential to transform the extraction of scoliosis data elements from text and imaging electronic health records (EHRs), a new cross-sectional study has shown.
The research, undertaken by A. Noelle Larson (Mayo Clinic, Rochester, USA) et al, won the John H. Moe Best Basic Research Poster award at the Scoliosis Research Society’s (SRS) 57th annual meeting (14–17 September 2022; Stockholm, Sweden).
Speaking to Spinal News International, Larson said: “This line of research can leverage AI algorithms to create scalable real-time registries to inform clinical practice.”
Larson et al hypothesised that, firstly, surgery classification by a scoliosis-specific NLP-enabled algorithm would correlate highly with manual chart review and, secondly, that a previously published model for Cobb angle prediction might not work well on their external dataset due to the limitations of its training data.
According to the researchers, “there is significant interest in understanding the effectiveness of different implant technologies in scoliosis. However, it is not possible to distinguish surgical details using procedure codes alone or structured EHR data. Cobb angle guides treatment planning, but manual measurement is time-consuming and prone to observer variation.
“Registries rely on labour-intensive manual chart review and radiographic measurements by trained individuals. Secondary use of EHRs and application of state-of-the-art technologies has potential to establish sustainable registries that can provide real-time information on surgical outcomes,” they add.
For the NLP algorithm, surgery-related keywords were compiled by a domain expert and refined further in the training phase. The procedure sections of 831 operative notes, along with exclusion criteria, were used to improve accuracy. Performance on the test set (n=327) was compared with manual chart review.
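The poster does not disclose the actual lexicon or rules, but a keyword-plus-exclusion classifier of this kind can be sketched as follows. The procedure labels, keyword phrases, and exclusion phrases below are illustrative placeholders, not the study’s lexicon:

```python
# Minimal sketch of keyword-based surgery classification applied to the
# procedure section of an operative note. All keywords and exclusions
# here are hypothetical examples, not the study's actual rules.

# Hypothetical surgery-related keyword phrases, grouped by label
KEYWORDS = {
    "fusion": ["posterior spinal fusion", "instrumented fusion", "arthrodesis"],
    "growing_rod": ["growing rod", "rod lengthening"],
    "tether": ["vertebral body tethering", "anterior tether"],
}

# Hypothetical exclusion phrases that veto a match outright
EXCLUSIONS = ["hardware removal only", "wound washout"]

def classify_procedure(note_text):
    """Return the first matching procedure label, or None if excluded
    or no keyword matches."""
    text = note_text.lower()
    if any(phrase in text for phrase in EXCLUSIONS):
        return None
    for label, phrases in KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return label
    return None

note = "Procedure: T4-L1 posterior spinal fusion with instrumentation."
print(classify_procedure(note))  # fusion
```

In the study, labels produced this way were then validated against manual chart review on the held-out test notes.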
For Cobb angle prediction, the researchers used YOLOv5 to extract the region of interest (RoI) from 2,278 anteroposterior X-rays, then predicted vertebral landmarks using a previously published method. That model was trained on the 2019 MICCAI challenge dataset and tested on the researchers’ scoliosis dataset.
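Once vertebral endplate landmarks are predicted, the Cobb angle itself is a simple geometric calculation: the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra. A minimal sketch, with made-up landmark coordinates standing in for model output:

```python
# Sketch: deriving a Cobb angle from predicted endplate landmarks.
# The coordinate values below are invented for illustration; a real
# pipeline would take them from the landmark-detection model.
import math

def endplate_tilt(p_left, p_right):
    """Tilt of an endplate line in degrees, relative to horizontal."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

def cobb_angle(upper_endplate, lower_endplate):
    """Angle between the two most-tilted endplates (degrees)."""
    return abs(endplate_tilt(*upper_endplate) - endplate_tilt(*lower_endplate))

# Illustrative (left, right) landmark pairs in image coordinates
upper = ((100.0, 200.0), (180.0, 185.0))  # superior endplate, upper end vertebra
lower = ((95.0, 400.0), (185.0, 420.0))   # inferior endplate, lower end vertebra
print(round(cobb_angle(upper, lower), 1))
```

This dependence on accurate landmarks is why the model’s weakness at T1 and the thoracolumbar junction matters: a missed end vertebra shifts which endplates are compared, and therefore the measured angle.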
The NLP algorithm achieved an F1 score greater than 0.91, with micro- and macro-averaged F1 scores of 0.98 and 0.94, respectively. The YOLOv5 model performed well on RoI extraction, but the previously published landmark model performed poorly on X-rays with severe scoliosis. It also typically missed T1 and the vertebrae at the thoracolumbar junction.
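The gap between the two averages is informative: micro-averaged F1 pools every prediction (so common surgery types dominate, and for single-label classification it equals overall accuracy), while macro-averaged F1 weights each class equally (so rare procedure types count as much as common ones). A small sketch with toy labels, purely for illustration:

```python
# Micro- vs macro-averaged F1 on a toy multi-class task. The labels
# below are invented examples, not data from the study.
from collections import Counter

def f1_scores(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, wrongly
            fn[t] += 1  # missed t
    # Macro: mean of per-class F1 scores (rare classes weigh equally)
    per_class = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_class) / len(labels)
    # Micro: pool all decisions; for single-label classification this
    # reduces to overall accuracy
    micro = sum(tp.values()) / len(y_true)
    return micro, macro

y_true = ["fusion", "fusion", "fusion", "tether", "rod"]
y_pred = ["fusion", "fusion", "rod", "tether", "rod"]
micro, macro = f1_scores(y_true, y_pred)
print(round(micro, 2), round(macro, 2))  # 0.8 0.82
```

On the study’s test set the micro average (0.98) exceeding the macro average (0.94) suggests the classifier was slightly weaker on less common surgery categories.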