A combination of poor performance and methodological limitations was found for the machine learning models.
While more and more prediction models for psychiatric disorders are being developed, particularly for pediatric patients, the results still do not show the models are ready for clinical use.
A team, led by Seena Fazel, MD, BSc (Hons), MBChB, FRCPsych, Department of Psychiatry, Oxford Health NHS Foundation Trust, University of Oxford, conducted a systematic review of new prediction models for child and adolescent mental health, while also examining their development and validation.
In recent years, several new prediction models have leveraged machine learning technology to forecast the risk of mental health disorders, including attention-deficit/hyperactivity disorder (ADHD), in children and adolescents. However, some methods might be of higher quality than others, making it important to establish a hierarchy of quality for clinical use.
In the study, the investigators searched 5 databases for published studies on developing or validating multivariable prediction models involving patients under 18 years between January 1, 2018 and February 18, 2021. They assessed reporting quality using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist.
The team also assessed methodological quality using items based on expert guidance and the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
Of the 100 studies identified, 41 involved a new prediction model, 48 focused on validating an existing model, and the remaining 11 studies involved both development and validation.
The majority of studies (n = 75) reported a model discrimination measure and 26 studies reported calibration.
For the 52 new prediction models, 12% (n = 6) were for suicidal outcomes, 35% (n = 18) produced a future diagnosis, and 10% (n = 5) centered on child maltreatment.
Other outcomes identified were violence, crime, and functional outcomes.
There were also 11 new models aimed specifically at high-risk populations.
Approximately 31% of development studies (n = 16) were sufficiently statistically powered. The proportion was lower for validation studies (n = 12; 25%).
For performance, discrimination for the new models, measured by the C-statistic, ranged from 0.57 for a tool predicting ADHD diagnosis in an external validation sample to 0.99 for a machine learning model predicting foster care permanency.
“Although some tools have recently been developed for child and adolescent mental health for prognosis and child maltreatment, none can be currently recommended for clinical practice due to a combination of methodological limitations and poor model performance,” the authors wrote. “New work needs to ensure sufficient sample sizes, representative samples, and testing of model calibration.”
Currently, the majority of mental health conditions are diagnosed and treated based on clinical interviews and questionnaires, as well as cognitive testing, which can give clinicians a better understanding of why an ADHD patient is behaving in a certain way.
However, in ADHD, cognitive tests do not capture the variety of symptoms and deficits commonly associated with the disorder, including impaired selective attention, poor working memory, altered time perception, difficulty sustaining attention, and impulsive behavior.
The study, “Prediction models for child and adolescent mental health: A systematic review of methodology and reporting in recent research,” was published online in JCPP Advances.