Objective
To evaluate the feasibility of using a convolutional neural network (CNN) with word embedding to identify the type and severity of patient safety incident reports.
Materials and Methods
A CNN with word embedding was applied to identify 10 incident types and 4 severity levels. Model training and validation used data sets (n_type = 2860, n_severity = 1160) collected from a statewide incident reporting system. Generalizability was evaluated using an independent hospital-level reporting system. CNN architectures were examined by varying layer size and hyperparameters. Performance was evaluated by F score, precision, and recall, and compared with binary support vector machine (SVM) ensembles on 3 testing data sets (type/severity: n_benchmark = 286/116, n_original = 444/4837, n_independent = 6000/5950).
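As a minimal illustration of the approach (a sketch only, not the study's tuned configuration: the vocabulary size, sequence length, filter counts, and layer composition below are placeholder assumptions), a word-embedding text CNN for 10-class incident-type classification could be assembled as follows.

```python
# Minimal sketch of a text CNN with a word-embedding layer.
# All sizes are placeholders, not the study's tuned values.
import tensorflow as tf

VOCAB_SIZE = 20_000   # assumed vocabulary size after tokenization
SEQ_LEN = 400         # assumed maximum report length in tokens
NUM_TYPES = 10        # incident types, as in the study

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,), dtype="int32"),                # token-id sequences
    tf.keras.layers.Embedding(VOCAB_SIZE, 100),                     # word embeddings
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),  # n-gram-style filters
    tf.keras.layers.GlobalMaxPooling1D(),                           # strongest response per filter
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_TYPES, activation="softmax"),         # one output per incident type
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A severity classifier would differ only in the size of the output layer (4 classes).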
Results
A CNN with 6 layers was the most effective architecture, outperforming SVMs and generalizing better when identifying incidents by type and severity. The CNN achieved high F scores (>85%) across all test data sets when identifying common incident types, including falls, medications, pressure injury, and aggression. When identifying common severity levels (medium/low), the CNN outperformed SVMs, improving F scores by 11.9%–45.1% across all 3 test data sets.
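For reference, the F score reported here is assumed to be the balanced F1 measure, the harmonic mean of precision and recall:

```latex
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```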
Discussion
Automated identification of incident reports using machine learning is challenging because of a lack of large labelled training data sets and the unbalanced distribution of incident classes. The standard classification strategy is to build multiple binary classifiers and pool their predictions. CNNs can extract hierarchical features and assist in addressing class imbalance, which may explain their success in identifying incident report types.
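As an illustration of that binary-ensemble strategy (a sketch only, assuming TF-IDF n-gram features and one-vs-rest pooling, neither of which is confirmed as the study's exact baseline), a multi-class SVM ensemble can be built from per-class binary classifiers as follows.

```python
# Sketch of the "multiple binary classifiers + pooled predictions" strategy:
# one binary SVM per incident type, combined one-vs-rest.
# Feature representation and example reports are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reports = [
    "Patient found on floor beside bed after unwitnessed fall.",
    "Incorrect medication dose administered during evening round.",
    "Patient verbally aggressive towards nursing staff at handover.",
]
incident_types = ["fall", "medication", "aggression"]

svm_ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # bag-of-n-grams features (assumed)
    OneVsRestClassifier(LinearSVC()),      # one binary SVM per class, predictions pooled
)
svm_ensemble.fit(reports, incident_types)
print(svm_ensemble.predict(["Resident slipped in the bathroom and hit their head."]))
```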
Conclusion
A CNN with word embedding was effective in identifying incidents by type and severity, providing better generalizability than SVMs.