Abstract
Study design
Literature review and survey.
Objectives
To provide an overview of existing computerized International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) algorithms and to evaluate the use of the current algorithms in research and clinical care.
Setting
Not applicable.
Methods
A literature review was conducted according to three organizing concepts for the evaluation of Health Information Products: reach, usefulness, and use.
Results
Computerized ISNCSCI algorithms have existed for many years, but many were developed and used internally for specific projects or were not maintained. Today the international SCI community has free access to algorithms from the European Multicenter Study about Spinal Cord Injury (EMSCI) and the Praxis Spinal Cord Institute. Both algorithms have been validated in large datasets and are used in different SCI registries for quality control and education purposes. The use of the Praxis Institute algorithm by clinicians was highlighted through the Praxis User Survey (N = 76), which included participants from 27 countries. The survey found that over half of the participants using the algorithm (N = 69) did so on a regular basis (51%), with 54% having incorporated it into their regular workflow.
Conclusions
Validated computerized ISNCSCI classification tools have evolved substantially and support education, clinical documentation, communication between clinicians and their patients, and ISNCSCI data quality around the world. They are not intended to replace well-trained clinicians, but they allow reclassification of ISNCSCI datasets with updated versions of the ISNCSCI and support rapid classification of large datasets.
Introduction
The International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam is the gold standard assessment used to determine the level and severity of neurological injury after spinal cord injury (SCI). Originally developed in 1982, the ISNCSCI is defined by the International Standards Committee of the American Spinal Cord Injury Association (ASIA), and continues to undergo regular revisions. Now in its eighth edition [1], the ISNCSCI represents an important tool for both clinical care and research [2, 3].
There are two components to obtaining an accurate and reliable ISNCSCI exam: the first is performing the bedside examination to obtain motor, sensory and rectal exam scores; the second is using those scores to classify the SCI according to the ISNCSCI classification rules [4]. These rules are used to determine total sensory and upper and lower extremity motor scores; sensory and motor levels as well as a single neurological level of injury; the ASIA Impairment Scale (AIS) grade together with a broad categorization of the severity of injury (complete/incomplete); and, if applicable, the zones of partial preservation (ZPPs). It has been shown that training can improve the performance of both the bedside exam and the classification [5, 6].
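To illustrate the kind of rule these algorithms encode, the determination of a sensory level can be sketched in a few lines. This is a deliberately simplified illustration only, not the validated EMSCI or Praxis implementation: it ignores not testable (NT) scores and the many special cases the published algorithms handle.

```python
# Dermatomes in rostral-to-caudal order as scored on the ISNCSCI worksheet.
DERMATOMES = ["C2", "C3", "C4", "C5", "C6", "C7", "C8",
              "T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8",
              "T9", "T10", "T11", "T12",
              "L1", "L2", "L3", "L4", "L5",
              "S1", "S2", "S3", "S4-5"]

def sensory_level(light_touch, pin_prick):
    """Sensory level for one body side (simplified sketch).

    light_touch / pin_prick map dermatome name -> score (0, 1 or 2).
    The sensory level is the most caudal dermatome with normal (2/2)
    light touch and pin prick, with all rostral dermatomes also normal.
    """
    level = "C1"                      # by convention when even C2 is impaired
    for derm in DERMATOMES:
        if light_touch[derm] == 2 and pin_prick[derm] == 2:
            level = derm              # normal so far; keep descending
        else:
            return level              # first abnormal dermatome ends the scan
    return "INT"                      # sensation intact through S4-5
```

The real algorithms apply analogous, but considerably more involved, logic for motor levels, the neurological level of injury, AIS grade and ZPPs.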
Despite training, error-free classification remains an issue and has been a primary driver for the development of computerized algorithms that perform the classification using a standardized set of ISNCSCI rules [7,8,9,10,11,12]; many such algorithms have been developed over the last two decades. The purpose of this paper is to provide an overview of existing computerized ISNCSCI algorithms, evaluate the current algorithms available for use in research and clinical care, and provide recommendations for future directions.
Methods
Literature overview of computerized ISNCSCI algorithms
The first computerized ISNCSCI algorithm, published by Wang et al. [10], was presented at the joint meeting of ASIA and the International Spinal Cord Society (ISCoS) in 2002, and a total of seven have been presented and/or published as of 2021 (see Fig. 1). Most of these computerized ISNCSCI algorithms were developed for specific research projects, and many [7, 10,11,12,13] have been presented or published without any known or published follow-up. Reasons for the original development of these algorithms include: SCI clinical research data quality control [8, 9, 11, 12]; improving and standardizing clinical ISNCSCI use [7, 13]; reducing the time required for classification and documentation of the ISNCSCI exam, both for individual exams [7] and for large datasets [8]; and supporting education [7,8,9].
There are two ISNCSCI algorithms that are publicly available and updated to reflect the 2011 or 2019 versions of the Standards. The European Multicenter Study about Spinal Cord Injury (EMSCI) network has included an ISNCSCI classification algorithm since 2003 [14]. The algorithm has been validated using a dataset (N = 5542) from the EMSCI network (Table 1) [8]. The second is the Praxis (formerly known as the Rick Hansen Institute) ISNCSCI algorithm, developed as part of the Rick Hansen SCI Registry (RHSCIR) database [15] in 2004. It was redeveloped in collaboration with ISCoS and a group of international experts in 2012 and was made publicly available at the ISCoS and the Academy of Spinal Cord Injury Professionals (ASCIP) meetings in 2012 [16, 17]. The 2012 version was validated using input from the group of international experts as well as the SCI community, along with a dataset of 2106 ISNCSCI cases from the RHSCIR [9].
Currently, the EMSCI algorithm classifies using the 2011 version of the ISNCSCI and the Praxis algorithm classifies according to the 2019 version. Made publicly available via web application interfaces (Fig. 2A, C) in 2011 (EMSCI) and 2012 (Praxis), these computer algorithms share many of the same features [18, 19]. Developed to support education and quality control for the EMSCI and RHSCIR databases, both require entry of the clinically determined sensory and motor scores and the anorectal exam results, using automated logic to determine the resulting classification variables. Both support the classification of cases with not testable (NT) scores, where manual classification can be challenging. Table 1 provides an overview of these computer algorithms, including both similarities and differences.
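The automated logic shared by the two tools can be pictured with a deliberately simplified sketch of the AIS grading step. The function name and inputs below are assumptions for illustration; the published algorithms additionally handle NT scores, ZPPs, and further criteria such as the extent of motor function below the motor level when distinguishing AIS B from C.

```python
def ais_grade(sacral_sparing, motor_incomplete, key_muscle_grades_below_nli,
              exam_normal):
    """Simplified AIS grading (illustrative sketch only).

    sacral_sparing: any sensory or motor function preserved at S4-5
    motor_incomplete: voluntary anal contraction, or motor function
        preserved below the neurological level of injury (NLI)
    key_muscle_grades_below_nli: grades (0-5) of key muscles below the NLI
    exam_normal: sensation and strength graded as normal throughout
    """
    if exam_normal:
        return "E"        # normal sensory and motor function
    if not sacral_sparing:
        return "A"        # complete injury: no sacral sparing
    if not motor_incomplete:
        return "B"        # sensory incomplete only
    # Motor incomplete: AIS D if at least half of the key muscle functions
    # below the NLI have a muscle grade of 3 or greater, otherwise AIS C.
    strong = sum(1 for g in key_muscle_grades_below_nli if g >= 3)
    return "D" if 2 * strong >= len(key_muscle_grades_below_nli) else "C"
```

Encoding the rules this explicitly is what makes the two implementations comparable case by case, which is how the rare diverging classifications discussed below come to light.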
Since their initial development, these two computer algorithms have both undergone improvements and continue to be updated to improve usability and outputs. Originally meant for demonstration purposes, the EMSCI ISNCSCI calculator has incorporated data export interfaces [20], multiple application programming interfaces [20], and extended visualizations [21] (Fig. 2B). The initial Praxis ISNCSCI algorithm used an exclamation mark to enable classification of cases in which a non-SCI condition caused motor or sensory impairment above the neurological level of injury; this has since been updated to reflect asterisk use as outlined in the eighth edition [22]. Additionally, it provides the algorithm source code for both V1.0 (seventh ISNCSCI edition) and V2.0 (eighth ISNCSCI edition) in an open source format to facilitate integration into other applications. While in the majority of cases the results of both algorithms match, there are unique cases in which the algorithms produce different classification results. These rare cases have in common that the sensory level falls within a segment with testable key muscles (C5-T1 or L2-S1), the key muscle functions of the respective extremity are all normal, and a region of normal sensory function is found caudal to the clinically testable myotomes of the motor-intact extremity (Fig. 2A, C) [23]. Cases such as these, in which the two algorithms reach differing classification results, help to identify parts of the ISNCSCI where further clarification is needed. They were therefore shared with ASIA’s International Standards Committee for discussion and to inform future revisions or updates of the Standards.
Evaluation of the EMSCI and Praxis ISNCSCI algorithms in research and care
To better understand the reach and impact of these publicly available algorithms in both research and care, a review was performed using Sullivan, Strachan and Timmons (2007) three organizing concepts for evaluation of Health Information Products: reach, usefulness, and use [24].
Reach
(defined as the breadth and saturation of product dissemination), incorporating distribution, both direct and indirect (e.g. web application use), and referrals by other projects [24], was determined in April 2021 by the number and type of reported users (e.g. from citations and requests for source code use); the number of languages in which the web application is available; and citation rates for the validation papers of the two algorithms [8, 9] in the peer-reviewed academic literature, obtained from Google Scholar on April 19, 2021. For the Praxis algorithm, distribution was also measured through web application analytics between August 1, 2012 and February 28, 2021.
Usefulness
(defined as the quality of information products and services that are appropriate, applicable and practical) can be largely determined by user satisfaction and the perceived quality of a product [24]. Perceived quality was evaluated by reviewing the algorithm validations and the existing literature citing either validation paper for quality-related reports. User satisfaction was evaluated using 14 questions on use and usefulness from a Praxis ISNCSCI algorithm User Survey, which was open between 2016 and 2017. Questions covered the users’ profession, method and frequency of use, what the algorithm was being used for, challenges during use, and specific impacts of use.
Use
(defined as what is done with knowledge gained from the information product or service) incorporates both the amount and the context of use [24]. Use was measured by reviewing reported uses both in literature citing these algorithms (April 2021) and in ongoing clinical studies, Praxis User Survey results for questions about frequency and context of use, and algorithm use to date by the ASIA International Standards Committee.
Results
Reach
EMSCI and Praxis computerized ISNCSCI algorithms have been integrated into multiple platforms for users. Both are available through public web applications and are accessed internationally. Usage statistics are not available for the EMSCI ISNCSCI Calculator web application due to European General Data Protection Regulations; it is available in three languages (see Table 1) [25]. The EMSCI algorithm validation publication (2012) has been cited 38 times (Google Scholar, accessed 19 April 2021): 21 citations related to clinical research, six to the ISNCSCI itself (e.g. challenging cases, training), four to clinical practice or SCI registries, two to ISNCSCI algorithms, one that referenced the algorithm to show the uncertainty of early ISNCSCI exams, and four that were either not available in English or did not describe ISNCSCI algorithm use.
Between August 1, 2012 and February 28, 2021, the Praxis ISNCSCI algorithm web application was accessed 207,994 times by 114,323 users in 175 countries (web application Google statistics accessed April 20, 2021). It is available in two languages (see Table 1). The freely available open source code has been downloaded 2174 times. The Praxis algorithm validation publication has been cited 32 times (Google Scholar, accessed 19 April 2021): 20 citations relating to clinical research, nine to clinical practice or SCI registries/harmonized datasets, one to ISNCSCI algorithms, and two that did not describe ISNCSCI algorithm use.
Usefulness
In terms of the perceived quality, both EMSCI and Praxis algorithms have been validated for determining ISNCSCI classification in a variety of real cases including those with not testable values (EMSCI N = 5542 exams from EMSCI database; Praxis N = 2106 exams from RHSCIR) [8, 9]. The EMSCI algorithm has also been found to reduce the time required for classification and documentation of the ISNCSCI exam, both in individual exams as well as large datasets [8]. Literature citing the algorithms reports they improve accuracy by reducing clinician determined classification errors [14, 26, 27]. The Praxis algorithm is referenced as a valuable tool to be included in the standardization of data for clinical use and research in SCI, and the use of an ISNCSCI algorithm is also recommended to characterize natural recovery after SCI [28, 29]. In addition, Dvorak et al. propose using these algorithms to help improve the accuracy of neurological assessments required for informing care and research [30].
User satisfaction, captured in the Praxis User Survey, showed that the majority of participants who used the algorithm (92%, 59/64; N = 76, 5 missing, 7 had not used it) felt it was very useful to their work, with the two most highly valued functions being automated classification according to the most recent ISNCSCI rules and the ability to save an exam as a PDF file (Fig. 3A). Furthermore, the algorithm increased awareness and use of the ISNCSCI exam, enabled participants to feel confident in classifying an ISNCSCI assessment, and provided them with support for questions about conducting and classifying their assessments (Fig. 3B).
Use
The primary algorithm use reported in publications for both the EMSCI and Praxis algorithms was to ensure ISNCSCI data accuracy in clinical research, including both clinical trials and observational research using SCI registries. The EMSCI algorithm has also been incorporated into the EMSCI database (Table 1). The Praxis algorithm has been integrated into the RHSCIR, Australian Spinal Cord Registry, New Zealand Spinal Cord Injury Registry, Dutch National SCI Data Set, and Model Systems Database [31]. Additionally, the EMSCI algorithm was used as a screening tool for the Nogo Inhibition in SCI clinical trial (NISCI) [32], and the ongoing Canadian-American Spinal Cord Perfusion Pressure and Biomarker Study reports using the Praxis algorithm to confirm classification accuracy (Reichl, personal communication, Nov. 2020) [33].
Additional uses reported were integration into electronic medical records (EMRs) and education. The Praxis algorithm has been integrated into EMRs in Denmark (a project to incorporate the International SCI datasets into the EPIC (Verona, Wisconsin, USA) EMR), Finland, Mexico, Korea, and the USA [34]. The EMSCI algorithm has been integrated into the EMRs of four European SCI centers and has been used to evaluate clinical classification skills and the impact of clinical training [6, 14].
How clinicians were using the algorithm was highlighted through the Praxis User Survey (N = 76), which included participants from 27 countries. Most were clinicians (69/71, 5 missing), with the majority (78%, 56/72, 4 missing) working in a hospital setting. The survey found that over half of the participants using the algorithm (N = 69) did so on a regular basis (51%, Fig. 3C), with 54% (34/63) having incorporated it into their regular workflow. The most common uses were to confirm the classification after the assessment had been completed and to educate others (Fig. 3D). Examples from the comments included “using the algorithm to fill the form, check final assessment and include it in patients’ documentation” and “I always have them (students or new staff) calculate the score on their own and then use it as a double check so then we can discuss why the difference”. Other comments mentioned “providing a copy to the patient to track progress over time” or using it to “motivate for continued physiotherapy rehabilitation”. Of those who had used the algorithm, one-third (33%, 22/66) identified having no challenges with the ISNCSCI algorithm, while others reported challenges with internet access (17%, 11/66) and the inability to use it on their smartphones (17%, 11/66).
These algorithms have also been used to inform ASIA’s International Standards Committee, where areas identified during algorithm development that would benefit from additional clarification were brought forward for discussion. This has led to the addition of standardized levels for documentation of non-key muscles to the ISNCSCI worksheet and the eighth edition updates on how to classify non-SCI conditions, with other areas still under consideration by the committee. When combined with large clinical datasets, the use of ISNCSCI algorithms allows the ASIA International Standards Committee to evaluate the impact of changes to the ISNCSCI classification rules and make evidence-informed decisions. One example of this is the use of the EMSCI ISNCSCI calculator in combination with the EMSCI dataset to inform the updated 2019 ZPP rule change [35].
Discussion
Computerized ISNCSCI algorithms have existed for many years, but many were developed and used internally for specific projects or were not maintained [7, 10,11,12,13]. Today the international SCI community has free access to the updated online versions of the EMSCI and Praxis algorithms.
A key reason why these two algorithms are broadly used by the SCI community is their support by, and integration into, large research networks, which, in contrast to other algorithms developed, ensures the long-term provision of updated and accurate tools by their developers. Another reason is thorough initial validation, the prerequisite for high classification accuracy. This step requires a sufficiently large dataset so that many types of cases can be considered both by the algorithm and by SCI/ISNCSCI experts, who clarify how the ISNCSCI rules are to be interpreted correctly. Registries like EMSCI and Praxis have broad inclusion criteria, which ensures that typical ISNCSCI cases, including cases with classification challenges such as not testable scores, are used for development and validation with human experts. After achieving this milestone, the public interfaces must be developed and maintained, which requires further long-term resources. Finally, the adoption of new ISNCSCI revisions requires substantial resources, which ongoing registries are more likely to provide than specific research projects.
The EMSCI and Praxis algorithms have different interfaces, features, and levels of integration ability for other projects including databases and EMRs and have been and will continue to be developed independently.
A key advantage of independent development is the identification of cases in which the two algorithms arrive at different classification results (Fig. 2A, C). Both teams collaborate fruitfully at the scientific level, e.g. to identify a problem in the motor level definition to be addressed in future ISNCSCI revisions [23]. This helps to inform ASIA’s International Standards Committee about the potential need for clarification or correction of certain aspects of the ISNCSCI.
Osunronbi recommends that, “Utilizing ISNCSCI calculators can reduce classification errors and may help clinicians with simple but time-consuming tasks … clinicians should not rely exclusively on the ISNCSCI calculators, as human experts may be better than computational algorithms at dealing with complex cases of ISNCSCI classifications such as the presence of non-SCI conditions, and multi-level SCI”, and indeed this is a limitation of computerized ISNCSCI classification algorithms [27]. Although these algorithms can reduce classification errors, they can only be as accurate as the bedside exam scores entered, and they cannot provide accurate classification in cases where complex clinical reasoning is required. ASIA’s International Standards Committee has recently emphasized the necessity of well-trained clinical assessors to ensure correct classifications [36]. Both web applications clearly outline this limitation, recommending that classification still be performed or reviewed by a skilled examiner. Nevertheless, the algorithms continue to improve in the types of cases they are able to classify, with updates reflecting the changes introduced in the eighth edition of the ISNCSCI, and they facilitate reclassification of exams using the updated Standards. The two algorithms share many similarities and have been broadly adopted internationally in both clinical care and research.
The first metric, reach, identified a broad number and range of algorithm users, with many accessing the algorithms indirectly through the web application interfaces. This reflects the challenges many have in performing the classification component of the ISNCSCI exam. Clinicians require a simple, easy-to-access tool to support this skill, and researchers require a tool to flag exams that may have been erroneously classified. This is further supported by Armstrong’s evaluation of ISNCSCI worksheet classification by trained clinicians in three multicenter randomized controlled trials, which concluded that “continued training and a computerized algorithm are essential to ensure accurate scoring, scaling and classification of the ISNCSCI and confidence in clinical trials” [26].
For the second metric, usefulness, a key consideration is the validation of the algorithms themselves in improving the accuracy of clinical classification. Multiple studies that have used these validated ISNCSCI algorithms to evaluate assessor accuracy have shown significant error rates in manual classification [14, 26, 27]. Armstrong reported one or more errors on 74.5% of worksheets across three clinical trials, with errors most often involving incorrect motor levels (30.1%), sensory levels (12.4%), ZPPs (24.0%) and AIS grades (8.3%) [26]. Schuld et al. reported on a retrospective computerized reclassification of 420 manually classified ISNCSCI exams and found the lowest agreement in motor levels (62%), motor ZPPs (80.8%) and AIS grades (83.4%), with AIS B most often misinterpreted as AIS C and vice versa (AIS B as C: 29.4%; AIS C as B: 38.6%) [14]. In a neurosurgical unit where senior clinicians provide formalized but not standardized ISNCSCI orientation training to junior doctors, Osunronbi found an error rate of 17.7% (N = 249) among senior clinicians, which may have led to the higher error rate among the more junior clinicians they trained (30.2%, N = 119) [27]. Though this is not the ideal ISNCSCI training structure, it accurately reflects the real-world scenario at many hospitals. These studies suggest that nonexperts should receive proper training before using the ISNCSCI in clinical practice, but they also highlight the usefulness of validated computer-based ISNCSCI algorithms as an additional tool to improve classification accuracy, even for trained clinicians.
Perceived usefulness, as reported by algorithm users, reflects that the ISNCSCI algorithm significantly increased their awareness and use of the ISNCSCI, improved their understanding of the classification rules and their ability to assess and classify exams, and increased their perceived confidence in classifying. Confidence is one of the most important personal factors influencing clinical decision making and successful assessment [37].
The final metric, use, reflects implementation of the algorithms, and three themes emerged. The first theme, use for education, is shown by the Praxis User Survey, which demonstrated that the Praxis ISNCSCI algorithm is used to learn the ISNCSCI classification rules and to educate others. Due to the heterogeneity and complexity of SCI, the ISNCSCI exam is complex, and both theoretical and hands-on training are required to become competent. ASIA provides many tools to support training (the International Standards Training e-Learning Program (INSTeP), the ISNCSCI booklet, and motor/sensory exam guides), but none of these tools provides real-time, exam-specific feedback on classification or support for questions. In a review of trainee perceptions of medical training technologies, web-based learning was perceived as most valuable when associated with real-time feedback, a simple interface, and extended time for completion, with e-learning interventions perceived as lacking interactivity viewed less favorably [38]. This aligns with the features rated as valuable by respondents (the ability to ask questions about a classification they do not understand and access to support for conducting and classifying an ISNCSCI assessment) and represents an area for potential enhancement by making the computational decision process more transparent. Algorithm-supported education, in combination with hands-on training and the existing tools provided by ASIA, comprises a comprehensive training package.
The second theme, the need for algorithms to ensure data quality, is evidenced by the extensive use of these algorithms both through the publicly available web applications and through integration into other registries, databases, clinical trials, and EMRs. Maintaining a high level of quality of ISNCSCI examinations is essential in clinical trials, where the classification is often used as inclusion/exclusion criteria, to stratify groups, and as a primary outcome. It is also of utmost importance within networks like EMSCI and Praxis. The use of a standardized computer program to accurately classify ISNCSCI datasets gives clinical trials an additional data quality check, where discrepancies between clinical classification and computer-calculated classification can be verified with study sites. It also allows networks like EMSCI and Praxis to ensure high data quality and to provide education on classification to their network sites. The differences in the types of use reported in the scientific literature versus the Praxis User Survey may relate to the fact that the former is probably biased toward a researcher perspective, while participants in the latter were mainly clinicians.
Interestingly, the third theme was the variety of unintended uses found. These included informing the ASIA International Standards Committee, supporting clinical documentation, conducting bedside exams, and using the resulting worksheet to improve patient self-tracking and motivation. Given the wide variety of unintended uses, future research may be warranted to further explore and engage patients and clinicians to determine their needs and the value of additional features as well as actual demand by these users.
There are several limitations associated with this work which must be considered. Metrics for the evaluation were based on citations in Google Scholar, which relies on authors including the citation; there may be other studies that used these algorithms without referencing them, resulting in under-reporting of use. No standardized, comprehensive evaluation of both algorithms is available, so some results are generalized. The Praxis algorithm User Survey was conducted on a convenience sample and was posted on the algorithm web application, which could bias the results. A prospective formal evaluation of both algorithms, targeting centers known to treat individuals with SCI, would help to better understand the breadth of use and inform future enhancements. Future activities planned for the EMSCI and Praxis algorithms include continuing to enhance features for users (e.g. development of an iOS/Android app to address the identified limitations of internet access and smartphone compatibility), as informed by how these algorithms are being used and by user feedback. A key future direction for both algorithms will be investigating appropriate ways to incorporate the new Expedited ISNCSCI, an abbreviated ISNCSCI designed for use by trained clinicians in screening and follow-up scenarios [39].
In conclusion, the use of validated, computerized classification tools is an effective way to decrease ISNCSCI classification errors due to human error and ensures that a consistent set of classification rules is clearly defined. Computerized ISNCSCI algorithms will never replace the role of well-trained clinicians in ISNCSCI classification. They allow reclassification of ISNCSCI datasets with updated versions of the ISNCSCI and support rapid classification of large datasets. They will continue to support the ASIA International Standards Committee in evaluating the impacts of possible future revisions, making evidence-informed modifications, and highlighting classification rules that may need further clarification. These algorithms have evolved to be used around the world as a valuable tool to support education, clinical documentation, communication between clinicians and their patients, and ISNCSCI data quality.
Data availability
The datasets generated and/or analyzed during the current study are available from the Praxis Spinal Cord Institute on reasonable request.
References
Rupp R, Biering-Sørensen F, Burns SP, Graves DE, Guest J, Jones L, et al. International Standards for Neurological Classification of Spinal Cord Injury: Revised 2019. Top Spinal Cord Inj Rehabil. 2021;27:1–22. https://doi.org/10.46292/sci2702-1.
Steeves JD, Lammertse D, Tuszynski MH, Steeves JD, Curt A, Fawcett JW, et al. Guidelines for the conduct of clinical trials for spinal cord injury (SCI) as developed by the ICCP panel: clinical trial outcome measures. Spinal Cord. 2007;45:206–21. https://doi.org/10.1038/sj.sc.3102010.
Tuszynski MH, Steeves JD, Fawcett JW, Lammertse D, Kalichman M, Rask C, et al. Guidelines for the conduct of clinical trials for spinal cord injury as developed by the ICCP Panel: clinical trial inclusion/exclusion criteria and ethics. Spinal Cord. 2007;45:222–31. https://doi.org/10.1038/sj.sc.3102009.
Cohen ME, Ditunno JF Jr, Donovan WH, Maynard FM Jr. A test of the 1992 International Standards for Neurological and Functional Classification of Spinal Cord Injury. Spinal Cord. 1998;36:554–60.
Chafetz RS, Vogel LC, Betz RR, Gaughan JP, Mulcahey MJ. International standards for neurological classification of spinal cord injury: training effect on accurate classification. J Spinal Cord Med. 2008;31:538–42.
Schuld C, Wiese J, Franz S, Putz C, Stierle I, Smoor I, et al. Effect of formal training in scaling, scoring and classification of the International Standards for Neurological Classification of Spinal Cord Injury. Spinal Cord. 2013;51:282–8. https://doi.org/10.1038/sc.2012.149.
Linassi G, Li PI, Shan R, Marino RJ. A web-based computer program to determine the ASIA impairment classification. Spinal Cord. 2010;48:100–4. https://doi.org/10.1038/sc.2009.98.
Schuld C, Wiese J, Hug A, Putz C, Van Hedel HJA, Spiess MR, et al. Computer implementation of the international standards for neurological classification of spinal cord injury for consistent and efficient derivation of its subscores including handling of data from not testable segments. J Neurotrauma. 2012;29:453–61. https://doi.org/10.1089/neu.2011.2085.
Walden K, Bélanger LM, Biering-Sørensen F, Burns SP, Echeverria E, Kirshblum S. et al. Development and validation of a computerized algorithm for International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). Spinal Cord. 2016;54:197–203.
Wang D, Taylor B, Gardner B, Frankel HA. Computerized program for neurological classification of spinal cord injury according to the ASIA/IScoS international standards. Presented at Joint Meeting of Am. Spinal Inj. Assoc. (ASIA) and Int. Spinal Cord Soc. (ISCoS), Vancouver, Canada: 2002.
Chafetz RS, Prak S, Mulcahey MJ. Computerized classification of neurologic injury based on the international standards for classification of spinal cord injury. J Spinal Cord Med. 2009;32:532–7.
Oleson C, Marino R. Spinal Cord Injury Classification: Comparison of Human and Computer Algorithm for ASIA Impairment Scale Grades. Presented at Joint Meeting of Am. Spinal Inj. Assoc. (ASIA) and Int. Spinal Cord Soc. (ISCoS), Washington (DC), USA: 2011.
Kriz J, Hlinkova Z, Hakova R, Hysperska V, Spanhelova S, Frgalova B. Development of electronic forms for neurological and functional examination of spinal cord injured patients. Neurol Praxi. 2015;16:276–81.
Schuld C, Franz S, van Hedel HJA, Moosburger J, Maier D, Abel R, et al. International standards for neurological classification of spinal cord injury: classification skills of clinicians versus computational algorithms. Spinal Cord. 2015;53:324–31. https://doi.org/10.1038/sc.2014.221.
Noonan VK, Kwon BK, Soril L, Fehlings MG, Hurlbert RJ, Townson A, et al. The Rick Hansen Spinal Cord Injury Registry (RHSCIR): a national patient-registry. Spinal Cord. 2012;50:22–7. https://doi.org/10.1038/sc.2011.109.
Waring W, Echeverria E, Kirshblum S, Reeves R. ISNCSCI Calculator (International Standards for the Neurological Classification of Spinal Cord Injury). Presented at Int. Spinal Cord Soc. (ISCoS) Meeting, London, England: 2012.
Walden K, Burns S. ISNCSCI Calculator (International Standards for the Neurological Classification of Spinal Cord Injury). Presented at Acad. Spinal Cord Inj. Prof. (ASCIP) Meeting, Las Vegas, USA: 2012.
EMSCI ISNCSCI Calculator: https://ais.emsci.org/ (accessed April 29, 2021).
Praxis ISNCSCI Algorithm: https://www.isncscialgorithm.com/ (accessed April 29, 2021).
Schuld C, Franz S, Weidner N, EMSCI Study Group, Rupp R. Computational ISNCSCI Scoring, Scaling and Classification beyond the EMSCI Database. Top Spinal Cord Inj Rehabil. 2014;20(Suppl):63–4.
Schuld C, Schweidler J, Koller R, Weidner N, EMSCI Study Group, Rupp R. Color-coded dermatome and myotome maps for graphical representation of ISNCSCI datasets. Presented at Joint Meeting of Am. Spinal Inj. Assoc. (ASIA) and Int. Spinal Cord Soc. (ISCoS), Montreal, QC, Canada: 2015.
Rupp R, Schuld C, Biering-Sørensen F, Walden K, Rodriguez G, Kirshblum S. A taxonomy for consistent handling of conditions not related to the spinal cord injury (SCI) in the International Standards for Neurological Classification of SCI (ISNCSCI). Spinal Cord. 2022;60:18–29. https://doi.org/10.1038/s41393-021-00646-0.
Schuld C, Franz S, Heutehaus L, Weidner N, Rupp R. Cases of ambiguous motor level determination according to the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) – Need for clarification of the current motor level definition. Top Spinal Cord Inj Rehabil. 2019;25(Suppl 1):112–3.
Sullivan TM, Strachan M, Timmons BK. Guide to Monitoring and Evaluating Health Information Products and Services. https://msh.org/resources/guide-to-monitoring-and-evaluating-health-information-products-and-services/ (accessed April 2021).
Schuld C, Franz S, Schweidler J, Kriz J, Hakova R, Weidner N, et al. Implementation of multilingual support of the European Multicenter Study about Spinal Cord Injury (EMSCI) ISNCSCI calculator. Spinal Cord. 2022;60:37–44. https://doi.org/10.1038/s41393-021-00672-y.
Armstrong AJ, Clark JM, Ho DT, Payne CJ, Nolan S, Goodes LM, et al. Achieving assessor accuracy on the International Standards for Neurological Classification of Spinal Cord Injury. Spinal Cord. 2017;55:994–1001. https://doi.org/10.1038/sc.2017.67.
Osunronbi T, Sharma H. International Standards for Neurological Classification of Spinal Cord Injury: factors influencing the frequency, completion and accuracy of documentation of neurology for patients with traumatic spinal cord injuries. Eur J Orthop Surg Traumatol. 2019;29:1639–48. https://doi.org/10.1007/s00590-019-02502-7.
Kirshblum S, Snider B, Eren F, Guest J. Characterizing natural recovery after traumatic spinal cord injury. J Neurotrauma. 2021;38:1267–84. https://doi.org/10.1089/neu.2020.7473.
Biering-Sørensen F, Noonan VK. Standardization of data for clinical use and research in spinal cord injury. Brain Sci. 2016;6:29. https://doi.org/10.3390/brainsci6030029.
Dvorak MF, Cheng CL, Fallah N, Santos A, Atkins D, Humphreys S, et al. Spinal cord injury clinical registries: improving care across SCI care continuum by identifying knowledge gaps. J Neurotrauma. 2017;34:2924–33.
Nachtegaal J, van Langeveld SA, Slootman H, Post MWM. Implementation of a Standardized Dataset for Collecting Information on Patients With Spinal Cord Injury. Top Spinal Cord Inj Rehabil. 2018;24:133–40. https://doi.org/10.1310/sci2402-133.
NISCI - Nogo Inhibition in Spinal Cord Injury: https://clinicaltrials.gov/ct2/show/NCT03935321 (accessed April 2021).
Canadian-American Spinal Cord Perfusion Pressure and Biomarker Study (CASPER). https://clinicaltrials.gov/ct2/show/NCT03911492 (accessed April 2021).
Biering-Sørensen F, Cohen S, Rodriguez GM, Tausk K, Martin J. Electronic medical record: data collection and reporting for spinal cord injury. Spinal Cord Ser Cases. 2018;4:70. https://doi.org/10.1038/s41394-018-0106-3.
Schuld C, Franz S, Weidner N, Kirshblum S, Tansey K, EMSCI study group, Rupp R. Increasing The Clinical Value of the Zones of Partial Preservation - A Quantitative Comparison of a New Definition Rule Applicable Also in Incomplete Lesions. Top Spinal Cord Inj Rehabil. 2018;24(Suppl 1):120–1.
ASIA International Standards Committee: ASIA Education Committee, Rupp R. Assessor accuracy of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI)-recommendations for reporting items. Spinal Cord. 2018;56:819–20. https://doi.org/10.1038/s41393-018-0133-8.
Hecimovich MD, Volet SE. Importance of building confidence in patient communication and clinical skills among chiropractic students. J Chiropr Educ. 2009;23:151–64. https://doi.org/10.7899/1042-5055-23.2.151.
Moran J, Briscoe G, Peglow S. Current technology in advancing medical education: perspectives for learning and providing care. Acad Psychiatry J Am Assoc Dir Psychiatr Resid Train Assoc Acad Psychiatry. 2018;42:796–9. https://doi.org/10.1007/s40596-018-0946-y.
American Spinal Cord Injury Association. Expedited ASIA ISNCSCI Exam (E-ISNCSCI). Version 1, 2020. https://asia-spinalinjury.org/expedited-isncsci-exam (accessed April 2021).
Acknowledgements
We thank Zeina Waheed and Eduardo Echeverria (Praxis Spinal Cord Institute) for their support in finalizing this manuscript.
Funding
This work was supported in part by the Praxis Spinal Cord Institute with funding from Health Canada and Western Economic Diversification Canada. Open Access funding enabled and organized by Projekt DEAL.
Contributions
KW and CS compiled the first draft of the manuscript. All authors were involved in the internal review and final approval process.
Ethics declarations
Competing interests
VKN and KW are employees of the Praxis Spinal Cord Institute.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Walden, K., Schuld, C., Noonan, V.K. et al. Computerized International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) algorithms: a review. Spinal Cord 61, 125–132 (2023). https://doi.org/10.1038/s41393-022-00854-2