Machine Detection of Nystagmus from Video Recordings

Team: Precision Care Medicine: Pink

Program: Biomedical Engineering

Nystagmus is an instability of the eyes reflecting a physiologic change in the neural circuitry that connects the inner ear, brain, and eye. Previous studies have shown that nystagmus precedes MRI changes by 48-72 hours in stroke patients presenting with isolated dizziness or vertigo. Dizziness and vertigo account for over 4 million emergency department (ED) visits per year, and it is difficult for ED providers to differentiate rapidly and accurately between benign and catastrophic nystagmus. This increases the stroke misdiagnosis rate, stroke-related disability, unnecessary hospitalization and testing, and healthcare spending. Using deep learning approaches, we developed a solution that can predict nystagmus from a smartphone video, enabling more appropriate triage as well as remote neurologic diagnosis. Our preliminary model had an AUC of 0.87, an accuracy of 84.21%, a sensitivity of 86.9%, and a specificity of 82.8%.


Team Members

David S. Zee, MD
Kirby Gong
Indranuj Gangan
Raimond L. Winslow, PhD
Joseph L. Greenstein, PhD

Project Links

  • Project Poster
  • Additional Project Information

    Video Transcript:

    Hi, I am Kemar Green, team leader for Team Pink. Our precision medicine project was entitled “Machine Detection of Nystagmus from Video Recordings”. Nystagmus is an abnormal eye movement that reflects a physiologic change in the neural circuitry that connects the inner ear, brain, and eye. It can precede MRI changes by 48-72 hours in stroke patients presenting with isolated dizziness or vertigo. The problem is that nystagmus identification and interpretation can be challenging for non-specialists. This is magnified in the setting of COVID, where this must be done via telemedicine, as the video quality affects our ability to identify and interpret nystagmus. We wanted to create a model that could be applied to mobile phones and other devices with low-quality video capabilities to serve as a screening tool for remotely triaging patients with dizziness into inner ear disease or stroke. To accomplish this, we developed a deep-learning system to classify 60 Hz recordings as videos with or without nystagmus. As shown in Figure 2, to build our model, we took the raw video clips and converted them to individual frames. We then created filtered images, using recursive filtering methods, that contained the nystagmus motion information. We then fed these filtered images into a neural network and generated probabilities for each image. Using various voting methods, we were able to classify the entire video as nystagmus or no nystagmus based on the filtered image probabilities. As shown in Figure 3, the performance of the model was calculated using the area under the receiver operating characteristic curve (abbreviated AUC) along with sensitivity, specificity, and negative (NPV) and positive (PPV) predictive values. The best-performing model had an AUC of 0.87. Based on our results, we conclude that nystagmus can be detected from low-quality videos using deep-learning methods, which can be useful for remote diagnosis of dizzy patients in a pandemic.
Some future work includes: detection of other eye movement abnormalities; home-based neurologic screening/diagnosis; and improving Tele-neuroophthalmology & Tele-neurootology. We would like to thank the Johns Hopkins Neurology Department & the Neuro-Visual & Vestibular Disorders (NVV) division for supporting the project.
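The pipeline described in the transcript (frames → recursive motion filtering → per-frame probabilities → a vote over the whole video) can be sketched as follows. This is a minimal illustration, not the team's implementation: the exact recursive filter and voting scheme are not specified in the transcript, so a simple running-average difference filter and a majority vote are assumed here, and `recursive_motion_filter` / `classify_video` are hypothetical names.

```python
import numpy as np

def recursive_motion_filter(frames, alpha=0.1):
    """Highlight motion by differencing each frame against a recursively
    updated running average (one simple recursive filter; assumed here,
    not necessarily the filter used in the project)."""
    avg = frames[0].astype(float)
    filtered = []
    for frame in frames[1:]:
        avg = alpha * frame + (1.0 - alpha) * avg  # recursive update
        filtered.append(np.abs(frame - avg))       # motion-emphasized image
    return filtered

def classify_video(frame_probs, threshold=0.5):
    """Aggregate per-frame nystagmus probabilities (e.g., from a CNN)
    into a single video-level label via a simple majority vote."""
    votes = [p >= threshold for p in frame_probs]
    return "nystagmus" if sum(votes) > len(votes) / 2 else "no nystagmus"
```

In the full system, a trained neural network would supply the per-frame probabilities passed to `classify_video`; the transcript notes that several voting methods were compared.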


    Additional Project Advisors:

    David E. Newman-Toker, MD, PhD

    Oleg V. Komogortsev, PhD

    Jorge Otero-Millan, PhD

    Daniil Pakhomov

    Sanchit Hira

Smartphone-enabled remote dizziness triage system