A pose-mapping technique could remotely evaluate patients with cerebral palsy

Getting to the doctor's office can be a hassle. The task can be especially difficult for parents of children with motor disorders such as cerebral palsy, as a clinician must evaluate the child in person on a regular basis, often for an hour at a time. Making it to these frequent evaluations can be expensive, time-consuming, and emotionally taxing.

MIT engineers hope to alleviate some of that stress with a new method that remotely evaluates patients' motor function. By combining computer vision and machine-learning techniques, the method analyzes videos of patients in real time and computes a clinical score of motor function based on patterns of poses it detects in the video frames.

The researchers tested the method on videos of more than 1,000 children with cerebral palsy. They found that the method could process each video and assign a clinical score that matched, with more than 70 percent accuracy, what a clinician had previously determined during an in-person visit.

The video evaluation can be run on a range of mobile devices. The team envisions that patients could be evaluated on their progress simply by setting up their phone or tablet to take a video as they move about their own home. They could then load the video into a program that would quickly analyze the frames and assign a clinical score, or level of progress. The video and the score could then be sent to a doctor for review.

The team is now tailoring the approach to evaluate children with metachromatic leukodystrophy, a rare genetic disorder that affects the central and peripheral nervous systems. They also hope to adapt the method to assess patients who have experienced a stroke.

"We want to reduce some of the stress on patients by not requiring them to go to the hospital for every evaluation," says Hermano Krebs, principal research scientist at MIT's Department of Mechanical Engineering. "We think this technology could potentially be used to remotely evaluate any condition that affects motor behavior."

Krebs and his colleagues will present their new approach at the IEEE Conference on Body Sensor Networks in October. The study's MIT authors are first author Peijun Zhao, co-principal investigator Moises Alencastre-Miranda, Zhan Shen, and Ciaran O'Neill, along with David Whiteman and Javier Gervas-Arruga of Takeda Development Center Americas, Inc.

Network training

At MIT, Krebs develops robotic systems that physically work with patients to help them regain or strengthen motor function. He has also adapted the systems to gauge patients' progress and predict which therapies might work best for them. While these technologies have worked well, they are significantly limited in their accessibility: Patients have to travel to a hospital or facility where the robots are in place.

"We asked ourselves, how could we expand the good results we got with rehab robots to a ubiquitous device?" Krebs recalls. "As smartphones are everywhere, our goal was to take advantage of their capabilities to remotely assess people with motor disabilities, so that they could be evaluated anywhere."

A new MIT method incorporates real-time skeleton pose data, such as the data pictured, to remotely analyze videos of children with cerebral palsy and automatically assign a clinical level of motor function.

Image: Dataset created by Stanford Neuromuscular Biomechanics Laboratory in collaboration with Gillette Children’s Specialty Healthcare

The researchers looked first to computer vision and algorithms that estimate human movements. In recent years, scientists have developed pose estimation algorithms designed to take a video (for instance, of a girl kicking a soccer ball) and translate her movements into a corresponding series of skeleton poses, in real time. The resulting sequence of lines and dots can be mapped to coordinates that scientists can further analyze.
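In data terms, this step turns each video into an array of joint coordinates. The sketch below (an illustration, not the authors' pipeline; the joint count and stand-in random keypoints are assumptions) shows the shape of data a pose estimator typically hands to downstream analysis:

```python
import numpy as np

# A pose estimator emits, for each video frame, one 2-D keypoint per skeleton
# joint. Stacking the frames yields a (frames, joints, 2) coordinate array
# that models can analyze as a sequence of skeleton poses.

NUM_JOINTS = 17  # a common keypoint count for COCO-style skeletons (assumption)

def frames_to_pose_sequence(per_frame_keypoints):
    """Stack per-frame keypoint lists into a (T, J, 2) coordinate array."""
    return np.stack([np.asarray(kp, dtype=float) for kp in per_frame_keypoints])

# Toy input: 3 frames of random coordinates standing in for estimator output.
rng = np.random.default_rng(0)
fake_frames = [rng.uniform(0, 1, size=(NUM_JOINTS, 2)) for _ in range(3)]
sequence = frames_to_pose_sequence(fake_frames)
print(sequence.shape)  # (3, 17, 2): frames x joints x (x, y)
```

Keeping the data in this frames-by-joints layout is what lets later stages reason about both body structure and motion over time.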

Krebs and his colleagues aimed to develop a method to analyze the skeleton pose data of patients with cerebral palsy, a disorder that has traditionally been evaluated along the Gross Motor Function Classification System (GMFCS), a five-level scale that represents a child's general motor function. (The lower the number, the higher the child's mobility.)

The team worked with a publicly available set of skeleton pose data produced by Stanford University's Neuromuscular Biomechanics Laboratory. The dataset comprised videos of more than 1,000 children with cerebral palsy. Each video showed a child performing a series of exercises in a clinical setting, and each video was tagged with the GMFCS score a clinician had assigned the child after an in-person assessment. The Stanford group ran the videos through a pose estimation algorithm to generate skeleton pose data, which the MIT group then used as the starting point for their study.

The researchers then looked for ways to automatically decipher patterns in the cerebral palsy data that are characteristic of each clinical motor function level. They started with a spatial-temporal graph convolutional neural network, a machine-learning approach that trains a computer to process spatial data that changes over time, such as a sequence of skeleton poses, and assign a classification.
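The core idea of such a network can be sketched in one simplified layer: a spatial step that mixes information between joints connected in the skeleton, followed by a temporal step that smooths each joint's features across neighboring frames. This numpy toy (not the authors' model; the adjacency matrix, weights, and kernel size are illustrative) shows that mechanic:

```python
import numpy as np

def st_graph_conv(X, A, W, temporal_kernel=3):
    """One simplified spatial-temporal graph convolution.

    X: (T, J, C) sequence of T frames, J joints, C channels (e.g., x/y coords).
    A: (J, J) adjacency matrix encoding which joints are connected by bones.
    W: (C, C') channel projection matrix.
    """
    # Normalize adjacency so each joint averages over itself and its neighbors.
    A_hat = A + np.eye(A.shape[0])
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    # Spatial step: propagate features between connected joints, then project.
    spatial = np.einsum("jk,tkc->tjc", A_hat, X) @ W
    # Temporal step: average each joint's features over a sliding frame window.
    T = spatial.shape[0]
    pad = temporal_kernel // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[t:t + temporal_kernel].mean(axis=0) for t in range(T)])

# Toy skeleton: 3 joints in a chain (0-1-2), 4 frames, 2 input channels.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.arange(4 * 3 * 2, dtype=float).reshape(4, 3, 2)
W = np.eye(2)  # identity projection, to keep the example readable
features = st_graph_conv(X, A, W)
print(features.shape)  # (4, 3, 2): same layout, features now context-aware
```

A real network stacks many such layers with learned weights and nonlinearities, then pools the result into a single classification, here a motor function level.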

Before the team applied the neural network to cerebral palsy, they used a model that had been pretrained on a more general dataset, which contained videos of healthy adults performing everyday activities such as walking, running, sitting, and shaking hands. They took the backbone of this pretrained model and added a new classification layer specific to the clinical scores associated with cerebral palsy. They then fine-tuned the network to recognize distinctive patterns in the movements of children with cerebral palsy and to accurately classify them within the main clinical assessment levels.
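The transfer-learning recipe described here, keeping the pretrained feature extractor fixed and training only a fresh five-way classification head, can be sketched as follows (a hedged illustration: the "backbone" is a random frozen projection standing in for the pretrained action-recognition network, and the dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
NUM_LEVELS = 5    # the five GMFCS levels
FEATURE_DIM = 16  # assumed backbone output size

# "Frozen backbone": maps a flattened pose sequence to a feature vector.
# In the real method this would be the pretrained network, left untouched.
W_backbone = rng.normal(size=(60, FEATURE_DIM))
def backbone(pose_sequence_flat):
    return np.tanh(pose_sequence_flat @ W_backbone)

# New classification head: the only part updated during fine-tuning.
W_head = np.zeros((FEATURE_DIM, NUM_LEVELS))

def classify(pose_sequence_flat):
    return int(np.argmax(backbone(pose_sequence_flat) @ W_head))

# One toy gradient step on the head (softmax cross-entropy), backbone frozen.
x = rng.normal(size=(60,))       # stand-in for one flattened pose sequence
label = 2                        # clinician-assigned level for this example
feats = backbone(x)
logits = feats @ W_head
probs = np.exp(logits - logits.max())
probs /= probs.sum()
W_head -= 0.1 * np.outer(feats, probs - np.eye(NUM_LEVELS)[label])

print(classify(x))  # 2: the head now favors the labeled level
```

Because only the small head is trained from scratch, the model keeps the general motion features learned from the larger dataset, which is what made pretraining pay off here.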

They found that the pretrained network learned to accurately classify the children's mobility levels, and that it did so more accurately than a network trained only on the cerebral palsy data.

"Since the network is trained on a very large dataset of more general movements, it has some ideas about how to extract features from a sequence of human poses," Zhao explains. "While the larger dataset and the cerebral palsy dataset can be different, they share some common patterns of human actions and how those actions can be encoded."

The team test-ran their method on a variety of mobile devices, including smartphones, tablets, and laptops, and found that most devices could successfully run the program and generate a clinical score from videos in near real time.

The researchers are now developing an app, which they envision parents and patients could one day use to automatically analyze videos of patients taken in the comfort of their own environment. The results could then be sent to a doctor for further evaluation. The team is also planning to adapt the method to evaluate other neurological disorders.

"This approach could be easily expandable to other disabilities such as stroke or Parkinson's disease once it's tested in that population using appropriate metrics for adults," says Alberto Esquenazi, chief medical officer at Moss Rehabilitation Hospital in Philadelphia, who was not involved in the study. "It could improve care and reduce the overall cost of health care and the need for families to lose productive work time, and it's my hope [that it could] increase compliance."

"In the future, this could also help us predict how patients would respond to interventions sooner," Krebs says. "Because we could evaluate them more often, to see whether an intervention is having an impact."

This research was supported by Takeda Development Center Americas, Inc.
