
At UPMC Children’s Hospital of Pittsburgh, scores of babies and older children with head trauma are admitted each year, and abuse is the leading cause of death in its ICU.

These injured patients are often too young to explain their bruises and fractures, whether accidental or inflicted. If caretakers are responsible, they may not admit it to clinicians, who usually must flag young patients for screening based on their own intuition.


So UPMC is building an EHR-based alert system to make the flagging process more objective, said Rachel Berger, the chief of the hospital’s child abuse prevention program. Eventually it could help staff intervene before abuse happens, she said.

With initial funding from the Patient-Centered Outcomes Research Institute, Berger is designing a system in the hospital’s Epic EHR that could search a patient’s record for indicators of abuse, such as past injuries or a history of unexplained symptoms. Her longer-term goal is a predictive algorithm that pulls public data sets — school records, prior abuse reports or a parent’s criminal history — to identify children likely to suffer abuse in the future.

In the absence of such diagnostic aids, clinicians may act on their own biases about what kinds of families are likely to commit abuse, she says: low-income, minority children exhibiting the same injuries are generally screened more often than wealthier white ones.

The over-screening and under-screening of specific populations is a pervasive challenge in child welfare, says Cindy Christian, a child abuse specialist at the Children’s Hospital of Philadelphia.

Being wrongly tagged as a potential perpetrator can be traumatic in itself, she said. On the other hand, “it’s easy to miss a case of child abuse when somebody is a good liar.” And the price of a missed signal can be high: an estimated 2,000 children per year die of child abuse in the U.S.

A key challenge in applying algorithms to complex social issues is ensuring that they don’t encode biases clinicians already harbor — especially if they’re being trained on abuse reports that over-represent certain populations, Christian said.

But if researchers are conscious of such biases, the automatic flagging system could remove some of the guesswork and help hospitals focus on the children who have or are very likely to have suffered abuse, she said.

What they’re doing now

It will likely take years for Berger to build out a robust predictive algorithm. But UPMC has already built one foundational element: an EHR alert flagging ER patients whose injuries could be caused by abuse — an infant with significant unexplained bruises, for instance.

Doctors don’t screen consistently even in cases where abuse seems like an obvious possibility, Berger said. But when clinicians get a computer alert related to abuse, their ears prick up, she said — even when the patient or family doesn’t fit their preconceived profile of abuse.

The next step is to modify the algorithm to account for previous medical encounters, not just the current one. If an infant comes into the ER for an injury that seems accidental, the system might pick up a history of missed well-child visits, suggesting caretaker neglect.
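As a rough illustration of the rule-based checks described above, here is a minimal Python sketch. The field names, thresholds and logic are hypothetical assumptions for illustration only; UPMC’s actual Epic-based alert logic has not been published.

    # Hypothetical sketch of a rule-based screening flag; field names and
    # thresholds are illustrative, not UPMC's actual implementation.
    def should_flag_for_screening(visit, history):
        """Return (flag, reasons) for the current ER visit, combining
        current-visit findings with prior encounters in the record."""
        reasons = []

        # Current-visit rule: significant unexplained bruising in an infant.
        if (visit["age_months"] < 12
                and visit["bruising"] == "significant"
                and not visit.get("plausible_explanation", False)):
            reasons.append("unexplained bruising in an infant")

        # History rule: repeated missed well-child visits can suggest neglect.
        missed = sum(1 for v in history if v["type"] == "well_child" and v["missed"])
        if missed >= 3:
            reasons.append(f"{missed} missed well-child visits")

        # History rule: prior injuries with no documented explanation.
        if any(v["type"] == "injury" and not v.get("explained", True) for v in history):
            reasons.append("previous unexplained injuries on record")

        return (len(reasons) > 0, reasons)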

In a quarter to a third of child abuse cases, the child has had previous injuries that weren’t reported as abuse, Christian said. “A baby might have a bruise, but a family might tell you a good story.”

In addition to the PCORI funding, Berger’s team has support from UPMC’s Beckwith Institute to help six other children’s hospitals deploy the alert system by the end of 2019. The team is also building interfaces that could integrate with hospitals using Epic, Cerner and Allscripts software.

Eventually, researchers hope to incorporate outside data sets such as school or immunization records and a parent’s criminal history, looking for warning signs that could lead clinicians to intervene and talk with the families. Over time, deep learning could mine previously documented abuse cases for patterns that might identify current or, potentially, future victims, she said.
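To make that longer-term idea concrete, the sketch below shows one way features drawn from the EHR and from outside records could be combined and fit against previously documented cases. Every feature name and training row is a placeholder, and a simple logistic regression stands in for the deep-learning models the team is contemplating.

    # Hypothetical feature assembly and risk model; all data below is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def build_features(ehr, external):
        """Combine EHR-derived and external-record signals into one vector
        (field names are illustrative placeholders)."""
        return [
            ehr["prior_unexplained_injuries"],
            ehr["missed_well_child_visits"],
            external["prior_abuse_reports"],          # e.g., child-welfare records
            external["caregiver_criminal_history"],   # 0 or 1
            external["school_absence_rate"],
        ]

    # Placeholder training set: one row per child, labels from documented cases.
    X = np.array([[0, 1, 0, 0, 0.02],
                  [2, 4, 1, 1, 0.30],
                  [0, 0, 0, 0, 0.05],
                  [1, 3, 1, 0, 0.22]])
    y = np.array([0, 1, 0, 1])

    model = LogisticRegression().fit(X, y)

    # A risk score meant to prompt clinician review, not automatic action.
    print(model.predict_proba(X[:1])[0, 1])

Any such model would only be as fair as the reports it is trained on, which is the bias concern Christian raises above.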

The UPMC Children’s team is aware of the challenge of algorithm bias, said CIO Srinivasan Suresh, who is also an ER doctor. But the software is designed to flag patients based on factors such as neglect and family environment, not race or income.

Decisions to screen a child for abuse or to report a case to child protective services are often thorny, resting on a combination of clinical suspicion and whatever medical history is available. “Unless it’s witnessed, it’s hard to say ‘This was abuse,’” Suresh said.

CORRECTION: An earlier version of this story misstated which EHR vendor UPMC Children’s Hospital uses. The hospital uses Epic.