‘Transparent’ AI Improves Outcome Prediction in Medicare Patients

By ACCESS Team

Artificial intelligence (AI) can be used to save lives. But according to some in the field, the problem with conventional AI – such as deep learning – is that the connections among predictors (the factors the AI uses to forecast outcomes), their interactions with one another, and their links to the final prediction are buried inside the software. These hidden computations are a challenge when it comes to making complicated, life-saving decisions.

A team of scientists from Mederrata Research, a nonprofit group that works to reduce medical errors, and Sound Prediction Inc., a company that works to make AI’s decision-making understandable, took on the inaugural AI challenge posed by the Centers for Medicare & Medicaid Services. Working with the National Institutes of Health (NIH), the researchers used the Bridges-2 system at Pittsburgh Supercomputing Center to design a “transparent AI” whose “thinking process” would be more apparent to humans.

The team designed their AI to reveal risk factors for unplanned readmission or death among Medicare patients at any time after discharge from the hospital. They started by training the AI on hospital visits by Medicare patients between 2009 and 2011, then tested it on data from 2012.
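In broad strokes, that kind of temporal validation looks like the sketch below, which trains on 2009–2011 visits and evaluates on held-out 2012 data. The file name, column names and the simple logistic-regression stand-in are illustrative assumptions, not the team’s actual pipeline.

    # Hypothetical sketch of a temporal train/test split for readmission
    # prediction: train on 2009-2011 admissions, evaluate on held-out 2012.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    visits = pd.read_csv("medicare_visits.csv")  # hypothetical data file
    features = ["age", "num_prior_admissions", "length_of_stay"]  # assumed columns

    train = visits[visits["year"].between(2009, 2011)]
    test = visits[visits["year"] == 2012]

    model = LogisticRegression(max_iter=1000)
    model.fit(train[features], train["readmitted_or_died"])

    # Score the held-out year to check how the model generalizes forward in time.
    preds = model.predict_proba(test[features])[:, 1]
    print("2012 AUC:", roc_auc_score(test["readmitted_or_died"], preds))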

Employing Bridges-2 – with its powerful next-generation GPUs, extreme-memory nodes and capacity for moving large chunks of data in and out – the researchers applied a multilevel model (MLM), a statistical tool with similarities to AI programs, to sort the data into cohorts of similar cases. The supercomputer then identified the relevant factors for each cohort in a way that let the experts follow the AI’s decision-making, revealing the risk factors for unplanned readmission or death among Medicare patients after discharge.
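To give a rough sense of how a multilevel model keeps its reasoning visible, the sketch below fits a varying-intercept logistic model with statsmodels on synthetic data: each cohort gets its own baseline risk, while the fixed-effect coefficients remain directly readable as risk factors. The cohort variable, predictors and model specification are assumptions for illustration, not the team’s actual MLM.

    # Minimal multilevel (mixed-effects logistic) sketch: a random intercept
    # per cohort plus interpretable fixed effects. All names are hypothetical.
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "cohort": rng.integers(0, 10, n),        # 10 synthetic patient cohorts
        "age": rng.normal(75, 8, n),
        "prior_admissions": rng.poisson(1.5, n),
    })
    # Simulate readmission risk driven by age and prior admissions.
    logit = -6 + 0.05 * df["age"] + 0.4 * df["prior_admissions"]
    df["readmitted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = BinomialBayesMixedGLM.from_formula(
        "readmitted ~ age + prior_admissions",   # fixed effects: readable risk factors
        {"cohort": "0 + C(cohort)"},             # random intercept for each cohort
        df,
    )
    result = model.fit_vb()
    print(result.summary())  # coefficients map each predictor to readmission risk

Because the model’s structure is explicit, the fitted coefficients can be inspected and audited directly, which is the kind of transparency the story describes.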

“It’s hugely challenging, and I think the issue is that … you have a large number of interrelated variables that you think might bias the outcome … [With] these newer models, you can’t accurately track how the prediction is computed from the predictors.”

Josh Chang, Mederrata Research

The team reported their results at the AI for Social Good workshop at the Association for the Advancement of Artificial Intelligence conference in Washington, D.C., in February 2023.

You can read more about this story here (published Feb. 1, 2023): Bridges-2 Science Highlights


Project Details

Institution: PSC (Pittsburgh Supercomputing Center)
University: Carnegie Mellon
Funding Agency: NSF
Grant Number: ACI-1548562; Allocation Number: TG-DMS190042

The science story featured here, allocated through August 31, 2022, was enabled through the Extreme Science and Engineering Discovery Environment (XSEDE) and supported by National Science Foundation grant #1548562. Projects allocated September 1, 2022 and beyond are enabled by the ACCESS program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
