Bias Detection Tools in Health Care Challenge
The Minimizing Bias and Maximizing Long-Term Accuracy, Utility and Generalizability of Predictive Algorithms in Health Care Challenge seeks to encourage the development of bias-detection and -correction tools that foster “good algorithmic practice” and mitigate the risk of unwitting bias in clinical decision support algorithms.
See more details about the Challenge.
Winners
First place
InterFair
Project name: InterFair with Fairness Oriented Multiobjective Optimization (FOMO)
Members: William La Cava, Elle Lett
Second place
MLK
Project name: MLK Fairness
Members: Amir Asiaee, Kaveh Aryanpoo
Aequitas
Project name: ESRD Bias Detection and Mitigation
Members: Sujatha Subramanian, Jo Stigall, Tenzin Jordan Shawa, Jagadish Mohan, Senthil K. Ranganathan
Third place
Team CVP
Project name: Debiaser – AI Bias Detection and Mitigation Tool for Clinical Decision Making
Members: Manpreet Khural, Wei Chien, Lauren Winstead, Cal Zemelman
Super2021
Project name: BeFair: A Multi-Level-Reweighing Method to Mitigate Bias
Members: Yinghao Zhu, Jingkun An, Enshen Zhou, Hao Li, Haoran Feng
Honorable Mention
Dr. Nobias Fünke’s 100% Natural Good-Time Family Bias Solution
Project name: Metric Lattice for Performance Estimation (MLPE)
Members: Kellen Sandvik, Jesse Rosen, Conor Corbin
GenHealth
Project name: GenHealth
Members: Ricky Sahu, Ethan Siegel, Eric Marriott
Icahn School of Medicine
Project name: AEquity: A Deep Learning Based Metric for Detecting, Characterizing and Mitigating Dataset Bias
Members: Faris F. Gulamali, Ashwin S. Sawant, Jianying Hu, Girish N. Nadkarni
ParaDocs Health
Project name: ParaDocs Health
Members: Omar Mohtar, Dickson T. Chen, Vibhav Jha, Dhini Nasution, Matt Segar
Learn more about the submissions.
Key Dates
Note: Dates are subject to change as necessary.
- Challenge Announcement: September 1, 2022
- Registration and Submission Portal Opens: October 31, 2022
- Submission Deadline: March 1, 2023
- Technical Evaluation Phase: March 2023
- Federal Judging Phase: March and April 2023
- Winners Announced: April 2023
- Demo Day: May 5, 2023
Background
Although artificial intelligence (AI) and machine learning (ML) algorithms offer promise for clinical decision support (CDS), their potential has yet to be fully realized. Even well-designed AI/ML algorithms and models can become inaccurate or unreliable over time as data distributions shift, real-world interactions and user behavior change, and data capture and management practices evolve. These shifts can gradually degrade an algorithm's predictive performance, negating the benefits such systems offer to clinical care.
How can these shifts be detected on a continual basis so that prediction quality is maintained? Monitoring an algorithm's behavior and flagging significant changes in performance can enable timely adjustments, helping to ensure that a model's predictions remain accurate, fair and unbiased in real-world use.
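To make the monitoring idea concrete, here is a minimal sketch (not part of the Challenge materials) of one way to track a deployed model's discrimination over time and flag drift. It assumes scored batches arrive with ground-truth labels; the baseline AUC, the 0.05 tolerance, and the synthetic data are illustrative assumptions, not Challenge requirements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def drift_flag(y_true, y_score, baseline_auc, tolerance=0.05):
    """Return (current AUC, True if AUC fell more than `tolerance` below baseline)."""
    auc = roc_auc_score(y_true, y_score)
    return auc, (baseline_auc - auc) > tolerance

rng = np.random.default_rng(0)
baseline_auc = 0.80  # hypothetical AUC measured at deployment

# Simulate monthly batches whose scores gradually decouple from the labels,
# as might happen when data capture practices shift after deployment.
for month, noise in enumerate([0.5, 0.7, 0.9, 1.2], start=1):
    y = rng.integers(0, 2, size=500)
    scores = y + rng.normal(0, noise, size=500)  # noisier scores = weaker model
    auc, drifted = drift_flag(y, scores, baseline_auc)
    status = "DRIFT: review model" if drifted else "ok"
    print(f"month {month}: AUC={auc:.3f} ({status})")
```

In practice the flagged batch would trigger a deeper review (recalibration, retraining, or an audit of upstream data pipelines) rather than an automatic fix.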
As AI/ML algorithms are increasingly used in health care systems, accuracy, generalizability and the avoidance of bias and drift become more important. Bias primarily surfaces in two forms. Predictive bias appears as algorithmic inaccuracies that produce estimates significantly diverging from the underlying truth. Social bias reflects systemic inequities in care delivery that lead to suboptimal health outcomes for certain populations.
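One common way to surface such bias is to compare error rates across demographic groups. Below is a minimal sketch, assuming a binary classifier scored on data with a demographic attribute; the group labels, the simulated error pattern, and the focus on false-negative rate (missed cases can mean missed care) are illustrative choices.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def false_negative_rate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fn / (fn + tp) if (fn + tp) else float("nan")

def subgroup_fnr_gap(y_true, y_pred, group):
    """Per-group false-negative rates and the largest gap between them."""
    rates = {g: false_negative_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    return rates, max(rates.values()) - min(rates.values())

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)
# Simulate a model that misses true positives more often in group B.
miss = np.where(group == "B", 0.30, 0.10)
y_pred = np.where((y_true == 1) & (rng.random(1000) < miss), 0, y_true)

rates, gap = subgroup_fnr_gap(y_true, y_pred, group)
print(rates, f"FNR gap = {gap:.3f}")  # a large gap suggests one group is underserved
```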
To address these issues and improve clinician and patient trust in AI/ML-based CDS tools, this Challenge invites groups to develop bias-detection and -correction tools that foster “good algorithmic practice” and mitigate the risk of unwitting bias in CDS algorithms.
Challenge Goals
The goal of this Challenge is to identify and minimize the inadvertent amplification and perpetuation of systemic biases in AI/ML algorithms used for CDS through the development of predictive and social bias-detection and -correction tools. Participants from academia and the private sector may enter in teams, as representatives of an academic or private entity, or in an individual capacity to design a bias-detection and -correction tool.
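Correction tools can take many forms; one widely cited pre-processing technique is reweighing (Kamiran and Calders, 2012), which some submissions build on. The sketch below, with illustrative data, weights training examples so that the demographic attribute and the outcome appear statistically independent; it is one possible approach, not the Challenge's prescribed method.

```python
import numpy as np

def reweighing_weights(y, group):
    """w(g, y) = P(group=g) * P(y=y) / P(group=g, y=y), computed empirically."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (group == g).mean() * (y == label).mean() / p_joint
    return w

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=8)
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
weights = reweighing_weights(y, group)
print(weights)
# The weights can be passed to any learner that accepts them,
# e.g. clf.fit(X, y, sample_weight=weights) with a scikit-learn estimator.
```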
For the most up-to-date information about the rules, submission requirements, judging criteria, prizes and how to enter, and to register for the Challenge, please visit the ExpeditionHacks site. You can also visit the Challenge.gov site.