Friday, April 3rd, 2020
What Do Algorithmic Fairness and COVID-19 Case-Severity Prediction Have in Common? | Simons Institute Polylogues
In this episode of Simons Institute Polylogues, Shafi Goldwasser (Director, Simons Institute) interviews Guy Rothblum (Weizmann Institute) about a new research collaboration that applies techniques from algorithmic fairness to predict which patients are most likely to develop severe cases of COVID-19.
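The fairness technique named in the references below is multicalibration: a risk predictor is recalibrated until, within every identifiable subpopulation and every range of predicted risk, the average prediction matches the observed outcome rate. The Python sketch below is a minimal illustration of that idea as a post-processing loop, assuming binary outcomes and an explicit list of subgroup masks; the function name, the binning scheme, and the minimum-cell-size threshold are illustrative choices, not the collaboration's actual pipeline.

```python
import numpy as np

def multicalibrate(scores, labels, groups, n_bins=10, alpha=0.01, max_iters=1000):
    """Illustrative post-hoc multicalibration loop (a sketch, not the collaboration's code).

    scores: initial predicted probabilities, shape (n,)
    labels: observed binary outcomes (0/1), shape (n,)
    groups: dict mapping subgroup name -> boolean mask of shape (n,)

    Repeatedly finds a (subgroup, score-bucket) cell whose average prediction
    differs from the observed outcome rate by more than `alpha`, and shifts
    the predictions in that cell toward the observed rate.
    """
    p = scores.astype(float).copy()
    for _ in range(max_iters):
        updated = False
        for _name, mask in groups.items():
            # Bucket points by their current predicted probability.
            bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
            for b in range(n_bins):
                cell = mask & (bins == b)
                if cell.sum() < 20:  # skip tiny cells; too noisy to recalibrate
                    continue
                gap = labels[cell].mean() - p[cell].mean()
                if abs(gap) > alpha:
                    p[cell] = np.clip(p[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:
            break  # every subgroup/bucket cell is calibrated to within alpha
    return p
```

In a clinical setting like the one described in the second reference below, the subgroup masks might encode attributes such as age bands or sex, and the loop would run as a post-processing step on top of an existing risk model, leaving already well-calibrated cells untouched.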
REFERENCES
- “Multicalibration: Calibration for the (Computationally-Identifiable) Masses,” by Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum.
- “Addressing Bias in Prediction Models by Improving Subpopulation Calibration,” by Noam Barda, Noa Dagan, Guy N. Rothblum, Gal Yona, Eitan Bachmat, Philip Greenland, Morton Leibowitz, and Ran Balicer [under submission].
- COVID-19 collaboration. Clalit Research Institute: Adi Berliner, Amichai Akriv, Anna Kuperberg, Dan Riesel, Daniel Rabina, Galit Shaham, Ilan Gofer, Mark Katz, Michael Leschinski, Noa Dagan, Noam Barda, Oren Auster, Reut Ohana, Shay Ben-Shachar, Shay Perchik, Uriah Finkel, Yossi Levi. Technion: Daniel Greenfeld, Uri Shalit, Jonathan Somer. Weizmann Institute: Guy Rothblum, Gal Yona.