A Unifying Philosophical Theory of AI Explanations
Atoosa Kasirzadeh (University of Toronto & Australian National University)
Zoom
Note: The event time listed is set to Pacific Time.
The social and ethical implications of prediction-based decision systems in sensitive contexts have generated lively debates among multiple stakeholders, including computer scientists, ethicists, social scientists, policy makers, and end users. Yet the lack of a common language and a multi-dimensional framework for appropriately bridging the technical, ethical, and legal aspects of the debate prevents the discussion from being as effective as it could be. Drawing on philosophy, this paper offers a multi-faceted unifying theory of the varieties of data and non-data analytical explanations as to why a prediction-based decision is obtained. The theory identifies the existence and significance of dependencies between different kinds of AI explanations, as well as the role of normative and pragmatic values in making sense of these explanations. This framework lays the groundwork for establishing the relevant connections between the technical, moral, and legal aspects of artificially-intelligent decision making.
To register for this event and receive the Zoom link, please email the organizers Shai Ben-David (bendavid.shai [at] gmail.com) or Ruth Urner (ruth.urner [at] gmail.com) with the subject line "Inquiry to register for Interpretable Machine Learning event June 29, 2020".