Event

AI, Explainability and Epistemic Dependence

Wednesday, January 7, 2026, 18:15 to 19:45
University of Hamburg, DE
Lecture given by Jocelyn Maclure at the University of Hamburg

The idea that people subjected to opaque AI-based decisions have, under specific circumstances, a “right to explanation” is generating a stimulating and productive debate in philosophy. Some early normative defenses of the right to explanation or to public justification (Vredenburgh 2021; Maclure 2021) are being challenged from a variety of perspectives (Ross 2022; Taylor 2024; Fritz 2024; Karlan & Kugelberg 2025), while others are qualifying or refining the case for a right to explanation (Da Silva 2023; Grote & Paulo 2025; Dishaw 2025). Having addressed in a previous paper (2021) the argument that deep artificial neural networks are not significantly more fallible and opaque than human minds, I now want to turn my attention to two newly emerging counterarguments to the right to explanation thesis. The first is normative: the standards of public reason do not typically apply to AI decisions, and the interests at stake do not justify the cost of granting a right to explanation. The second is epistemic: social epistemologists have long urged us to recognize human thinkers’ basic epistemic dependence upon the testimony of others and upon a variety of complex social processes. The defenders of the right to explanation arguably overlook the possibility that deferring epistemically to black box algorithms may be justified. I will argue that these counterarguments, although serious, are unsuccessful.
