BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20260202T100330EST-1054Nt5zFZ@132.216.98.100
DTSTAMP:20260202T150330Z
DESCRIPTION:Lecture given by Jocelyn Maclure at the University of Hamburg\n\nThe idea that people subjected to opaque AI-based decisions have a “right to explanation”\, under specific circumstances\, is generating a stimulating and productive debate in philosophy. Some early normative defenses of the right to explanation or to public justification (Vredenburgh 2021\; Maclure 2021) are being challenged from a variety of perspectives (Ross 2022\; Taylor 2024\; Fritz 2024\; Karlan & Kugelberg 2025). Others are qualifying or refining the case for a right to explanation (Da Silva 2023\; Grote & Paulo 2025\; Dishaw 2025). Having addressed\, in a previous paper (2021)\, the argument that deep artificial neural networks are not significantly more fallible and opaque than human minds\, I now want to turn my attention to two emerging counterarguments to the right-to-explanation thesis. The first is normative: the standards of public reason do not typically apply to AI decisions\, and the interests at play do not justify the cost of granting a right to explanation. The second is epistemic: social epistemologists have long urged us to recognize human thinkers’ basic epistemic dependence upon the testimony of others and upon a variety of complex social processes. Defenders of the right to explanation arguably overlook the possibility that it may be justified to defer epistemically to black-box algorithms. I will argue that\, although serious\, these counterarguments are unsuccessful.\n
DTSTART:20260107T231500Z
DTEND:20260108T004500Z
LOCATION:DE\, University of Hamburg
SUMMARY:AI\, Explainability and Epistemic Dependence
URL:/jarislowsky-chair/channels/event/ai-explainability-and-epistemic-dependence-370278
END:VEVENT
END:VCALENDAR