BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20260203T182549EST-7433icXprT@132.216.98.100
DTSTAMP:20260203T232549Z
DESCRIPTION:Lecture given by Jocelyn Maclure at Universität Hamburg\n\nThe idea that people subjected to opaque AI-based decisions have a “right to explanation”\, under specific circumstances\, is generating a stimulating and productive debate in philosophy. Some early normative defenses of the right to explanation or public justification (Vredenburgh 2021\; Maclure 2021) are being challenged from a variety of perspectives (Ross 2022\; Taylor 2024\; Fritz 2024\; Karlan & Kugelberg 2025). Alternatively\, some are qualifying or refining the case for a right to explanation (Da Silva 2023\; Grote & Paulo 2025\; Dishaw 2025). While I addressed the argument according to which deep artificial neural networks are not significantly more fallible and opaque than human minds in a previous paper (2021)\, I now want to turn my attention to two newly emerging counterarguments to the right to explanation thesis. The first is normative: the standards of public reason do not typically apply to AI decisions\, and the interests at play do not justify the cost of granting a right to explanation. The second is epistemic: social epistemologists have long urged us to recognize human thinkers’ basic epistemic dependence upon the testimony of others and upon a variety of complex social processes. Defenders of the right to explanation arguably overlook the possibility that it may be justified to defer epistemically to black-box algorithms. I will argue that\, although serious\, these counterarguments are unsuccessful.
DTSTART:20260107T231500Z
DTEND:20260108T004500Z
LOCATION:DE\, Universität Hamburg
SUMMARY:AI\, Explainability and Epistemic Dependence
URL:/jarislowsky-chair/fr/channels/event/ai-explainability-and-epistemic-dependence-370278
END:VEVENT
END:VCALENDAR