Abstract:
This paper critically examines the concept of transparency in AI systems within the domains of law and governance, using tools from philosophy. Current discussions often treat transparency, explainability, and trust as closely linked, and frequently regard explainable AI (XAI) techniques as necessary to justify AI decisions. Drawing on philosophy, cognitive science, and jurisprudence, we argue that this approach involves a category mistake: XAI explanations typically describe causal mechanisms or computational processes, which operate at a level below the propositional justification expected in human deliberation, including legal and theoretical reasoning. We conclude that transparency should be understood as a requirement on social deliberation rather than as a purely technical feature of AI systems. Accordingly, we propose a pluralistic, context-sensitive approach to AI transparency, informed by analogies with existing legal and regulatory practices, in which XAI serves as a useful but neither necessary nor sufficient tool.

