
Explainable AI (XAI) Insights

Discover key insights, studies, white papers and expert interviews on Explainable Artificial Intelligence (XAI). These resources show how transparency in AI builds trust, ensures regulatory compliance, and enables responsible innovation.

Explainable AI: Who Needs Explanations, What to Explain, and Why It Matters

Samek, Schmid, et al., 2025 (White Paper)

This white paper explores how Explainable AI (XAI) makes complex systems transparent and trustworthy. It emphasizes tailoring explanations to different audiences—developers, regulators, and end-users—and presents methods such as feature attribution, surrogate models, and interactive XAI.
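Of the methods the white paper names, feature attribution is easy to illustrate. The sketch below is a minimal, hypothetical occlusion-style attribution (not the paper's own implementation): each feature is scored by how much the model's prediction changes when that feature is replaced with a baseline value. The `black_box` function is an invented stand-in for any opaque model.

```python
# Hypothetical "black box": stands in for any opaque scoring model.
def black_box(x):
    # weighted sum with an interaction term, so attributions are input-dependent
    return 3 * x[0] - 2 * x[1] + 0.5 * x[0] * x[2]

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much the prediction drops
    when that feature is replaced with a baseline value."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline   # "occlude" one feature at a time
        scores.append(base_pred - model(perturbed))
    return scores

print(occlusion_attribution(black_box, [1.0, 2.0, 4.0]))  # → [5.0, -4.0, 2.0]
```

A positive score means the feature pushed the prediction up at this input; the approach is model-agnostic, which is why attribution methods of this family suit the paper's audience-tailoring argument (the same scores can back a developer-facing or an end-user-facing explanation).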

Explainable AI, tailored AI explanations, interactive XAI, surrogate models, responsible innovation

Explainable AI in Manufacturing: How XAI Improves Jobs, Quality, and Human–AI Collaboration

World Economic Forum (Article)

Explainable AI helps overcome distrust in “black box” systems by making AI decisions transparent. In manufacturing, this builds worker confidence, reduces errors, and improves decision-making—enhancing human expertise rather than replacing it.

Explainable AI, manufacturing AI, human-AI collaboration, quality control AI, AI in industry

Industrial Explainable AI: Building Trust, Safety, and Compliance in Manufacturing

Siemens (White Paper)

Siemens shows how XAI strengthens trust in industrial AI by ensuring transparency, accountability, and compliance with regulations like the EU AI Act across the AI lifecycle.

Explainable AI, industrial AI, EU AI Act, AI lifecycle, safe AI systems

Explainable AI and Data Protection: Transparency, Accountability, and Risks under EU Law

European Data Protection Supervisor (Report)

The EDPS highlights XAI’s role in fairness and transparency under EU law but warns about oversimplified explanations, IP exposure, and over-reliance on AI—stressing the need for human oversight.

Explainable AI, GDPR, AI transparency, data protection AI, AI accountability

Prof. Wojciech Samek on Explainable AI: A Game Changer for Safe and Responsible AI

Interview with Prof. Wojciech Samek (TU Berlin / Fraunhofer HHI)

Prof. Samek explains why XAI becomes crucial as models grow more complex, and outlines three waves of XAI research: post-hoc explanations, mechanistic interpretability, and holistic model understanding.

Explainable AI, safe AI research, Fraunhofer HHI, TU Berlin AI

Explainable AI Study: Requirements, Use Cases, and Practical Solutions

German Federal Ministry for Economic Affairs and Energy, BMWi (Study)

Commissioned by BMWi, this study identifies business requirements, real-world use cases, and challenges (like balancing IP protection with openness). It concludes that explainability is essential for trust and market acceptance.

Explainable AI, BMWi study, GDPR compliance, AI use cases, transparent AI solutions