Explainable AI (XAI) is central to our mission of developing machine learning systems that are trustworthy, ethical, and effective. Our work in this area aims to make AI systems transparent and understandable to both experts and non-specialists, fostering trust and enabling informed decision-making.
From developing post-hoc explanation techniques, such as feature attribution methods, to designing inherently interpretable models and building theoretical frameworks for understanding state-of-the-art models, we address the growing demand for AI systems that can justify their decisions. Our research also extends to defining and quantifying interpretability itself, and to studying its trade-offs with accuracy and computational efficiency.
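To make the first of these concrete, the sketch below illustrates one simple post-hoc feature attribution method, permutation importance: a feature's importance is measured by how much a trained model's accuracy drops when that feature's values are shuffled. This is a minimal illustrative example, not our method or codebase; the model, dataset, and function names are placeholders, and scikit-learn ships a production version as `sklearn.inspection.permutation_importance`.

```python
# Minimal sketch of a post-hoc feature attribution method
# (permutation importance). Illustrative only: the model, data,
# and helper name below are hypothetical placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def permutation_attribution(model, X, y, n_repeats=10, seed=0):
    """Attribute importance to each feature by measuring how much
    the model's accuracy drops when that feature is shuffled,
    averaged over n_repeats random shuffles."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)  # accuracy on unperturbed data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-target link
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = np.mean(drops)
    return importances


if __name__ == "__main__":
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    imp = permutation_attribution(model, X_test, y_test)
    # Report the five features whose shuffling hurts accuracy most.
    ranked = sorted(zip(data.feature_names, imp), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.4f}")
```

Because it only queries the model's predictions, a method like this applies to any black-box classifier, which is exactly what makes post-hoc attribution attractive when inherently interpretable models are not an option.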
By combining rigorous technical foundations with practical relevance in our XAI research, we aim to contribute to a future where AI systems are not only powerful, but also safe and human-centric.