I am a Full Professor in Explainable AI at Maastricht University and a Visiting Professor at TU Delft.
I am the PI of the Epsilon lab, where I lead and contribute to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems; specifically, developing the state of the art for automatically generated explanations (transparency) and explanation interfaces (recourse and control). These include projects funded by IBM and Twitter, as well as an EU Marie Curie ITN on Interactive Natural Language Technology for Explainable Artificial Intelligence. In addition, I collaborate with a number of private and public organizations such as Inspectie Leefomgeving en Transport (ILT), CGI/Prorail, Capgemini/Unilever, Porsche, Blendle, PersGroep, and FDMedia. I regularly shape international scientific research programs (e.g., on steering committees of journals, or as program chair of conferences), and actively organize and contribute to high-level strategic workshops relating to responsible data science, both in the Netherlands and internationally. I am a senior member of the ACM.


(Cartoon by Erwin Suvaal from CVIII ontwerpers.) As algorithmic decision-making becomes prevalent across many sectors, it is important to help users understand why certain decisions are being proposed. Explanations are needed when there is a large knowledge gap between humans and systems, or when joint understanding is only implicit. This type of joint understanding is becoming increasingly important, for example, when news providers and social media systems, such as Twitter and Facebook, filter and rank the information that people see.

To link the mental models of both systems and people, our work develops ways to supply users with a level of transparency and control that is meaningful and useful to them. We develop methods for generating and interpreting rich metadata that helps bridge the gap between computational and human reasoning (e.g., for understanding subjective concepts such as diversity and credibility). We also develop a theoretical framework for generating better explanations (as both text and interactive explanation interfaces) that adapts to a user and their context. To better understand the conditions for explanation effectiveness, we look at when to explain (e.g., surprising content, lean in/lean out, risk, complexity) and what to adapt to (e.g., group dynamics, personal characteristics of a user).

Relevant keywords: explanations, natural language generation, human-computer interaction, personalization (recommender systems), intelligent user interfaces, diversity, filter bubbles, responsible data analytics.


News:

2021

Still hiring assistant professors (tenure track) and postdocs to work on explainable AI!

15th of January: Full paper ``Disparate Impact Diminishes Consumer Trust Even for Advantaged Users'' accepted to Persuasive'21, led by Tim Draws, with Zoltan Szlavik, Benjamin Timmermans, Kush R. Varshney, and Michael Hind.

14th of January: New journal paper accepted to ACM TiiS: ``Humanized Recommender Systems: State-of-the-Art and Research Issues''. Led by Trang Tran Ngoc, with Alexander Felfernig.

4th of January: Recognized as a senior member of the ACM!

4th of January: Book chapter ``Explaining Recommendations: Beyond single items'' has been conditionally accepted for publication in the third edition of the Recommender Systems Handbook!

4th of January: New full paper accepted at the FAccT Conference: ``Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces''. Joint work with Mats Mulder, Oana Inel, and Jasper Oosterman. This is an outcome of Mats's Master's thesis and a great collaboration with Blendle Research.

2020

2nd of November: Paper accepted to the XAI.it 2020 Workshop, titled ``Considerations for Applying Logical Reasoning to Explain Neural Network Outputs'', with Federico Cau and Davide Spano.

15th of September: Looking forward to giving a conference keynote at the International Conference on Research Challenges in Information Science (CORE rank: B)! Title: ``Explainable AI is not yet understandable AI''.

23rd of July: Full paper with Mesut Kaya and Derek Bridge titled ``Ensuring Fairness in Group Recommendations by Rank-Sensitive Balancing of Relevance'' has been accepted at RecSys'20 as a long paper.

29th of May: Full paper accepted at ACM Hypertext 2020, titled ``You do not decide for me! Evaluating Explainable Group Aggregation Strategies for Tourism''. With Shabnam Najafian, Oana Inel, Daniel Herzog, and Sihang Qiu.

20th of April: Our paper ``Motivated Numeracy and Active Reasoning in a Western European Sample'' has been accepted to the journal Behavioural Public Policy, with Paul Connor, Emily Sullivan, and Mark Alfano (to appear).

9th of April: Our paper ``On the merits and pitfalls of introducing a digital platform to aid conservation management: Volunteer data submission and the mediating role of volunteer coordinators'' was accepted to the Journal of Environmental Management (IF = 4.865, currently open access), with Koen Arts et al.

30th of March: Our paper ``Eliciting User Preferences for Personalized Explanations for Video Summaries'' with Oana Inel and Lora Aroyo has been accepted at UMAP 2020!

30th of March: New report from the Dagstuhl Perspectives Workshop ``Diversity, Fairness, and Data-Driven Personalization in (News) Recommender Systems''; full manifesto to follow.