I am a Full Professor in Explainable AI at Maastricht University, and a visiting Professor at TU Delft. I lead and contribute to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems; specifically, developing the state of the art for automatically generated explanations (transparency) and explanation interfaces (recourse and control). These include projects funded by IBM and Twitter, as well as an EU Marie Curie ITN on Interactive Natural Language Technology for Explainable Artificial Intelligence. Currently, I represent Maastricht University as a Co-Investigator in the ROBUST consortium, pre-selected for a national (NWO) grant with a total budget of 95M (25M from NWO) to carry out long-term (10-year) research into trustworthy artificial intelligence.
I regularly shape local and international scientific research programs (e.g., on steering committees of journals, or as program chair of conferences), and actively organize and contribute to high-level strategic workshops relating to responsible data science, both in the Netherlands and internationally. I am a Senior Member of the ACM.
(Cartoon by Erwin Suvaal from CVIII ontwerpers.)
As algorithmic decision-making becomes prevalent across many sectors, it is important to help users understand why certain decisions are being proposed. Explanations are needed when there is a large knowledge gap between humans and systems, or when joint understanding is only implicit. This type of joint understanding is becoming increasingly important, for example, when news providers and social media systems, such as Twitter and Facebook, filter and rank the information that people see.
To link the mental models of both systems and people, our work develops ways to supply users with a level of transparency and control that is meaningful and useful to them. We develop methods for generating and interpreting rich metadata that helps bridge the gap between computational and human reasoning (e.g., for understanding subjective concepts such as diversity and credibility). We also develop a theoretical framework for generating better explanations (as both text and interactive explanation interfaces), which adapts to a user and their context. To better understand the conditions for explanation effectiveness, we look at when to explain (e.g., surprising content, lean-in/lean-out scenarios, risk, complexity) and what to adapt to (e.g., group dynamics, personal characteristics of a user).
explanations, natural language generation, human-computer interaction, personalization (recommender systems), intelligent user interfaces, diversity, filter bubbles, responsible data analytics.
6th of July:
Looking forward to participating in the panel on Ethics and NLG at the International Natural Language Generation Conference (INLG) on the 20th of July.
3rd of June:
The third edition of the Recommender Systems Handbook including our book Chapter ``Explaining Recommendations: Beyond single items'' has now been published.
3rd of June:
Congratulations to PhD candidate Alisa Rieger on her acceptance to the UMAP doctoral consortium, and her accepted paper at the Explainable User Modeling workshop titled ``Towards Healthy Online Debate: An Investigation of Debate Summaries and Personalized Persuasive Suggestions''!
I'll be giving one of the keynotes at the Joint EurAI Advanced Course on AI, TAILOR Summer School 2022. This year's theme is Explainable AI.
17th of March:
Our paper ``Comprehensive Viewpoint Representations for a Deeper Understanding of User Interactions With Debated Topics'' with Tim Draws, Oana Inel, Christian Baden, and Benjamin Timmermans has won the best paper award at CHIIR'22! That's our third best paper award in a row!
23rd of November:
Our paper ``The European Approach to AI from a Recommender System Perspective'' is now online, with Tommaso Di Noia, Panagiota Fatourou, and Markus Schedl. This is a Big Trend paper in the Communications of the ACM (CACM) Region Special Section Europe 2022.
17th of November:
Our paper ``A Checklist to Combat Cognitive Biases in Crowdsourcing'' with Tim Draws, Alisa Rieger, Oana Inel, and Ujwal Gadiraju has won the Amazon Best Paper Award at HCOMP 2021!
6th of October:
Maastricht University is preparing to participate in a 10-year AI research programme. I will be a co-investigator and chair for the integration of humanities and social sciences in ROBUST, a consortium applying for an NWO grant with a total budget of 95M (25M from NWO) to carry out long-term research into reliable artificial intelligence (AI).
UM press release
, ICAI lab press release
, NWO press release
27th of September:
Our journal submission, ``Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos''
, was published in Frontiers in Artificial Intelligence, section AI for Human Learning and Behavior Change. Authors: Oana Inel, Tomislav Duricic, Harmanpreet Kaur, Elisabeth Lex, Nava Tintarev
9th of September:
Our submission ``This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias'' won the Douglas Engelbart Best Paper Award at HyperText'21! Authors: Alisa Rieger, Tim Draws, Mariet Theune, and Nava Tintarev.
26th of August:
A group effort titled ``Toward Benchmarking Group Explanations: Evaluating the Effect of Aggregation Strategies versus Explanation'' was accepted to the Perspectives workshop at ACM RecSys. Submission led by Francesco Barile.
Another group effort was also accepted to HCOMP: ``A Checklist to Combat Cognitive Biases in Crowdsourcing'', led by Tim Draws.
12th of July:
Two full papers accepted to HyperText'21. 1) ``This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias''. Led by Alisa Rieger, and with Tim Draws and Mariet Theune. 2) ``Exploring User Concerns about Disclosing Location and Emotion Information in Group Recommendations''. Led by Shabnam Najafian, and co-authored with Tim Draws, Francesco Barile, Marko Tkalcic, and Jie Yang.
9th of June:
Paper accepted at the workshop NLP for positive impact: ``Are we human, or are we users? The role of natural language processing in human-centric news recommenders that nudge users to diverse content.'' With: Myrthe Reuver, Nicolas Mattis, Marijn Sax, Suzan Verberne, Natali Helberger, Judith Moeller, Sanne Vrijenhoek, Antske Fokkens and Wouter van Atteveldt.
26th of May:
``How Do Biased Search Result Rankings Affect User Attitudes on Debated Topics?'' was accepted as a full paper to SIGIR'21, led by PhD candidate Tim Draws and with Ujwal Gadiraju, Alessandro Bozzon, and Ben Timmermans.
18th of March:
Really enjoyed moderating the Webinar on Ethics in AI, co-organized by DKE, BISS, Brightlands, and IBM!
17th of March:
Gave a talk at Computational Communication Science group at the VU titled: ``Toward Measuring Viewpoint Diversity in News Consumption''.
8th of March:
Full paper accepted at UMAP: ``Factors Influencing Privacy Concern for Explanations of Group Recommendation'' (acceptance rate for full papers: 23.3%)! Paper led by Shabnam Najafian and with Amra Delic and Marko Tkalcic.
15th of January:
Full paper ``Disparate Impact Diminishes Consumer Trust Even for Advantaged Users'' accepted at Persuasive'21, led by Tim Draws and with Zoltan Szlavik, Benjamin Timmermans, Kush R. Varshney, and Michael Hind.
14th of January:
New journal paper accepted to ACM TiiS: ``Humanized Recommender Systems: State-of-the-Art and Research Issues''. Led by Trang Tran Ngoc, and with Alexander Felfernig.
4th of January:
Recognized as a Senior Member of the ACM.
4th of January:
Book Chapter: ``Explaining Recommendations: Beyond single items'' has been conditionally accepted for the publication of the third edition of the Recommender Systems Handbook!
4th of January:
New full paper accepted at the FAccT Conference: ``Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces''. Joint work with Mats Mulder, Oana Inel, and Jasper Oosterman. This is an outcome of Mats's Master's thesis and a great collaboration with Blendle Research.