XAI alignment workshop: A practice-oriented, multi-stakeholder approach for human-centred AI explanations
TIME
Thursday, 9th September, 11:30–13:00 CEST
GOAL
You will learn how to run this workshop in your own practice in order to align stakeholders on the envisioned use of an AI system and to define explainability needs from multiple perspectives, as a basis for trusted and fair AI systems.
CHALLENGE
The opaque nature of AI systems creates problems when they are implemented in real-life situations where they affect human decisions and the fairness, accountability and trustworthiness of those decisions. As a result, AI systems are increasingly expected, and in some cases legally required, to be transparent and explainable. Current research on Explainable AI (XAI) offers little clarity on how to apply its ideas in practice. In addition, there is little guidance in the pre- and post-development phases on how to explain a system to different stakeholders with different expectations. This is a gap in existing XAI research where living lab methodologies, human-centred design and human-computer interaction can play an important role. Combining the technical and social perspectives, we ask: how can you ensure not only that a system is explained, but also that all stakeholders understand it well enough to trust it and are able to use it?
Through two case studies we developed this workshop methodology to address the ‘understand & align’ step of the development process, and we now want to teach others how to use this approach in their own work practices.
OBJECTIVE
The XAI alignment workshop is a co-creative workshop that aims to align an interdisciplinary development team and the main users of a specific project on their understanding of the envisioned system and of the people involved in and (indirectly) affected by it. It helps to formulate a first view on the explainability needs of different user groups and communities, at different moments in time, and with different communication goals. The main objective of the workshop is to equip the system-to-be with features that can increase trust and fairness in its outcomes. Our aim in developing this workshop was twofold: we focused (1) on how to make AI explanations more human-centred and (2) on how to support practitioners in doing so. During this session we will guide participants through the workshop set-up so that they can facilitate and implement the workshop in their own organisations and living labs.
After this workshop, participants..
.. have experienced the importance and complexity of explainable AI
.. have learned (new) ways to address this complexity within an organisation and/or project
OUTCOMES
Participants will experience a new approach for the development of human-centred explainable AI systems, which they can add to their toolbox. Using this workshop in their work practice ensures shared reflection within the project team on explainability needs and the people impacted. A broader social or human perspective is thus embedded in technical choices throughout all stages of development, instead of being added as an afterthought, or not at all. It is a first step towards creating AI systems that are context-specific, relevant, fair and trustworthy for all stakeholders.
BRIEF OUTLINE / METHODOLOGY
Welcome & introduction
In the first 15 minutes we will present the background of the workshop methodology, explain its objectives and the idea behind the workshop, and introduce the AI application that we will be working on.
Exercise 1: Setting the technological scene using the AIIA
In this exercise we will discuss with the participants the technological set-up of the AI application, using parts of the Artificial Intelligence Impact Assessment (AIIA) tool.
Introducing the personas
We have prepared a few personas based on common stakeholder roles.
Exercise 2: Brainstorming the explainability needs
Based on the personas, we ask the participants to brainstorm together about explainability needs in smaller groups, using a Miro template.
Exercise 3: Brainstorming on societal impact
Using the Tarot Cards of Tech, we will individually brainstorm ideas about the system's societal impact on Miro.
Next steps
We will present the next steps of the workshop and how we envision future implementation in organisations and work practices.
Discussion & closing
We’d like to hear your thoughts and ideas to improve the workshop further!
VALUE FOR PARTICIPANTS
A new tool to implement in your living lab whenever AI is a topic in a project, connecting the field of Explainable AI to living labs, human-computer interaction and human-centred design, in order to bring inspiration and new perspectives to all.
The workshop can also be interesting for a wider audience, because understanding how AI systems work makes dealing with them easier. Learning about the complexity of Explainable AI can stimulate a bottom-up movement of users demanding to open up the black-box algorithms that shape many parts of their lives.
AUDIENCE
Anyone involved in the development of new AI systems, such as data scientists, innovation managers, marketers, business owners and analysts, or people using living lab approaches in AI projects. The workshop is also interesting for anyone who wants to learn ways to develop, or enter into dialogue about, human-centred AI systems.
MAX NUMBER OF PARTICIPANTS
30
FACILITATORS
An Jacobs
An Jacobs holds a PhD in Sociology and is a part-time lecturer at the Vrije Universiteit Brussel (VUB, Belgium). She is also program manager of “Data and Society” and unit lead of the “Health and Work Living Lab” within the imec-SMIT-VUB research group. She is co-founder of the interdisciplinary research group Brubotics, which focuses on human-robot interaction. Since starting her research career at the VUB in 2005, she has been Principal Investigator in several digital innovation projects involving older adults at both local and European level, including ProACT and SEURO. She has been active in the living lab community from the very beginning and was the methodologist supporting the start-up of the Care LivingLabs in Flanders.
Jonne van Belle
Jonne van Belle, MSc, has been a researcher at imec-SMIT-VUB since 2020. She works on projects related to smart cities, data privacy and artificial intelligence, and translates these insights for society as part of the Knowledge Centre Data & Society. With her background in design, user research and human-technology relations, she is interested in understanding the social impact of technologies and how it can be improved in real-life cases using design and communication. In her projects she likes to combine ideas from human-centred design, anthropology, sociology and philosophy of technology to find ways to create responsible (design of) technologies.