About

This third ACM C&C workshop on explainable AI for the Arts (XAIxArts) will bring together a community of researchers and creative practitioners in Human-Computer Interaction (HCI), Interaction Design, AI, explainable AI (XAI), and Digital Arts to explore the role of XAI for the Arts. XAI is a core concern of Human-Centred AI and relies heavily on HCI techniques to explore how to make complex, difficult-to-understand AI models more understandable to people. Our first workshop explored the landscape of XAIxArts and identified emergent themes, whilst the second iteration focused on co-developing a manifesto.

This workshop will bring researchers together to expand the XAIxArts community, advance discussions on current and emerging XAIxArts practices, and identify areas for intervention using speculative futures.

Workshop Date: TBC. Venue: Online.

Themes

This workshop will explore how XAI might be used in the Arts and how the Arts might contribute to new forms of XAI. It will examine the challenges and opportunities at the intersection of XAI and the Arts (XAIxArts), offering a fresh and critical view on the explainable aspects of Responsible AI and Human-Centred AI more broadly. The themes include but are not limited to:

Reducing Marginalisation by AI

Current AI models tend to prioritise certain views, experiences and perspectives. This raises critical questions about inclusion and exclusion in AI systems: Who is represented, and who is marginalised? Whose perspectives are privileged, and who holds power in shaping these models? How can we ensure equal accessibility across diverse user groups?

Bias in artistic practice that uses AI is often assumed to be an inevitable consequence of the dataset used. Datasets are often the foundational material for artistic systems using AI, and artists and developers can take an active role in curating datasets that reflect a broad range of cultural, social, and aesthetic perspectives, where appropriate. This workshop seeks to examine whether insights from the Arts suggest more transparent and ethical processes for the creation and adoption of AI, ways to subvert current practice and ensure fairness in AI models and their use, or ways to explain the inherent biases of AI so as to mitigate its mismanagement.

Adapting AI for Arts Practice

Across XAIxArts so far, opportunities have been identified to push current AI tools towards creative endeavour while preserving artists’ agency in creative practices. For example, artists have navigated generative AI models to expose their biases, or explored latent spaces as a way of exposing their limits. These methods and techniques resonate with qualities of the creative user experience such as surprise, ambiguity and reflection – often in opposition to the technocentric and functional goals of traditional XAI. However, open questions remain on how to balance these techniques to offer both explainability and opportunities for surprise.

Evaluating XAI for Arts

Many existing methods for evaluating XAI models focus on issues such as accuracy or helpfulness to productivity. This misses key qualities of the creative user experience. Existing HCI research methods also tend to prioritise data collection across representative samples – at odds with creative practitioners who have a more individualised practice, or where explanations must be tailored case by case, e.g. to an individual’s AI literacy or ability. Whether an AI system produces explanations conducive to an artist and their artistic identity is more open-ended and subjective, yet crucial to its meaningfulness in the arts. A system that does not align with an artist’s workflow and identity may undermine their agency, while one that supports their creative vision can become a useful tool for artistic exploration and expression. XAIxArts could help to develop ways of promoting research that accounts for first-person perspectives and explanations of AI art’s aesthetics, rather than more generalised approaches.

Mapping for Real-time Interaction

Explaining AI models through interaction is complex in artistic domains. For example, explanation is often needed in the moment, as people interact with AI in real-time settings such as jazz improvisation. This contrasts with the post-hoc explanations more frequently used in broader XAI research. Mapping the interaction to the output of an AI system in a way that is conducive to the creative process also poses open questions. This theme highlights the complexities of designing AI systems that not only generate real-time explanations but also align with the fluid and embodied nature of artistic practice.

AI to Spark Reflection

The Arts have an opportunity to be playful with AI, exposing the inherent biases and imperfections of AI systems to audiences – turning potential flaws into opportunities for sparking reflection and creative expression. There is potential for the Arts to suggest how to design playful AI explanations that could spark reflection, beyond current understandings in more productivity-focused domains. How to evaluate whether reflection on AI and its processes has occurred remains an open question.

Proceedings

Proceedings of the workshop will be published after the conference.

Schedule

The schedule for the workshop will be published soon.

Organizers

Acknowledgements

This work was supported by the Engineering and Physical Sciences Research Council through the Turing AI World Leading Researcher Fellowship in Somabotics: Creatively Embodying Artificial Intelligence [grant number APP22478], AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation [EP/Y009800/1] (RAI UK/RAKE), and the STAHR Collective (https://www.stahrc.org).

Contacts

If you have any questions, feel free to contact Nick, Corey, Lizzie or Shuoyang at the following email addresses: