We are in the process of curating a list of this year’s publications, including links to social media, lab websites, and supplemental material. Currently, we have 96 full papers, 19 posters, one journal paper, two interactive demos, and two student mentoring programs; we are also leading six workshops. Three papers received a best paper award, and 11 papers received an honorable mention.
Is your publication from 2026 missing? Please enter the details in this Google Form and send us an email at contact@germanhci.de to let us know that you added a publication.
AI CHAOS! 2nd Workshop on the Challenges for Human Oversight of AI Systems
Malik Khadar (Department of Computer Science & Engineering, University of Minnesota), Julia Cecil (Department of Psychology, LMU Munich), Leon Van Der Neut (Delft University of Technology), Nikola Banovic (Electrical Engineering and Computer Science, University of Michigan), Kevin Baum (Center for European Research in Trusted AI (CERTAIN), German Research Center for Artificial Intelligence (DFKI), Saarbrücken), Stevie Chancellor (Computer Science and Engineering, University of Minnesota), Enrico Costanza (UCL Interaction Centre, University College London), Motahhare Eslami (School of Computer Science, Carnegie Mellon University), Anna Maria Feit (Saarland Informatics Campus, Saarland University), Susanne Gaube (Global Business School for Health (GBSH), University College London (UCL)), Ujwal Gadiraju (Web Information Systems, Delft University of Technology), Harmanpreet Kaur (University of Minnesota)
Abstract | Tags: Workshops | Links:
@inproceedings{Khadar2026AiChaos,
title = {AI CHAOS! 2nd Workshop on the Challenges for Human Oversight of AI Systems},
author = {Malik Khadar (Department of Computer Science \& Engineering, University of Minnesota), Julia Cecil (Department of Psychology, LMU Munich), Leon Van Der Neut (Delft University of Technology), Nikola Banovic (Electrical Engineering and Computer Science, University of Michigan), Kevin Baum (Center for European Research in Trusted AI (CERTAIN), German Research Center for Artificial Intelligence (DFKI), Saarbrücken), Stevie Chancellor (Computer Science and Engineering, University of Minnesota), Enrico Costanza (UCL Interaction Centre, University College London), Motahhare Eslami (School of Computer Science, Carnegie Mellon University), Anna Maria Feit (Saarland Informatics Campus, Saarland University), Susanne Gaube (Global Business School for Health (GBSH), University College London (UCL)), Ujwal Gadiraju (Web Information Systems, Delft University of Technology), Harmanpreet Kaur (University of Minnesota)},
url = {https://cix.cs.uni-saarland.de/, website},
doi = {10.1145/3772363.3778736},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {As AI systems are increasingly adopted in high-stakes domains such as healthcare, autonomous driving, and criminal justice, their failures may threaten human safety and rights. Human oversight of AI systems is therefore critically important as a potential safeguard to prevent harmful consequences in high-risk AI applications. The global regulatory and policy landscape for AI governance remains understandably fragmented and diverse. While frameworks like the European AI Act require human oversight for high-risk AI systems, there is currently a lack of well-defined methodologies and conceptual clarity to operationalize such oversight effectively. Independent of policy and regulation, poorly designed oversight can create dangerous illusions of safety while obscuring accountability. This interdisciplinary workshop aims to bring together researchers from various disciplines, including AI, HCI, psychology, law, and policy, to address this critical gap. We will explore the following questions: (1) What are the greatest challenges to achieving effective human oversight of AI systems? (2) How can we design AI systems that enable meaningful human oversight? (3) How do we assign responsibilities to and support the various stakeholders involved in oversight? Through talks and interactive group discussions, participants will identify oversight challenges; examine stakeholder roles; discuss supporting tools, methods, and regulatory frameworks; and establish a collaborative research agenda. Our central goal is to further a roadmap that enables effective human oversight for the responsible deployment of AI in society.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}
Augmented Body Parts: Bridging VR Embodiment and Wearable Robotics
HyeonBeom Yi (Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea), Myung Jin (MJ) Kim (Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea), Seungwoo Je (Southern University of Science and Technology, Shenzhen, China), Seungjae Oh (Kyung Hee University, Yongin, Republic of Korea), Shuto Takashita (University of Tokyo, Tokyo, Japan), Hongyu Zhou (University of Sydney, Sydney, Australia), Marie Muehlhaus (Saarland University, Saarbrücken, Germany), Eyal Ofek (University of Birmingham, Birmingham, United Kingdom), Andrea Bianchi (KAIST, Daejeon, Republic of Korea)
Abstract | Tags: Workshops | Links:
@inproceedings{Yi2026AugmentedBody,
title = {Augmented Body Parts: Bridging VR Embodiment and Wearable Robotics},
author = {HyeonBeom Yi (Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea), Myung Jin (MJ) Kim (Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea), Seungwoo Je (Southern University of Science and Technology, Shenzhen, China), Seungjae Oh (Kyung Hee University, Yongin, Republic of Korea), Shuto Takashita (University of Tokyo, Tokyo, Japan), Hongyu Zhou (University of Sydney, Sydney, Australia), Marie Muehlhaus (Saarland University, Saarbrücken, Germany), Eyal Ofek (University of Birmingham, Birmingham, United Kingdom), Andrea Bianchi (KAIST, Daejeon, Republic of Korea)},
url = {https://hci.cs.uni-saarland.de, website
https://www.linkedin.com/company/saarhcilab/, lab's linkedin},
doi = {10.1145/3772363.3778688},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {Recent work across HCI/HRI and wearable robotics has investigated how people control and perceive extra body parts in both virtual and physical settings. Virtual embodiment in XR has shown that users can experience ownership and agency with non-anthropomorphic avatars, while wearable robotics has introduced supernumerary limbs such as third arms and robotic tails. Despite these shared goals, connections between findings remain limited because VR and hardware studies rely on different assumptions about sensory feedback, human perception, and physical constraints, making insights difficult to transfer across contexts. This workshop brings together researchers in XR, wearable robotics, haptics, and neuroscience to explore how to foster embodiment and adaptation with augmented body parts, and how to bridge virtual embodiment to effective use with wearable devices. Through a keynote, brief position shares, and two hands-on group activities, participants will examine control mappings and sensory-feedback strategies and identify which aspects of VR-based embodiment realistically transfer when accounting for hardware limits, sensor variability, and cognitive load. Ultimately, the workshop aims to articulate a focused research agenda connecting VR-based insights to feasible wearable robotics implementations, enabling future work on augmenting the human body with new parts and capabilities.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}
Human-AI-UI Interactions Across Modalities
Kewen Peng (University of Utah, United States), Jeffrey Nichols (Apple Inc., United States), Christof Lutteroth (University of Bath, United Kingdom), Tiffany Knearem (MBZUAI, United Arab Emirates), Felix Kretzer (Karlsruhe Institute of Technology (KIT), Germany), Jeffrey Bigham (Carnegie Mellon University & Apple Inc., United States), Alexander Maedche (Karlsruhe Institute of Technology (KIT), Germany), Yue Jiang (University of Utah, United States)
Abstract | Tags: Workshops | Links:
@inproceedings{Peng2026HumanaiuiInteractions,
title = {Human-AI-UI Interactions Across Modalities},
author = {Kewen Peng (University of Utah, United States), Jeffrey Nichols (Apple Inc., United States), Christof Lutteroth (University of Bath, United Kingdom), Tiffany Knearem (MBZUAI, United Arab Emirates), Felix Kretzer (Karlsruhe Institute of Technology (KIT), Germany), Jeffrey Bigham (Carnegie Mellon University \& Apple Inc., United States), Alexander Maedche (Karlsruhe Institute of Technology (KIT), Germany), Yue Jiang (University of Utah, United States)},
url = {https://h-lab.win.kit.edu/, website
https://www.linkedin.com/company/68838007/, lab's linkedin},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {Designing and developing user-friendly interfaces has long been a cornerstone of HCI research, yet today we are at a turning point where UIs are no longer designed solely for humans but also for intelligent agents that act on users’ behalf, while UIs are also expanding beyond 2D screens into extended reality environments with inherently multimodal characteristics, together challenging us to rethink the role of the UI as a mediator of human–AI interaction. This workshop will explore how UI agents bridge human intent and system behavior by interpreting multimodal inputs and generating adaptive outputs across surfaces from screens to extended reality (XR), and we will examine not only their technical capabilities but also their broader impact, including how agents reshape daily workflows, how bidirectional alignment between human and AI activity can be achieved, and how generative models may transform UI creation. XR provides a compelling testbed for these questions and highlights challenges around accuracy, efficiency, transparency, accessibility, and user agency, setting the stage for the next generation of intelligent and adaptive UIs.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}
Human-Centered Explainable AI (HCXAI): Re-examining XAI in the Era of Agentic AI
Upol Ehsan (Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, United States), Amal Alabdulkarim (Georgia Institute of Technology, Atlanta, Georgia, United States), Kenneth Holstein (Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Min Kyung Lee (School of Information, University of Texas at Austin, Austin, Texas, United States), Andreas Riener (Human-Computer Interaction Group, Technische Hochschule Ingolstadt, Ingolstadt, Bavaria, Germany), Justin D. Weisz (IBM Research, Yorktown Heights, New York, United States)
Abstract | Tags: Workshops | Links:
@inproceedings{Ehsan2026HumancenteredExplainable,
title = {Human-Centered Explainable AI (HCXAI): Re-examining XAI in the Era of Agentic AI},
author = {Upol Ehsan (Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, United States), Amal Alabdulkarim (Georgia Institute of Technology, Atlanta, Georgia, United States), Kenneth Holstein (Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Min Kyung Lee (School of Information, University of Texas at Austin, Austin, Texas, United States), Andreas Riener (Human-Computer Interaction Group, Technische Hochschule Ingolstadt, Ingolstadt, Bavaria, Germany), Justin D. Weisz (IBM Research, Yorktown Heights, New York, United States)},
url = {https://hcig.thi.de/, website
https://www.linkedin.com/in/andreas-riener-19233710/, author's linkedin},
doi = {10.1145/3772363.3778728},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {Making AI explainable requires more than algorithmic transparency: it demands understanding who needs explanations and why. In our sixth CHI workshop on Human-Centered XAI (HCXAI), we shift focus to agentic AI systems. LLM-based agents foundationally challenge existing explainability paradigms. Unlike traditional AI that produces single outputs, agents plan multi-step strategies, invoke tools with real-world consequences, and coordinate with other systems; yet current XAI approaches fail to address these complexities. Users need to understand not just what an agent might do, but the cascade of actions it could trigger, the risks involved, and why responses take time. Even our expanded HCXAI frameworks struggle with these new demands. Through our workshop series, we have built a strong community making important conceptual, methodological, and technical impact. This year, we re-examine what human-centered explainable AI means in the agentic era, bringing together researchers and practitioners to shape explainability for both users and developers of these systems.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}
The AI Accomplice: Exploring Generative Artificial Intelligence in Facilitating and Amplifying Deceptive Designs
Thomas Kosch (HU Berlin), Veronika Krauß (HS Ansbach), Christopher Katins (HU Berlin), Dominik Schön (TU Darmstadt), Mark McGill (University of Glasgow), Jan Gugenheimer (TU Darmstadt)
Abstract | Tags: Workshops | Links:
@inproceedings{Kosch2026AiAccomplice,
title = {The AI Accomplice: Exploring Generative Artificial Intelligence in Facilitating and Amplifying Deceptive Designs},
author = {Thomas Kosch (HU Berlin), Veronika Krauß (HS Ansbach), Christopher Katins (HU Berlin), Dominik Schön (TU Darmstadt), Mark McGill (University of Glasgow), Jan Gugenheimer (TU Darmstadt)},
url = {https://hcistudio.org, website},
doi = {10.1145/3772363.3778770},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {As generative Artificial Intelligence (AI) becomes increasingly embedded and utilized for digital design, it presents both opportunities and risks. One major concern is its potential to facilitate and incorporate deceptive design patterns into computing technologies, which could manipulate or mislead users to their disadvantage. Similar to the concept of precedent-based design, a common approach in design theory that suggests reapplying previous design solutions to similar or identical problems, generative AI can integrate deceptive design patterns included in the training data a model has seen before. Our workshop explores how generative AI suggests and enacts deceptive design patterns in digital design. The goal of the workshop is to explore the ethical challenges of utilizing generative AI models and develop strategies to detect or prevent manipulative practices, thereby creating more transparent and equitable AI-generated experiences.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}
XR for Challenging Environments - Enabling Human Performance and Agency under Stress
Raimund Schatz (AIT), Helmut Schrom-Feiertag (AIT), Guglielmo Papagni (AIT), Frank Steinicke (Universität Hamburg), Lea Skorin-Kapov (University of Zagreb), Mark Billinghurst (Adelaide University), Georg Aumayr (Johanniter), Leif Oppermann (Fraunhofer FIT)
Abstract | Tags: Workshops | Links:
@inproceedings{Schatz2026XrChallenging,
title = {XR for Challenging Environments - Enabling Human Performance and Agency under Stress},
author = {Raimund Schatz (AIT), Helmut Schrom-Feiertag (AIT), Guglielmo Papagni (AIT), Frank Steinicke (Universität Hamburg), Lea Skorin-Kapov (University of Zagreb), Mark Billinghurst (Adelaide University), Georg Aumayr (Johanniter), Leif Oppermann (Fraunhofer FIT)},
url = {https://www.fit.fraunhofer.de/en/business-areas/cooperation-systems/mixed-reality.html, website
https://www.linkedin.com/company/fraunhofer-fit/, lab's linkedin
https://www.linkedin.com/in/leifoppermann/, author's linkedin
https://www.youtube.com/@fit4xr, social media},
year = {2026},
date = {2026-04-13},
urldate = {2026-04-13},
abstract = {This workshop brings together researchers and practitioners to tackle the challenges of designing eXtended Reality (XR) assistance and augmentation for professionals in challenging environments. While XR, combined with Artificial Intelligence (AI), shows promise in high-stakes domains like emergency response, public safety, or advanced manufacturing, current research paradigms often fail to address the unique requirements and risks of embodied, mission-critical work. We therefore emphasize three crucial shifts in perspective: from static "trust" to calibrated trust under stress; from fragile "seamlessness" to resilience by design; and from screen-based "transparency" to situated and embodied explainability. Through a curated set of activities, we aim to build a cross-disciplinary community that identifies key research questions, co-creates novel design approaches, and defines a shared research agenda for trustworthy, resilient, and explainable XR systems. By anchoring the discussion in stressful (and sometimes extreme) contexts, our workshop offers the CHI community a unique opportunity to forge new theories and tangible design principles for the next generation of XR-based augmentation and assistance.},
keywords = {Workshops},
pubstate = {published},
tppubtype = {inproceedings}
}