Program/Schedule
Full At-A-Glance Schedule (draft)
Below is the draft at-a-glance schedule for EduCHI 2026; detailed session descriptions follow. All times are listed in local Toronto time (EDT, UTC−4).
| Time | Wed May 20 | Thu May 21 | Fri May 22 |
|---|---|---|---|
| 8:00 – 9:00 | Check-in/registration; coffee/tea + breakfast | Check-in/registration; coffee/tea + breakfast | Coffee/tea + breakfast |
| 9:00 – 10:30 | Pedagogy Workshop (invite only) | Opening + Keynote | Session 3: Ethics, Equity, and Responsible Practice |
| 10:30 – 11:00 | Coffee/tea break | Coffee/tea break | Coffee/tea break |
| 11:00 – 12:15 | Pedagogy Workshop (continued) | Session 1: AI in HCI Education I – The Student Experience | Session 4: AI in HCI Education II – Curriculum Responses |
| 12:15 – 1:45 | Lunch (provided) | Lunch (provided) + Birds of a Feather | Lunch (provided) + EduCHI Town Hall |
| 1:45 – 3:15 | Pedagogy Workshop (continued) | Session 2: Teaching Durable Skills for HCI Practice | Session 5: Methods, Frameworks, and Studio Practice |
| 3:15 – 3:45 | Coffee/tea break | Coffee/tea break | Coffee/tea break |
| 3:45 – 5:00 | Pedagogy Workshop (continued) | Lightning Talks | Session 6: Program & Curriculum Design |
| 6:00 – 8:30 | Optional Wednesday Night Social Gathering [in-person access only] | Welcome Reception [in-person access only] | Optional Friday Night Social Gathering [in-person access only] |
Wednesday, May 20
Registration opens at 8:00am
9:00am – 5:00pm EDT
Pedagogy Workshop (invite only)
The Pedagogy Workshop is a space for doctoral students, postdocs, and early-career faculty members teaching HCI or design to build their skills and develop a community of practice.
Attendance is by invitation only.
Thursday, May 21
Registration opens at 8:00am
9:00 – 9:15am EDT
Opening: Welcome to EduCHI 2026
Hear from the symposium organizers and representatives from the Faculty of Information as we welcome attendees to EduCHI 2026.
9:15 – 10:30am EDT
Opening Keynote
Dr. Elizabeth Churchill, Professor and founding Department Chair of Human-Computer Interaction at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi, UAE
Details coming soon.
10:30 – 11:00am EDT
Coffee/Tea Break
11:00am – 12:15pm EDT
Session 1: AI in HCI Education I – The Student Experience
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
The Hidden Curriculum of LLM-Assisted Qualitative Analysis in HCI Education
Franceli L. Cibrian (Chapman University); Jesus A. Beltran (California State University); Lizbeth Escobedo (Dalhousie University)
Abstract
Large Language Models (LLMs) are rapidly becoming part of qualitative research workflows and, as a result, part of HCI classrooms. Drawing on more than a decade of teaching undergraduate and graduate HCI and more recent experience teaching graduate qualitative methods, we reflect on a pedagogical inflection point that has become unavoidable over the last year: integrating LLM-assisted qualitative analysis into instruction on coding, theme development, and codebook construction. When students can prompt a model to generate “codes”, “themes”, and even a “codebook”, a deceptively simple question emerges: what counts as learning qualitative analysis? We argue that LLM-assisted workflows introduce a hidden curriculum, an unspoken set of norms about rigor, speed, privacy, and responsibility that students learn while “getting the assignment done”. Framed as an EduCHI provocation, we surface four unresolved questions: (1) What counts as learning qualitative analysis when using LLMs? (2) Should LLM output be treated as “data”, “analysis”, or “peer feedback”? (3) Which harms are uniquely likely for novices learning qualitative analysis with LLMs? And (4) What are the ethical implications of letting students use LLMs for qualitative analysis? Overall, we aim to help HCI educators more clearly identify what must remain accountable, interpretable, and ethically grounded as LLMs enter qualitative methods education.
How Novice Designers Use, Embed, and Make Sense of AI: Implications for AI-Integrated Design Education
Raju Maharjan, Sang Ryu (University of Oklahoma)
Abstract
As Artificial Intelligence (AI) becomes ubiquitous in design practice, understanding how novice designers learn to work with this technology is critical for Human-Centered Design (HCD) education. This study reports findings from a semester-long (16-week) diary study with 23 third-year undergraduate students in an HCD course, examining how novice designers engage with AI in their design processes, how they embed AI into their solutions, and how they make sense of designing both with and for AI. Findings reveal that students used AI primarily as a cognitive scaffold during user research and prototyping, developing collaboration literacy through iterative prompting and critical evaluation of AI outputs. Students’ experiences using AI, including navigating tensions around creativity, control, and authorship, directly shaped how they designed AI features in their solutions, which prioritized user agency and oversight over automation. Based on these findings, this work conceptualizes AI as both a cognitive scaffold and reflective mirror, supporting design work while prompting students to critically examine their own reasoning. We conclude with pedagogical implications for AI-integrated HCD education, highlighting the need for (i) stage-appropriate AI scaffolding; (ii) explicit instruction in prompting as a form of design literacy; (iii) supportive engagement with authorship concerns; and (iv) reflective practices connecting AI experiences to design decisions.
“It Would Take Too Much to Explain”: Task-Contingent AI Use in Design Education
Noga Dines, Osnat Mokryn (University of Haifa)
Abstract
Self-regulated learning research emphasizes strategic tool selection as central to effective learning. Generative AI complicates this process by offering broad capabilities whose value depends on task context. While concerns have been raised about shifts in authority and reduced learner agency, there is limited empirical evidence of how students decide when to use or decline AI across different tasks.
We report a naturalistic study of 23 students in a semester-long design course who worked with custom LLM agents across four sequential tasks. Aside from one explicit boundary prohibiting AI use for stakeholder interviews, students retained autonomy over tool use. AI engagement varied systematically. Students sought validation during interview design, largely declined AI when documenting how interviews reshaped system logic, showed mixed use during envisioning exercises, and selectively delegated during final solution development.
These patterns suggest task-contingent judgment rather than uniform reliance or resistance. We argue that making such judgment explicit is an important learning goal in AI-mediated education.
12:15 – 1:45pm EDT
Lunch (provided) + Birds of a Feather
During the lunch break, attendees will participate in themed discussions around a topic of interest to HCI educators. Lunch will be provided.
1:45 – 3:15pm EDT
Session 2: Teaching Durable Skills for HCI Practice
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
Stop Grading the Artifact: Traceability as the New Core Assessment in HCI
Houda Elmimouni, Andrea Bunt, Daniel J Rea, Patrick Dubois, James E Young, Celine Latulipe (University of Manitoba)
Abstract
Generative AI is making a familiar HCI assessment question unavoidable: what, exactly, should we grade when production is cheap? Many HCI courses already value process, critique, and iteration; the challenge is making those practices consistently assessable and usable for timely feedback when fluent artifacts can be produced on demand. We propose traceability as a new core assessment object: a lightweight, learner-authored evidence trail that makes design reasoning inspectable, including provenance, decision rationale, iteration history, claim integrity, and responsibility. Importantly, traceability is not positioned as authorship proof; its purpose is pedagogical, supporting fair evaluation of reasoning and enabling targeted formative feedback. We introduce receipt-based assessment, in which instructors grade a small set of decision receipts and provenance annotations, keeping workload manageable while reducing “policing vibes.” We contribute a concrete rubric, receipt templates, and implementation strategies to reduce labor and conflict while maintaining rigor. We end with open questions to the EduCHI community about minimum viable traceability, fairness norms, and when hybrid oral or in-class checks are necessary.
Wrangling and Supporting Uncertainty in HCI Education Practices
Abdul Moeed Asad, Colin Gray (Indiana University)
Abstract
Uncertainty is a central concept in design. Its embrace is seen as a portal to unlocking new understanding of design for students. While we know that uncertainty is endemic to the design process, we do not have a clear understanding of the affective implications of this reality. In this provocation and unsolved challenge, we use the Informed Designed Matrix as an analytic tool to both align existing literature on uncertainty and consider critical moments where uncertainty impacts students’ experience of the design process. Our findings demonstrate that the emotional experiences that arise through the design process can take a real and affective toll on students, and our use of the matrix pinpoints specific moments where this affective toll can result in tangible pedagogical consequences. We conclude this paper by considering how affect can function as a diagnostic lens to help educators frame design activities in ways that are resonant with student experiences of encountering uncertainty. By encouraging critical attention to uncertainty and its role in the HCI classroom, we hope to enable students to view uncertainty as both a resource and friend.
“…the foundation you didn’t know you needed”: Integrating design process awareness with resilience to facilitate students learning to navigate complex challenges
Saumik Shashwat, Xiaoyi Xue, Eileen Zhang, Jennifer Turns, Cynthia J. Atman (University of Washington)
Abstract
This paper presents Design Process Resilience (DPR), a graduate-level course for Human Centered Design & Engineering students. The curriculum focuses on awareness, metacognition, and resilience in design. We share this teachable moment using a mixed-methods approach to understand students’ responses to the autumn 2024 offering of DPR. Our first finding is that, through DPR, students developed awareness, metacognition, and resilience in a nuanced way. Our second finding is that the pedagogical tools utilized in DPR were effective and rated highly by the students. We reflected on the success of three pedagogical tools using a complexity theory lens. Through this paper, we show how DPR, alongside other attempts in the HCI education community, contributes to re-imagining and creating a new pedagogical approach for future HCI education.
Understanding storytelling through analysis: A practical framework for the UX classroom
Cynthia Putnam (DePaul University), Emma J. Rose (University of Washington, Tacoma), Craig M. MacDonald (Pratt Institute)
Abstract
While storytelling is a key skill in user experience (UX), it is an elastic concept that is hard for students to understand and challenging for instructors to teach. In this teachable moment paper, we share results from a pedagogical study examining the effectiveness of an assignment designed to teach students about UX storytelling through an analysis of UX practitioners’ stories. In the assignment, students in a graduate HCI program used a research-based set of five storytelling heuristics to qualitatively code videos of UX practitioners telling stories about their work. We share background on storytelling in UX, explain the storytelling heuristics, and describe the assignment given to students. We then present our analysis of the students’ reflections about the assignment to assess its effectiveness. Students’ major takeaway from the interviews was the importance of tailoring UX stories for intended audiences and context. Students found the storytelling heuristics useful and somewhat easy to apply in the interview analysis. We conclude with our reflections about how the assignment provided an informative framework for students to learn about UX storytelling and propose ideas for other instructors who want to use/adapt the assignment. In addition, we share the full assignment details and the practitioners’ videos for others to replicate this storytelling assignment in their HCI and UX classes.
3:15 – 3:45pm EDT
Coffee/Tea Break
3:45 – 5:00pm EDT
Lightning Talks
Lightning Talk
The UX Empathy Game: Learning Design Principles by Breaking Them
Zhenhua “Aaron” Yang (Iowa State University)
Abstract
In User Experience (UX) pedagogy, a critical gap separates memorizing design principles from understanding their human impact. Traditional instruction fails to capture the visceral frustration of a “bad” user experience, leaving students to learn the consequences of poor design through high-stakes failure. This talk introduces the “UX Empathy Game,” a browser-based simulation that transforms abstract rules into an experiential lesson in empathy. The core mechanic, “Diagnosis and Repair,” forces students to embody the user in high-pressure tasks within intentionally flawed interfaces. These systems violate core UX principles, from usability heuristics and accessibility standards to deceptive dark patterns. As students navigate these broken designs, the game measures their “rage clicks” and error rates. After feeling this frustration firsthand, they must shift to a designer’s mindset, identify the specific violations, and “repair” the interface to progress. This scalable model moves students from passive observers to active diagnosticians. Attendees will leave with a fresh perspective on teaching design, demonstrating that to build a good interface, students must first survive a bad one.
Lightning Talk
From Solving Problems to Addressing Complexity: Fostering Student Agency through a Design Simulation
Karin Schmidlin (University of British Columbia)
Abstract
Design educators increasingly face the challenge of preparing students to engage with complex, systemic problems that resist linear problem-solving and definitive solutions. In HCI and design education, traditional project-based assignments often prioritize polished prototypes and predetermined briefs, positioning students as responders rather than as agents capable of reframing problems, navigating uncertainty, and acting meaningfully within evolving sociotechnical contexts.
This lightning talk presents a simulation-based teaching approach from an undergraduate design course, INFO 300: Information and Data Design, at the University of British Columbia. Rather than assigning a conventional team project, a five-week design simulation was embedded into the course, in which students work in small interdisciplinary teams to address a complex plastic waste challenge situated in a fictional coastal region. The simulation is intentionally designed to foster student agency by foregrounding decision-making, reframing, collaboration, and adaptation over the production of final solutions.
Throughout the simulation, students encounter three planned “roadblocks”, including policy changes and conflicting stakeholder priorities, which require them to revisit assumptions, renegotiate roles, and reframe their project direction. A central learning activity is iterative ecosystem mapping using collaborative digital tools, through which students make visible stakeholders, power dynamics, feedback loops, and unintended consequences. These maps are expected to become increasingly complex and messy over time, reinforcing a systems-oriented understanding of the design challenge.
The simulation is not graded, a deliberate choice that reduces performance pressure and creates space for risk-taking and productive failure. The instructor adopts a responsive stance, acknowledging and amplifying student decisions rather than evaluating outcomes. This talk offers a concrete example of how simulation-based learning can reposition students as active participants who learn how to act within complexity, with transferable insights for HCI, UX, and design educators.
Lightning Talk
Extensible by Design, Unsustainable by Structure: Interdisciplinarity and Transition in Graduate Design Research
Richard Yanaky (McGill University)
Abstract
HCI research often values extensible, interdisciplinary design artefacts that are intentionally created to be used, adapted, and built upon by others. Extensibility is commonly framed as a virtue, supporting reuse, knowledge transfer, and impact beyond a single project. However, for students, this emphasis introduces a critical tension at moments of transition, particularly graduation, when responsibility for sustaining such work becomes unclear.
This provocation argues that HCI education may be producing forms of design work that existing institutional, funding, and employment structures are unevenly equipped to support. Addressing complex societal or technical problems frequently requires deep domain expertise, which can situate design work in non-design departments out of necessity. While these environments enable problem-appropriate inquiry, they often lack the infrastructural, financial, and career pathways required to sustain extensible design artefacts over time.
At the level of the artefact, extensibility creates dependencies that extend beyond the duration of a degree. Large, interdisciplinary, or infrastructural artefacts commonly require ongoing designer intervention to remain usable, yet such labor is rarely supported once student funding or supervision ends. As a result, extensible work may stall, be abandoned, or persist only through unpaid effort.
Finally, these dynamics disproportionately shape the experiences of graduate students and early-career researchers whose work sits between disciplines and aligns with no clear institutional or professional pathway. Rather than offering solutions, this paper presents a set of provocations that invite reflection on how HCI education conceptualizes extensibility, interdisciplinarity, and transition, and on where responsibility for sustaining such work is implicitly placed.
Lightning Talk
Models of and for Transdisciplinary, Humane HCI Pedagogy
Tania Schlatter (Wheaton College)
Abstract
In this lightning talk I present models that visualize a myriad of topics related to inter- and transdisciplinary HCI pedagogy, explain why I created them and how I’m using them, and open them and my sources up for review, critique, and extension by the EduCHI community.
My initial goal was to define a loose boundary for this sprawling space for myself, and to have a tool to discuss what HCI education is at present with others.
The first model began by focusing on making and ethics. By creating the model and including topics from current HCI pedagogy, I realized that methods for measuring human and environmental impacts of technologies we design for are missing at scale. This led me to develop a second and third model: one reconsiders topics if technologies are not a focus; another centers costs and benefits.
Through this exercise, I realized that more critical and complete methods of evaluation are needed to design with sustainable values, described as a fourth, transdisciplinary wave. I hope the models help the community reflect and imagine how we might continue to practically incorporate sustainable values for people and the environment in HCI research and pedagogy. As an addition to the lightning talk, I propose an asynchronous participatory diagramming activity during a break in the conference in person, and/or via an interactive whiteboard online.
Lightning Talk
A Shared Library of Tangible and Embodied Teaching Artifacts for HCI Educators
Amy Melniczuk (Carnegie Mellon University)
Abstract
Tangible and embodied activities can make HCI concepts click by shifting learning from talking about interaction to physically enacting it. Yet, these activities are often shared informally (a slide deck, a hallway tip, or a one-off worksheet), making it difficult to adopt, adapt, and evaluate them across courses and institutions. This lightning talk proposes a community collaboration: a shared, open library of classroom-ready tangible/embodied teaching artifacts for HCI/UX educators. Each artifact will be packaged as a small “adoption bundle” (including goals, materials, facilitation, debrief, accessibility notes, and variations) and paired with a lightweight evaluation kit (a 5-minute instructor log and 1-2 reflection prompts) to facilitate comparison of outcomes across sites. We are seeking 5-10 partner instructors to pilot one artifact in 2026 and contribute brief implementation notes and de-identified outcomes. The result will be a curated starter library, along with cross-site lessons learned, that help HCI educators teach interaction through the body, not just the screen.
6:00 – 7:30pm EDT
Welcome Reception
Join EduCHI attendees and guests for a welcome reception featuring drinks and hors d’oeuvres. For in-person attendees only.
Friday, May 22
9:00 – 10:30am EDT
Session 3: Ethics, Equity, and Responsible Practice
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
Teaching Equity and Accessibility with the Equitable Design Toolkit
Sang Eun Lee (Drake University)
Abstract
This paper examines how “teachable moments” in design and HCI education can be used to surface issues related to equity, inclusion, bias awareness, and accessibility through the Equitable Design Toolkit (EDT). While accessibility is often introduced through formal guidelines, students frequently struggle to recognize how bias, marginalization, and social power influence design decisions. Initially developed for UX classrooms, the EDT has been adapted to support equity-centered learning across multiple design contexts. In this study, the toolkit was integrated at key instructional moments within the design process, including user research, problem definition, design execution, critique, and iteration. During these moments, students engaged with Intersectional Personas, Identity Definition Cards, Biased Design Case Studies, and a Wheel of Power and Privilege to examine how design choices affect marginalized users across interfaces, environments, and everyday objects. Findings suggest that implementing the EDT within the design classroom setting prompted meaningful shifts in student thinking, including changes in how students structured information, justified design decisions, and considered marginalized users in their work. Rather than treating accessibility as a discrete or compliance-driven requirement, students increasingly understood inclusivity as a transferable design competency grounded in social context, power, and lived experience. By intentionally aligning instruction and reflection with key stages of the design process, this paper demonstrates how “teachable moments” can foster deeper understanding of equity and accessibility in design education.
Value Lanes: A framework to inculcate ethical argumentation skills in design classrooms
Sai Shruthi Chivukula, Aayushi Bharadwaj, Shikha Mehta (Pratt Institute)
Abstract
In design classrooms, argumentation is often treated as a soft skill, learned implicitly through intentionally framed contexts, projects, design methods, and action. In professional design work, argumentation is central to mediating an individual’s expertise and values, organizational policies and practices, situated practice, and ethical considerations in product development. In this paper, we bridge these two contexts for educators by positioning ethical argumentation as a means to critically problematize the design space and offering a framework for explicitly teaching it in design classrooms. We provide a framework through Value Lanes (User, Technical, Business, Legal, and Personal) to structure ethical arguments and employ specific strategies within and across lanes. To further support educators in using Value Lanes in their classrooms, we propose four class activities (multi-lanes, single-lane, debate, and pitch) as a way to activate Value Lanes. We identify specific student learning objectives to articulate value-based arguments, build persuasive and real-time argumentation skills, and navigate across lanes.
Teaching Usable Privacy in HCI Education: Designing, Implementing, and Evaluating an Active Learning Graduate Course
Sanchari Das, Dhiman Goswami, Michelle Melo, Aditya Johri, Vivian Genaro Motti (George Mason University)
Abstract
As digital systems increasingly rely on pervasive data collection and inference, educating future designers and researchers about Usable Privacy has become a critical need for HCI. However, privacy education in higher education is often fragmented, theory-heavy, or detached from real-world applications. Thus, in this paper, we present the design, implementation, and evaluation of a 15-week graduate-level course on Usable Privacy that addresses this through active, practice-oriented pedagogy. The course integrates use cases, structured role playing, case-based discussions, guest lectures, and a multi-phase research project to support students in reasoning about privacy from multiple stakeholder perspectives. Grounded in contemporary privacy research and the Modern Privacy framework, the curriculum emphasizes both conceptual understanding and applied research skills. We report findings from two course offerings in consecutive years (2024-2025) using a mixed-methods evaluation that combines quantitative teaching evaluations with qualitative analysis of student reflections and instructor observations. Results indicate increased student engagement, improved ability to articulate trade-offs in privacy design, and stronger connections between theory and practice. To support adoption and replication, we also release detailed assignment descriptions and grading rubrics. This work contributes an empirically informed model for teaching Usable Privacy in HCI education and offers actionable guidance for educators seeking to integrate privacy into their curricula.
From Protocol to Practice: Teaching the Relational Practice of Interviewing in HCI Education
Hayoun Noh (University of Oxford); Yvon Ruitenburg (Eindhoven University of Technology); Max Van Kleek (University of Oxford); Younah Kang (Yonsei University)
Abstract
Qualitative interviewing is a foundational method in Human-Computer Interaction (HCI) research and a common component of methods education. Yet interview instruction is often framed around procedural competencies (developing protocols, minimizing bias, and ensuring ethical compliance), while the relational tensions that shape real interview encounters remain largely implicit. This provocation argues that interviews are not neutral data-gathering practices but socially situated interactions co-constructed through relational dynamics. Drawing on interdisciplinary scholarship from qualitative methodology, feminist epistemology, and ethics-of-care research, we synthesize three relational dimensions of interview practice: power and positionality, emotional work, and ethics-in-practice. We demonstrate how interviews inherently involve negotiation, interpretation, and responsibility that exceed procedural models of training. Building on this synthesis, we propose pedagogical approaches for HCI education that make relational work explicit and teachable, including reflexive exercises that surface positionality and interactional dynamics, case-based analyses to cultivate emotional literacy, researcher-centered protocols, and structured rehearsals for ethical decision-making. Our aim is not to replace procedural instruction but to expand it by framing interviewing as both a technical and relational practice, inviting educators to reconsider how methods are taught and supported while bridging the gap between the formal curriculum of training and the hidden relational work that unfolds in practice.
10:30 – 11:00am EDT
Coffee/Tea Break
11:00am – 12:15pm EDT
Session 4: AI in HCI Education II – Curriculum Responses
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
Scaling Authentic Assessment – Interaction Design and Student Sentiment in AI-Facilitated Oral Examinations
Brian Harrington, Steve Joordens (University of Toronto Scarborough)
Abstract
Oral examination has been the gold standard of assessment for millennia, offering a depth of student-focused evaluation that structured selected-response examinations cannot replicate. However, modern university courses have often shifted to rigid rubric or multiple-choice exams in service of scalability and consistency. This study investigates whether current state-of-the-art conversational AI models can provide individualized, equitable, consistent, and replicable oral exams at the scales required for a modern large university course.
We administered an AI-facilitated mock oral evaluation to 1,131 students in an introductory psychology course to assess the feasibility of implementation, as well as student perceptions of the model. Students overwhelmingly reported the experience as positive, specifically highlighting the sense of fairness, depth of processing, and ease of use of the conversational modality. While student preferences between AI-facilitated oral exams and multiple-choice exams remain mixed, open-ended feedback revealed that students found the oral assessment model led to a more accurate evaluation of their learning and allowed them to express their understanding in a more natural way. Crucially, the assessment model appears to have had an impact on behavior, with 37% of students reporting that the impending oral assessment encouraged them to take the learning activity more seriously and avoid the use of AI in their own work completion.
This work demonstrates that recent advancements in conversational AI can finally make the “gold standard” of assessment at scale a practical, student-centered reality.
Integrating GenAI into HCI Pedagogy: Supporting Students’ Learning of Evaluation Methods
Abeer Aziz, Ahmed Kharrufa, Ian Johnson (Newcastle University)
Abstract
While recent focus on generative AI (GenAI) in education has centred on ideation and content creation, its role in supporting methodological learning remains underexplored. This study explores how GenAI can help undergraduate HCI students understand evaluation methods. We incorporated GenAI into group activities by enabling students to compare GenAI’s suggestions with their own evaluation ideas and reflect on the differences. Additionally, we used GenAI as a facilitator throughout the steps of user evaluation methods by providing comprehensive and detailed prompts tailored for use in their preferred tool. Results show that GenAI helps students understand evaluation concepts, compare methods, and clarify differences, thereby fostering reflection on their choices. However, concerns about oversimplification, trust, and reduced critical thinking were noted. We suggest that, with proper scaffolding, GenAI can serve as a valuable supplementary teaching tool rather than a replacement, offering new insights into its responsible use in HCI education.
AI Is Here. Is UX Ready? A Four-Dimension Framework for Curriculum Design
Nadya Shalamova (Milwaukee School of Engineering); Kat Richards (Digital North America); Cindy Miller (GE HealthCare)
Abstract
As artificial intelligence (AI) is rapidly transforming the landscape of design and research, user experience (UX) programs face a growing dilemma: How do we prepare students for a world where AI systems are increasingly embedded in tools, workflows, and decision-making processes? Meanwhile, UX education remains structurally amorphous, lacking formal accreditation, unified standards, or shared visions for what AI readiness for UX students should look like. This article proposes a framework to guide the integration of AI into undergraduate User Experience (UX) and Human-Computer Interaction (HCI) education, developed through years of iterative curriculum design and collaboration with an Industrial Advisory Committee (IAC) at an undergraduate UX program. The framework emphasizes: 1) Foundational Knowledge of AI technologies, 2) Practical Application of AI tools in design and research contexts, 3) Critical Evaluation of AI outputs and assumptions, and 4) Social Responsibility in relation to AI ethics, privacy, trust, and societal impact. We offer the framework as a practical scaffold: specific enough to guide curriculum audits and learning outcome design, flexible enough to adapt across institutional contexts. We invite HCI educators to debate, adapt, and build upon the proposed framework and to confront the unresolved challenges of teaching human-centered AI in a rapidly evolving technological world.
12:15 – 1:45pm EDT
Lunch (provided) + EduCHI Town Hall
During the lunch break, attendees will be invited to participate in an open discussion about the future of the HCI education community of practice. Lunch will be provided.
1:45 – 3:15pm EDT
Session 5: Methods, Frameworks, and Studio Practice
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
Compass: A System to Guide Adaptive Planning Practice in HCI Studios
Leesha Shah (Northeastern Illinois University); Haoqi Zhang (Northwestern University)
Abstract
Learning to iteratively plan is a metacognitive skill core to leading design-research work. There exist many socio-technical scaffolds to train component skills of planning (e.g., tools and processes to visualize the design problem, diagnose design risks, and form iteration plans that address risks). Recent work weaves multiple planning supports into learning ecosystems that layer tools, processes, and feedback venues to promote authentic iterative planning. Despite such designs being in place, our needfinding reveals students often struggle to recognize and act on opportune moments to adapt plans before and after feedback venues. Few works consider how to train students to regulate through these replanning moments by flexibly adapting iteration plans mid-execution. We thus introduce Compass, a system that guides design-research students to recognize and act on replanning moments as they execute a project plan. An 8-week deployment demonstrates that design-research students using Compass adapted their plans more before and after feedback venues, and executed plans that were more structurally aligned and better integrated feedback. These findings suggest a need to design frameworks that guide learners to build deeper metacognitive practice by engaging with the learning interactions and supports designed to help them regulate.
Advancing Creative Redesign in HCI Education: Introducing the i3 Method
Michael Mose Biskjaer, Jonas Frich, Kim Halskov (Aarhus University)
Abstract
This paper introduces the i3 method for creative redesign—a structured, teachable approach for HCI education, grounded in over a decade of classroom practice. Addressing the limited pedagogical tools specifically for redesigning, i3 builds on design space theory and three creativity mechanisms—ideation, combinational creativity, and constraints—to support both analytical and generative modes of thinking. The method involves three phases: i1: Identify (mapping key elements of an existing design to construct an initial design space), i2: Introduce (generating grounded alternatives to enrich this design space), and i3: Integrate (combining selected elements into new, coherent configurations). Unlike open‑ended ideation techniques, i3 foregrounds situated, analytical ideation, making it adaptable across design curricula. By framing redesign as a distinct, teachable practice rather than an intuitive extension of original design, the method helps students navigate the boundary between ‘what-is’ and ‘what-could-be,’ offering a conceptually robust contribution ready for application in HCI education.
Teaching Provenance: Implementing Industry Traceability Practices in Studio-Based HCI Design Units
Morteza Pourmohamadi (The University of Sydney)
Abstract
Studio-based HCI design units often assess students through end-of-semester artefacts such as prototypes, reports, and presentations. While these products matter, a product-centric evidentiary model can limit educators’ capacity to (a) provide timely, actionable feedback on process, (b) evaluate individual contributions in team projects, and (c) prepare students for professional practice where traceability, documentation, and review are routine. These tensions are sharpened in a post-generative-AI context, where production can be accelerated while decision-making and contribution become harder to evidence.
This Teachable Moment paper describes a practical, tool-agnostic approach for embedding provenance in studio assessment by translating common industry traceability practices into an assessable submission bundle called the Provenance Pack. The pack operationalises three industry mechanisms: version history (what changed and when), task and issue tracking (what was planned and done by whom), and lightweight decision records (why we chose this). We show how the approach can be implemented using the core features of an institutional learning management system (LMS), including individual discussion threads for weekly journaling and group homepages for shared documentation.
We contribute (1) an educator-facing implementation blueprint with templates, rubric language, and marking strategies that scale to large cohorts without requiring new institutional software, and (2) an illustrative case example from a large capstone studio where institutional student surveys reported high satisfaction with teaching and strong perceptions of intellectual reward, feedback usefulness, and in-class engagement. We position provenance as a transferable teaching technique for HCI educators who want to assess process, improve feedback, and strengthen employability alignment in studio units.
Beyond the Double Diamond: Teaching Risk-Based Decision Making through the Uncertainty Model and Assumption Artifacts
Raelin Sawka Musuraca, Aniket Kittur (Carnegie Mellon University); Megan Guidi (Open)
Abstract
Human-Computer Interaction (HCI) educators often use Human-Centered Design (HCD) models, such as the Double Diamond and Design Thinking, to help students approach complex and ambiguous problems. In our courses, we have identified a common pedagogical challenge: students tend to treat these frameworks as “recipes,” strictly adhering to the process model rather than exercising judgment about the next steps. By following the prescribed steps, students often overlook where uncertainty poses the greatest risks to their projects, as well as how time and budget constraints should inform their process decisions.
In this Teachable Moment paper, we present an instructional approach that integrates risk-based decision-making with the standard HCD framework. We introduced two tools in our HCI graduate-level curriculum: the Uncertainty Model, a framework for identifying the next best research action for reducing risk, and Assumption Artifacts, low-fidelity experimental probes that test high-risk assumptions through observable behaviors. Together, these tools provide students with a shared language for reasoning about their research choices and for adapting their processes to evolving real-world contexts.
We trace the implementation of this approach from its introduction in a user research course to its application in industry-sponsored capstone projects. By showcasing examples of student work, we demonstrate how these tools have helped teams invalidate early ideas and align project scope. The paper concludes with actionable guidance for HCI educators looking to move beyond the Double Diamond framework and foster adaptive, risk-aware HCI practitioners.
3:15 – 3:45pm EDT
Coffee/Tea Break
3:45 – 4:45pm EDT
Session 6: Program & Curriculum Design
Each paper presentation will include brief Q&A, followed by a structured reflection, discussion, and report out in the final portion of the session.
Threading the Needle: Designing an Undergraduate HCI Major for a 2+2 Computer Science Model
Derek Reilly, Hanieh Shakeri, Bonnie MacKay, Oladapo Oyebode, Rina Wehbe, Joseph Malloch, Rita Orji (Dalhousie University); Mayra Barrera Machuca (University of Calgary); Lizbeth Escobedo (Dalhousie University)
Abstract
We present as a case study the design and development of an undergraduate major in Human-Computer Interaction (HCI), as one of several majors developed simultaneously within a computer science (CS) faculty, all sharing a common core in the first two years of the degree. This 2+2 model poses unique challenges: defining the common core so that majors can build on skills and knowledge gained, balancing design, evaluation, systems, and theoretical aspects of HCI within the condensed timeframe, and determining how upper-year courses can be shared across majors. We began by building an extensive list of prospective topics, skills, and activities, and then iteratively composing these elements while identifying plausible high-level course topics. We next provided feedback on the proposed 2-year CS core and how it can prepare students for the major, and mapped proposed courses to learning outcomes. In the final phase, we identified required courses for the degree, ensuring adequate coverage of core HCI topics and adjusting course content as needed, and adapting, combining, or scrapping existing HCI elective courses to complement the major. While our bottom-up approach democratized the program design process and encouraged emergent course definitions and structures, the 2+2 CS program structure was a prevailing tension. We reflect on our process and the resulting program design, offering insights and recommendations for HCI educators with a similar opportunity to define a major.
Visualizing Human-Centered Design: A Replication and Extension Study
Alannah Oleson, Yuliya Kim (University of Denver)
Abstract
Replication studies are rare in human-computer interaction (HCI) and computing education research, but they are a foundational part of building robust, generalizable HCI pedagogies. In this paper, we present the results of a replication of Gorichanaz’s 2025 EduCHI paper “Visual Representations of Human-Centered Design by Students in Computer and Information Science” in a novel setting, following the original study’s quantitative content analysis and metaphor analysis methods, augmented by our own inductive analyses to characterize unique features of our data set. We also extend the insights of the original study by adding a temporal component: we had students complete the human-centered design (HCD) drawing activity once early in the course and once at the end, to explore how depictions changed over time. We found that the end-of-course drawings resembled the original study’s results in most aspects, with some notable exceptions in the overall shape of the design process depicted and the ratio of text to imagery used. However, the early drawings were much more diverse and seemed to contain more creative portrayals of design, with a notable focus on natural images (flowers, trees, etc.) that was lost in the end-of-course drawings. Our insights contribute to broader discussions about HCI learning across diverse contexts, creativity vs. conformity in HCI education, and what we as educators should emphasize in our pedagogy to best support students’ design learning.
Between Evaluation and Facilitation: Teaching Assistants’ Challenges in HCI Education
Jixiang Fan, WeiLu Wang (Virginia Tech); Lei Xia (Tongji University); Yuhao Chen (Georgia Institute of Technology); Shuai Liu (Standard Chartered Bank); D. Scott McCrickard (Virginia Tech)
Abstract
In Human–Computer Interaction (HCI) education, Teaching Assistants (TAs) play a central role in project-based and collaborative learning environments, while operating under persistent structural tensions. Drawing on interviews with graduate TAs who supported HCI-related courses, this paper examines how TAs navigate responsibilities that extend beyond conventional instructional support. TAs are expected not only to evaluate students’ assignments and project outcomes, but also to facilitate learning by encouraging innovation, supporting collaboration, and maintaining the day-to-day functioning of the classroom. The coexistence of evaluative and facilitative responsibilities places TAs in a position where competing expectations are difficult to reconcile in practice. This paper outlines four unresolved challenges faced by TAs in HCI education. First, assessment in HCI courses often relies on open-ended and descriptive grading standards, concentrating a substantial amount of interpretive judgment in the hands of TAs. Second, within grade-oriented project environments, TAs are frequently positioned between encouraging innovative and high-risk design attempts and maintaining fairness and predictability in grading outcomes. Third, the widespread use of group-based projects requires TAs to take on responsibilities for coordinating collaboration and mediating group conflicts alongside evaluating learning outcomes. Finally, as generative AI tools become increasingly embedded in learning activities, judging the appropriateness and educational value of students’ AI use has emerged as an additional responsibility for TAs, yet shared criteria and guidance remain limited. This paper aims to foreground the perspectives of TAs and to encourage broader discussion within the HCI education community about the expanding responsibilities of TAs and the structural challenges these roles involve.
4:45 – 5:00pm EDT
Closing
5:00 – 7:00pm EDT
Friday Night Social Gathering
Although EduCHI officially concludes at 5pm, we invite attendees to participate in an (optional) Friday night social gathering. Details will be shared during the conference. For in-person attendees only.
