AI as Pedagogical Collaborator: A New Dialogue of Learning

February 26, 2026

Artificial intelligence has rapidly evolved from a peripheral innovation into a pedagogical force shaping how students learn, engage, and create across disciplines. A growing body of research indicates that large language models (LLMs) and conversational agents can enhance engagement, motivation, and self-regulated learning when embedded within sound pedagogical structures (Chen et al., 2025; Guan et al., 2025; Ma et al., 2024; Yetişensoy & Karaduman, 2024). Particularly in design and creative education, conversational and agentic uses of AI have been shown to broaden ideation, reduce cognitive load, and expand creative search spaces when positioned as collaborative partners rather than answer machines (Guo et al., 2025; Zhu et al., 2025). It increasingly appears that the educational value of AI lies not in automation itself, but in how dialogue with the system is structured and interpreted.

Beyond creative disciplines, AI-supported chatbots have been associated with greater flexibility and responsiveness in higher education contexts (Bilquise et al., 2024). Experiential chatbot workshops have been shown to build AI literacy among non-STEM students by demystifying system capabilities and limitations (Shim et al., 2023). In programming and conceptual remediation tasks, structured chatbot systems such as concept-map–guided agents have demonstrated advantages over traditional correction approaches by making knowledge structures explicit (Kuo & Chen, 2023). However, caution has been raised in both empirical and review studies, which suggest that unguided reliance on AI may weaken deep learning, critical thinking, and epistemic vigilance (Doleck et al., 2025; Li et al., 2025; Ma et al., 2024). Teacher expertise remains central in this equation, with LLMs consistently described as augmentative tools rather than replacements for human judgment and ethical guidance (Jeon & Lee, 2023; Dietrich & Grassini, 2025).

Across this body of work, three converging themes emerge. First, when AI is framed as a dialogic partner, students tend to articulate reasoning more clearly and engage in reflective cycles aligned with socio-cognitive and self-determination mechanisms (Chen et al., 2025; Guan et al., 2025; Zhang, Zhao, & Li, 2025). Second, AI systems can personalize and democratize participation, supporting inclusive trajectories and diverse learners when aligned with universal design principles and emotionally supportive scaffolds (Adler & Kletenik, 2025; Bilquise et al., 2024; Ma et al., 2024). Third, outcomes are strongly context-dependent. Measurable gains tend to appear when tasks, tools, and instructional coaching are deliberately aligned, whereas mixed or negative outcomes arise when AI substitutes for metacognition or becomes an unquestioned authority (Doleck et al., 2025; Guan et al., 2025; Yetişensoy & Karaduman, 2024). These patterns suggest that AI effectiveness is mediated by pedagogical architecture rather than technological capability alone.

This interpretive stance resonates with contemporary design research that reframes AI from tool to collaborator. Within such models, LLMs act as interlocutors that surface assumptions, broaden ideation, and render reasoning visible, while educators orchestrate critique and ethical awareness (Gu et al., 2025; Edwards et al., 2024; Holmes et al., 2022; Peng et al., 2024; Saeidnia & Ausloos, 2024; Shneiderman, 2020). From a distributed cognition perspective, AI functions as an external cognitive artifact that supports representational expansion and iterative refinement, yet abductive reasoning and situated judgment remain human responsibilities (Cross, 2018). The pedagogical challenge therefore lies in shaping AI dialogue so that outputs invite interpretation rather than compliance.

Within inclusive design education, synthetic users and AI-generated personas have been explored as tools for expanding perspective-taking and surfacing tacit assumptions (Peng et al., 2024; Edwards et al., 2024). These artifacts can externalize latent design frames and reveal edge cases that learners may overlook. Yet studies caution that without explicit scaffolding, synthetic personas may drift toward stereotype essentialism or oversimplified narratives detached from lived complexity (Ndiaye, 2025; Gu et al., 2025). The cultivation of critical empathy, understood as a composite of cognitive, affective, and ethical competencies, requires guided reflection on power, bias, and representation (Bardzell & Bardzell, 2015; Bennett & Rosner, 2019; Gebru et al., 2021). When learners interrogate how AI-generated representations are constructed, these synthetic users can function as reflective mirrors that surface embedded norms and data assumptions, thereby fostering inclusive awareness in human–AI co-creation (Haxvig et al., 2025).

Evidence from higher education adds further nuance to this mechanism. Self-regulated learning processes such as planning, monitoring, and adaptation are positively influenced when AI suggestions are interrogated through metacognitive prompts rather than accepted passively (Guan et al., 2025; Jeon & Lee, 2023). Conversational AI pedagogical agents have been shown to reduce cognitive load while sustaining performance when social cues and structured prompts are embedded (Chen et al., 2025). However, in programming and data-science contexts, AI assistance has sometimes been associated with decreased performance when it enables cognitive offloading without reflective checks (Doleck et al., 2025). Structured approaches, such as concept-map chatbots, appear to mitigate this risk by making reasoning pathways visible (Kuo & Chen, 2023). These findings support a cognitive partnership model in which AI broadens and accelerates ideation, while educators design constraints and prompts that sustain epistemic control.

Despite robust evidence that LLMs can elevate engagement and creativity, limited empirical work has directly examined how structured, dialogic collaboration with LLMs cultivates inclusive design competences as explicit learning outcomes. Many studies focus on generic motivation or performance metrics (Chen et al., 2025; Ma et al., 2024), report mixed effects without analyzing ethical growth (Doleck et al., 2025), or treat inclusivity implicitly through access and participation rather than measuring empathy and ethical reasoning directly (Adler & Kletenik, 2025; Yetişensoy & Karaduman, 2024). This gap highlights the need for workshop-based interventions that position the LLM as a pedagogical collaborator and assess both quantitative changes in inclusive competences and qualitative transformations in reflective reasoning.

From an instructional design standpoint, recurring strategies can be identified in the literature. Conversational personalization appears to improve engagement and reduce cognitive load when carefully structured (Chen et al., 2025). Structural scaffolds such as design rubrics, concept maps, or challenge prompts help channel generative output into conceptual coherence (Kuo & Chen, 2023; Edwards et al., 2024). Reflect–validate cycles that embed peer dialogue and ethical checkpoints are associated with more durable metacognitive development (Torkestani et al., 2025). Where these elements are absent, outcomes are inconsistent and sometimes negative (Doleck et al., 2025). Inclusive design research further suggests that when empathy and accessibility goals are explicitly aligned with AI-mediated tasks, retention and affective engagement improve (Yetişensoy & Karaduman, 2024; Adler & Kletenik, 2025).

In conclusion, current evidence suggests that AI can function as a powerful pedagogical collaborator when embedded within intentional, ethically grounded instructional frameworks. Engagement, creativity, and self-regulated learning can be strengthened through dialogic interaction, yet these gains remain contingent on structured reflection and human oversight. AI does not automatically cultivate inclusive competences; rather, inclusivity emerges when learners are guided to interrogate assumptions, question outputs, and situate generative suggestions within broader ethical contexts. The future of AI in education will therefore depend less on technological sophistication and more on the cultivation of critical, reflective, and human-centered learning practices.


References 


Adler, R. F., & Kletenik, D. (2025). Empowering access: Integrating multiple means of engagement when teaching accessible design principles. Education and Information Technologies, 30, 12661–12680. https://doi.org/10.1007/s10639-025-13324-y


Bardzell, S., & Bardzell, J. (2015). Humanistic HCI. Morgan & Claypool.


Bennett, C. L., & Rosner, D. K. (2019). The promise of empathy: Design, disability, and knowing the “other.” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300528


Bilquise, G., Ibrahim, S., & Salhieh, S. M. (2024). Investigating student acceptance of an academic advising chatbot in higher education institutions. Education and Information Technologies, 29, 6357–6382. https://doi.org/10.1007/s10639-023-12076-x


Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.


Chen, J., Mokmin, N. A. M., Shen, Q., & Su, H. (2025). Leveraging AI in design education: Exploring virtual instructors and conversational techniques in flipped classroom models. Education and Information Technologies, 30, 16441–16461. https://doi.org/10.1007/s10639-025-13458-z


Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.


Cross, N. (2018). Engineering design methods: Strategies for product design (5th ed.). Wiley.

Dietrich, L. K., & Grassini, S. (2025). Assessing ChatGPT acceptance and use in education: A comparative study among German-speaking students and teachers. Education and Information Technologies, 30, 22151–22176. https://doi.org/10.1007/s10639-025-13658-7


Doleck, T., Agand, P., & Pirrotta, D. (2025). Generative AI in data science programming: Differences in performance between groups with and without AI-assistance. Education and Information Technologies. https://doi.org/10.1007/s10639-025-13801-4


Durak, H. Y. (2023). Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Education and Information Technologies, 28, 471–488. https://doi.org/10.1007/s10639-022-11149-7


Edwards, K. M., Man, B., & Ahmed, F. (2024). Sketch2Prototype: Multimodal support for early-stage design. Proceedings of the International Design Conference.


Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723 


Gu, H., Chandrasegaran, S., & Lloyd, P. (2025). Synthetic users in design cognition. Artificial Intelligence for Engineering Design, Analysis and Manufacturing.


Guan, R., Raković, M., Chen, G., & Gašević, D. (2025). How educational chatbots support self-regulated learning? A systematic review of the literature. Education and Information Technologies, 30, 4493–4518. https://doi.org/10.1007/s10639-024-12881-y


Guo, H., Zhou, Z., & Ma, F. (2025). The influence of artificial intelligence generated content-based problem-solving on engineering students’ creativity: A controlled experimental study. Education and Information Technologies. https://doi.org/10.1007/s10639-025-13709-z


Haxvig, H. A., D’Andrea, V., & Teli, M. (2025). “I’ve never seen a glass ceiling better represented”: Bias and gendering in LLM-generated synthetic personas from a participatory design perspective. International Journal of Human-Computer Studies, 103651.


Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning (2nd ed.). Center for Curriculum Redesign.


Hwang, Y., & Lee, J. H. (2025). Exploring students’ experiences and perceptions of human-AI collaboration in digital content making. International Journal of Educational Technology in Higher Education, 22(1), 44. 


Jeon, J., & Lee, S. (2023). Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT. Education and Information Technologies, 28, 15873–15892. https://doi.org/10.1007/s10639-023-11834-1


Kasneci, E., Seegerer, S., Sedlmair, M., et al. (2023). ChatGPT for good? Opportunities and challenges of LLMs for education. Learning and Individual Differences, 103, 102274.


Koskinen, I., Zimmerman, J., Binder, T., Redström, J., & Wensveen, S. (2011). Design research through practice: From the lab, field, and showroom. Morgan Kaufmann.


Kuo, Y.-C., & Chen, Y.-A. (2023). The impact of chatbots using concept maps on correction outcomes: A case study of programming courses. Education and Information Technologies, 28, 7899–7925. https://doi.org/10.1007/s10639-022-11506-6


Li, Y., Sadiq, G., Qambar, G., & Zheng, P. (2025). The impact of students’ use of ChatGPT on their research skills: The mediating effects of autonomous motivation, engagement, and self-directed learning. Education and Information Technologies, 30, 4185–4216. https://doi.org/10.1007/s10639-024-12981-9


Ma, W., Ma, W., Hu, Y., & Bi, X. (2024). The who, why, and how of AI-based chatbots for learning and teaching in higher education: A systematic review. Education and Information Technologies, 30, 7781–7805. https://doi.org/10.1007/s10639-024-13128-6


van der Maden, W., Lomas, D., & Hekkert, P. (2024). Developing and evaluating a design method for positive artificial intelligence. AI EDAM, 38, e14.


Miles, M. B., Huberman, A. M., & Saldaña, J. (2019). Qualitative data analysis (4th ed.). SAGE.


Ndiaye, Y. (2025, May). Exploring AI personas and empathy in co-creation workshops. RAIFFET 2025 Conference Proceedings.


Norman, D. A., & Verganti, R. (2014). Incremental and radical innovation: Design research vs. technology and meaning change. Design Issues, 30(1), 78–96.


Peng, X., Koch, J., & Mackay, W. E. (2024, July). DesignPrompt: Multimodal prompting for creativity in design education. Proceedings of the Designing Interactive Systems Conference.


Saeidnia, H. R., & Ausloos, M. (2024). Integrating artificial intelligence into design thinking: A comprehensive examination of the principles and potentialities of AI for design thinking framework. InfoScience Trends, 1(2), 1–9.

Sankar, B., & Sen, D. (2025). A novel idea generation tool using a structured conversational AI (CAI) system. AI EDAM, 39, e11. 


Shim, K. J., Menkhoff, T., Teo, L. Y. Q., & Ong, C. S. Q. (2023). Assessing the effectiveness of a chatbot workshop as experiential teaching and learning tool to engage undergraduate students. Education and Information Technologies, 28, 16065–16088. https://doi.org/10.1007/s10639-023-11795-5


Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504.


Torkestani, M. S., Alameer, A., Palaiahnakote, S., & Manosuri, T. (2025). Inclusive prompt engineering for large language models: A modular framework for ethical, structured, and adaptive AI. Artificial Intelligence Review, 58(11), 348.


Wang, P., Khinvasara, Y., Creijghton, G. J., Scholing, T., Wang, Y., Zhou, Z., ... & Yin, Y. (2025). Enhancing designer creativity through human–AI co-ideation: A co-creation framework for design ideation with custom GPT. AI EDAM, 39, e22.


Yetişensoy, O., & Karaduman, H. (2024). The effect of AI-powered chatbots in social studies education. Education and Information Technologies, 29, 17035–17069. https://doi.org/10.1007/s10639-024-12485-6


Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27.


Zhang, F., Gou, J., Shen, K. N., Camarinha-Matos, L. M., & Wang, Z. (2025). Effects of AI teammates on learning behavior in human–AI collaboration environments: A perspective on self-regulated learning. Education and Information Technologies. https://doi.org/10.1007/s10639-025-13717-z


Zhang, X., Zhao, X., & Li, X. (2025). AI-driven academic engagement in fine arts education: The chain-mediating roles of self-efficacy and achievement emotions. Education and Information Technologies. https://doi.org/10.1007/s10639-025-13756-6


Zhu, Z., Ren, Y., & Shen, A. (2025). Exploring the acceptance of generative AI-assisted learning and design creation among students in art and design specialties: Based on the extended TAM model. Education and Information Technologies, 30, 18651–18678. https://doi.org/10.1007/s10639-025-13551-3