Artificial Intelligence in Education

February 26, 2026

Artificial intelligence in education has increasingly been examined not as a speculative future development, but as a structural transformation already embedded in institutional practice. A targeted selection of highly cited journal articles published within the past decade was identified through Google Scholar on 6 and 7 January, using broad search terms related to artificial intelligence, student learning, design frameworks, and human-centered AI. From this body of work, a consistent pattern has been observed: measurable gains in performance and efficiency have been reported, yet deeper questions about agency, ethics, and long-term learning remain unresolved. In my view, this tension defines the current moment in AI-enhanced education.

AI has been described as operating across three core areas of education: administration, instruction, and learning. Chen, Chen, and Lin (2020) reported that AI-supported systems have made grading, feedback, and administrative workflows more efficient. At the same time, personalization and adaptive content delivery have been presented as major advantages for learners, with improved engagement and retention highlighted. While these improvements are difficult to dismiss, it should be acknowledged that efficiency gains do not automatically translate into deeper learning. It appears that the real educational value of AI depends not on automation alone, but on how thoughtfully such automation is embedded into pedagogical design.

A broader mapping of the field was provided by Chen et al. (2022), where two decades of publications were analyzed and recurring themes such as intelligent tutoring systems, natural language processing, educational data mining, affective computing, and recommender systems were identified. The field was shown to be expansive and technically diverse. However, what is striking is that technological sophistication has often outpaced theoretical clarity about what kind of learning is ultimately being optimized. It can reasonably be argued that predictive accuracy and engagement metrics are sometimes prioritized over epistemic depth or reflective capacity.

Evidence of positive impact on student performance has been consistently reported. García-Martínez et al. (2023) concluded that AI and computational sciences generally exert a positive effect on student achievement and motivation, particularly in STEM contexts. Similarly, Wang, Sun, and Chen (2023) demonstrated that institutional AI capability influences student self-efficacy and creativity, which in turn affect learning performance. These findings suggest that AI can enhance not only outcomes but also psychological drivers of learning. Yet it should be noted that such benefits appear strongest when AI is integrated within a coherent institutional strategy that includes teacher expertise, digital literacy, and innovation-oriented culture. Technology alone has not been shown to be sufficient.

At the same time, more cautious evidence has been provided regarding student agency. In Darvishi et al. (2024), it was found that while students performed effectively when supported by AI prompts, performance declined once that support was removed. It was suggested that reliance on AI assistance may develop more readily than independent self-regulated learning strategies. This finding, in my opinion, represents one of the most important insights in the current literature. If AI support strengthens short-term task completion while weakening long-term autonomy, then educational design must deliberately address this trade-off. Performance metrics alone cannot be treated as indicators of meaningful learning.

Human-centered perspectives have been proposed as a corrective to purely technical approaches. In Riedl (2019), human-centered AI was framed as requiring that systems both understand humans and help humans understand the systems. Similarly, in Shneiderman (2020), it was argued that high levels of automation and high levels of human control can coexist when systems are well designed. From an educational standpoint, this suggests that transparency, reversibility of decisions, and interpretability should not be optional features but core design principles. When learners cannot understand how recommendations are generated, their ability to calibrate trust and responsibility may be diminished.

Ethical concerns have also been repeatedly emphasized. In Popenici and Kerr (2017), AI was described as augmenting rather than replacing teachers, and a warning was issued against allowing technological enthusiasm to substitute for pedagogical judgment. In Airaj (2024), privacy, integrity, and non-discrimination were positioned as foundational conditions for trustworthy AI in higher education. These arguments appear particularly urgent as AI systems increasingly shape assessment, feedback, and academic decision making. It would be naïve to assume that algorithmic processes are neutral, especially when bias amplification and data governance issues remain active concerns.

Design-focused research further complicates the narrative. In Saritepeci and Yildiz Durak (2024), AI integration in design-based learning was shown to support creative self-efficacy and reflective thinking, yet meaningful shifts in design thinking mindset were not observed within the study period. This suggests that while AI may accelerate ideation and iteration, deeper cognitive dispositions may require sustained human mentorship and structured practice. In this respect, the optimism surrounding generative AI should perhaps be tempered. Creative assistance does not automatically produce creative identity.

A broader theoretical shift has been articulated by Verganti, Vendraminelli, and Iansiti (2020), who argued that as AI automates problem-solving loops, human roles increasingly shift toward sensemaking and problem framing. This interpretation resonates strongly with education. If algorithmic systems continuously optimize solutions, then human responsibility may lie in defining which problems are worth solving and under which values. In my view, this reframing places educators not at the margins of AI systems, but at their ethical and conceptual core.

Taken together, the literature suggests that AI in education delivers genuine benefits in efficiency, personalization, and, in many contexts, measurable performance gains. However, it also raises substantial concerns regarding dependency, agency, interpretability, and ethical governance. The most convincing evidence supports a position in which AI is treated neither as a replacement for teachers nor as a neutral tool, but as a powerful socio-technical system whose educational impact is shaped by design choices. It seems increasingly clear that the future of AI in education will not be determined by the sophistication of algorithms alone, but by the degree to which human-centered principles, pedagogical depth, and ethical responsibility are intentionally embedded into their use.


References

Airaj, M. (2024). Ethical artificial intelligence for teaching-learning in higher education. Education and Information Technologies, 1–23.

Cagan, J., Grossmann, I. E., & Hooker, J. (1997). A conceptual framework for combining artificial intelligence and optimization in engineering design. Research in Engineering Design, 9, 20–34.

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278.

Chen, X., Zou, D., Xie, H., Cheng, G., & Liu, C. (2022). Two decades of artificial intelligence in education. Educational Technology & Society, 25(1), 28–47.

Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education, 210, 104967.

García-Martínez, I., Fernández-Batanero, J. M., Fernández-Cerero, J., & León, S. P. (2023). Analysing the impact of artificial intelligence and computational sciences on student performance: Systematic review and meta-analysis. Journal of New Approaches in Educational Research, 12(1), 171–197.

Liao, J., Hansen, P., & Chai, C. (2020). A framework of artificial intelligence augmented design support. Human–Computer Interaction, 35(5–6), 511–544.

Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22.

Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36.

Rong, A., & Li, G. (2023). Integrating artificial intelligence in design: An exploration of ChatGPT’s role in inclusive design research. In Proceedings of the 5th International Conference on Literature, Art and Human Development (ICLAHD 2023) (pp. 910–924). Atlantis Press.

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26, 582–599.

Saeidnia, H. R., & Ausloos, M. (2024). Integrating artificial intelligence into design thinking: A comprehensive examination of the principles and potentialities of AI for design thinking framework. InfoScience Trends, 1(2), 1–9.

Saritepeci, M., & Yildiz Durak, H. (2024). Effectiveness of artificial intelligence integration in design-based learning on design thinking mindset, creative and reflective thinking skills: An experimental study. Education and Information Technologies, 1–35.

Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.

Sreenivasan, A., & Suresh, M. (2024). Design thinking and artificial intelligence: A systematic literature review exploring synergies. International Journal of Innovation Studies.

Verganti, R., Vendraminelli, L., & Iansiti, M. (2020). Innovation and design in the age of artificial intelligence. Journal of Product Innovation Management, 37(3), 212–227.

Wang, S., Sun, Z., & Chen, Y. (2023). Effects of higher education institutes’ artificial intelligence capability on students’ self-efficacy, creativity and learning performance. Education and Information Technologies, 28(5), 4919–4939.

Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, 100008.

Zhou, C., Liu, X., Yu, C., Tao, Y., & Shao, Y. (2024). Trust in AI-augmented design: Applying structural equation modeling to AI-augmented design acceptance. Heliyon, 10(1).