The Effects of Trust and Cognitive Load on Human-Generative AI Communication in Higher Education: Evidence from Indonesian Undergraduate Students
DOI: https://doi.org/10.18326/inject.v11i1.6235

Keywords: Generative Artificial Intelligence, Human-Machine Communication, Human-AI Communication, Trust, Cognitive Load

Abstract
This study examines the influence of trust and cognitive load as fundamental factors associated with Human-Generative AI Communication in a higher education context. As student interactions with generative artificial intelligence systems through natural language increase, understanding the conditions that support effective and meaningful communication between humans and AI becomes crucial. Using a quantitative research design, survey data were collected from 400 Indonesian undergraduate students who had actively used a generative AI platform for academic purposes in the past three months. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to test the hypothesized influence of trust and cognitive load on Human-Generative AI Communication within a theory-driven modeling framework. Results indicate that trust has a strong and significant influence: students who perceive AI systems as reliable and credible are more likely to engage in open, dialogic communication. Cognitive load also showed a significant influence, indicating that lower cognitive load facilitates clearer, more efficient, and more dialogic communication with AI systems. Furthermore, the combined effects of trust and cognitive load accounted for a significant portion of the variance in Human-Generative AI Communication, underscoring the role of both factors in shaping interaction quality in academic settings. The findings indicate that students' communication with generative AI is closely related to relational evaluations and perceived cognitive demands, supporting the view of generative AI as a communicative participant rather than simply an instrumental tool. This study contributes to communication research by providing empirical evidence on the socio-cognitive underpinnings of Human-Generative AI Communication in higher education.
Copyright (c) 2026 INJECT (Interdisciplinary Journal of Communication)

This work is licensed under a Creative Commons Attribution 4.0 International License.