Exploring Human-AI Interaction: A Systematic Review of Trust, Ethics, and Adoption in Artificial Intelligence Systems
DOI: https://doi.org/10.5281/zenodo.15628541

Keywords: Artificial Intelligence, human-AI communication, natural language processing, ethics

Abstract
Artificial Intelligence (AI) is rapidly reshaping the landscape of human communication through advances in natural language processing (NLP), speech recognition, and conversational interfaces. This paper critically examines the dual impact of AI on both interpersonal and human-machine communication, emphasizing the balance between innovation and its ethical, psychological, and social implications. Drawing from an extensive review of scholarly literature, the study identifies key themes such as human-AI collaboration, the role of trust and transparency, ethical responsibility, cognitive dissonance, and the evolving dynamics of social interaction in AI-mediated contexts. The paper explores how AI systems influence user perception, emotional engagement, and identity formation, while also raising concerns about manipulation, overreliance, and authenticity in communication. It highlights the importance of integrating ethical design principles—such as fairness, accountability, and inclusivity—into the development of AI communication tools. The study concludes by proposing guidelines for creating AI systems that support and enrich human discourse while safeguarding psychological well-being and social cohesion. This work contributes to a growing body of interdisciplinary research that seeks to understand and guide the responsible evolution of AI in communication.
License
Copyright (c) 2025 Jason Isaac III A. Rabi (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.