From AI Stack to Cognitive Stack: A Semantic–Spatial–Relational Architecture for Trustworthy AI


Liz Johnson
Arham Hussain Inamdar
Bruh Lemma Yadecha

Abstract

This paper introduces a layered Semantic–Spatial–Relational (SSR) Cognitive Stack designed to embed trust calibration, epistemic boundaries, and human authority directly within AI system architecture. Contemporary large language model deployments often emphasize generative performance while under-specifying mechanisms for environmental grounding, relational governance, and epistemic restraint. The SSR architecture addresses this gap by integrating three complementary layers: semantic reasoning for intent interpretation, spatial grounding through contextual perception, and a relational governance layer that regulates interaction boundaries and trust calibration. Rather than treating ethical constraints as post hoc safeguards, the architecture embeds structural mechanisms that govern when and how AI systems should act.
This paper presents the conceptual design of the SSR Cognitive Stack and analyzes its implications for trustworthy AI, embodied systems, and human-AI decision support. We argue that ethical reliability in AI systems depends not solely on output accuracy but on structurally embedded governance mechanisms that regulate action, restraint, and authority. The framework contributes to ongoing debates in AI ethics by proposing an integrity-centered model of AI design in which responsibility, transparency, and relational stability function as architectural principles rather than external controls. Finally, we outline directions for future empirical research needed to evaluate relationally structured AI architectures across educational, commercial, and public-sector environments.


Article Details

Johnson, L., Inamdar, A. H., & Yadecha, B. L. (2026). From AI Stack to Cognitive Stack: A Semantic–Spatial–Relational Architecture for Trustworthy AI. Trends in Computer Science and Information Technology, 018–026. https://doi.org/10.17352/tcsit.000105

Copyright (c) 2026 Johnson L, et al.


This work is licensed under a Creative Commons Attribution 4.0 International License.
