Why One-Time Approval is No Longer Enough for AI Systems
Abstract
The advent of artificial intelligence (AI) systems that learn, evolve, and interact with their environment after initial deployment demands a shift away from traditional one-time approval models of governance. This article examines the shortcomings of one-time approval and recommends a life-cycle approach to oversight. It explains how changes in data, operating environments, and user interactions can produce model drift, new risks, and degraded performance over time. Drawing on recent advances in AI monitoring and governance, the article offers a pragmatic approach that combines technical monitoring, human oversight, and organisational accountability. It presents key governance factors, including risk types, monitoring approaches, and decision-making processes, and considers implementation challenges for both large enterprises and small to medium-sized ones. The central argument is that approval should be the first step, not the last, in AI governance, enabling organisations to sustain trust, compliance, and performance across the lifetime of their AI systems.
Copyright (c) 2026 Hemachandran K.

This work is licensed under a Creative Commons Attribution 4.0 International License.