New research from assurance and risk management company DNV has identified the foundations for achieving trustworthy artificial intelligence in the context of safety-critical industrial processes. According to the paper, Assurance of AI-Enabled Systems, established risk management principles can be adapted to meet the complexity and uncertainty of AI-enabled systems. While AI introduces new risks, proven assurance methods from safety-critical industries already provide a robust starting point for addressing them.
The paper shows that AI reshapes risk because it does not operate as a fixed, predictable component. This makes traditional one-time assurance insufficient and highlights the need for continuous, adaptive assurance throughout the lifecycle.
Christian Agrell, Programme Director for AI Assurance at DNV, said, “Creating trustworthy artificial intelligence does not require us to start from zero. We already have strong foundations in modern assurance and risk science and our long experience managing digital technologies in high‑risk environments. Applying these principles thoughtfully allows us to build systems that remain safe and reliable, even as they evolve. Trustworthy AI depends on predictable behaviour under uncertainty, and that is exactly what these foundations help deliver.”
The research draws on DNV’s decades-long assurance and risk management experience in critical infrastructure, including the maritime and energy sectors. The foundational principles for creating trustworthy AI include:
- A system model that captures the entire AI-enabled system
This model reflects how AI interacts with humans, digital and physical components, and its operational environment. It enables understanding of emergent behaviour, unintended interactions and context-specific risks that cannot be detected by examining the AI component alone.
- Taking a modular approach
A risk model, applying uncertainty-based assessment and modular risk principles, breaks down complex systems, with their complex and emergent risks, into manageable parts across system levels.
- Linking claims to evidence
These structured arguments connect claims such as “the system is safe” to verifiable evidence, assumptions and rationale. This provides a transparent, auditable framework for demonstrating trustworthiness throughout the lifecycle.
- Continuous, context-aware assurance that adapts as AI evolves
AI-enabled systems change over time as models are updated, data shifts and operating conditions vary. To maintain trustworthiness, assurance must be ongoing rather than a one-time check. This includes real-time monitoring, regular updates to evidence, and re-evaluating risks and requirements so that confidence in the system remains valid throughout its lifecycle.
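To make the claims-to-evidence idea above concrete, here is a minimal illustrative sketch (not DNV's actual methodology or tooling; the class names and example claims are invented for illustration) of how a structured assurance argument might link a top-level safety claim, through sub-claims and stated assumptions, to verifiable evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str
    verified: bool = False  # has this evidence been checked and accepted?

@dataclass
class Claim:
    statement: str
    assumptions: list = field(default_factory=list)
    evidence: list = field(default_factory=list)
    sub_claims: list = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds only if all its evidence is verified and all sub-claims hold."""
        return (all(e.verified for e in self.evidence)
                and all(c.supported() for c in self.sub_claims))

# Hypothetical top-level claim decomposed into sub-claims, each tied to evidence.
top = Claim(
    "The AI-enabled system is safe in its operational context",
    assumptions=["Operating conditions stay within the validated envelope"],
    sub_claims=[
        Claim("Model performance meets requirements",
              evidence=[Evidence("Validation test report", verified=True)]),
        Claim("Data drift is detected and handled",
              evidence=[Evidence("Monitoring process review", verified=False)]),
    ],
)

print(top.supported())  # False: the monitoring evidence is not yet verified
```

Because assurance is continuous rather than one-time, such a structure would be re-evaluated as models are updated and evidence ages, rather than built once and archived.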
“These foundations give industry a clear, actionable way to build and maintain trustworthy AI. We are already working with companies that recognize the potential of AI, as well as the risks it can pose to the critical services they deliver. I urge more organizations to join us in addressing and managing the risks associated with artificial intelligence,” Agrell added.
The position paper is part of DNV’s broader work to help industry adopt AI responsibly and aligns with the company’s recommended practice (DNV‑RP‑0671) for AI assurance.