09 January 2026

Building Trust Into AI for Safety Critical Systems with Keysight

Artificial intelligence is now deeply embedded in safety critical systems, particularly across the automotive sector where advanced driver assistance and autonomous functions increasingly depend on complex machine learning models. As these systems move closer to widespread deployment, scrutiny from regulators, insurers, and the public has intensified. Trust is no longer an abstract concept. It has become a measurable requirement that must be demonstrated with evidence, documentation, and repeatable processes.

Against this backdrop, Keysight Technologies, Inc. has introduced AI Software Integrity Builder, a new software solution designed to fundamentally change how AI enabled systems are validated, deployed, and maintained. The focus is not performance alone. Instead, the solution addresses transparency, explainability, and continuous assurance, areas where many AI programmes still struggle to move beyond theory.

Why Traditional AI Validation Falls Short

Modern AI systems behave less like deterministic software and more like evolving ecosystems. Their outputs are shaped by training data, model architecture, environmental context, and ongoing updates. Yet despite this complexity, many development teams rely on fragmented toolchains that only examine isolated stages of the AI lifecycle. One tool checks datasets, another inspects models, and a third attempts to test behaviour after deployment. Important connections are often missed.

This fragmentation creates genuine risk. Developers can struggle to identify whether unexpected behaviour stems from biased data, hidden correlations, or real world conditions that were never adequately represented during training. For industries such as automotive, these gaps are unacceptable. Safety cases must be defensible, and regulatory compliance must be provable rather than assumed.

Regulation Demands Evidence, Not Assumptions

The regulatory landscape surrounding artificial intelligence is evolving rapidly. In automotive applications, ISO/PAS 8800 introduces expectations around AI safety, while the EU AI Act raises the bar further by mandating transparency, traceability, and risk based validation for high risk systems. What these frameworks share is a clear definition of objectives, but far less guidance on how engineering teams should practically achieve them.

This uncertainty places additional pressure on developers. They must demonstrate explainability and validation without a standardised path forward. The result is often over engineered documentation, inconsistent testing approaches, and an uncomfortable reliance on best guesses. AI Software Integrity Builder is positioned squarely at this intersection, offering a structured and evidence driven framework rather than another isolated testing utility.

A Lifecycle Based Approach to AI Integrity

At the heart of Keysight’s new solution is a unified, lifecycle oriented framework that follows an AI system from data ingestion through deployment and into ongoing operation. Rather than asking teams to stitch together multiple tools, AI Software Integrity Builder creates a single environment where validation activities are connected and traceable.

This approach answers a question that regulators, safety assessors, and engineering leaders increasingly ask: what is actually happening inside the AI system, and how can its behaviour be justified once it is live in the field? By maintaining continuity across the lifecycle, the solution enables teams to build and preserve the safety evidence required for long term compliance.

Dataset Analysis as the Foundation of Trust

Every AI system begins with data, and weaknesses at this stage often ripple through the entire lifecycle. AI Software Integrity Builder includes dataset analysis capabilities that use statistical methods to uncover bias, gaps, and inconsistencies within training data. These insights allow teams to understand whether datasets genuinely represent the operational environment the system will face.

By highlighting areas of concern early, developers can take corrective action before models are trained. This reduces the risk of embedding systematic bias or blind spots that only emerge after deployment, when remediation becomes far more costly and reputationally damaging.
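The kind of statistical screening described here can be illustrated with a minimal sketch. This is not Keysight's implementation, and the class names and threshold are hypothetical; it simply shows how an under-represented operating condition, such as night-time driving scenes, can be flagged before training begins.

```python
from collections import Counter

def class_balance_report(labels, imbalance_threshold=0.2):
    """Flag classes whose share of the dataset falls below a fraction
    of the share they would have under a uniform distribution."""
    counts = Counter(labels)
    n = len(labels)
    uniform_share = 1 / len(counts)
    report = {}
    for cls, count in counts.items():
        share = count / n
        report[cls] = {
            "share": share,
            "under_represented": share < imbalance_threshold * uniform_share,
        }
    return report

# Hypothetical example: night scenes are only 2% of a driving dataset
labels = ["day"] * 98 + ["night"] * 2
report = class_balance_report(labels)
```

Real dataset analysis would of course screen many more properties, such as feature distributions, label noise, and coverage of the operational design domain, but the principle is the same: surface the gap while it is still cheap to fix.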

Making Model Decisions Explainable

One of the most persistent criticisms of AI is its perceived opacity. Black box models may deliver impressive accuracy, but without explainability they struggle to earn trust in regulated settings. Model based validation within AI Software Integrity Builder focuses on revealing how models reach their decisions and which features most strongly influence outcomes.

This level of insight allows engineers to identify hidden correlations and unintended behaviours that might otherwise go unnoticed. More importantly, it provides the documentation and rationale needed to support safety arguments, internal reviews, and external audits. Explainability shifts from an academic concept to a practical engineering deliverable.
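One widely used way to measure which features most strongly influence outcomes, of the kind alluded to above, is permutation importance: shuffle one input at a time and see how much accuracy drops. The sketch below is a generic illustration with a toy model, not a description of Keysight's method.

```python
import random

def permutation_importance(model, X, y, n_features, seed=0):
    """Estimate each feature's influence by measuring how much the
    model's accuracy drops when that feature's column is shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in X]
        rng.shuffle(column)
        permuted = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(baseline - accuracy(permuted))
    return importances

# Toy model that only ever looks at feature 0
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [True, False, True, False]
imps = permutation_importance(model, X, y, n_features=2)
```

Shuffling feature 1 leaves accuracy untouched, so its importance comes out as zero, while feature 0 carries all the influence. In a safety case, that kind of evidence is exactly what distinguishes a justified decision from an unexamined one.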

Testing AI Where It Actually Operates

Training environments rarely capture the full complexity of real world operation. Weather conditions, sensor noise, edge cases, and human behaviour can all influence how an AI system performs once deployed. Inference based testing within the solution evaluates models under realistic conditions, comparing live behaviour with expectations derived from training.

When deviations occur, the system does more than flag a failure. It helps teams understand why behaviour has changed and recommends targeted improvements for future iterations. This feedback loop supports continuous improvement rather than one off validation exercises, aligning AI assurance with the realities of long term system operation.
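A common way to detect the kind of deviation described above is to compare the distribution of live model scores against the distribution seen during training. The sketch below uses the Population Stability Index, one standard drift metric; the scores and the 0.2 alert threshold are illustrative assumptions, not details of Keysight's product.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index: quantifies how far a live score
    distribution has drifted from the training-time distribution."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6]
live_scores = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 1.0, 1.0]  # shifted upward
drifted = psi(train_scores, live_scores) > 0.2  # common alert threshold
```

A metric like this only says that behaviour has changed; the harder diagnostic work of explaining why, which the article attributes to the solution's feedback loop, starts from signals of this kind.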

Bridging the Gap Between Training and Deployment

Many existing AI validation tools stop at the point where a model appears ready for deployment. What happens next is often left to operational teams with limited visibility into how models were trained or validated. Keysight’s approach explicitly closes this gap by maintaining traceability from initial datasets through to live inference.

For high risk applications such as autonomous driving, this continuity is critical. It ensures that confidence in a model is not lost the moment it leaves the lab. Instead, assurance becomes an ongoing process that evolves alongside the system itself.
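The traceability idea described above, linking initial datasets through training to live inference, can be sketched as a chain of hash-linked records, where each lifecycle stage commits to the digest of the stage that produced its inputs. The record fields and stage names below are hypothetical; this is a generic lineage pattern, not Keysight's format.

```python
import hashlib
import json

def record(stage, payload, parent_digest=None):
    """Create a traceability record whose digest covers both its own
    payload and the digest of the stage that produced its inputs."""
    body = {"stage": stage, "payload": payload, "parent": parent_digest}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "digest": digest}

def verify_chain(records):
    """Confirm each record really descends from the previous one."""
    for prev, cur in zip(records, records[1:]):
        if cur["parent"] != prev["digest"]:
            return False
    return True

# Hypothetical lifecycle: dataset -> training run -> inference batch
dataset = record("dataset", {"name": "camera_v3", "rows": 120000})
model = record("training", {"arch": "cnn", "epochs": 40}, dataset["digest"])
inference = record("inference", {"batch": "2026-01-09"}, model["digest"])
chain_ok = verify_chain([dataset, model, inference])
```

Because every digest depends on its parent, removing or altering an intermediate stage breaks verification, which is precisely the property that keeps confidence in a model from being lost once it leaves the lab.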

Industry Perspective on AI Safety

Thomas Goetzl, Vice President and General Manager of Automotive and Energy Solutions at Keysight, highlighted the growing urgency of the challenge: “AI assurance and functional safety of AI in vehicles are becoming critical challenges. Standards and regulatory frameworks define the objectives, but not the path to achieving a reliable and trustworthy AI deployment. By combining our deep expertise in test and measurement with advanced AI validation capabilities, Keysight provides customers with the tool to build trustworthy AI systems backed by safety evidence and aligned with regulatory requirements.”

This emphasis on evidence reflects a broader industry shift. Trustworthy AI is no longer achieved through claims or performance metrics alone. It must be demonstrated through rigorous, repeatable validation supported by clear data.

Supporting Engineers Under Real World Pressure

Engineering teams today face competing demands. They must innovate quickly while also satisfying increasingly complex regulatory expectations. AI Software Integrity Builder is designed to reduce this tension by providing a coherent framework that integrates with existing workflows rather than disrupting them.

By consolidating validation activities and aligning them with regulatory objectives, the solution allows teams to focus on improving system behaviour instead of managing disconnected tools and documentation. The result is a more efficient path to deployment that does not compromise on safety or transparency.

A Practical Path to Trustworthy AI

As AI becomes more deeply embedded in vehicles and other safety critical systems, the question is no longer whether assurance is needed, but how it can be achieved at scale. Fragmented approaches may suffice for low risk applications, but they fall short when lives and livelihoods are at stake.

With AI Software Integrity Builder, Keysight positions itself as an enabler of responsible AI deployment. By combining dataset analysis, model explainability, inference testing, and continuous monitoring within a single lifecycle framework, the company offers a practical route to AI systems that are transparent, auditable, and compliant by design.

About The Author

Anthony brings a wealth of global experience to his role as Managing Editor of Highways.Today. With an extensive career spanning several decades in the construction industry, Anthony has worked on diverse projects across continents, gaining valuable insights and expertise in highway construction, infrastructure development, and innovative engineering solutions. His international experience equips him with a unique perspective on the challenges and opportunities within the highways industry.
