23 Apr 2023

Grant to 'Determine Real World AI Trustworthiness and Robustness'

UKRI (UK Research and Innovation)

Words by
Alex Carruthers
Research Grants
UKRI Innovate UK


Advai is delighted to announce our latest research grant focusing on enhancing the trustworthiness of AI, particularly in the realm of Computer Vision. This project aims to address the challenges of deploying Computer Vision systems, especially in safety-critical environments, where unexpected failures can lead to substandard performance, reputational damage, or regulatory issues.

Key Aspects of the Project:

Development of Sandbox Test Environments: We are creating independent testing environments for Computer Vision systems. These environments will act as proxies for real-world deployment, enabling us to test systems without the risks associated with failures in live deployment.

Establishing Reliable Metrics: Our goal is to develop metrics for predicting failure modes in AI systems. These metrics will help in identifying potential engineering failures (like imbalanced data or adversarial attacks) and issues where AI systems may not meet societal expectations (such as biases related to race or gender).

Enhancing Trust in AI: By accurately predicting and mitigating failure modes, we aim to foster trust in AI technologies. Our approach will improve real-world performance predictions during development and produce tools that indicate likely real-world model failures.

Benefits for Trustworthy AI: The project will enable more efficient model development by providing early indicators of likely deployment success or failure. It will also accelerate the production of data for benchmarking model performance in real-world settings.
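To make the kinds of metrics described above concrete, here is a minimal, illustrative sketch (not Advai's actual tooling; the function names and thresholds are our own assumptions) of two simple health checks: a class-imbalance ratio for training data, and an accuracy gap across demographic subgroups of the kind bias audits look for.

```python
# Illustrative sketch only -- not Advai's tooling. Two toy metrics of the
# kind the project describes: engineering failures (imbalanced data) and
# societal failures (performance gaps across subgroups).
from collections import Counter


def imbalance_ratio(labels):
    """Ratio of most- to least-frequent class; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())


def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two subgroups (0.0 = parity)."""
    accs = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        accs[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return max(accs.values()) - min(accs.values())


if __name__ == "__main__":
    # A dataset with 8 "cat" and 2 "dog" labels has imbalance ratio 4.0.
    print(imbalance_ratio(["cat"] * 8 + ["dog"] * 2))
    # Group "a" is 50% accurate, group "b" is 100%: gap of 0.5.
    print(subgroup_accuracy_gap([1, 1, 0, 0], [1, 0, 0, 0],
                                ["a", "a", "b", "b"]))
```

In practice, metrics like these would be computed inside the sandbox environments and tracked over development, so that a rising imbalance ratio or widening subgroup gap serves as an early warning before deployment.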

This initiative is a significant step in Advai's commitment to advancing trustworthy and ethical AI applications, ensuring that AI systems are reliable, fair, and effective in various real-world scenarios.

Who are Advai?

Established in 2020, Advai is a leading UK Deep Tech specialist focussed on AI Safety and Security. We test and evaluate Artificial Intelligence and Machine Learning systems, enabling our customers to assure their Large Language Models, Computer Vision and other AI-enabled technologies for deployment in business-critical or regulated environments. 

Our tooling stress-tests, measures and improves AI robustness and real-world performance, finding reliable operating boundaries and creating early warning systems to predict natural or adversarial issues. As one of the most successful companies to have come through the Defence and Security Accelerator, we work with both the UK Ministry of Defence and a range of safety-conscious enterprises.

Advai is a proud partner of the UK Government’s Frontier AI Taskforce, the research unit behind the world's first global AI Safety Summit.

If you would like to discuss this in more detail, please reach out to contact@advai.co.uk