Research overview

We provide collections of bespoke testing tools, each configured for a specific function.



A Python architecture enabling data scientists to test AI models and data across the MLOps lifecycle.



Comparative testing of two or more AI models on a Whitebox (open access) or Blackbox (closed) basis.



A model and dataset registry for determining which model is fit-for-purpose.



Advanced Adversarial AI tooling, designed to identify and reveal AI susceptibility to malicious influence and attacks.
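The idea behind adversarial tooling can be illustrated with a minimal sketch. The example below is an assumption for illustration only, not Advai's tooling: a gradient-sign (FGSM-style) perturbation against a toy linear classifier, where a small, targeted nudge to the input flips the predicted label.

```python
import numpy as np

# Toy linear classifier: label = 1 if w.x + b > 0 else 0.
# FGSM-style perturbation: step the input against the sign of the
# score's input-gradient (for a linear score, that gradient is just w).
# Hypothetical names for illustration; real attacks target real models.

def predict(w, b, x):
    return 1 if float(np.dot(w, x) + b) > 0 else 0

def fgsm_perturb(w, x, eps):
    # Move each coordinate by eps in the direction that lowers the score.
    return x - eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.4, -0.1, 0.2])      # score = 0.7, classified as 1

x_adv = fgsm_perturb(w, x, eps=0.3)  # score drops to -0.35, flips to 0
```

A perturbation of 0.3 per coordinate is enough here: the point is that the failure is engineered, not random, which is why random stress-testing alone misses it.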

Inhuman intelligences make inhuman mistakes.

Our aim is to convey to non-technical stakeholders that the mistakes AI systems can make are counterintuitive: they can be incredible at one advanced task but can get tripped up over something a human would think is obvious. 

So: you need inhuman assurance methods!

Fix Your AI
Don't wait for critical AI to break before you fix it. Discover the operational limits of AI in the lab; don't stumble into those limits in the field.

Keep your models at peak performance by identifying their limits.




Real people, thinking outside the [closed] box

As well as our next-generation tools and platform, Advai provides consultancy services to improve model robustness and protect against adversarial AI.

We can work with your business to increase awareness and understanding, as well as provide training, red-teaming and reports on topics related to Adversarial AI, Robust AI and AI Regulation.

  1. Specialist AI challenges

  2. Solving deployment issues

  3. Red teaming for adversarial AI

  4. Collaborative working

Alignment: keeping technology under control

Box Breaking

  1. Black Box Assessment

    No knowledge of architecture, model or training data. Based on transferable, one-shot attacks. Can be done from just API access, assessing very few inputs/outputs.

  2. White Box Assessment

    Graded levels of access – e.g. full access to model, dataset and architecture.

    Nuanced understanding of strengths and weaknesses.  

    Development of tailored assessment.

    Model emulation possible for disruptive adversarial attacks.
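A black-box assessment of the kind described above can be sketched in a few lines: with nothing but query access to an opaque prediction endpoint, probe how stable its output is under small input perturbations. Everything here is a hypothetical stand-in (the `model_api` function plays the role of a remote API), not Advai's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_api(x):
    # Opaque stand-in for a remote model endpoint: we assume no access
    # to its weights, architecture or training data, only input/output.
    return 1 if x.sum() > 1.0 else 0

def stability_score(x, eps, n_queries=200):
    """Fraction of small random perturbations that leave the label unchanged."""
    base = model_api(x)
    same = 0
    for _ in range(n_queries):
        noise = rng.uniform(-eps, eps, size=x.shape)
        if model_api(x + noise) == base:
            same += 1
    return same / n_queries

x = np.array([0.6, 0.6])  # sits close to this model's decision boundary

small = stability_score(x, eps=0.05)  # tiny noise: label stays put
large = stability_score(x, eps=0.5)   # larger noise: label starts flipping
```

Comparing the two scores reveals how close an input sits to a decision boundary, using very few inputs/outputs and no internal knowledge – the essence of a black-box probe.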


We can work with you on your AI preparedness across a range of applications.

  1. Computer vision applications

    Classification, detection, segmentation.

  2. Natural language processing

    Auto generated text and speech.

  3. Optical character recognition

    Interpretation of written information.

  4. Complex models

    Hybrid, combined and complex systems.



You can trust robust AI.
