13 Mar 2024

Fit for Duty Artificial Intelligence

How do we trust Security and Police? Do we deploy fresh recruits to complex field operations?

No, we don't.

Organic intelligences are put through established cognitive exams, physical tests and medicals. They are then monitored and periodically re-evaluated.

We should also ensure AI systems are Fit for Duty before deploying them. Read the full article below.

Words by
Alex Carruthers
Neuronal and Neural in Policing

Fit for Duty Artificial Intelligence

This article was originally published on LinkedIn: Fit for Duty Artificial Intelligence.

[Image: adversarial perturbation ('Adv Perturb')]


Police work hard to catch a crook, but the chain of evidence breaks on a software tool.

Exhausting, costly investigations ruined because the wrong tool is trusted.

Expensive state-of-the-art facial recognition systems have algorithmic biases, or vulnerabilities that can be exploited for fraud or other malicious ends.

Systems that help you identify, track and catch criminals end up judged to be biased.

Case thrown out. Baddies pass go, again.

If you're concerned, you're not alone. Welcome to the age of artificial intelligence (AI). Welcome to a new level of incomprehensible challenges.

Welcome to the intersection of AI and Security & Police.

Neuron Versus Neural

Are humans so simple?

A silver lining in our bit-obfuscated world is undoubtedly the 'digital fingerprint'. Criminals are leaving detectable traces despite their best efforts.

It’s unsurprising, then, that the modern criminal cohort is being met with a digital-centric response. Have faith! The S&P services are capable of change; they have successfully adapted before.

Now, data-powered systems are the only hope.

We must ensure they are robust enough, reliable enough for the real world, to not let us all down.

Fingerprints are all well and good but, just as a tree can hide in a forest, the sheer volume of data processing required to reveal these digital traces is staggering. It has become apparent that systems will need to begin to pull their own weight. Systems need to become intelligent in their own right. Security & Police need to join forces with AI and work cohesively.

Let's not mince words:

  1. S&P must adopt new AI tools they can trust to keep up with AI-armed criminals and the volume of crime, despite reduced resources.
  2. Furthermore, S&P will need to continually adopt new tools, and do so ever faster, because the rate of tool evolution itself is speeding up.


Trend spotting, cohort analysing, face recognising, pattern predicting, social tracking, likelihood simulating, auto analysing, … we know you're hearing it everywhere. The S&P Conference, this year and for the coming ten, will surely have the words 'Artificial' and 'Intelligence' strewn everywhere.

There is an understandable lack of trust in AI technology owing to the complexity of its decision-making.


Are humans so different? Are the neurological mechanisms driving human decision-making so simple?

Of course not! At least here, we have a precedent: how do we trust fellow S&P teammates?

Do we deploy fresh recruits to complex field operations or security details?

No, we don't.

Organic intelligences are put through established cognitive exams, physical tests and medicals. They are then monitored and periodically re-evaluated.

AI might be 'narrow intelligence', but it is a teammate – good at only one thing (come now, we've all had a slightly, erm, narrow colleague). Understanding not only the strengths of any teammate but their weaknesses, too, is crucial.

At Advai, we do precisely this: we ensure artificial teammates are tested and evaluated. It's what we might call 'Fit for Duty AI'.

Advai doesn’t create AI. We focus on stress testing and enhancing state-of-the-art AI systems.

We help mitigate flaws that sit at the heart of modern AI by identifying the ‘unknown unknowns’ that cause unpredictable failures. Clearly defining these ‘failure modes’ establishes the safe boundaries for their use. This enables:

  1. Choosing the right tools for the job.
  2. Confidence in AI investments.
  3. Increased speed of adoption by S&P officers.
  4. Reliability in ongoing use.
  5. And thus, the increased productivity AI has promised, with fewer risks.
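To make the idea of 'failure modes' and 'safe boundaries' concrete, here is a minimal, purely illustrative sketch. It assumes a hypothetical toy classifier (`toy_model`) and probes how much input perturbation it tolerates before accuracy drops below a floor; every name and number is invented for illustration, not a description of Advai's actual tooling.

```python
import random

random.seed(0)

def toy_model(x):
    # Hypothetical stand-in for a deployed classifier:
    # predicts class 1 when the input score exceeds 0.5.
    return 1 if x > 0.5 else 0

def accuracy_under_noise(model, samples, noise_level, trials=200):
    """Fraction of correct predictions when each input is perturbed
    by uniform noise of the given magnitude."""
    correct, total = 0, 0
    for x, label in samples:
        for _ in range(trials):
            perturbed = x + random.uniform(-noise_level, noise_level)
            correct += (model(perturbed) == label)
            total += 1
    return correct / total

def safe_operating_boundary(model, samples, floor=0.9):
    """Largest tested noise level at which accuracy stays at or above
    `floor` -- a crude 'safe boundary' for this toy setup."""
    boundary = 0.0
    for level in [0.0, 0.05, 0.1, 0.2, 0.4, 0.8]:
        if accuracy_under_noise(model, samples, level) >= floor:
            boundary = level
        else:
            break
    return boundary

# Clearly separated examples: the model tolerates modest perturbations.
samples = [(0.1, 0), (0.9, 1), (0.2, 0), (0.8, 1)]
print(safe_operating_boundary(toy_model, samples))
```

The point of the sketch: once the noise level at which the model starts failing is known, anything beyond it is outside the tool's safe operating envelope, which is exactly the kind of boundary that informs points 1 to 4 above.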


Advai is a UK Ministry of Defence backed consultancy with world-leading tooling for evaluating and monitoring the robustness of artificial intelligence models. We identify the reliable operating boundaries of AI models and provide early warning systems for undesirable behaviour.

We also provide guidance for human-AI collaboration and enhance confidence in AI investments through outlined mitigation strategies.

Two years of Defence-backed R&D have enabled Advai to ensure AI Police recruits are ‘Fit for Duty’.

Once configured to any environment, Advai will enable Security and Police personnel to:

1. Evaluate any new AI software against rigorous benchmarks so informed tooling decisions can be made.

2. Monitor all live AI software, alerting staff when tools aren't working as they should.

3. Identify when AI is the weak link in the chain of evidence.

4. Provide evidential justification (robustness scores) for the use of an AI model or tool.
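Point 2 above, monitoring live AI and alerting staff, can be illustrated with a deliberately simple sketch. It assumes a hypothetical check that compares the live model's mean prediction confidence against a recorded baseline; the function name, threshold and numbers are all invented for illustration and do not describe Advai's actual monitoring system.

```python
from statistics import mean

def drift_alert(baseline_scores, live_scores, tolerance=0.1):
    # Hypothetical monitoring check: flag when the live model's mean
    # confidence drifts more than `tolerance` below its baseline.
    return mean(live_scores) < mean(baseline_scores) - tolerance

baseline = [0.92, 0.88, 0.95, 0.90, 0.91]   # recorded during evaluation
healthy  = [0.89, 0.93, 0.90, 0.92, 0.87]   # live scores, behaving normally
degraded = [0.60, 0.55, 0.72, 0.65, 0.58]   # live scores, something is wrong

print(drift_alert(baseline, healthy))    # → False (no alert)
print(drift_alert(baseline, degraded))   # → True (alert staff)
```

Real monitoring would use far richer signals than mean confidence, but the principle is the same: a measured baseline plus a continuous live comparison is what turns "the tool seems fine" into evidence.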

In conclusion, the adoption of AI tools in S&P is becoming increasingly necessary, and it brings with it the need to trust these artificial teammates. It is crucial to know they are fit for duty.

Advai’s toolset for assuring ‘Fit for Duty’ AI provides a way to evaluate and continuously monitor AI systems, helping to put the use of AI to catch baddies on robust, defensible grounds.

We look forward to meeting everyone at Security & Policing 2024.

Further reading

Read our two opinion pieces on the AI Act: