
Journal


11 Mar 2025

The AI Revolution: Turning Promise into Reality

The AI revolution is here, but in high-stakes sectors like security and policing, adoption depends on trust, rigorous testing, and assurance. Without clear evaluation frameworks, AI’s potential remains untapped—and the risks outweigh the rewards.

We explore how the UK’s AI Opportunities Action Plan lays the groundwork for progress, but also why testing and assurance must be the priority. From defining evaluation metrics to embedding standards into procurement, the key to unlocking AI’s potential is ensuring it works safely, ethically, and reliably.

AI Safety · AI Robustness · Language Models · Trends and Insights · Adversarial Attacks · AI Ethics · Police and Security · AI Risk


28 Feb 2025

AI Bias: The Hidden Flaws Shaping Our Future

Our own Michaela Coetsee dives deep into the pressing issue of bias in AI systems. While AI has the potential to revolutionise industries, its unchecked development can perpetuate harmful societal biases.

From data to algorithm design, and even human processes, understanding and mitigating bias is crucial for ensuring AI serves humanity equitably.

As we continue to build and refine AI, it's vital that fairness, diversity, and inclusivity remain at the forefront of development. Read more on how we can shape the future of AI responsibly.

AI Safety · AI Robustness · Language Models · Trends and Insights · AI Ethics


03 Feb 2025

Apple’s AI News Debacle: How Assurance-Driven Evaluation Could Have Prevented It

A few weeks ago, Apple News made headlines for all the wrong reasons. Its AI summarisation tool generated inaccurate—and sometimes offensive—summaries of news articles. While some errors were laughable, others seriously damaged trust in the platform.

AI Safety · AI Robustness · Language Models · Trends and Insights


03 Feb 2025

Aye Aye AI Podcast

Our very own Chris Jefferson and Matt Sutton were guests on the latest episode of the Aye Aye AI podcast!


AI Safety · AI Robustness · Language Models · Trends and Insights


11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it is up to present-day adopters of #ArtificialIntelligence systems to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector.

AI Safety · AI Robustness · Language Models · Trends and Insights · AI Assurance · Adversarial Attacks · AI Governance · AI Ethics · AI Compliance · AI Risk · Case Study


16 Jul 2024

Authentic is Overrated: Why AI Benefits from Synthetic Data.

When assuring AI systems, we look at a number of things: the model, the people, the supply chain, the data, and so on. In this article, we zoom in on a small aspect you might not have come across: #SyntheticData.

We explain how 'fake' data can improve model accuracy, enhance robustness to real-world conditions, and strengthen adversarial resilience, and why it might be critical for the next step forward in #ArtificialIntelligence.

AI Safety · AI Robustness · Language Models · Trends and Insights


26 Jun 2024

Ant Inspiration in AI Safety: Our Collaboration with the University of York

What do ants have to do with AI Safety? Could the next breakthrough in AI Assurance come from the self-organising structures found in ecological systems?

UK Research and Innovation funded a Knowledge Transfer Partnership between Advai and the University of York.

This led to the hire of Matthew Lutz, AI Safety Researcher and Behavioural Ecologist.

In this blog, we explore Matt's journey from architecture, through the study of Collective Intelligence in army ant colonies, to joining Advai as our 'KTP Research Associate in Safe Assured AI Systems'.

AI Safety · AI Robustness · Adversarial Attacks · Language Models · Trends and Insights


14 May 2024

Advai’s Day Out Teaching the Military how to Exploit AI Vulnerabilities

"It’s in this moment where the profound importance of adversarial AI really clicks. The moment when a non-technical General can see a live video feed, with a small bounding box following their face, identifying them, and pictures the enemy use-case for such a technology.

Then, a small amount of code is run and in a heartbeat the box surrounding their face disappears.

Click."

Read more about our day with the UK Ministry of Defence…

AI Safety · AI Robustness · Adversarial Attacks · Computer Vision · Defence · Case Study


18 Apr 2024

Uncovering the Vulnerabilities of Object Detection Models: A Collaborative Effort by Advai and the NCSC

Object detectors can be manipulated: the car is no longer recognised as a car; the person is no longer there. As these detection systems become increasingly widespread, their resilience to manipulation becomes increasingly important.

The purpose of this work is both to demonstrate the vulnerabilities of these systems and to showcase how manipulations might be detected and ultimately prevented.

In this blog, we recount our technical examination of the vulnerabilities of five advanced object detectors, carried out with sponsorship and strategic oversight from the National Cyber Security Centre (NCSC).

AI Safety · AI Robustness · Adversarial Attacks · Computer Vision · Defence · Case Study


13 Mar 2024

Fit-for-Duty Artificial Intelligence

How do we come to trust security and police personnel? Do we deploy fresh recruits to complex field operations?

No, we don't.

Organic intelligences are put through established cognitive exams, physical tests and medicals. They are then monitored and periodically re-evaluated.

We should also ensure AI systems are Fit for Duty before deploying them. Read the full article below.

AI Robustness · AI Safety · Trends and Insights · AI Assurance · Computer Vision · Police and Security


22 Feb 2024

The Unwitting AI Ethicist

If you're curious about the types of ethical decisions AI engineers face, this article is for you. TL;DR: AI engineers should take on some ethical responsibilities; others should be left to society. Read on to find out more.

AI Robustness · AI Safety · Trends and Insights · AI Assurance · AI Risk · AI Ethics


09 Jan 2024

Welcome to the Era of AI 2.0

The paradigm has shifted: AI 2.0 is the amalgamation of intelligent language agents capable of collaboration, whose behaviour is guided by natural language, rather than code. 

'AI 2.0' is marked distinctly by the orchestration of LLM-based agents: AI language models capable of managing, directing and modulating other AI. This is not merely an incremental step. It's a leap in artificial intelligence that redefines what is possible for both business and government.

AI Robustness · AI Safety · Trends and Insights · AI Assurance · AI Risk · Language Models


12 Dec 2023

The AI Act-ually Happening

Some strengths, some weaknesses and 3 key implications for businesses seeking to adopt artificial intelligence, now the EU has finalised The AI Act.

Let the regulation-driven transformation commence.

AI Robustness · AI Safety · Trends and Insights · AI Regulation · AI Governance · AI Assurance · AI Risk


05 Dec 2023

When Computers Beat Us at Our Own Game

You’ve probably seen the Q* rumours surrounding the OpenAI-Sam-Altman debacle. We can’t comment on the accuracy of these rumours, but we can provide some insight by interpreting Q* in the context of reinforcement learning.

It's fun, inspiring and daunting to consider that we may be approaching another one of 'those moments', where the world’s breath catches and we're forced to contemplate a world where computers beat us at our own game.

Language Models · AI Robustness · Adversarial Attacks · AI Safety · Trends and Insights


23 Nov 2023

The Achilles Heel of Modern Language Models

Maths can scare many non-technical business managers away. However, a brief look at the maths is a great reminder of quite how 'inhuman' artificial intelligence is, and how inhuman its mistakes can be.

Read our short post on why the inclusion of 'sight' makes language models like GPT-4 so vulnerable to adversarial attack and misalignment.

Language Models · AI Robustness · Computer Vision · Adversarial Attacks · AI Safety · Trends and Insights


20 Nov 2023

Securing the Future of LLMs

Exploring generative AI for your business? Discover how Advai contributes to this domain by researching Large Language Model (LLM) alignment, to safeguard against misuse or mishaps, and prevent the unlocking of criminal instruction manuals!

AI Safety · Adversarial Attacks · AI Robustness · AI Assurance · Language Models · Trends and Insights · AI Risk


18 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part III.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of them.

This is the third and final part of a series of articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Ethics · Language Models · Trends and Insights


11 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part II.

This is the second part of a series of articles geared towards non-technical business leaders. We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.


"Language models are designed to serve as general reasoning and text engines, making sense of the information they've been trained on and providing meaningful responses. However, it's essential to remember that they should be treated as engines and not stores of knowledge."

AI Safety · AI Robustness · AI Assurance · Adversarial Attacks · AI Governance · AI Ethics · Language Models · Trends and Insights · AI Risk


11 Oct 2023

Assurance through Adversarial Attacks

This blog explores adversarial techniques to explain their value in detecting hidden vulnerabilities. Adversarial methods offer insight into strengthening AI against potential threats, safeguarding its use in critical sectors and underpinning AI trustworthiness for end users.

Adversarial Attacks · AI Robustness · AI Assurance · AI Safety · Language Models · Trends and Insights


04 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part I.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of them.

This is the first in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

Language Models · Trends and Insights · AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Ethics · AI Compliance


13 Sep 2023

AI-Powered Cybersecurity: Leveraging Machine Learning for Proactive Threat Detection

Every day the attack surface of an organisation is changing and most likely growing.


An environment where petabytes of both traditional and AI enhanced data are transferred across private and public networks creates a daunting landscape for cybersecurity professionals.  This data rich world is now even more accessible to cyber criminals as new AI-enabled strategies facilitated by open-source tooling become available to them.


How is the modern CISO, IT manager or cybersecurity professional meant to keep up? The answer, perhaps unsurprisingly, is that AI is also the solution for detecting and dealing with these new threats.

AI Safety · AI Governance · AI Compliance · AI Investment · Trends and Insights · AI Robustness · Adversarial Attacks


31 Aug 2023

Risk Framework for AI Systems

In 2023, AI has become a pivotal business tool, posing both opportunities and risks. Understanding AI, its regulatory landscape, and integrating it into risk management frameworks are essential. This involves staying informed about global regulations, recognising AI-specific threats, and adopting a structured approach to risk management. Stress testing AI systems is crucial for assessing performance and reliability. Businesses must continually adapt, leveraging risk assessments and monitoring to safely harness AI's potential.

AI Safety · AI Robustness · Adversarial Attacks · AI Governance · AI Assurance · Trends and Insights


31 Jul 2023

Aligning MLOps with Regulatory Principles

A recap of Chris Jefferson's talk centred on MLOps and impending AI regulations. He highlighted challenges in deploying ML systems, focusing on compliance, risk, and harm prevention. Key discussions included security risks in ML models, the importance of understanding the impact of use cases, and managing failure modes. Chris also emphasised the societal impact of AI and the need to bridge the gap between public perception and technological realities. He advised on streamlining MLOps processes for efficiency and efficacy, and aligning development with the principles underlying regulation.  


17 Mar 2023

What is The NIST Artificial Intelligence Framework

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of The NIST AI Framework. What is it? Where is it used? Why is it important?


15 Mar 2023

Assuring Computer Vision in the Security Industry

Advai assessed an AI's performance, security, and robustness in object detection, identifying imbalances in data and model vulnerabilities to adversarial attacks. Recommendations included training data augmentation, edge case handling, and securing the AI's physical container.

Computer Vision · AI Governance · AI Assurance · Adversarial Attacks · AI Robustness · Case Study · AI Risk


25 Feb 2023

What is Responsible AI

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of Responsible AI. What is it? Where is it used? Why is it important?

AI Safety · AI Robustness · AI Assurance · AI Ethics


18 Feb 2023

What is Robust AI

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of Robust AI. What is it? Where is it used? Why is it important?

AI Robustness · AI Assurance · Trends and Insights


04 Feb 2023

What is Artificial Intelligence

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of Artificial Intelligence. What is it? Where is it used? Why is it important?

Trends and Insights


08 Jan 2021

Fooling an Image Classifier with Adversarial AI

A Simple Adversarial AI Black Box Attack on an Image Classifier.
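The article itself walks through the attack in full; purely as a hedged illustration of the general idea, here is a minimal sketch of a query-only (black-box) attack in the spirit of SimBA: nudge one random pixel at a time and keep the change only if the model's confidence in the true class drops. The `ToyClassifier` and all names here are invented stand-ins for illustration, not the model or code from the article.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

class ToyClassifier:
    """Stand-in for the victim model: a random linear classifier over a
    flattened image. The attacker only ever calls predict_proba (black box)."""
    def __init__(self, n_pixels, n_classes, seed=0):
        self.W = np.random.default_rng(seed).normal(size=(n_classes, n_pixels))

    def predict_proba(self, x):
        return softmax(self.W @ x.ravel())

def black_box_attack(model, x, true_label, eps=0.1, max_queries=2000, seed=1):
    """Greedy SimBA-style attack: perturb one random pixel by +/-eps
    (clipped to valid pixel range) and keep the step only if the
    true-class probability decreases."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p_true = model.predict_proba(x_adv)[true_label]
    for _ in range(max_queries):
        d = rng.integers(x_adv.size)
        for step in (eps, -eps):
            cand = x_adv.copy()
            cand.flat[d] = np.clip(cand.flat[d] + step, 0.0, 1.0)
            p = model.predict_proba(cand)[true_label]
            if p < p_true:
                x_adv, p_true = cand, p
                break
        if model.predict_proba(x_adv).argmax() != true_label:
            break  # the model no longer predicts the original class
    return x_adv

model = ToyClassifier(n_pixels=64, n_classes=5)
x = np.random.default_rng(2).random(64)           # a random "image" in [0, 1]
label = int(np.argmax(model.predict_proba(x)))
x_adv = black_box_attack(model, x, label)
print("fooled:", int(np.argmax(model.predict_proba(x_adv))) != label)
```

In practice, attacks like this are judged by query budget and perturbation size: the fewer model calls and the smaller the pixel changes needed to flip a prediction, the less robust the classifier.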


17 Dec 2020

Machine Learning: Automated DevOps and Threat Detection

Getting to grips with MLOps and ML threats.

AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Risk · Trends and Insights · AI Governance


Contact

Join the Team

Address

20-22 Wenlock Road
London N1 7GU

Social

LinkedIn

Twitter

© 2025 Advai Limited.

Cookie Policy | Privacy Policy | Terms and Conditions