07 Jul 2023

Biased Age Estimation Algorithms

Biased age estimation is a great example of algorithmic #discrimination. Such #AI algorithms are therefore unfit for use. Right?

Well, their use is threatening to happen anyway. With multiple US federal bills and the UK’s Online Safety Bill looking to legislate online age verification, improving the robustness of these systems is becoming increasingly urgent.

Words by Alex Carruthers
Age Estimation Bias

Women’s ages were underestimated, South Americans’ ages were generally overestimated, and Asians’ ages were also underestimated.


NIST’s 2014 finding of bias across age estimation algorithms was eye-opening. They’ve now reopened the call for developers to comment. We’re in strong support of NIST continuing to engage the AI community, challenging us to remove the discrimination from #artificialintelligence systems.

Read on to find out more!

Biased age estimation is a great example of algorithmic discrimination. Such AI algorithms are therefore unfit for use. Right?

What’s this, you ask?

Back in 2014, research by NIST (the US National Institute of Standards and Technology) reviewed many age estimation algorithms.

https://nvlpubs.nist.gov/nistpubs/ir/2014/NIST.IR.7995.pdf

NIST offered empirical validation to any provider of an AI age estimation algorithm who could demonstrate their models were free from bias.

In short, they weren’t.

Women’s ages were underestimated, South Americans’ ages were generally overestimated, and Asians’ ages were also underestimated.

This is clearly worrying given their increasingly widespread use.
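To make the scale of that bias concrete, a simple way to quantify it is the mean signed error (predicted minus true age) within each demographic group: negative values mean underestimation, positive values overestimation. The sketch below is illustrative only; the records, group labels, and values are placeholder assumptions, not NIST’s data or methodology.

    from statistics import mean

    # Hypothetical evaluation records: (true_age, predicted_age, demographic_group)
    records = [
        (34, 29, "female"),
        (34, 35, "male"),
        (27, 31, "south_american"),
        (42, 38, "asian"),
        # ... many more labelled samples in a real evaluation
    ]

    def signed_error_by_group(records):
        """Mean (predicted - true) age per group.
        Negative = ages underestimated, positive = overestimated."""
        errors = {}
        for true_age, predicted_age, group in records:
            errors.setdefault(group, []).append(predicted_age - true_age)
        return {group: mean(errs) for group, errs in errors.items()}

    print(signed_error_by_group(records))
    # e.g. {'female': -5, 'male': 1, 'south_american': 4, 'asian': -4}

A per-group breakdown like this is what reveals the pattern NIST reported; an aggregate error figure on its own would hide it.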

 

It’s urgent because the world is busying itself with online age validation.

Government-issued IDs might seem the most obvious option, but the question remains: how do you prove the holder of the ID is the one using the computer?

  • Not to mention the extra cost to government of issuing, updating, and maintaining a new ID system for minors.
  • Also, any age validation process which also identifies the individual raises enormous privacy and security concerns.


There are already examples of age estimation without an ID. Facebook and Instagram use it for their dating services:

  1. Meta is expanding its use of AI face scanning to verify users’ age on Facebook Dating - The Verge.
  2. France’s National Commission on Informatics and Liberty (CNIL) outlines a possible solution of “guessing” your age based on your online activity.
  3. In China, gamers who want to log on to play mobile games after 10pm must prove their age: Explainer: Why and how China is drastically limiting online gaming for under 18s | Reuters

NIST is now reopening its call to developers

They are inviting developers to comment on their methodology and to continue contributing their age estimation algorithms (FRVT Age Estimation (nist.gov)).

We’re in strong support of NIST continuing to engage the AI community, challenging us to remove the discrimination from AI systems.

‘Reducing the inscrutability of AI systems’ is one of our favourite themes from NIST’s guiding Responsible AI framework. This framework, at its core, defines #responsibleAI as human-centric, socially responsible, and sustainable.

 

 

Why is it important?

Robustness and fairness considerations are crucial when implementing age estimation technology.

Use cases which NIST refers to include:

1) purchases of restricted substances – perhaps the most obvious, but also

2) the age determination of people at a crime scene to aid with identification,

3) age-adaptive targeted marketing, to prevent restricted substances being advertised to minors, and

4) person identification, such as in missing-children’s cases.

It’s intuitively handy for businesses and parents of end users when considering nightclubs, tattoos, lottery sales, piercing, pharmaceutical medicine, adult entertainment, online dating… and perhaps some sections of the internet?

 

First, the demographic sensitivity (the bias) is one thing, yet the core issue is that the cause of this bias remains a mystery.

This is where Advai lives and breathes: “Yes… but *why* does it fail??”

 

Building trustworthy AI is extremely difficult and beyond the capability of most organisations. It requires the focus and investment of organisations like NIST and the concentrated R&D efforts of businesses like Advai.

We work tirelessly to ingest the best robustness and adversarial techniques from across industry.

There are underlying principles we’ve extracted from multiple frameworks, whitepapers, and pieces of draft legislation. We see several consistent principles across the board and have designed our metrics and tooling to cater to these areas specifically.

 

  1. Risk management. Risk assessment and considered management of AI systems.
    Advai = uncover fault tolerances of AI systems and define the operational boundaries for their use.

  2. Human centricity. Ethical considerations; understand the context of use/deployment.
    Advai = reduce discrimination with training data analysis, injecting synthetic data to mitigate any intrinsic bias.

  3. Accountability. Traceable, compliant, maintain human oversight and responsibility.
    Advai = keep clear records of testing, enable developers to call tests inline, and offer non-technical senior leadership dashboards to monitor compliance broadly.

  4. Security. Design for security, mitigate ML security threats and resist adversarial attacks (see the sketch after this list).
    Advai = extensive red-teaming to exploit vulnerabilities in AI models, thereby identifying ways to mitigate such attacks.

  5. Trustworthy. Reliable, understandable, and interpretable; apply continual learning.
    Advai = understanding when systems will break informs their appropriate use; workbenches are customised to industry compliance requirements.
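As an illustration of the red-teaming point in item 4, here is a minimal sketch of an FGSM-style robustness check: perturb inputs in the direction that most increases the model’s loss, then measure how far accuracy drops. The model, data, and epsilon below are placeholder assumptions, and this is a generic, well-known technique rather than Advai’s actual tooling.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, images, labels, epsilon=0.01):
        """Return images nudged in the direction that most increases the loss."""
        images = images.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(images), labels)
        loss.backward()
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

    def robustness_gap(model, images, labels, epsilon=0.01):
        """Accuracy drop between clean and perturbed inputs: a simple fault-tolerance signal."""
        model.eval()
        clean_acc = (model(images).argmax(dim=1) == labels).float().mean().item()
        adversarial = fgsm_attack(model, images, labels, epsilon)
        adv_acc = (model(adversarial).argmax(dim=1) == labels).float().mean().item()
        return clean_acc - adv_acc

Sweeping epsilon and watching where the gap becomes unacceptable is one way to put numbers on the “operational boundaries” idea in the risk management item.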

 

Give us a call to help make your AI regulation-ready.

Advai is a deep-tech AI start-up based in the UK. We have spent several years working with UK government and defence to understand and develop tooling for testing and validating AI, deriving KPIs throughout the AI lifecycle so that data scientists, engineers, and decision makers can quantify risks and deploy AI in a safe, responsible, and trustworthy manner.

If you would like to discuss this in more detail, please reach out to contact@advai.co.uk

Useful Resources