31 Oct 2020

Tricking The Trade

How High Frequency Trading AI Can Be Manipulated by Adversarial AI

Words by
Damian Ruck

IT’S NOT JUST PICTURES OF CATS AND STICKERS ON STOP SIGNS

If you take an introductory course on adversarial AI, you will likely be shown a series of cases where neural networks are not quite as clever as many think. You’ll see images of cats misidentified as boats, or a person in funny glasses going unrecognized by a facial recognition system. However, these adversarial examples cease being a sideshow curiosity when they align with nefarious intent.

Adversarial attacks exploit flaws in the artificial reasoning of algorithms. These misalignments with human expectation have serious consequences if a driverless car fails to see a stop sign, or a police database fails to recognize the face of a perpetrator. The greater the incentive to fool an AI system, the more we need to build up defenses.

Within the commercial realm, the rewards for gaining an edge are nowhere greater than in finance, where AI is increasingly taking over key aspects of trading strategy. High frequency trading has taken the human almost completely out of day-to-day decision making. Engineers set up trading algorithms to exploit fleeting inefficiencies in the market, often on the scale of microseconds. These systems are loaded with AI, and they are vulnerable to adversarial attacks that could seriously damage the interests of their owners.

Tricking the Trade

A trading algorithm is a pipeline made up of two parts: a ‘price predictor’ and a ‘trading strategy’. It is important to think about the incentives of an attacker, but it’s just as vital to identify the ways an attacker can access your system’s inputs. In high frequency trading, the ‘trading strategy’ part of the pipeline is fairly secure; it makes choices based on hardcoded rules or internal data that cannot be easily affected by an outside attacker.
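
To make the pipeline concrete, here is a minimal sketch in Python. The class names (`PricePredictor`, `TradingStrategy`), the naive momentum rule, and the fixed order size are illustrative assumptions, not how any real high frequency system is built:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Order:
    side: str   # "buy" or "sell"
    size: float

class PricePredictor:
    """Predicts the next price move from recent public trade data.
    This is the vulnerable half: its input is externally observable."""
    def predict(self, recent_prices: List[float]) -> float:
        # Naive momentum signal: assume the last move carries forward.
        return recent_prices[-1] - recent_prices[-2]

class TradingStrategy:
    """Turns a predicted move into an order via hardcoded internal rules.
    This half is far harder for an outsider to influence."""
    def decide(self, predicted_move: float) -> Optional[Order]:
        if predicted_move > 0:
            return Order("buy", 1.0)
        if predicted_move < 0:
            return Order("sell", 1.0)
        return None  # no edge, no trade

# External market data flows in one end; orders come out the other.
order = TradingStrategy().decide(PricePredictor().predict([100.0, 100.2, 100.1]))
print(order)  # Order(side='sell', size=1.0) -- the last move was down
```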

A price predictor, on the other hand, is vulnerable because it relies on external public data about trades in a particular market. This means an attacker can influence the predictor through strategically placed buy and sell requests. (In this blog we are only considering an external attacker; when it comes to insider threat, it becomes a whole new ballgame, and at Advai we are interested in both.)

But surely an attacker would have to be fabulously wealthy to buy or sell enough stock to distort a market? Actually, no. First, careful design of the attack can minimize its cost. Second, high frequency trading intrinsically lends itself to low-cost attacks: because high frequency traders make many decisions every second, the number of new trades arriving in any single decision interval is small. Even in a market with 10,000 trades an hour, that averages out at fewer than three new trades per second, making it much easier to move the market.
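
A quick back-of-envelope calculation shows why. The 10,000-trades-an-hour figure comes from the example above; the attacker’s two orders per second is an assumed rate, purely for illustration:

```python
trades_per_hour = 10_000
organic_rate = trades_per_hour / 3600            # ~2.78 new trades per second

attacker_rate = 2                                # assumed: two orders per second
share_of_new_flow = attacker_rate / (organic_rate + attacker_rate)

print(f"organic rate:           {organic_rate:.2f} trades/sec")
print(f"attacker share of flow: {share_of_new_flow:.0%}")   # roughly 42%
```

In other words, even a modest order stream can account for a large share of the new information a predictor sees at each tick.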

Computer scientist Micah Goldblum and his colleagues at the University of Maryland have used financial data to design a ‘universal attack’ on trading systems. This means they can generate a series of buy/sell requests for a particular stock that reliably degrades the performance of any trading system, even one they have never seen before. This discovery should alarm folks in high frequency trading.
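
The paper’s method is more sophisticated than anything that fits here, but the core idea, optimizing a single perturbation against an ensemble of surrogate models so that it transfers to systems the attacker has never seen, can be sketched. Everything below (the random linear surrogates, the additive price perturbation, the budget) is an illustrative assumption, not the authors’ setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trading systems the attacker has never seen: an ensemble
# of random linear predictors over a window of recent prices.
n_models, window = 5, 20
surrogates = [rng.normal(size=window) for _ in range(n_models)]

def predict(w: np.ndarray, prices: np.ndarray) -> float:
    return float(w @ prices)

# One perturbation (modelled as a small additive price distortion caused by
# injected orders), optimized to push EVERY surrogate's prediction down,
# subject to a budget that caps how much the attacker must trade.
prices = rng.normal(size=window).cumsum()
delta = np.zeros(window)
budget, lr = 0.5, 0.05

for _ in range(200):
    grad = sum(surrogates)       # gradient of the summed linear predictions
    delta -= lr * grad           # descend: drive all predictions downward
    norm = np.linalg.norm(delta) # (nonlinear models would need this gradient
    if norm > budget:            #  recomputed at every step)
        delta *= budget / norm   # project back onto the attack budget

before = np.mean([predict(w, prices) for w in surrogates])
after = np.mean([predict(w, prices + delta) for w in surrogates])
print(f"mean prediction before: {before:+.3f}, after: {after:+.3f}")
```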

Next Generation Threats

These adversarial attacks harness the speed of modern computers and state-of-the-art optimization algorithms, opening the door to new market manipulations that no human has ever thought to defend against. What is striking, however, is that these algorithms also rediscover old and well-known manipulations, even ones long since made illegal. One such rediscovery is “spoofing”, where a trader places disingenuous buy/sell requests at the margins of a market, only to cancel them once the market has moved in their favour. The practice was made illegal in the USA under the 2010 Dodd-Frank Act.
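
For readers unfamiliar with the pattern, here is a stylized spoofing sequence. The prices and sizes are invented, and the snippet only prints the order flow; it describes the manipulation, it is not a tool:

```python
# A stylized spoofing sequence (illegal in the US under Dodd-Frank), shown
# purely to illustrate the pattern an optimizer can rediscover:
spoof_sequence = [
    ("place",  "buy",  99.50, 5000),  # 1. large bids below the best price,
    ("place",  "buy",  99.40, 5000),  #    signalling (fake) buying pressure
    ("place",  "sell", 100.10, 100),  # 2. small genuine sell at a lifted price
    ("cancel", "buy",  99.50, 5000),  # 3. cancel the spoof orders before
    ("cancel", "buy",  99.40, 5000),  #    they can ever be filled
]
for action, side, price, size in spoof_sequence:
    print(f"{action:6s} {side:4s} {size:5d} @ {price:.2f}")
```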

This ability to uncover new market manipulations should also interest financial regulators. Spoofing was handcrafted by generations of traders and practiced for decades before finally being outlawed. What should our response be when adversarial attackers could discover hundreds of these manipulations in a matter of hours rather than years?

Next Generation Defense

But all is not lost for those building the next generation of high frequency trading systems. Firstly, these systems can be hardened against adversarial attackers through a process called “adversarial training”: exposing a trading algorithm to adversarial manipulations during training, so that it learns to recognize them in the wild. This involves a careful balancing act, increasing robustness whilst maintaining predictive performance (another area of Advai’s research and offer to our clients).
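
A minimal sketch of what adversarial training can look like for a price predictor, using a linear model on synthetic data and an FGSM-style worst-case perturbation of each input window. The model, the data, and the epsilon budget are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
window, epsilon, lr = 10, 0.1, 0.01
w = np.zeros(window)                  # linear price-predictor weights

def grad_w(w, x, y):
    return 2 * (w @ x - y) * x        # squared-error gradient w.r.t. w

for step in range(2000):
    x = rng.normal(size=window)       # synthetic window of price moves
    y = x[-1] + 0.1 * rng.normal()    # synthetic "next move" target
    # Worst-case input perturbation within the budget: step each feature
    # in the direction that most increases the loss (sign of d loss / d x).
    x_adv = x + epsilon * np.sign((w @ x - y) * w)
    for xi in (x, x_adv):             # fit clean AND adversarial copies
        w -= lr * grad_w(w, xi, y)

print("learned weights (last lag should dominate):", np.round(w, 2))
```

The balancing act shows up in the epsilon parameter: too small and the model stays fragile, too large and clean-data accuracy suffers.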

Secondly, the fact that financial companies jealously guard their trading algorithms helps with adversarial defence. We mentioned that ‘universal attacks’ can be designed without the attacker knowing the particulars of a given trading strategy; however, such attacks are fewer and cruder than the subtle array of attacks an adversary can generate with full access to the trading algorithm. In high frequency trading, securing your algorithm can give you the edge in the adversarial arms race.

Another piece of advice for designing robust trading algorithms: keep things simple. Adversarial attacks are less effective against simple linear models than against fancy neural networks. It would therefore be wise to hold back on rolling out your new LSTM price predictor if it does not significantly outperform your old linear autoregression. Deep neural networks often offer better prediction, but they also bring their own unique set of problems.
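
As a concrete starting point, a linear autoregression baseline takes only a few lines; it is the bar a deep model should clear before its extra attack surface is worth carrying. The random-walk data below is, of course, a placeholder:

```python
import numpy as np

rng = np.random.default_rng(2)
prices = rng.normal(size=500).cumsum()   # placeholder random-walk prices
lags = 5

# Build (lagged window -> next price) pairs and fit by least squares.
X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
y = prices[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"in-sample RMSE of the linear AR baseline: {rmse:.4f}")
# Only move to a deep model if it beats this by a margin that survives
# out-of-sample testing -- and budget for its extra adversarial fragility.
```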

Alternatively, bring in an expert. Advai was established specifically to identify, understand and mitigate Adversarial AI attacks. If you would like to talk to us about your system and whether it may be vulnerable, or simply to understand the risk, get in touch at contact@advai.co.uk.