Calibration can be a balancing act, with the tension between what customers want and what regulators require creating a technological challenge for vendors. The key? Use AI-like capabilities, but know where to turn them off.
The biggest cost of operating a trade surveillance system is not software licence fees but paying staff to wade through the vast number of false positives that such systems generate. Minimising those false positives is therefore a top priority for banks, brokers and asset managers.
Using advanced statistics or machine learning to capture multiple dimensions of behaviours and activities in one system can help, particularly for detecting active market manipulation strategies such as spoofing or layering.
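To illustrate what “multiple dimensions of behaviours” might mean in practice, the hedged sketch below combines several simple order-flow statistics – cancel rate, order-to-trade ratio and side imbalance, all hypothetical choices, not features named in the article – into one vector that a single detection model could score. Spoofing and layering tend to leave traces across several such dimensions at once:

```python
def behaviour_features(orders):
    """Condense a trader's recent order flow into one multi-dimensional
    feature vector. The specific features here are illustrative only."""
    placed = len(orders)
    cancelled = sum(o["cancelled"] for o in orders)
    traded = placed - cancelled
    buys = sum(o["side"] == "buy" for o in orders)
    return {
        # Layering leaves many cancelled orders behind.
        "cancel_rate": cancelled / placed,
        # Many orders but few fills is another classic marker.
        "order_to_trade": placed / max(traded, 1),
        # Heavy pressure on one side of the book (0 = balanced, 1 = one-sided).
        "side_imbalance": abs(2 * buys / placed - 1),
    }

# Toy usage: three cancelled buy orders and one executed sell.
orders = [
    {"side": "buy", "cancelled": True},
    {"side": "buy", "cancelled": True},
    {"side": "buy", "cancelled": True},
    {"side": "sell", "cancelled": False},
]
features = behaviour_features(orders)
```

A model looking at all three numbers together can flag a pattern that no single threshold on any one of them would catch.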
However, “no matter how good, how smart, how crisp or how machine learning-based a surveillance system is, the majority of its findings are always going to be false positives,” acknowledges Dermot Harriss, senior vice president at OneMarketData.
Clarity and appropriate calibration
Efforts to reduce false positives can also tempt financial institutions to change their system settings to generate fewer alerts. This puts them at risk of being found deficient by regulators, which are increasingly hawkish about the calibration of surveillance systems, Harriss notes.
The UK’s FCA, for example, reiterated in its September 2018 Market Watch newsletter concerns about firms using ‘out of the box’ or ‘industry standard’ settings to calibrate alert parameters and relying on average peer alert volumes as a measure of calibration effectiveness. In the US too, J.P. Morgan Securities was fined $800,000 by the enforcement divisions of the country’s three major exchange groups in July 2017, partly for mis-setting the parameters of a third-party surveillance system to the degree that it failed to detect potentially violative spoofing activity over three months in 2015.
Regulators are calling for full transparency from the firms they oversee, requesting that surveillance systems use rules that describe behaviour clearly and result in easily auditable alerts. This creates another potential tension, says Harriss. While financial institutions want to employ machine learning to help reduce false positives, the dynamic parameters associated with such technologies can generate alerts that are harder to explain or audit, he explains.
A multi-layered solution
Satisfying all these imperatives – financial institutions’ desire to minimise effort and regulators’ insistence that systems are crisp, revision-controlled, auditable and appropriately parameterised – is a technological challenge for vendors.
OneMarketData’s approach is to initially take the ‘smartness’ out of its main system so that it satisfies regulatory expectations by showing every single alert that matches a pattern. An external smart system then conducts a further, deeper analysis, bringing in exogenous data sources such as trader profiling information or market data sources, and uses them to rank the same set of alerts.
“It puts the most interesting ones first, according to what can be a very sophisticated ranking algorithm like one based on machine learning,” Harriss explains. “That way, it’s the customer’s choice. They can focus on the ones at the top, and if they don’t have time, they can do a quicker, less deep analysis of the ones that are below that in rank.”
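The separation Harriss describes can be sketched in a few lines. This is a hypothetical illustration of the architecture, not OneMarketData’s actual code: stage one is a deterministic rule that emits an alert for every single match, so the output is fully auditable; stage two is an external scorer that reorders – but never discards – those alerts:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    order: dict          # the activity that matched the fixed rule
    pattern: str         # name of the rule that fired
    score: float = 0.0   # filled in later by the external ranker

def detect(orders: List[dict], name: str,
           predicate: Callable[[dict], bool]) -> List[Alert]:
    """Stage 1: a fixed, auditable rule. Every match becomes an alert."""
    return [Alert(order=o, pattern=name) for o in orders if predicate(o)]

def rank(alerts: List[Alert],
         score_fn: Callable[[Alert], float]) -> List[Alert]:
    """Stage 2: an external model scores the same alerts. None are
    dropped; they are only reordered, best-evidenced first."""
    for a in alerts:
        a.score = score_fn(a)
    return sorted(alerts, key=lambda a: a.score, reverse=True)

# Toy usage: flag every cancelled order, then rank by order size
# (a stand-in for a far more sophisticated ML-based ranking).
orders = [{"id": 1, "qty": 100, "cancelled": True},
          {"id": 2, "qty": 5000, "cancelled": True},
          {"id": 3, "qty": 50, "cancelled": False}]
alerts = detect(orders, "cancel_pattern", lambda o: o["cancelled"])
ranked = rank(alerts, lambda a: a.order["qty"])
print([a.order["id"] for a in ranked])  # prints [2, 1] -- largest order first
```

Because the rule and the ranker are separate components, the alert list a regulator sees is exactly what the fixed rule produced, whatever model is used to order it.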
In line with guidance from most regulators that no alerts be ignored, OneMarketData always advises customers to look at every single one, Harriss stresses. However, “it makes sense to pay the most attention to the ones that have the best supporting evidence, because those will be the ones you can go after people for and shut them down. This also saves you a lot of money, because if you shut down the spoofer or the manipulator, they’re not going to be creating more alerts for you to spend your time on.”
Best of both worlds
The system also helps firms satisfy regulatory requirements in terms of appropriate calibration, giving clients a degree of control over their parameters.
While patterns in the system are based on a fixed rule, the market statistics that the model is sensitive to are dynamic and constantly being recalculated. In this way, it is able to mimic how a machine learning model calibrates itself as, for example, average spread or instrument-specific characteristics of the market change. This removes the need for clients to have manual control of their parameters, while still meeting, for example, the FCA’s requirements that parameters be appropriately tuned to the conditions of the market and to a firm’s flow.
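One way to picture a fixed rule driven by a dynamic market statistic – a hypothetical sketch, assuming a rolling average spread as the recalculated statistic – is a rule whose auditable parameter never changes (“flag any quote wider than k times the average spread”) while the average it is compared against is recomputed on every tick:

```python
from collections import deque

class DynamicSpreadRule:
    """A fixed rule whose threshold input is a market statistic that is
    constantly recalculated, so the effective trigger level tracks
    current market conditions without anyone retuning a parameter."""

    def __init__(self, k: float = 3.0, window: int = 100):
        self.k = k                            # fixed, auditable parameter
        self.spreads = deque(maxlen=window)   # rolling window of spreads

    def on_quote(self, bid: float, ask: float) -> bool:
        spread = ask - bid
        self.spreads.append(spread)
        avg = sum(self.spreads) / len(self.spreads)  # dynamic statistic
        return spread > self.k * avg          # fires vs. *current* market

# Toy usage: five normal quotes establish a ~0.10 average spread,
# then a 1.00 spread stands out against it and fires the rule.
rule = DynamicSpreadRule(k=3.0, window=5)
for _ in range(5):
    rule.on_quote(99.90, 100.00)   # normal market: no alert
fired = rule.on_quote(99.00, 100.00)   # abnormal quote: fires
```

If the whole market later widens, the average widens with it and the same rule stays appropriately tuned, which is the self-calibrating behaviour described above.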
And while parameters are dynamic, the system also meets some regulators’ requirements that users are able to demonstrate what version of the code and what version of the parameters resulted in any individual historical alert, Harriss adds.
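Demonstrating which code and parameter versions produced a historical alert amounts to freezing that context alongside each alert at the moment it fires. The sketch below is a hypothetical illustration of the idea (the version tag and field names are invented for the example):

```python
import hashlib
import json
from datetime import datetime, timezone

CODE_VERSION = "rules-1.4.2"   # hypothetical revision tag for the rule code

def audit_snapshot(params: dict) -> dict:
    """Freeze the code version and the dynamic parameter values that were
    in force when an alert fired, so the exact context behind any
    historical alert can be reproduced later."""
    blob = json.dumps(params, sort_keys=True)
    return {
        "code_version": CODE_VERSION,
        "params": params,
        "params_hash": hashlib.sha256(blob.encode()).hexdigest()[:12],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Toy usage: an alert carries the snapshot taken at detection time.
alert = {
    "pattern": "spoofing",
    "order_id": 42,
    "audit": audit_snapshot({"k": 3.0, "avg_spread": 0.28}),
}
```

The hash gives a compact, tamper-evident fingerprint of the parameter set, even when the parameters themselves are changing from tick to tick.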
Ultimately, machine learning and AI-type capabilities clearly have much to contribute to trade surveillance, Harriss concludes. However, “we advocate for a very clear separation. That way, you can get the best of both worlds.”