2.3: The lexicon is dead, long live the lexicon

Lexicons may be conceptually crude and unrefined, but they are still needed as the building blocks of the more sophisticated, layered approaches to communications surveillance now being developed.

There is nothing inherently wrong with the concept of lexicon-based surveillance. After all, the start of any investigation into market abuse, at least until AI-driven algos pit black box against black box, is what was communicated. To make the vast datasets of what is said inside even a moderately sized financial institution amenable to some form of analysis, pre-defining word lists that may be able to highlight dubious interactions seems a reasonable first step. 
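In its simplest form, a lexicon screen is just a word-list match over each message. A minimal sketch of the idea, with invented terms and messages rather than anything drawn from a real surveillance lexicon:

```python
# Minimal sketch of a lexicon screen. The terms and messages below are
# invented for illustration, not drawn from any real surveillance lexicon.
import re

LEXICON = {"off the record", "delete this chat", "before the announcement"}

def screen(message: str) -> set[str]:
    """Return any lexicon terms found in a message (case-insensitive)."""
    text = message.lower()
    return {t for t in LEXICON if re.search(r"\b" + re.escape(t) + r"\b", text)}

for msg in ["Let's keep this off the record until the close.", "Lunch at 1?"]:
    hits = screen(msg)
    if hits:
        print(f"ALERT {hits}: {msg}")
```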

A combination of general lexicons and lexicons tailored to specific forms of market abuse and misconduct, such as wash trades, cross-market and cross-venue manipulation, and insider dealing, is still regarded by regulators and industry standards groups as a core part of best practice, both for market abuse and for signalling other types of financial crime. 

For example, in the FCA’s June 2019 Thematic Review, Understanding the Money Laundering Risks in the Capital Markets,[1] we read: “We observed good practice where participants revised terminology or lexicons for their electronic communications surveillance systems to incorporate lessons learned from money-laundering case studies in the media, such as the Deutsche Bank case.”

However, the continued emergence of significant misconduct, from the big FX and Libor scandals to bad practice in US residential mortgage-backed security markets, suggests that current communications surveillance models are still not particularly good at picking up even relatively straightforward and old-fashioned types of misconduct, let alone more complex problems like the current multibillion-euro cum-ex scandal unfolding across European markets. 

It’s what we don’t say
These are clearly not failures of lexicons alone, but the published transcripts show how difficult it is to determine what people mean from their conversations, let alone subtler signals like intent.

Partly this is a function of market jargon, but mostly it is because speech is messy, and in guarded conversations speakers often rely on each other to finish their sentences. In case after case, a handful of transcribed words tailing off into silence (as both parties supplied the ends of the sentences internally) represented the key wrongdoing. Explaining what was going on required pages of analysis simply to establish what the traders were discussing, let alone whether it was prohibited. Investigators had to use human analysts experienced in the same roles as the wrongdoers.

An example from this year’s Singapore F1: “It was a very late call, I thought it was a bit early,” Vettel told David Coulthard in the immediate post-race interviews. This looks nonsensical – how can a late call be early? But what Vettel meant was that the call came almost too late for him to react and divert to the pits, but looked too early in the race to be the best tyre strategy.

In addition, simple word-based alert systems are programmed to highlight every occurrence of a particular word, regardless of context. This is a major contributor to the infamous ‘alert factory’ model of surveillance with which banks currently struggle. 

Lexicons produce unmanageably high levels of false positives, making the alerts almost as ineffective as no alerts at all. Furthermore, many systems generate alerts from only a random sample (2–5%) of total communications, which further reduces real effectiveness. As highlighted in the 1LoD 2020 Surveillance Benchmark Survey, 69% of respondents rated data cleaning and validation a high priority in order to reduce the manual effort required to review false positives.
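A back-of-the-envelope calculation, using purely hypothetical numbers, shows how sampling caps effectiveness regardless of how good the lexicon itself is:

```python
# Hypothetical numbers, for illustration only: a problematic message is
# caught only if it is both sampled AND flagged by the lexicon.
sample_rate = 0.05         # only 5% of communications are ever screened
recall_if_screened = 0.80  # assume the lexicon flags 80% of bad messages it sees

effective_recall = sample_rate * recall_if_screened
print(f"Effective detection rate: {effective_recall:.0%}")  # prints 4%
```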

Intelligent communications surveillance
To reduce false positives (and so reduce compliance costs), and to improve the detection of actual conduct risks, banks are looking at more sophisticated voice and text analytics, such as natural language processing (NLP) and various forms of AI. Some vendors claim that by understanding the context of communications they can reduce alerts by more than 90%, as well as automating trade reconstruction and compliance workflows – all while reviewing 100% of communications.

These solutions use NLP to analyse speech and text and to extract quote and trade details from the communications. They allow human analysts to classify communications, then learn from those tags to improve their detection and identification algorithms. The hope is that their ability to identify patterns of communication will form the basis of more accurate alerts than lexicon-based systems. 
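As a sketch of the ‘learn from analyst tags’ step, one plausible realisation is a standard supervised text classifier, shown here with scikit-learn on invented examples. Real systems train on large volumes of analyst-labelled communications and far richer features; this is not any vendor’s actual method:

```python
# One plausible realisation of "learn from analyst tags": a supervised text
# classifier trained on communications that analysts have labelled.
# Training examples are invented for illustration. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "can you work the order quietly before the number comes out",
    "confirming 5m EUR/USD at 1.0845, value spot",
    "let's move this chat to my personal phone",
    "monthly risk report attached for review",
]
labels = [1, 0, 1, 0]  # 1 = analyst escalated, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new communication instead of firing on raw keyword hits.
print(model.predict_proba(["take it off channel before the announcement"])[0][1])
```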

These systems also provide critical metadata from a voice file alongside the transcription. As well as timestamps and language detection, this data includes emotion-detection signals derived from, for example, tone or pauses in speech. This is the beginning of the ability to detect intent. 
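The kind of per-segment record such a system might attach to a call transcript could look like the following sketch; the field names are illustrative, not any vendor’s schema:

```python
# Sketch of per-segment call metadata of the kind such systems might attach
# to a transcript. Field names are illustrative, not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class VoiceSegment:
    start_s: float         # segment offset within the call (seconds)
    end_s: float
    speaker: str           # channel/speaker identification
    text: str              # transcribed speech
    language: str          # detected language code, e.g. "en"
    pause_before_s: float  # length of silence preceding the segment
    emotion_signals: dict = field(default_factory=dict)  # e.g. {"stress": 0.7}
```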

Not dead yet
However, currently these solutions are seen, particularly by regulators, as not sufficiently proven to be relied upon by themselves. So lexicons live on, and in fact are being used to strengthen even some of the most sophisticated methodologies. 

For example, one approach to voice surveillance is voice-to-text, in which software accurately transcribes voice channels into digitised text, which can then be fed into the lexicon-based e-comms monitoring process. Another is to analyse voice communications by recognising phonemes, but to apply a lexicon to that phonetic analysis to improve transcription accuracy and make search and retrieval of analysed comms easier. 
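A sketch of the voice-to-text route, with a placeholder standing in for the speech-to-text engine (it is not a real API), shows how the transcript can simply be dropped into the existing lexicon screen used for written communications:

```python
# Sketch of the voice-to-text route: transcribe a call, then feed the text
# into the same lexicon screen used for written e-comms. `transcribe` is a
# stand-in for whatever speech-to-text engine a firm uses, not a real API.
LEXICON = {"off the record", "before the announcement"}

def transcribe(audio_path: str) -> str:
    """Placeholder: a real implementation would call a speech-to-text engine."""
    return "let's keep this off the record until after the announcement"

def surveil_call(audio_path: str) -> set[str]:
    text = transcribe(audio_path)                     # voice -> digitised text
    return {t for t in LEXICON if t in text.lower()}  # e-comms lexicon screen

print(surveil_call("trader_desk_line_0093.wav"))
```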

And NLP itself benefits from lexicons, too. NLP-based models are only more accurate than lexicon-based ones if they understand financial industry language. So third-party vendors find it beneficial to ‘tune’ their models using the very lexicons the new systems are designed to render obsolete. 
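One plausible way a lexicon can ‘tune’ a model, sketched below with invented terms, is to use lexicon hits as a weak labelling signal that seeds training data for analysts to then correct; vendors may equally tune vocabularies or features instead:

```python
# One way a lexicon can 'tune' an NLP model: use lexicon hits as a weak
# labelling signal to bootstrap training data, which analysts then correct.
# Purely illustrative; vendors may tune vocabularies or features instead.
LEXICON = {"front run", "guarantee you a price", "cancel before it prints"}

def weak_label(text: str) -> int:
    """Noisy label: 1 if any lexicon term appears, else 0."""
    return int(any(term in text.lower() for term in LEXICON))

corpus = [
    "I can guarantee you a price before the fix",
    "settlement instructions updated for tomorrow",
]
seed_labels = [weak_label(t) for t in corpus]  # [1, 0] -> seeds model training
print(seed_labels)
```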

As one vendor explains: “Most of the surveillance systems that use AI and NLP also provide lexicons, as those are still useful for some needs, and very powerful if combined with AI and NLP.”

Nice to share
For the time being, a combination of imperfect lexicons and equally imperfect next-generation solutions looks most likely to be the next step in the evolution of communications surveillance.

Meanwhile, banks continue to find ways to refine their lexicons, which will go on serving as a first line of defence, with AI/NLP solutions operating as a filter to reduce false positives and to identify anomalous behavioural patterns. 

We’ll be hosting a Boardroom Debate: E-Comm Surveillance at the Surveillance Summit, March 18th, London. Participating in the debate is Paul Clulow-Phillips, Managing Director, Global Head of Capital Markets Surveillance, Société Générale.

Based on an international benchmarking survey of industry-leading experts from 15 of the largest financial institutions globally, the 2020 Surveillance Benchmark Report provides a unique insight into the maturity and development of surveillance functions over the last 12 months, as well as predictions for the future. Including in-depth commentary from regulators, practitioners, consultants and technology experts, it is the only report of its kind for professionals in the industry.

Lead sponsor: Soteria. Partner sponsors: ACA, DR, Eventus Systems, OneTick. Researched and published by 1LoD.