Why AI in AML makes sense
By Alaina Webster
Recently, federal regulatory agencies issued a joint statement that, boiled down, encourages financial institutions to explore how artificial intelligence (among other new technologies) might help them meet their Bank Secrecy Act/anti-money laundering compliance requirements.
In a December 2018 American Banker article, Kevin Wack noted that some in the financial industry have questions about the soundness of introducing AI into the BSA/AML landscape — notably, how much more effective and efficient it would really be, and whether compliance, in which transparency is key, could flourish in AI's so-called "black box."
I sat down with Wayne, Pa.-based QuantaVerse’s Chief Revenue Officer Kamil Kaluza to find out.
In terms of efficiency, Kaluza points out that using machine learning and natural language processing (two concepts that fall under the wider umbrella of AI) to review data saves time by dramatically cleaning up that data.
“The idea of improving that data — existing transaction monitoring systems, with AI, are able to produce significantly fewer false positives, much less noise,” he says. “We’ve seen about a 40 percent reduction on those, easily, from that technology.”
Moreover, AI allows for the automation of many time-consuming processes. “Human beings, right now, they have to Google, they have to cross-check … we’re able to automate about 70 percent of that … it’s less for the humans to do.”
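The false-positive reduction Kaluza describes can be pictured as a second-pass triage layer. The sketch below is purely illustrative — it is not QuantaVerse's model, and the feature names, weights, and threshold are invented for the example. A classifier (here a hand-rolled logistic scorer) re-scores alerts raised by a rules-based transaction monitoring system, and low-scoring alerts are routed out of the analyst queue as likely false positives.

```python
import math

# Hypothetical feature weights a trained model might learn (illustrative only)
WEIGHTS = {"amount_zscore": 1.2, "new_counterparty": 0.8, "high_risk_geo": 1.5}
BIAS = -2.0
THRESHOLD = 0.5  # alerts scoring below this are deprioritized

def risk_score(alert: dict) -> float:
    """Logistic score in [0, 1] computed from an alert's features."""
    z = BIAS + sum(WEIGHTS[k] * alert.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def triage(alerts: list) -> tuple:
    """Split alerts into (escalate, deprioritize) piles by model score."""
    escalate = [a for a in alerts if risk_score(a) >= THRESHOLD]
    deprioritize = [a for a in alerts if risk_score(a) < THRESHOLD]
    return escalate, deprioritize

# Two made-up alerts: one genuinely suspicious, one routine
alerts = [
    {"id": 1, "amount_zscore": 3.0, "new_counterparty": 1, "high_risk_geo": 1},
    {"id": 2, "amount_zscore": 0.2, "new_counterparty": 0, "high_risk_geo": 0},
]
hot, cold = triage(alerts)  # alert 1 escalates; alert 2 is deprioritized
```

In practice the weights would be learned from historical alert dispositions rather than hand-set, but the triage shape is the same: the rules engine stays in place, and the model cuts the noise it produces.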
And AI isn't just faster than humans; it's better, Kaluza maintains. Because it is trained to analyze and compare very large samples of data, a trained machine can spot things a human might miss.
“Understanding economic purpose is key,” says Kaluza. “For instance, understanding the supply chain of an organization is really important because if you’ve got a cement company that is buying gravel, that is buying limestone that might make perfect sense, but a cement company that’s investing heavily in Christmas trees might not make sense.
“There’s just a lot of learning that happens in terms of what sorts of industries trade with each other,” he says. “What direction is that trade? What’s the frequency of that trade? What’s the volume, the value of that trade … Why those transactions?” AI can answer these questions and interpret the findings in a fraction of the time a human counterpart would need.
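Kaluza's cement-company example can be reduced to a simple idea: maintain a learned picture of which industry pairs plausibly trade with each other, and flag transactions that fall outside it. The sketch below is only a toy illustration of that "economic purpose" check — the industry labels and the plausible-pairs table are invented, and a real system would learn these relationships, along with direction, frequency, and volume, from data rather than from a hard-coded set.

```python
# Toy "economic purpose" check -- not QuantaVerse's system.
# Pairs of (buyer_industry, seller_industry) that plausibly trade.
PLAUSIBLE_TRADE = {
    ("cement", "gravel"),      # cement makers buy aggregates
    ("cement", "limestone"),   # ... and limestone
    ("retail", "logistics"),   # retailers buy shipping
}

def lacks_economic_purpose(buyer_industry: str, seller_industry: str) -> bool:
    """True if the trade has no obvious economic purpose and merits review."""
    return (buyer_industry, seller_industry) not in PLAUSIBLE_TRADE

# A cement company buying limestone makes sense; one buying
# Christmas trees gets flagged for an analyst to examine.
ok = lacks_economic_purpose("cement", "limestone")        # False
flagged = lacks_economic_purpose("cement", "christmas_trees")  # True
```

The value of framing it this way is that the flag is explainable: an analyst can see exactly which relationship was missing from the learned trade map, which matters in a regulated setting.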
In terms of accuracy, Kaluza freely admits that measuring a negative is difficult and much of what is done with the data depends on the FI in question. “One of the challenges is there’s no such thing as absolute truth in this realm,” he says. “We’re not in the business of finding crime, per se, we’re in the business of finding suspicious activity. The idea of being accurate becomes a little misleading — it all starts to swing around your understanding of risk, and every financial institution has its own understanding of its own risk appetite.”
As for that infamous “black box,” Kaluza says that’s largely “marketing speak.” For technologies such as self-driving cars, Siri or Google Maps, where intellectual property and trade secrets are paramount, the black box is necessary. But in the highly regulated world of finance, “the black box becomes untenable,” he says.
“The thing to remember about these quote-unquote AI technologies is that they are combinations of multiple different kinds of technologies that are applied together. Some of them do learning, some of them sort of translate meanings and sentiments, that sort of thing,” Kaluza says. “If you want, you can, in fact, open up this quote-unquote black box and look at the rationale, the reasoning behind all of its decisions.”
Alaina Webster is Managing Editor, firstname.lastname@example.org.