It’s Time To Embrace AI

We are living through what has been described as the ‘exponential age.’ The pace at which technology is developing is staggering, particularly in relation to artificial intelligence (AI). Inevitably, this raises the question: can we – as compliance practitioners – keep up?

Silent Eight’s Vice President UKI & MEA, Pia Ffoulkes, has a key message on AI for financial crime practitioners looking to prepare themselves for the future: “Embrace it.”

“I came from a background at traditional screening vendors. When I took the plunge into start-up, scale-up life, I was so excited to learn everything about AI. The advice I would give to anybody who’s starting in AI, or whose organization is starting to bring it in, is: consume everything you can. This is the time to be a sponge,” she advises.

Real-Time Results

AI encapsulates a range of technologies that have applications across all areas of modern life. Within the financial crime compliance context, natural language processing (NLP), natural language generation (NLG), and machine learning are rapidly coming to the fore.

NLP enables unstructured data – such as emails, chat messages, web pages, news articles, legal documents, and SWIFT transaction messages – to be analyzed quickly to extract useful information and insights. NLG can then be used to summarize the resulting key findings and decision rationales in plain English.
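
To make this concrete, the sketch below shows the kind of pipeline described above: an NLP step that pulls entities out of an unstructured message, followed by a simple plain-English summary. It is a minimal illustration using the open-source spaCy library; the model name and the sample message are assumptions for the example, not details of any vendor’s product.

```python
import spacy

# Load a small general-purpose English pipeline (illustrative only;
# production screening systems would use models tuned for this domain).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

message = (
    "Payment of USD 250,000 from Acme Holdings Ltd to a beneficiary "
    "in Panama, referencing invoice INV-4412."
)

# NLP step: extract named entities (organizations, money, places).
doc = nlp(message)
entities = [(ent.text, ent.label_) for ent in doc.ents]

# Simple NLG-style step: render the findings as a plain-English summary.
summary = "Key findings: " + "; ".join(
    f"{label}: {text}" for text, label in entities
)
print(summary)
```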

Meanwhile, machine learning solutions and algorithms can automatically classify and cluster transactions, customers, and other data points to rapidly identify anomalies and potential fraud.
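
As a simple illustration of this idea, the sketch below flags an anomalous transaction using scikit-learn’s IsolationForest, a common unsupervised anomaly-detection technique. The features, values, and contamination setting are invented for the example, not a production configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: [amount_usd, hour_of_day].
transactions = np.array([
    [120.00, 10],
    [85.50, 14],
    [99.00, 11],
    [110.00, 15],
    [250000.00, 3],  # unusually large amount at an unusual hour
])

# Fit an unsupervised isolation forest; fit_predict returns -1 for anomalies.
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(transactions)

for (amount, hour), label in zip(transactions, labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"amount={amount:>10.2f} hour={int(hour):>2} -> {status}")
```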

“All of this happens in real time (versus sitting in analyst queues, sometimes for days or months),” explains Ffoulkes, “enabling institutions to manage risks better as they emerge. Our clients have observed a significant reduction in response times for escalating and managing financial crime exposure.”

She continues: “Imagine finding a needle in a haystack. That’s how the current compliance processes usually come across. The analysts go through hundreds and thousands of case investigations to identify the true financial crime risks. Sometimes it’s already too late by the time these risks are identified and actioned.”

Job Security

The flip side of these potential improvements in efficiency and effectiveness is, for some, a fear for their job security. First and foremost, Ffoulkes is keen to debunk the idea that AI is going to take over people’s jobs.

“AI is not here to replace you,” she stresses. “It should be a tool that you can add to your skillset, so if you can understand it, you can work with it, and you will become much more experienced.”

“I don’t see the financial crime practitioner disappearing,” she adds. “I see them becoming more skilled and more specialized. There won’t be this need to take on the straightforward investigative processes that they’re doing today.

“Hopefully, we’ll see level 1-type tasks fully automated. There’s still going to be a need for practitioners, but they are going to be far more advanced in their skillset and far more focused on, for example, risk assessment, decisioning, and collating data.”

Embrace AI And Follow A Pro-Innovation Path

This chimes with the message currently coming from the UK government, which, at the end of March, published a policy paper titled ‘A pro-innovation approach to AI regulation.’[1]

In it, Michelle Donelan, Secretary of State for Science, Innovation, and Technology, stated: “Our vision for a future AI-enabled country is one in which our ways of working are complemented by AI rather than disrupted by it.

“In the modern world, too much of our professional lives is taken up by monotonous tasks – inputting data, filling out paperwork, scanning through documents for one piece of information, and so on. AI in the workplace has the potential to free us up from these tasks, allowing us to spend more time doing the things we trained for.”

AI Anxieties

Nevertheless, there is growing unease around the world about AI, expressed recently in an open letter published by the Future of Life Institute, a think tank, and co-signed by the likes of Elon Musk and Steve Wozniak.[2]

This letter provoked fierce debate by calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than the much-discussed GPT-4.

The open letter states: “AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should, at a minimum, include new and capable regulatory authorities dedicated to AI.”

A lot of the concern around AI relates to the data used to train it. Data privacy issues, for example, recently saw JPMorgan restrict its staff’s use of ChatGPT,[3] while Italy’s data protection authority blocked the tool in the country altogether.[4]

Ffoulkes explains: “AI solutions often require large amounts of training data to be successful, so before embarking on any AI journey, a financial institution should understand the applicable laws in the relevant jurisdictions that it operates in.

“Every country and every financial firm will have its own unique risk appetite when it comes to sharing and processing client data, so it is important to make sure that any potential RegTech partner is aware of those nuances and is willing to be flexible to meet those specific needs.”

Regulatory Approaches To AI

Around the world, a variety of different approaches are being taken to AI regulation, ranging from light touch to calls for a full-scale halt of development.

With the Artificial Intelligence Act, the EU has proposed rules tailored to a risk-based approach, aiming to categorize AI technology into four levels of risk: unacceptable, high, limited, and minimal.[5] Late last year, the European Council adopted a general approach position on the legislation, but it remains under discussion in the European Parliament.

In China, the Shanghai Regulations on Promoting the Development of the AI Industry came into effect in October last year. This provincial-level legislation introduced a graded management system, an Ethics Council, and incentives for AI development.

Also of note within these regulations is that relevant municipal departments will oversee the creation of a list of infraction behaviors, with a disclaimer stating there will be no administrative penalty for ‘minor infractions.’[6]

Meanwhile, many governments are putting regulatory sandboxes at the heart of their AI strategies, allowing live testing of AI innovations but within controlled, regulated environments.

Criminal Capabilities

While the debate rages on around the best approach to AI regulation, its illegal uses continue to flourish. Just as the concepts of machine learning and NLP are moving into mainstream conversation, so too are the terms ‘deep fakes,’ ‘bots,’ and ‘zombies.’

Criminals are now able, with alarming ease and minimal financial cost, to use AI to create fake identity documents and open fraudulent accounts through which to launder illicit proceeds. Major institutions are also seeing an increase in coordinated attacks by cybercriminals using botnets – networks of infected computers controlled remotely.

By increasing their own AI capabilities, criminals are making their attacks more sophisticated and harder to detect, including through ‘smart’ malware that can adapt and evolve to evade traditional antivirus software.

All of this is adding to the suspicion around AI, but Ffoulkes believes it’s the key to finding solutions. “If you can prepare yourself by having the best technology and the best people, then at least you stand a chance of being able to fight some of these cyber security attacks that we’re seeing today.”

In The Right Hands

She points out how AI is revolutionizing areas like complex transaction monitoring, where an AI model can investigate thousands of alerts in minutes, if not seconds. Furthermore, supervised and unsupervised machine learning techniques have proved invaluable for studying large datasets of historical alert investigations and ‘learning’ what suspicious (and non-suspicious) alerts look like.

These learnings can then be applied to anti-money laundering and sanctions detection to surface risks that previously went undetected. In applications such as those offered by Silent Eight, they can also be used to learn and replicate the decision-making process of a human investigator, adjudicating and closing alerts automatically.
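
One way to picture that workflow: train a classifier on historically adjudicated alerts, then auto-close a new alert only when the model’s confidence is very high, routing everything else to a human analyst. The sketch below is a generic illustration of this pattern using scikit-learn, not Silent Eight’s actual implementation; the features, labels, and threshold are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative features per alert: [name_match_score, log_amount, country_risk].
X_hist = np.array([
    [0.95, 5.2, 0.9],
    [0.91, 4.8, 0.8],
    [0.88, 5.0, 0.7],
    [0.30, 2.1, 0.1],
    [0.25, 1.9, 0.2],
    [0.35, 2.4, 0.1],
])
# Historical outcomes: 1 = escalated as suspicious, 0 = closed as false positive.
y_hist = np.array([1, 1, 1, 0, 0, 0])

# Learn from past investigator decisions.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)

# Score new alerts: probability that each alert is a false positive (class 0).
new_alerts = np.array([[0.28, 2.0, 0.1], [0.90, 5.1, 0.85]])
p_false_positive = model.predict_proba(new_alerts)[:, 0]

AUTO_CLOSE_THRESHOLD = 0.95  # illustrative; in practice set by risk appetite
for alert, p in zip(new_alerts, p_false_positive):
    action = "auto-close" if p >= AUTO_CLOSE_THRESHOLD else "route to analyst"
    print(f"alert={alert} p(false positive)={p:.2f} -> {action}")
```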

Yet the human element remains very much required. “It’s the practitioners who will come up with the ideas for the AI developers to use. AI needs to learn from them,” Ffoulkes stresses.

“AI can do amazing things, but remember that it still learns from human behavior and decisions. I don’t see AI as a replacement for human investigators but instead as a tool that the most successful investigators will wield to their benefit.

“The more knowledge that the practitioners have, the more knowledge it feeds back. It’s just a constant, continuous loop of learning.”

Written by Judith Hawkins, Content Manager (Digital) at the ICA.

Pia Ffoulkes is Vice President of UKI & MEA at Silent Eight, a RegTech vendor specializing in AI offerings. Growing from a start-up based in Singapore, the company now provides a variety of solutions to financial institutions in over 150 markets, including the likes of HSBC, Standard Chartered, and AIA.

This article was first published by the International Compliance Association (ICA), the leading professional body for the global regulatory and financial crime compliance community. For more information on the benefits of becoming an ICA member, including access to the ICA’s complete content library of articles, videos, podcasts, blogs, and e-books, visit int-comp.org.

References:

[1] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[2] https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[3] https://www.forbes.com/sites/siladityaray/2023/02/22/jpmorgan-chase-restricts-staffers-use-of-chatgpt/?sh=2dfc559a6bc7

[4] https://www.bbc.co.uk/news/technology-65139406

[5] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[6] https://www.holisticai.com/blog/china-ai-regulation

