Challenges in Implementing AI for Fraud Detection

As organizations become more interested in leveraging AI technology for fraud detection, they must confront a diverse set of challenges. These range from systemic vulnerabilities to the emergence of novel forms of fraud to hype-driven companies making promises about AI that they cannot possibly keep.

Combine all this with general ethical and practical concerns about AI’s many applications (and limitations), and there’s quite a lot of subject matter to review in this space.  Adopting a proactive approach that integrates AI with human expertise and vigilance can help organizations enhance their ability to combat fraud effectively while safeguarding against financial loss and protecting the interests of consumers.

Overview of AI for Fraud Detection

This blog is intended to provide a very broad overview of some basic considerations for people and organizations interested in learning more about challenges in implementing AI for fraud detection. I’ll bring together a ton of great sources, including some screenshots from this great presentation available on YouTube from Fisherfield Data and Privacy.

Let’s start out with a case study that emphasizes the “systemic issues” category of this blog – which is, arguably, the highest-impact angle of this issue for businesses across the finance, banking, healthcare and insurance sectors. Those industries have particular responsibilities around protecting consumer data – but any company looking to use AI to detect fraud, waste and abuse can benefit from resources like the ones we share here.

Perverse Incentives at the Systemic Level: The Tale of “Dr. Dave”

In the criminal insurance fraud case of “Dr. Dave” Williams, the perpetrator exploited vulnerabilities in the system to bill insurers for medically unnecessary services. Despite evident (and regularly reported) red flags, authorities struggled to apprehend him for years due to the lack of interest in pressing charges on the part of major insurance carriers that Williams was exploiting. There is a great and thorough overview of “Dr. Dave” Williams on Robert Evans’ podcast (warning for coarse language).

Williams, also known as “Dr. Dave,” lacked medical credentials but obtained federal identification numbers that indicated he was a valid medical provider. He used his real name and address on applications for these ID numbers, known as National Provider Identifiers (NPIs), shutting down one and opening another each time suspicions about his credentials arose. This method enabled him to fraudulently bill insurers as a physician for services provided to approximately 1,000 individuals.

Dr. Dave’s scheme was audacious: inundate insurers with out-of-network claims. He was confident that a significant portion would be approved without scrutiny, and that those that did invite scrutiny would be simple enough to throw out and start again with a fresh NPI. For a long time, this assumption proved correct.
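
Had anyone been looking, even a very simple check over provider enrollment records could have surfaced this churn pattern. The sketch below is purely illustrative – the field names, NPI values, and threshold are invented, not drawn from any real registry:

```python
from collections import defaultdict

# Hypothetical enrollment records; every field and value here is invented for illustration.
registrations = [
    {"npi": "1000000001", "name": "David Williams", "address": "100 Main St", "year": 2012},
    {"npi": "1000000002", "name": "David Williams", "address": "100 Main St", "year": 2014},
    {"npi": "1000000003", "name": "David Williams", "address": "100 Main St", "year": 2016},
    {"npi": "1000000004", "name": "Jane Roe", "address": "200 Oak Ave", "year": 2015},
]

def flag_repeat_registrants(records, max_npis=1):
    """Flag any name/address pair associated with more than `max_npis` provider numbers."""
    by_identity = defaultdict(list)
    for rec in records:
        by_identity[(rec["name"], rec["address"])].append(rec["npi"])
    return {identity: npis for identity, npis in by_identity.items() if len(npis) > max_npis}

print(flag_repeat_registrants(registrations))
# {('David Williams', '100 Main St'): ['1000000001', '1000000002', '1000000003']}
```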

Even as objections arose and more and more reports were filed by scammed patients, Williams calculated that legal action was unlikely – and for a long time he was correct. Over four years, Williams billed top insurance companies, including United, Aetna, and Cigna, for roughly $25 million in fraudulent claims, pocketing about $4 million. He attempted to justify his actions by claiming to provide preventive medicine, allegedly helping “hundreds of patients” avoid surgery and reduce medication reliance.

Williams’ fraudulent activities came to light in 2017 when the FBI’s healthcare fraud squad intervened. His trial exposed systemic vulnerabilities and lack of oversight within the health insurance industry. Prosecutors portrayed Williams as a predator exploiting trust-based systems and insurers’ passive claims processing.

Exploiting the System

While Williams was convicted and sentenced to prison time, his case underscores broader issues in the healthcare and insurance industry. It highlights how fraudsters exploit the system, challenging efforts to combat fraud effectively. Despite insurers’ assurances, fraud persists, impacting consumers through increased financial burdens and reduced coverage. For folks whose job it is to root out Medicare fraud, it seems like existing regulations all but encourage fraudsters to take advantage of the system.

In an article about this case by ProPublica, white collar criminal defense attorney Michael Elliott states that “Medicare has to make sure that the individuals who apply for NPIs are licensed physicians — it’s that simple.” Williams’ long-unchallenged malfeasance reveals the financial incentives at play in the health insurance space. Fraud inflates claims costs, and those costs are passed along to patients as higher premiums; because insurers’ margins tend to scale with the size of the premium base, rising costs can actually grow profits in absolute terms. Combating fraud more aggressively, on the other hand, erodes those profits with the cost of investigation measures, without the offsetting benefit of a larger premium base.
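
To make that incentive concrete, here is a deliberately oversimplified calculation. The assumption that an insurer keeps a fixed share of premiums as margin, and that claims costs pass straight through to premiums, is an illustration only – not a claim about how any specific carrier prices its plans:

```python
def annual_margin(claims_cost, margin_share=0.05):
    """Illustrative only: premiums are sized to cover claims plus a fixed-share margin."""
    premiums = claims_cost / (1 - margin_share)
    return premiums - claims_cost

print(round(annual_margin(1_000_000_000)))  # ~52,631,579 on a $1.000B claims base
print(round(annual_margin(1_025_000_000)))  # ~53,947,368 once fraud adds $25M in claims
```

Under these toy assumptions, every fraudulent dollar that inflates claims also inflates the absolute margin, while fraud investigation is a pure cost.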

In summary – insurers often prioritize financial interests over rooting out fraud.  The Williams case sheds light on systemic weaknesses and challenges in combating healthcare fraud. It emphasizes the need for proactive measures to protect healthcare resources and consumers from fraudulent practices.

This example does, of course, apply to industries besides health insurance. It is worth your time to explore whether the systems that make your industry run quietly disincentivize fraud detection. This is true whether or not the tools you’re considering to improve detection of fraud, waste and abuse include AI.

Navigating the AI Privacy Landscape: IAPP Overview of Best Practices

The existing gold standard for an AI privacy risk taxonomy is based on Daniel Solove’s seminal 2006 paper, “A Taxonomy of Privacy.” Instead of directly applying Solove’s 16 privacy risks to an AI context, authors at the International Association of Privacy Professionals (IAPP) assessed how AI impacts these risks, whether by exacerbating existing issues, introducing new ones, or remaining unrelated.

By blending a regulation-agnostic approach with verified real-world incidents, the authors distilled a set of 12 distinct risks from Solove’s original 16, avoiding speculative or theoretical scenarios.

AI-Specific Risks

The list of 12 AI-specific risks is below:

Surveillance: AI amplifies surveillance risks by expanding the scope and prevalence of personal data collection.

Identification: AI technologies facilitate automated identity linkage across diverse data sources, heightening risks associated with personal identity exposure.

Aggregation: AI merges various data points about an individual to draw inferences, posing privacy invasion risks.

Phrenology and physiognomy: AI infers personality or social traits from physical features, introducing a novel risk category absent in Solove’s taxonomy.

Secondary use: AI exacerbates the diversion of personal data for unintended purposes through data repurposing.

Exclusion: AI worsens the lack of user information or control over data usage through opaque data practices.

Insecurity: AI’s data requisites and storage methods raise the risk of data breaches and unauthorized access.

Exposure: AI may unveil sensitive information, particularly through generative AI techniques.

Distortion: AI’s capability to generate realistic, yet fraudulent content amplifies the dissemination of false or misleading information.

Disclosure: AI may lead to improper data sharing when inferring additional sensitive information from raw data.

Increased accessibility: AI broadens access to sensitive information beyond intended audiences.

Intrusion: AI technologies encroach upon personal space or privacy, often through surveillance measures.

AI Privacy Risks

It’s crucial to recognize that the AI privacy risks taxonomy is dynamic—a living framework that must adapt alongside the AI landscape. Effective governance necessitates continual adjustment, informed by collaborative research and interdisciplinary dialogue. Striking a balance between innovation and ethical standards, along with data protection, remains imperative.

For example, the constantly evolving frontiers of AI are both aspirational and ambitious. In an attempt to get around the privacy issues that arise when virtual assistants and Large Language Models (LLMs) are connected to the internet, companies like Microsoft and Peter Thiel’s Palantir are leading the charge to create air-gapped LLMs, which are kept entirely separate from the internet. When properly designed, LLMs like these can help properly credentialed Pentagon staff quickly navigate large amounts of classified information.

While the new tech shows promise, there are many reasons to be concerned – for one thing, it is not at all uncommon for these cutting-edge AI tools to have the plug pulled at the last possible minute when massive data privacy issues are revealed. This is one of many reasons to avoid investing in tech that makes unproven claims about the capability of its AI-driven features.

Novel Forms of Fraud: The Challenge of Identifying Emerging Threats

Fraud and scams manifest in many recognizable forms, each presenting unique challenges for detection and prevention. From classic schemes like check kiting and the Kansas City shuffle to more sophisticated ploys involving fake credentials and pig butchering, the landscape of fraudulent activities continues to evolve, presenting new obstacles for traditional detection methods to overcome. The Kansas City shuffle, a deceptive maneuver named after the city notorious for its riverboat gambling, relies on misdirection and sleight of hand to deceive unsuspecting victims.

Scammer Tactics

Fraudulent activities have become increasingly sophisticated since the riverboat days of American financial fraud. Scammers employ tactics such as forging fake credentials to gain access to sensitive information or perpetrate identity theft. One example of fraud recently featured on John Oliver’s show Last Week Tonight – “pig butchering scams” – is a type of long-term investment fraud that has flourished in the era of cryptocurrency. The scam exploits the trust and vulnerability of victims who believe they are talking to a friendly stranger, who turns out eventually to be a bad actor dead set on scamming them out of their savings.

Schemes like the ones described above underscore the adaptability and ingenuity of fraudsters, who are always on the hunt for new opportunities to exploit vulnerabilities in the system. While artificial intelligence (AI) has proven adept at detecting patterns and anomalies based on historical data, its effectiveness diminishes when confronted with new and previously unknown forms of fraud. Because machine learning models learn from historical data, they struggle to identify emerging patterns that deviate from established norms. This leaves organizations that rely too heavily on AI (and not heavily enough on trained fraud analysts) for fraud detection more vulnerable to novel forms of fraud than they would be otherwise.
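
A minimal sketch of this limitation, using scikit-learn’s IsolationForest on synthetic “historical” claims data (the features and thresholds are invented for illustration): the model flags whatever deviates from the data it was fit on, so a novel scheme engineered to stay inside historical norms sails through.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "historical" claims: [claim_amount, claims_per_week] -- illustrative features only.
historical = rng.normal(loc=[200.0, 3.0], scale=[50.0, 1.0], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# A crude outlier (huge amount, huge volume) is caught...
print(model.predict([[5000.0, 40.0]]))  # [-1] -> flagged as anomalous
# ...but a novel scheme whose individual claims look ordinary is not.
print(model.predict([[210.0, 3.0]]))    # [1] -> scored as normal
```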

Anticipating AI’s Risks to Harness Its Benefits: Ensuring Transparency

AI’s ability to swiftly analyze vast digital datasets and identify patterns of suspicious behavior holds immense promise. That said, one of the most pressing challenges facing AI in identity verification is its opacity. Without a clear understanding of why AI makes the decisions it does, financial institutions find themselves unable to justify outcomes to regulators or provide an auditable trail demonstrating compliance with onboarding policies. This lack of transparency not only undermines accountability but also raises concerns about the potential biases embedded within AI algorithms, emphasizing the need for “human-in-the-loop” (HITL) system design.
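
One way to build that auditable trail is to log, for every decision, the inputs, the score, and the factors that drove it. The sketch below uses a plain logistic regression so that per-feature contributions are easy to read off; the feature names and data are invented, and real onboarding systems will differ.

```python
import json
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["doc_mismatch_score", "address_age_days", "device_risk", "velocity"]

# Synthetic stand-in for historical onboarding decisions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)
clf = LogisticRegression().fit(X, y)

def audit_record(applicant_features):
    """Return a JSON-serializable record explaining a single decision."""
    x = np.asarray(applicant_features, dtype=float)
    score = float(clf.predict_proba([x])[0, 1])
    contributions = clf.coef_[0] * x  # per-feature contribution to the log-odds
    top = sorted(zip(feature_names, contributions), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "inputs": dict(zip(feature_names, x.tolist())),
        "fraud_probability": round(score, 3),
        "top_factors": [(name, round(float(c), 3)) for name, c in top[:2]],
    }

print(json.dumps(audit_record([2.1, -0.3, 1.4, 0.2]), indent=2))
```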

Bias represents a significant obstacle in the deployment of AI for identity verification. Without proper oversight, biases present in machine learning models can go unchecked and perpetuate over time. Whether it’s biased training data, algorithms, or decision-making processes, the consequences can be dire – leading to unjust or discriminatory outcomes.
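
A first-pass bias check can be as simple as comparing how often the model flags members of each group; the group labels and outcomes below are invented purely to show the calculation, and a real review would also examine error rates, not just flag rates.

```python
from collections import Counter

# Hypothetical outcomes: (demographic_group, was_flagged_by_model)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

def flag_rates(rows):
    """Share of cases flagged in each group."""
    totals, flagged = Counter(), Counter()
    for group, was_flagged in rows:
        totals[group] += 1
        flagged[group] += was_flagged
    return {group: flagged[group] / totals[group] for group in totals}

print(flag_rates(decisions))  # {'group_a': 0.25, 'group_b': 0.5}
# A persistent gap between groups is a prompt for humans to audit the model and its training data.
```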

Lacking Moral Judgement

Unlike humans, machines lack moral judgment; they simply operate based on the information provided to them. Human intervention becomes indispensable in addressing biases and ensuring fair and ethical practices. A fraud analyst, armed with human intuition and expertise, can intervene when an AI system makes erroneous judgments, pinpointing the source of the error and educating the system to prevent similar issues in the future.
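
A minimal human-in-the-loop routing rule might look like the sketch below, with invented thresholds: low-risk cases clear automatically, very high-risk cases escalate, and the uncertain middle band goes to an analyst whose verdict is saved as a label for future retraining.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReviewQueue:
    """Uncertain cases wait here for a fraud analyst; verdicts become training labels."""
    pending: List[dict] = field(default_factory=list)
    labeled: List[Tuple[dict, bool]] = field(default_factory=list)

    def route(self, case: dict, score: float, clear_below: float = 0.2, escalate_above: float = 0.9) -> str:
        if score < clear_below:
            return "auto_clear"
        if score > escalate_above:
            return "auto_escalate"
        self.pending.append(case)
        return "human_review"

    def record_verdict(self, case: dict, is_fraud: bool) -> None:
        self.labeled.append((case, is_fraud))  # feeds the next retraining cycle

queue = ReviewQueue()
print(queue.route({"claim_id": "C-001"}, score=0.55))  # human_review
queue.record_verdict({"claim_id": "C-001"}, is_fraud=False)
```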

Amid the promise of AI lies a looming concern – the potential violation of data privacy. AI systems can only operate based on the information they are trained on, raising red flags regarding data privacy regulations. As exemplified by Microsoft’s in-house system designed to interact securely with classified information, the implications of AI’s reach into sensitive data domains remain uncertain.

Furthermore, AI’s over-reliance on historical data patterns poses limitations in detecting emerging forms of fraud. While AI excels at identifying known patterns, it struggles with novel threats that deviate from established norms. This is where human expertise comes into play. A proficient fraud analyst possesses the acumen to identify and address novel threats overlooked by AI systems. Through ongoing feedback and collaboration, alongside a HITL setup, machine learning models can evolve and adapt, enhancing their effectiveness in combating fraud.
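
Closing that feedback loop can be as mundane as periodically folding the analyst-labeled cases back into the training set and refitting, as in this sketch on synthetic data (the model choice and retraining schedule are assumptions, not a prescription):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic historical data plus a small batch of freshly analyst-labeled cases.
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 1).astype(int)
X_new = rng.normal(size=(20, 3))
y_new = rng.integers(0, 2, size=20)  # stand-in for analyst verdicts

model = LogisticRegression().fit(X_hist, y_hist)

# Periodic retrain: the human-reviewed cases become part of the model's history.
X_combined = np.vstack([X_hist, X_new])
y_combined = np.concatenate([y_hist, y_new])
model = LogisticRegression().fit(X_combined, y_combined)
```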

Conclusion

The challenges inherent in implementing AI for fraud detection are complex and demand a multifaceted approach that addresses both systemic shortcomings and the evolving landscape of fraudulent activities. The tale of “Dr. Dave” Williams exemplifies how deeply systemic issues can incentivize fraud. The case also underscores the need for proactive measures directed toward protecting consumers, protected data, and healthcare and financial resources.

Beyond perverse systemic incentives, adoption of AI for fraud detection brings with it plenty of other inherent risks, including issues of transparency, bias, and data privacy. Without a clear understanding of AI’s decision-making processes, financial institutions risk undermining accountability and perpetuating biases embedded within AI algorithms. The potential violation of data privacy raises concerns about the ethical implications of AI-driven solutions, necessitating careful consideration of regulatory frameworks and best practices.

While AI excels at detecting known patterns, its effectiveness diminishes when confronted with emerging threats – otherwise known as novel forms of fraud. From classic schemes like check kiting to sophisticated ploys involving pig butchering scams, fraudsters continuously adapt and innovate, posing new obstacles for detection and prevention efforts. The emergence of new types of fraud presents a formidable challenge for traditional detection methods and AI alike.

The challenge of novel forms of fraud underscores the need for a multifaceted approach to detection and prevention. While AI offers valuable insights into historical data patterns, human expertise remains indispensable in identifying and addressing emerging threats. By combining the strengths of AI technology with human intuition and vigilance, organizations can enhance their ability to combat fraud effectively and safeguard against financial loss.

Catherine Darling Fitzpatrick

Catherine Darling Fitzpatrick is a B2B writer. She has worked as an anti-bribery and anti-corruption compliance analyst, a management consultant, a technical project manager, and a data manager for Texas’ Department of State Health Services (DSHS). Catherine grew up in Virginia, USA and has lived in six US states over the past 10 years for school and work. She has an MBA from the University of Illinois at Urbana-Champaign. When she isn’t writing for clients, Catherine enjoys crochet, teaching and practicing yoga, visiting her parents and four younger siblings, and exploring Chicago where she currently lives with her husband and their retired greyhound, Noodle.
