Could AI One Day Be a Whistleblower? | Opinion


Of all the jobs that have been on the radar for ruin at the hands of artificial intelligence (AI)—truck driver? Personal assistant? Flight attendant, even?—one profession that most people did not imagine might face competition from new technology is "whistleblower." If anything, it's been whistleblowers who have been clanging the bell about the dangers of AI. But in this case, AI could be used for good. It could be just the right tool for detecting certain kinds of fraud that corporations and others have become highly skilled at hiding from employees and the government.

The typical whistleblower under the law has historically been a corporate insider privy to details of the fraud. But companies intent on duping the government have come up with countless tactics to evade detection, often siloing employees from one another so that no individual, or small group, can piece together enough of the fraud to file suit.

Despite efforts to stop it, defrauding the government remains a booming industry. The government estimates that roughly 15 percent of pandemic-related unemployment benefits paid, over $100 billion, were lost to fraud. That's just the tip of the iceberg.

While COVID-19 was, hopefully, a generational pandemic, the government loses nearly that much money to Medicare and Medicaid fraud annually. Those figures will only grow as health care costs continue to skyrocket. And that doesn't even begin to address all the other projects and industries where the government doles out massive sums, all vulnerable to fraud. To name a few: infrastructure funding, defense contracts, education financing, and agriculture have all been targeted by fraudsters. But AI is poised to change that by cutting across departments and systems to gain a fuller picture of practices that may amount to fraud.

A photo taken on March 31, 2023, in Manta, near Turin, shows a computer screen with the home page of the artificial intelligence OpenAI website, displaying its ChatGPT robot. MARCO BERTORELLO/AFP via Getty Images

Both the government and whistleblowers are already using statistics to identify fraud suspects and file suits alleging that, mathematically, fraud is the only explanation for the patterns detected in data. But AI's rapidly developing speed and sophistication will allow it to analyze datasets well beyond the reach of even the most complex prevalence analysis. It will be able to detect patterns across multiple variables, compare observed patterns to expected baselines, and perhaps even predict anticipated behavior. All of this means AI has the potential to forecast fraud.
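To make the statistical idea concrete: a minimal sketch (using invented, purely illustrative billing numbers, not any real dataset or actual fraud-detection software) of flagging providers whose billing rates are statistical outliers might look like this.

```python
import statistics

# Hypothetical per-provider billing rates (claims per patient per year).
# These figures are invented for illustration only.
rates = {
    "provider_a": 2.1, "provider_b": 2.4, "provider_c": 1.9,
    "provider_d": 2.2, "provider_e": 9.8,  # extreme outlier
    "provider_f": 2.0, "provider_g": 2.3,
}

mean = statistics.mean(rates.values())
stdev = statistics.stdev(rates.values())

# Flag anyone billing more than two standard deviations above the mean.
# A rare pattern is a lead worth investigating, not proof of fraud.
flagged = [name for name, rate in rates.items()
           if (rate - mean) / stdev > 2]
print(flagged)  # → ['provider_e']
```

Real prevalence analyses are far more sophisticated, of course; the point is that once fraud is framed as a statistical anomaly, software can surface candidates that no single insider could see.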

The logical starting point in AI fraud prevention is the health care industry, which is known for its large, well-kept data sets. But AI has potential fraud-detection applications far beyond health care. Large language models could be used to analyze emails or other large collections of text, detecting suspicious activity and even exposing existing efforts to keep fraud under wraps, such as the use of code words. AI may gain the ability to conduct advanced materials testing—think scanning metal, concrete, and welds for density, rust, and other physical characteristics—which would allow it to detect fraud involving deficient physical goods like military equipment, or civilian roads and bridges. And, somewhat ironically, as the government contracts to purchase more and more AI technology, that very AI will be a tool to judge whether the government is getting what it was promised.

AI can not only help would-be whistleblowers detect and anticipate fraud with great precision; it also opens the door to new categories of whistleblowers. No longer will visibility be limited to company insiders and the like. Many uniquely situated parties observe only a limited aspect of a fraud, such as hospitals and insurers that each see only portions of the other's data when the full picture would reveal fraud, or subsidiary military contractors with a window into only some of the prime contractor's fraudulent behavior. Currently, these parties may be stymied from bringing a suit by a lack of detail. As AI develops, it can fill those holes.

As with any technology with great potential, there is also risk. Patterns that are merely rare might be flagged as potentially fraudulent, triggering expensive and burdensome investigations. Fraud detection software can also simply malfunction, or be programmed incorrectly, leading to miscarriages of justice. A heartbreaking example occurred in the U.K., where faultily designed Post Office accounting software falsely implicated hundreds of postal workers in theft, leading to hundreds of baseless prosecutions.

As the technology develops, additional AI applications in the fight against fraud are sure to become evident. Perhaps one day, AI could be a whistleblower in its own right (or, alternatively ... try to destroy humanity). Whistleblowers are already indispensable to the government's recovery of tens of billions of dollars that would have otherwise been forever lost to fraud, and their efforts have likely deterred hundreds of billions more. AI should be on whistleblowers' radar as a tool that will make them even more effective.

Max Voldman and Hallie Noecker are partners in the Washington, D.C., and San Francisco offices of Whistleblower Partners LLP, a boutique law firm dedicated to representing whistleblowers under the False Claims Act and other government reporting programs.

The views expressed in this article are the writers' own.
