Key takeaways
- Generative AI and other technological developments are changing the world, and they may significantly affect unemployment insurance (UI) fraud.
- The nature and scale of UI fraud changed during the COVID-19 pandemic. The amount of fraud increased, and sophisticated international criminal rings were responsible for much of it.
- AI can help these groups automate fraudulent claims, scaling up the potential for fraud.
- The government must respond with reforms to protect taxpayer money and ensure benefits for eligible workers.
- One option is using AI to sift through claims, which would require better coordination across state lines.
- Low-tech solutions like in-person verification requirements may be the most effective short-term remedy.
Generative artificial intelligence (AI) is poised to transform various sectors, including unemployment insurance (UI) fraud. During the pandemic, expanded UI benefits led to a surge in fraudulent claims, exposing significant vulnerabilities in the system. AI may soon enable scammers to file fraudulent claims at an unprecedented scale. The government must act promptly to safeguard taxpayer funds and ensure rightful benefits reach eligible workers. Potential reforms include leveraging AI to screen claims and reinstating in-person verification processes. While advanced AI solutions are promising, low-tech approaches may offer the most immediate and reliable protection against fraud.
Recent developments in unemployment insurance fraud
Unemployment insurance (UI) was created in the United States by the Social Security Act of 1935, structured as a series of joint state-federal programs. People who lose their jobs are able to use the UI system to help stabilize their finances while they look for a new job. UI fills multiple purposes. It mitigates the impact of economic shocks by stabilizing aggregate cash flows. It helps claimants maintain their long-term economic arrangements, such as a mortgage, so the impacts of unemployment do not cascade. Finally, it preserves economic growth by allowing workers to find good-fit jobs instead of having to take the first job that will help them pay their rent. UI claimants are required to be able and available for work and to maintain an active work search while receiving benefits.
However, not all unemployment insurance claims are legitimate. Broadly speaking, unemployment insurance fraud takes one of three forms: new, fraudulent claims; failure to report reemployment; and fake firm scams.
- New, fraudulent claims are exactly that. Scammers use stolen identity information, such as Social Security numbers, to file false claims, whether as one-off scams or at scale.
- Failure to report reemployment occurs when a worker is laid off, begins collecting benefits, is rehired or finds a new job, and fails to report this to the unemployment office, thus continuing to receive benefits for which they are no longer eligible.
- Fake firm scams are similarly self-explanatory: scammers create a fake business entity that claims to employ people, possibly using stolen identities, and then lay off the fake employees, generating fraudulent claims on benefits.
UI fraud was especially prevalent during the Covid-19 pandemic. The CARES Act expanded unemployment benefits by $600 per week and eligibility to categories of workers previously ineligible for benefits, like contractors and gig workers. Even after these benefits expired, the American Rescue Plan continued to increase benefits by $300 in early 2021. While these policy changes were a necessary response to skyrocketing unemployment during the depths of the pandemic recession, they also incentivized fraud.
Numerous reports have found that unemployment insurance fraud spiked during the pandemic. For example, the Department of Justice charged 30 members of a Milwaukee street gang last year for conspiring to fraudulently claim millions of dollars in unemployment insurance and use the money to pay for luxury items, travel, and firearms, and to solicit a murder for hire. International fraud rings were able to use hacked data (for example, from the 2017 Experian hack) to apply for benefits using false identities. Some fraudsters even used fictitious employers: in Colorado, over 7,000 fake employers were registered as new businesses during the pandemic and used for unemployment insurance applications.
The Government Accountability Office (GAO) has estimated that “the total amount of fraud across all UI programs (including the new emergency programs) during the COVID-19 pandemic was likely between $100 billion and $135 billion—or 11% to 15% of the total UI benefits paid out during the pandemic” and notes that despite state and federal law enforcement actions, “recovery rates for both fraudulent and nonfraudulent overpayments occurring during the pandemic remain low.” Per the GAO’s figures, improper payments (both accidental and fraudulent) of expanded federal unemployment benefits totaled $43.6 billion in FY 2023, or 18% of the UI payments that went out. The number of fraudulent applications that were denied is much higher. In Maryland alone, there were half a million fraud attempts between May and June 2021, thanks to the proliferation of botnet accounts that applied for UI en masse.
The nature of fraud substantially changed during the pandemic. Instead of fraud being largely individual actors getting a few weeks of additional payments illegitimately, UI fraud has become the domain of sophisticated international crime rings, using hacked data and botnet accounts to bypass or overwhelm the anti-fraud systems.
An unanswered question is whether this was a temporary change in the nature of fraud due to the unusual nature of the COVID conditions and policy response or whether this state of affairs is likely to persist. Although the expanded federal unemployment benefits have since expired, new technologies will only provide further opportunities and tools for bad actors to game the system. In particular, new developments in “artificial intelligence” (including generative text, deep fake videos, or semi-autonomous agents) will likely make these challenges reoccur or lead to novel challenges. Government agencies must be ready and willing to clamp down on misuse of taxpayer funds—especially if a future economic downturn warrants another emergency expansion of unemployment insurance.
AI and UI
In the last few years, there have been substantial increases in what is sometimes called “artificial intelligence.” In particular, there have been significant improvements in how computers can interpret and respond to text. Systems like OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude can interpret human text and respond more-or-less appropriately, generating text, computer code, images, or video on demand for human users. While ChatGPT can be fooled by riddles and other tests designed to show that no true “reasoning” is taking place, it can generate text that is not readily distinguishable from human-generated text.
Despite the interest in these developments, only some people understand how large language models (LLMs) like ChatGPT actually work. LLMs treat words as “vectors”: lists of numbers, referencing characteristics about the word in relation to other words; the LLMs then perform vector math in order to determine which word comes next in an incomplete sentence. This process seems simple enough when applied to a single word, such as your phone anticipating the next word in a sentence. But scaled up massively, this same basic process is the core of generative AI, giving ChatGPT the ability to write paragraphs and lines of computer code or engage in real-time conversations with human users when receiving instructions for a particular task (for an accessible, non-technical overview, Timothy B. Lee and Sean Trott have written an excellent explainer).
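The next-word mechanism described above can be sketched as a toy in Python. The word vectors below are invented three-dimensional examples (real models learn thousands of dimensions from data), so this illustrates the idea of scoring candidate words against a context vector, not a working language model.

```python
import math

# Toy word vectors: each word is a short list of numbers. The values
# here are invented purely for illustration; real LLMs learn them
# from massive text corpora.
embeddings = {
    "benefits": [0.9, 0.1, 0.3],
    "claim":    [0.8, 0.2, 0.4],
    "banana":   [0.1, 0.9, 0.0],
}

def dot(u, v):
    """Dot product: how well two vectors 'point the same way'."""
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next(context_vector, vocab):
    """Score each candidate word against the context, highest first."""
    words = list(vocab)
    probs = softmax([dot(context_vector, vocab[w]) for w in words])
    return sorted(zip(words, probs), key=lambda pair: -pair[1])

# A made-up context vector standing in for "file an unemployment ..."
context = [0.85, 0.15, 0.35]
for word, prob in predict_next(context, embeddings):
    print(f"{word}: {prob:.2f}")
```

Scaled up from three words to an entire vocabulary, and from one prediction to one per generated token, this is the basic loop behind generative text.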
How can LLMs affect UI fraud? Going back to the three-way division earlier, generative AI is unlikely to substantially change “failure to report reemployment”; that is an error (or opportunity) of omission. But generative AI opens up opportunities for bad actors to file more “new fraudulent claims” or perpetrate more “fake firm scams”. And while there is substantial interest in more sophisticated applications, such as using AI avatars on Zoom calls to represent nonexistent persons, the largest application of AI to unemployment fraud is likely to be in automating already existing, mundane types of fraud.
For example, AI may help scammers spam fraudulent individual applications using stolen identities by automating the process of filling out claims forms. Or, AI might help scammers fill out documents to create a fake business and then have that fake business generate fraudulent layoff documents.
The key issue here is scale. While it has always been possible to create illegitimate fraud claims, generative AI allows almost anyone to rapidly generate hundreds of documents. Even if many of these auto-generated spammed documents are low-quality and easy to spot by the human eye, scammers can overwhelm state unemployment offices that are unprepared for a flood of fraudulent claims, just as occurred during the COVID pandemic. This would have the additional cost of making it harder for legitimate claimants to have their cases evaluated and their benefits sent out.
This parallels some of the developments we are already seeing in the labor market itself. When people apply for a job, a standard practice is to have a cover letter and resume customized for each position. This process should not take long – 10 to 15 minutes – but that time signals to potential employers that you put effort into the process and demonstrates that your skills and experience are a good fit for the listed position. In the last year, this has changed dramatically. LLMs are able to customize a CV for a given job description in seconds. Bots can search for jobs that fit (or sometimes, do not quite fit) listed criteria and send customized resumes. Employers, inundated with spam applications, have subsequently turned to AI as well, creating tools to help flag AI-written spam or to process and sort the (plausibly) human resumes.
Government agencies can follow suit when screening for fraud claims. Sam Hammond, a senior economist at the Foundation for American Innovation (and senior fellow at Niskanen), points out that this is already happening with the Earned Income Tax Credit (EITC). Hammond notes that the “EITC audit review process uses automation to find discrepancies between W-2 data and EITC claims with no human in the loop, which is a dumb form of automation.”
This model could also be applied to failures to report re-employment. We can deploy artificial intelligence instead of relying on human caseworkers to sift through W-2 data and unemployment claims to identify people on both rolls, which is a time-consuming and labor-intensive process. An AI system might look through both datasets, identify overlaps, and send messages to workers still claiming unemployment benefits who are identified as re-employed by W-2 data. The messages could note that the worker seems to have failed to report their re-employment and that their unemployment benefits will be cut off unless they are still unemployed.
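At its core, the cross-match described above is a join between two datasets. A minimal sketch in Python, with invented record formats, names, and message text (a real system would draw on agency databases and include due-process safeguards before any benefits were suspended):

```python
# Hypothetical sketch of a W-2 / UI cross-match. Field names and data
# are invented for illustration only.

def find_unreported_reemployment(ui_claims, w2_records):
    """Return claimants who appear both on active UI rolls and in new
    W-2 wage records, i.e. candidates for a failure-to-report notice."""
    # Index active claims by SSN for fast lookup.
    active = {c["ssn"]: c for c in ui_claims if c["status"] == "active"}
    flagged = []
    for record in w2_records:
        if record["ssn"] in active:
            flagged.append({
                "ssn": record["ssn"],
                "employer": record["employer"],
                "message": (
                    "Our records show new wages reported for you. "
                    "Your UI benefits will be suspended unless you "
                    "confirm you are still unemployed."
                ),
            })
    return flagged

ui_claims = [
    {"ssn": "111-11-1111", "status": "active"},
    {"ssn": "222-22-2222", "status": "closed"},
]
w2_records = [
    {"ssn": "111-11-1111", "employer": "Acme Corp"},
    {"ssn": "333-33-3333", "employer": "Globex"},
]

for notice in find_unreported_reemployment(ui_claims, w2_records):
    print(notice["ssn"], "->", notice["employer"])
```

The deterministic matching shown here is the “dumb” automation Hammond describes; the hope is that more capable models could weigh context (closed claims, timing, data errors) before a notice ever goes out.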
However, Hammond cautioned that some initial attempts at automating UI processing, such as automating the claims review process in Michigan, “led to massive increase in denials, leading to skepticism from the left.” The left was concerned that higher denial rates would (a) deny benefits to people who are eligible for them, (b) reduce post-transfer incomes, and (c) exacerbate racial disparities. Hammond hopes that these concerns can be addressed with “more generalist models [that] can understand the context better,” which would reduce the number of errors.
While “human review is very important in the medium term” to ensure accuracy in claim adjudication, as AI advances, possibly to the level of artificial general intelligence, “having humans in the loop might become the bottleneck in the system.” Therefore, Hammond thinks that “What reforms are robust to a wide range of future AI advancement scenarios?” is the right question for policymakers to focus on.
Reform 1: Fighting bad AI with good AI
How can AI be usefully incorporated into government policies? Jen Pahlka, a Niskanen Center fellow and the author of Recoding America, argues that “in many areas of government service delivery, the status quo is frankly not worth protecting.” While advocates may worry about AI-automated systems erroneously denying benefits, “to take just one benefit, one in six determinations of eligibility for SNAP is substantively incorrect today. If you count procedural errors, the rate is 44%.” This does not even account for cases left undecided due to application backlogs. Applying AI systems to automate processes like benefit determination can substantially improve the productivity of those processes. That means more resources are available for looking at potential fraud.
A significant issue in UI is that each state would need to develop its own AI processes to assist with benefit determination since state policies are not necessarily compatible. Most states would have difficulty implementing a customized AI system for their individual UI system. Writing for Vox, Emily Stewart (now at Business Insider) has argued that unemployment insurance should be fully federalized, like Social Security, rather than maintained as a joint state-federal program. Moving to a federal program would make UI fraud detection efforts much simpler.
But there are substantial obstacles to full federalization. For example, states want control over their systems, and many in Congress will balk at a full federal takeover of the program. Other models besides federalization are possible. For example, the federal government could create and develop the software backend but allow states to choose their parameters (for example, the wage replacement rate or benefit length). This would ensure the system matches the state’s economic needs. The Department of Labor and the National Association of State Workforce Agencies have recently begun collaborating on an Open UI Initiative to achieve similar ends – coordinating state procurement processes to standardize systems and create a degree of interoperability. This would allow for some standardization that could make a unified fraud detection system possible.
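The “shared backend, state-chosen parameters” idea can be sketched in a few lines: one common benefit formula, with each state supplying its own numbers. The parameter names and values below are hypothetical illustrations, not actual state policies.

```python
from dataclasses import dataclass

# Hypothetical sketch of a shared federal UI backend where states pick
# their own parameters. All values are invented for illustration.

@dataclass
class StateUIParams:
    wage_replacement_rate: float  # share of prior weekly wage replaced
    max_weeks: int                # maximum benefit duration
    weekly_cap: float             # maximum weekly benefit amount

def weekly_benefit(prior_weekly_wage: float, params: StateUIParams) -> float:
    """Shared benefit formula; each state supplies its own parameters."""
    return min(prior_weekly_wage * params.wage_replacement_rate,
               params.weekly_cap)

# Two states running on the same backend with different policy choices.
state_a = StateUIParams(wage_replacement_rate=0.5, max_weeks=26, weekly_cap=600.0)
state_b = StateUIParams(wage_replacement_rate=0.4, max_weeks=12, weekly_cap=275.0)

print(weekly_benefit(1000.0, state_a))  # 500.0 (50% of wage, under the cap)
print(weekly_benefit(1000.0, state_b))  # 275.0 (hits the weekly cap)
```

Because the code paths are identical across states, a single fraud-detection layer could sit in front of the shared backend while states retain control over benefit generosity.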
Reform 2: Sand in the gears
While proposals to increase government agencies’ digital capacity are worthy goals, it might become necessary to consider other potential pathways. One consideration is whether it might make sense to deliberately increase the amount of “friction” that UI applicants must go through to receive benefits. In recent years, policymakers have become more attentive to the issue of what researchers Don Moynihan and Pamela Herd call administrative burdens: the complicated paperwork and bureaucratic procedures that are put in place between people and social benefits. There is a good case for eliminating many of these administrative burdens. For example, the requirement that UI beneficiaries document their work search does not actually substantially speed up the return to work. Other interactions, though burdensome, can help detect fraud and could also be a mechanism for connecting UI claimants with other resources.
For example, consider requiring UI claimants to participate in some in-person (or, in more rural areas where offices are hard to access, video) activities as part of their application processes. This is a somewhat clunky requirement but does eliminate much of the fraud risk. While it is possible to commit fraud despite such a requirement, it is no longer something that can be done at scale.
UI applications long included in-person processes. That changed in 1998, when the Workforce Investment Act eliminated the in-person UI office. As a result, all UI staff were moved to regional call centers, and UI application systems became entirely phone- or internet-based, with little in-person interaction for claimants. UI offices were transformed into “One Stop Centers,” which provided training and assistance for job seekers (and, at most, a public phone that UI applicants could use to contact their regional UI office).
While laudable in many respects, this change resulted in separating the employment services and benefits provision of UI, to the detriment of both. Many UI applicants go to their local One Stop for help with their UI benefits, assuming that the staff has the resources to answer their questions. When told that their only option for assistance is the standard helpline, these applicants become frustrated.
The predominant users of One-Stop Centers tend to be hard-to-employ populations that need help with basic job search tasks like writing a resume. However, employers are rarely looking to hire from that population. This creates a vicious cycle in which neither employers nor job-seekers actively use One-Stop Centers because the job-seekers and employers they want to match with aren’t there.
If UI applications had mandatory in-person verification systems, they could not only reduce fraud but also be used as a mechanism to get job-seekers started with the re-employment process. An existing mandatory in-person program, Reemployment Services and Eligibility Assessments (RESEA), has already demonstrated its effectiveness, reducing UI benefit duration by three weeks.
Of course, there are valid concerns that an in-person application requirement could place a significant burden on Americans who do not live near a One Stop Center. In those cases, the Department of Labor (DOL) could allow people to verify their identity at a local post office (the DOL is already piloting this program in several states).
Postal Service offices are required by law to serve all Americans. Those living in rural areas are no exception. New rules could also be written to provide exceptions for people for whom going to the Post Office is a large burden, such as those with physical disabilities.
Conclusion
Generative artificial intelligence is likely to provide bad actors with new tools to game the unemployment insurance system. This will make it harder to deliver benefits to those eligible for them and for policymakers to justify emergency expansions of UI during future economic downturns.
While “fighting fire with fire” by using artificial intelligence to sort through bogus AI-generated UI claims may generate more attention-grabbing headlines, lower-tech solutions will likely be easier to implement in the short run and more robust to technological advances in the medium to long run.