The Money Overview

The IRS is now using AI to flag tax returns in 2026: what triggers a closer look

The IRS has begun applying artificial intelligence to screen millions of tax returns filed in 2026, scoring each one for potential fraud or reporting problems before refunds are issued. The change represents one of the most significant technology upgrades to the agency’s enforcement system in years. Instead of relying primarily on traditional filters and manual reviews, the IRS now uses machine learning tools to analyze patterns across millions of returns.

For taxpayers, the question is simple: what causes an AI system to flag a return for closer review? While the IRS doesn’t publicly release the exact formulas used in its models, government reports and IRS policy documents reveal the types of patterns these systems are designed to detect.

How the IRS Uses AI to Score Tax Returns

The backbone of the IRS fraud screening system is the Return Review Program, a platform originally developed to spot identity theft and refund fraud. According to the Government Accountability Office, the program analyzes returns using advanced analytics that compare filings against historical data, third-party information returns, and known fraud alerts before refunds are released.

That system now operates alongside a broader set of machine learning models that examine millions of filings simultaneously. These tools identify statistical patterns associated with red flags like underreported income, unusually high deductions, and credit claims that fall outside typical filing behavior. Instead of targeting individual taxpayers randomly, the models rank returns based on risk scores that determine which filings receive additional scrutiny.
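As a rough illustration of this ranking step, the sketch below combines a few red-flag signals into a single score and surfaces the highest-scoring filings. Everything here is invented for the example: the field names, the weights, and the signals themselves are assumptions, since the IRS does not publish its actual scoring formulas.

```python
# Illustrative only: a toy risk-scoring sketch that ranks returns by
# simple anomaly signals. The weights and features are invented for
# this example and do not reflect any real IRS model.

def risk_score(tax_return):
    """Combine simple red-flag signals into a single score."""
    score = 0.0
    # Signal 1: reported income disagrees with third-party documents (W-2/1099)
    if abs(tax_return["reported_income"] - tax_return["third_party_income"]) > 1000:
        score += 0.5
    # Signal 2: deductions are an unusually large share of income
    if tax_return["deductions"] > 0.4 * tax_return["reported_income"]:
        score += 0.3
    # Signal 3: refundable credit claimed without supporting records on file
    if tax_return["claims_refundable_credit"] and not tax_return["dependents_verified"]:
        score += 0.2
    return score

def rank_for_review(returns, top_n=2):
    """Return the top_n highest-scoring filings for additional scrutiny."""
    return sorted(returns, key=risk_score, reverse=True)[:top_n]
```

The point of the sketch is the workflow, not the math: returns are scored in bulk, sorted, and only the top slice ever reaches a human reviewer.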

One of the most closely watched areas involves refundable tax credits. The GAO has noted that the IRS relies on both automated and manual processes to select returns claiming refundable credits for review. Credits like the Earned Income Tax Credit have historically generated a high number of compliance issues, largely because eligibility rules are complex and mistakes are common.

Machine learning systems are particularly good at spotting patterns across these claims. When a credit claim appears inconsistent with income records, family status information, or prior-year filings, the return is likely to receive a higher risk score.

Patterns That Often Trigger Additional Review

Although the IRS doesn’t publish the exact thresholds used in its models, government reports and tax professionals routinely identify several patterns that tend to draw attention from automated screening systems.

Income mismatches are one of the most common triggers. If a taxpayer reports income that fails to align with information returns such as W-2 or 1099 forms submitted by employers or banks, the system can flag the discrepancy almost immediately. These mismatches often lead to delayed refunds while the IRS verifies the data.
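A document-matching check of this kind can be sketched in a few lines. The field names and the dollar tolerance below are assumptions for the illustration, not IRS specifics: the idea is simply that reported income is compared against the total on third-party information documents.

```python
# Illustrative only: flag a return when reported income differs from
# the total shown on third-party documents (W-2s, 1099s) by more than
# a small tolerance. Field names and tolerance are assumptions.

def income_mismatch(reported_income, information_returns, tolerance=100):
    """True when reported income and documented income diverge."""
    documented_total = sum(doc["amount"] for doc in information_returns)
    return abs(reported_income - documented_total) > tolerance
```

Because employers and banks file these documents directly with the IRS, a check like this can run the moment a return is processed, which is why mismatches are flagged almost immediately.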

Large swings in income from one year to the next can also attract attention. While many taxpayers legitimately experience income fluctuations, particularly small business owners or gig economy workers, sudden increases or sharp drops can prompt additional verification because those changes can sometimes resemble patterns seen in fraudulent filings.

Self-employment income is another area where AI systems frequently focus. Independent contractors and gig workers often have irregular earnings patterns, multiple income sources, and business deductions. These variables can create filing patterns that differ from traditional wage earners, making them more likely to be reviewed by automated systems looking for underreported income.

Unusually high deductions relative to income are another well-known trigger. If deductions appear far outside the statistical norm for taxpayers in a similar income bracket, the return may be flagged for human review. This does not mean the deductions are incorrect, but it can lead to requests for documentation.
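The "far outside the statistical norm" test above can be pictured as a simple outlier check against other taxpayers in the same income bracket. The z-score threshold and the bracket statistics below are invented for the sketch; real screening models are far more elaborate.

```python
# Illustrative only: flag a deduction that sits far above the norm for
# a taxpayer's income bracket, using a basic z-score. The threshold
# and bracket data are assumptions for this example.
from statistics import mean, stdev

def deduction_outlier(deduction, bracket_deductions, z_threshold=3.0):
    """True when the deduction is more than z_threshold standard
    deviations above the bracket's average deduction."""
    mu = mean(bracket_deductions)
    sigma = stdev(bracket_deductions)
    if sigma == 0:
        return deduction != mu
    return (deduction - mu) / sigma > z_threshold
```

As the article notes, clearing a threshold like this does not mean the deduction is wrong; it only means the return looks statistically unusual and may prompt a request for documentation.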

New Governance Rules for IRS AI

The IRS introduced formal governance rules to manage these systems in early 2026. The agency’s AI governance framework, outlined in Internal Revenue Manual section 10.24.1, requires the IRS to track every AI system it uses and maintain detailed inventories of both the models and the data that support them.

The framework also establishes oversight requirements for systems considered high impact. These include monitoring requirements, risk assessments, and procedures that allow the IRS to shut down AI tools if they fail to meet compliance standards.

Privacy protections are also part of the policy structure. IRS guidance restricts the use of taxpayer information within AI systems and prohibits using sensitive tax data to train publicly available chatbots. The agency also requires strict limits on how personally identifiable information and federal tax data are handled within these tools.

Across the broader Treasury Department, a government inventory published in early 2026 lists several AI systems used to support functions like fraud detection, compliance analysis, and financial oversight. The inventory is part of a federal transparency effort to document how AI is used within major agencies.

What the Changes Mean for Taxpayers

For most taxpayers, the biggest change is that their return will likely be evaluated by algorithms before a human agent ever sees it. These tools act as a first layer of screening, helping the IRS prioritize which filings require additional review.

A flagged return does not automatically lead to an audit. In many cases, it simply means the IRS wants to verify certain information before releasing a refund. That verification may involve confirming wage data, requesting documentation for deductions, or checking eligibility for tax credits.

Taxpayers who receive notices can track their filings and respond electronically through the IRS online account portal available at IRS.gov. Filers can also upload supporting documents and communicate with the agency through its secure online response system.

In practical terms, navigating an AI-screened environment comes down to accuracy and documentation. Taxpayers who maintain clear records of income, expenses, and eligibility for tax credits are far less likely to encounter issues if a return receives additional scrutiny.

While the IRS says AI is designed to support human examiners rather than replace them, algorithms are now playing a larger role in deciding which returns receive attention. As these systems continue to evolve, taxpayers should assume their filings are being evaluated by machines looking for statistical outliers and reporting inconsistencies.

For honest filers, there’s nothing to fear. Accurate reporting, consistent record keeping, and careful documentation remain the best defenses against any tax review, whether the initial screening comes from a human auditor or an algorithm.

Gerelyn Terzo

Gerelyn is an experienced financial journalist and content strategist with a command of the capital markets, covering the broader stock market and alternative asset investing for retail and institutional investor audiences. She began her career as a Segment Producer at CNBC before supporting the launch of Fox Business Network in New York. She is also the author of Dividend Investing Strategies: How to Have Your Cake & Eat It Too, a handbook on dividend investing. Gerelyn resides in Colorado, where she finds inspiration from the Rocky Mountains.