Ann Lewis is the former Director of Technology Transformation Services at the GSA.
The Trump administration’s executive order “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos” aims to streamline data use across the federal government by eliminating barriers that prevent employees from accessing essential government data. It calls for “full and prompt access to all unclassified agency records, data, software systems, and information technology systems” while promoting both intra- and inter-agency data sharing and consolidation.
While the objectives of this order are well-defined, issuing mandates alone will not guarantee success. The complexities of federal data sharing—shaped by the structure of the federal enterprise and IT contracting processes—often lead to new data silos rather than eliminating them. Additionally, implementation comes with significant costs and cybersecurity risks. In practice, enforcing full data access across all systems could prove expensive and may still fall short of its goal to reduce fraud, waste, and abuse.
The government can draw valuable lessons from both its own past initiatives and established best practices used by large private-sector companies facing similar challenges. Investments in APIs, anti-fraud capabilities, identity management, and top-tier security practices are key to facilitating intra- and inter-agency data sharing. By adopting these approaches, the government can enhance efficiency, strengthen security, and maximize the benefits of data-driven decision-making for the American people.
How did we get here?
The federal government manages vast amounts of data, with even small agencies overseeing thousands of datasets. Each time a law or policy change is made, agencies must implement system changes to support it. This involves updating code, data, and access management. Over time, these accumulated changes leave behind outdated or conflicting rules embedded in systems, complicating future changes.
Federal agencies manage tens of thousands of systems, often maintained by different vendors, with varied data formats, schemas, and standards. This heterogeneous data management is a natural byproduct of large organizations, both public and private. In such environments, work is divided among numerous divisions and teams operating in parallel to maximize efficiency. However, this structure also leads to siloed systems, as teams require control over their own data and tools to perform their tasks effectively. While this approach accelerates delivery, it complicates enterprise-wide data management.
The challenge of reconciling disparate data sources is not unique to government. In the private sector, healthcare companies grapple with integrating incompatible electronic health record systems, e-commerce giants invest heavily in properly integrating supply chain data, and financial services firms work to unify transaction data across banks, credit card networks, and fintech platforms.
Since the 2000s, every administration has sought to better leverage data for decision-making. Initiatives such as the Federal Data Strategy, Open Government Initiative, data.gov, and transparency efforts like the Federal Program Inventory and usaspending.gov have all aimed to improve data accessibility and accountability. Attempts to create a centralized data store have had mixed success due to misaligned incentives, added administrative burdens, underestimation of implementation costs, and a lack of outcome-oriented goals.
Unifying all data across federal agencies is an enormous undertaking—akin to rebuilding an entire system from the ground up. Without clear guidance and a focused scope, such efforts are unlikely to succeed, especially on a short timeline. More importantly, effective fraud prevention does not necessarily require costly, enterprise-wide data consolidation.
Solutions from the private sector
While policy changes—such as rescinding or modifying agency guidance and updating regulations on unclassified data access—can serve as an important first step, the real challenge lies in execution. Delivering on the mandates of this executive order will be 99% program implementation. Rather than addressing every dataset individually, the federal government can adopt best practices from private sector companies already tackling fraud, waste, and abuse at scale.
Amazon is a good example. As a massive online marketplace, it must continuously combat fraudulent sellers and counterfeit products to maintain customer trust. However, its solution is not to consolidate organizational structures or grant senior leaders universal access to all data. Such an approach would be slow, inefficient, and unlikely to yield actionable insights.
Instead, Amazon and similarly large companies tackle data integration and fraud prevention with targeted, scalable strategies. Here’s how the best large enterprises manage these challenges effectively:
Best practice 1: Make data accessible through secure, scalable APIs.
Rather than attempting to consolidate or redesign data silos, agencies should embrace modern data-sharing practices like APIs. An API (Application Programming Interface) is a structured way for software applications to share data, letting teams control what is shared, with whom, and when. APIs allow teams to maintain control over their systems while giving other teams access to important data.
Large companies like Amazon, Netflix, and Google combat fraud at national and international scales by leveraging parallel, high-performance teams. These teams share data like risk signals with each other through APIs, which balances independence, speed, user privacy, and scalability.
Additionally, both businesses and governments can “wrap” older, fragile systems—such as mainframes—in APIs. This approach lets modern tools interact with older systems without a full migration, reducing the cost of modernization while making legacy data available for fraud detection.
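To make the “wrapping” idea concrete, here is a minimal Python sketch. The `LegacyMainframe` class and its fixed-width record format are hypothetical stand-ins for an older system of record; the point is that the wrapper validates requests, translates legacy records into a modern format, and controls which fields get shared—without touching the legacy system itself.

```python
# Minimal sketch of "wrapping" a legacy system in an API layer.
# LegacyMainframe and its fixed-width record layout are hypothetical
# stand-ins for an older system of record.

import json


class LegacyMainframe:
    """Stand-in for an older system that returns fixed-width text records."""

    def lookup(self, case_id: str) -> str:
        # Hypothetical layout: case id (8 chars), status (10), amount in cents (12)
        records = {
            "CASE0001": "CASE0001APPROVED  000000125000",
            "CASE0002": "CASE0002FLAGGED   000000990000",
        }
        return records[case_id]


class CaseAPI:
    """API wrapper: translates legacy records to JSON and controls
    which fields are exposed to other teams."""

    def __init__(self, backend: LegacyMainframe):
        self.backend = backend

    def get_case(self, case_id: str) -> str:
        raw = self.backend.lookup(case_id)
        # Parse the fixed-width record into named, typed fields.
        record = {
            "case_id": raw[0:8],
            "status": raw[8:18].strip(),
            "amount_cents": int(raw[18:30]),
        }
        return json.dumps(record)


api = CaseAPI(LegacyMainframe())
print(api.get_case("CASE0002"))
```

In practice the wrapper would be deployed as an HTTP service with authentication and rate limiting, but the translation-and-filtering layer shown here is the core of the pattern.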
Best practice 2: Treat fraud as an ongoing risk and invest in anti-fraud capabilities long-term.
Instead of relying on one-time anti-fraud initiatives with fixed start and end dates, leading private-sector companies invest continuously in technology, talent, and processes to combat fraud. They monitor user behavior, transactions, and risk signals across programs; adopt zero-trust architectures; and build dedicated fraud detection and prevention teams staffed with data scientists, forensic analysts, and cybersecurity professionals. These investments are made across the organization, with data sharing enabled by APIs, allowing teams to manage their own systems while working together effectively.
Best practice 3: Invest in identity management for access to critical systems.
Large companies recognize identity management as a cornerstone of fraud prevention. They use advanced authentication and identity verification methods, zero-trust architectures, and partner with identity verification providers.
Rather than managing identity across multiple disconnected systems, these organizations centralize identity management using a few trusted providers. Internally, they enforce role-based access control (RBAC), ensuring that users have only the permissions necessary for their roles. This approach protects sensitive data, prevents unauthorized access, and minimizes the risks associated with errors or malicious activity.
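The RBAC pattern described above can be sketched in a few lines. The role names and permission strings below are illustrative examples, not a real agency schema; the essential property is that access is denied by default and granted only when a role explicitly includes a permission.

```python
# Minimal sketch of role-based access control (RBAC).
# Role names and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "fraud_analyst": {"read:transactions", "read:risk_signals"},
    "case_worker":   {"read:cases", "write:cases"},
    "auditor":       {"read:transactions", "read:audit_logs"},
}


def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission.
    Unknown roles get an empty permission set, so access is denied by default."""
    return permission in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("fraud_analyst", "read:risk_signals")
assert not is_allowed("case_worker", "read:audit_logs")   # deny by default
assert not is_allowed("contractor", "read:transactions")  # unknown role: denied
```

Real deployments centralize this mapping in an identity provider rather than in application code, but the deny-by-default check is the same.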
Best practice 4: Balance security with speed.
While it would technically be possible for all agency programs to allow any executive branch leader to have direct access to all data, doing so would create myriad security risks and provide minimal value. That’s why large private sector companies favor balancing robust security practices with carefully managed access granted to fraud-focused teams.
Large private-sector companies don’t rely on small strike teams with broad access to all datasets. Instead, they invest in robust, long-term anti-fraud capabilities that leverage continuous monitoring of fraud signals, cloud-first security models, AI-driven detection, and adaptive risk modeling.
A core principle in their approach is the Principle of Least Privilege (PoLP)—ensuring that users only have the minimum access rights, roles, and permissions required for their job. This safeguards high-value data and critical systems, reducing the risk of unauthorized access, accidental errors, or malicious activities. The government should adopt the same approach.
AI plays a key role in modern fraud prevention by handling large-scale transaction monitoring, allowing human analysts to focus on complex cases. AI can flag suspicious activities before they escalate into full-scale fraud.
For example, Bank of America combines AI models that analyze billions of transactions with human fraud analysts who review flagged transactions. Netflix monitors user account activity and has a rapid response team for compromised accounts. Capital One employs neural networks for risk scoring, triggering additional authentication only when necessary.
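The flag-then-review pattern behind these examples can be illustrated with a deliberately simple model: score a new transaction against an account's historical baseline and route statistical outliers to a human analyst. The threshold and data below are illustrative; production systems use far richer features and learned models rather than a single z-score.

```python
# Minimal sketch of automated transaction flagging: compare a new
# transaction to the account's historical baseline and flag outliers
# for human review. Threshold and data are illustrative, not a
# production fraud model.

from statistics import mean, stdev


def is_suspicious(history: list[float], amount: float,
                  threshold: float = 3.0) -> bool:
    """Flag a transaction more than `threshold` standard deviations
    above the account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (amount - mu) / sigma > threshold


# Hypothetical account history: small, regular payments.
history = [42.0, 39.5, 45.0, 41.2, 40.8, 43.1, 44.6, 39.9]

print(is_suspicious(history, 2500.0))  # large outlier: flagged for review
print(is_suspicious(history, 44.0))    # within normal range: not flagged
```

The division of labor matters more than the scoring method: automation handles the volume, and humans handle the flagged edge cases.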
Best practice 5: Preserve audit logs for everything.
Audit logs are essential for fraud detection, investigation, and compliance. By capturing who accessed what data, when, and from where, they let organizations track, detect, and respond to fraud while meeting transparency and regulatory requirements—even when issues surface long after they occur.
Having a complete log of behavior allows companies to develop processes that detect any deviation from statistically normal behavior. They also help investigators trace every action leading up to a fraudulent event, regardless of whether the fraud was committed by an employee, a cybercriminal, or a compromised account.
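A minimal sketch of the audit-log pattern, assuming hypothetical field names: every access is appended as an immutable entry recording who, what, when, and from where, and investigators can later reconstruct any account's activity.

```python
# Minimal sketch of an append-only audit log. Field names are
# illustrative; real systems write to tamper-evident, write-once
# storage rather than an in-memory list.

import json
import time

audit_log: list[str] = []  # append-only; entries are never edited or deleted


def record_access(user: str, resource: str, source_ip: str) -> None:
    """Append one immutable entry: who accessed what, when, and from where."""
    entry = {
        "ts": time.time(),
        "user": user,
        "resource": resource,
        "source_ip": source_ip,
    }
    audit_log.append(json.dumps(entry))


def accesses_by(user: str) -> list[dict]:
    """Investigator query: every recorded action by one account."""
    return [e for e in map(json.loads, audit_log) if e["user"] == user]


record_access("analyst7", "transactions/2024-q3", "10.1.2.3")
record_access("svc-batch", "payments/export", "10.9.8.7")
print(len(accesses_by("analyst7")))  # → 1
```

The same log that supports after-the-fact investigation also feeds the baseline models used to detect deviations from normal behavior in real time.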
Best Practice 6: Have the right people in the right roles.
The most successful private-sector companies recognize that having the right people in the right roles is essential to implementing best practices effectively. To prevent the creation of new data silos in government, agencies must improve IT procurement by actively involving federal employees with technical expertise—such as product managers and software architects—in the procurement process.
These experts can help ensure that solutions delivered by vendor teams solve the right problems in service of the agency’s and program’s larger vision, and that those solutions integrate well with the agency’s existing data architecture rather than creating isolated systems. Without technical guidance, IT vendors often build self-contained systems with new databases to minimize their own maintenance costs, leading to new silos.
Looking ahead
Making better use of data stored across a complex organizational ecosystem and more effectively handling fraud is not a one-time initiative—it is an ongoing necessity. As long as organizations, public or private, store valuable information, bad actors will attempt to exploit it. The key to fraud mitigation lies in sustained investment in anti-fraud capabilities across the federal enterprise.
Managing data across the federal enterprise poses many challenges, but proven private sector practices can help government programs better detect and mitigate fraud, waste, and abuse. To successfully implement the executive order, the federal government should avoid attempts at large-scale data centralization and instead adopt these practices, ensuring efficient, scalable fraud prevention while preserving security and operational flexibility.