The following essay was originally published in “The Digitalist Papers: Artificial Intelligence and Democracy in America” on September 24, 2024. This contribution is authored by Jennifer Pahlka1, a senior fellow at the Niskanen Center.

A quarter of the way into the 21st century, we face multiple, overlapping crises, what Adam Tooze calls the polycrisis.2 One factor quietly underpins or amplifies each of these individual crises: the diminished state capacity of the world’s advanced democracies. State capacity is simply the ability of a government to achieve its policy goals, and its decline is visible in everything from the botched Covid response to our inability to build green infrastructure in the face of an impending climate collapse. In these cases and many others, the famously dysfunctional US Congress (and its state and local counterparts) did in fact act, passing much-needed if imperfect legislation. But the intent of that legislation was only partially realized. Elected leaders who pay attention to outcomes feel like they are trying to steer the ship of our nation, but the rudder has come loose from the helm. Even when we pass laws, we fail to implement them.

This crisis of state capacity isn’t as simple as a need for more or better technology, but it does implicate our government’s failure to adopt Internet-era tools and ways of thinking. The consequences of these failures are hard to overstate; they are fundamentally eroding our democracy. The trifecta of a rise in need for public services, a rise in the public’s expectations around the delivery of those services, and government’s failure to meet both has profoundly alienated large segments of the voting public. Political scientist Joe Soss made the connection: he showed that participating in means-tested benefit programs significantly reduces the chance that people will vote. The process of accessing benefits is confusing, often insulting, and, for many people, impossible to navigate successfully. “Because clients interpret their experiences with welfare bureaucracies as evidence of how government works more generally, beliefs about the welfare agency and client involvement become the basis for broader political orientations,” Soss explains. But Soss’s work was done in the 1990s. How many of these same alienated clients simply skip voting today, now that they have a candidate who promises to destroy the administrative state that showed them such disregard?

In recent years, the US government at all levels has made significant but incomplete progress catching up to the expectations and ways of working of the Internet era. Nowhere near done with its first digital transformation, though, it has now been jolted rudely into the age of AI. Government’s reaction so far has looked a lot like its reaction to past paradigm shifts: words, hundreds of thousands of them, describing emerging (and hotly contested) dos and don’ts to guide this transition.

AI is risky, both in ways we understand and because there is so little we understand, given its emergent properties. Because managing risk is effectively the raison d’être of many government institutions, they’ve flocked to weigh in like ants to a picnic. The result has been the usual mix of controls and mandates designed primarily to keep bad things from happening. This well-intentioned guidance also mentions the opportunities of AI in the public sector, but rarely talks about an equally powerful opposing risk: the risk of further widening the gap between public sector and private sector capacity.

The Internet era coincided with a decline in state capacity. It’s debatable how much technological change contributed to this decline, but what’s not debatable is that the AI era must see a reversal of this trend if we are to confront our polycrisis, and AI itself is going to have to be part of that reversal, whether we like it or not. Unfortunately, so far, we are meeting this transition with the same tools we employed the last time: mandates and constraints. But we deeply misunderstand how mandates and constraints operate in a bureaucracy already subject to conflicting constraints and managed for conformity to process instead of outcomes.

This essay seeks not only to predict more realistically how current safeguards on government use of AI will play out in practice, but also to show what we miss when we focus exclusively on how much or how little we should constrain AI’s use. I make the case for a far greater focus on how much or how little capacity and competency we have to deploy AI technologies thoughtfully, and for flexing a different set of governance muscles that enable and build capacity within government generally rather than mandate and constrain specific public sector actors or actions.

The Cascade of Rigidity

To get AI in the public sector right, we need to understand how mandates and constraints actually operate in the real world of bureaucracy. When we are weighing risks and benefits, we worry that we have not been sufficiently detailed about what the state must or must not do in the use of any technology. It is the wrong worry. Safeguards that sound eminently reasonable on paper act very differently than expected when operationalized within risk-averse bureaucracies. The result can be not at all what the framers of those safeguards intended. 

Take FISMA, the Federal Information Security Management Act. FISMA provides a menu of some three hundred distinct “controls” that government tech teams can choose from to secure software and data from hackers. Competent developers should, in theory, create an informed, thoughtful security plan that selects the controls most relevant to the circumstances and focus their efforts on implementing and testing those choices. But technologists in government will tell you that’s not an option for them. Routinely, they are forced to implement every one of the three hundred controls before their software is allowed to ship. Even if you have a skilled security team, they’ll have to march through a massive checklist, much of it meaningless for their project, instead of focusing on the specific controls that will actually secure their system. Implementing all three hundred controls and verifying that they are implemented will add months, sometimes even years, to the development schedule, making compliance extremely costly. It will also detract from the time spent on both features and testing—and testing, of course, is critical to the real-world security of software. FISMA, as written, is a fine law. But as practiced, it doesn’t just make the software worse; it actually impairs the security of our systems.

How does this happen? It’s perhaps more helpful to understand what would have to happen to ensure the better outcome: that the tech team uses their discretion to employ the appropriate controls and puts their efforts towards testing and resilience. The launch of a piece of software, even one as simple as a static website, requires sign-off from people at multiple layers of the hierarchy. Other than the team on the ground, generally no one in the layers above has the technical expertise or domain knowledge to know which controls are most appropriate, but each will need to put their name on paper (yes, often paper) stating that they approve the launch. The tech team might be able to convince their direct supervisors that their plan is sound, but to go forward with it, each successive layer above would also have to essentially take the word of a team whose work is foreign to them and who is often very distant from them not only organizationally but physically, culturally, and even temporally (it can take a long time for these approvals to even reach the higher-ups). In the meantime, there are compliance officers whose job is defined around “better safe than sorry,” and who are strongly disincentivized from approving anything other than all three hundred controls in order to protect the agency. If something goes wrong, they will remind the others in the chain, the fact that the team thought controls 210–244 weren’t relevant won’t matter; all that will matter is that it appeared the agency skimped on controls. These processes function as a vetocracy, in which it takes every thumbs-up to accept the risk, and only one thumbs-down to stick with the less risky option.

Anyone in technology knows that there is no such thing as 100% secure, only better and worse practices, some of which change over time as threat vectors change, and that security is a function of ongoing practice and resilience, not a thing you can certify and move on from. And yet the effect of well-informed and well-meaning security legislation is to force the bureaucracy into rigid, static, maximalist practices that degrade security even as they force agencies to pretend that their systems are 100% secure. Well-meaning and well-written legislation originates at the top of a very tall hierarchy, and as it descends, the flexibility that its authors intended degrades. Laws often have an effect entirely different from what lawmakers intended because of this cascade of rigidity.

Culture eats policy

This is not some edge case of malfunctioning bureaucracy explainable by government’s lack of technology expertise. It is the predominant dynamic. You see it in the civil service rules governing hiring, to take another flagrant example that has little to do with technology (but deeply affects our technical capabilities and capacities). Today’s civil service dates back to reforms in the mid- to late-19th century, when positions in government were filled through patronage. Starting with passage of the Pendleton Act in 1883, reformers have sought to ensure that federal employees are no longer hired because of who they know but on the basis of their skills and job performance. But the practices that have evolved over the years to implement these laws now have a very different effect. 

If you were in a federal agency today, and trying to hire someone with tech expertise through a competitive process that was open to the public, your experience might resemble what the team at the Defense Digital Service faced when trying to hire Jack Cable. In 2017, DDS held a contest to see who could find the most security flaws in its software. Jack won the contest, beating out 600 other security researchers. The DDS team was delighted to find out that Jack was actually open to the idea of working for the Pentagon, and they encouraged him to apply for a job. But the resume Jack submitted described his experience developing “mobile applications in IonicJS, mobile applications using Angular, and APIs using Node.js, MongoDB, npm, Express, gulp, and Babel.” The job description called for “experience that demonstrated accomplishment of computer-project assignments that required a wide range of knowledge of computer requirements and techniques pertinent to the position to be filled,” and the HR staffer did not see a connection between what looked like a grab bag of gobbledygook on Jack’s resume and the job requirements. Winning the contest did not even merit giving him the benefit of the doubt, and he was cut in the first downselect. When the team intervened and asked that HR speak with Jack, he was advised to get a job selling computers at Best Buy for a few years and come back, because then he might be qualified for the job he was applying for.

Why does this happen? In a risk-averse culture, rules intended merely to guide processes are interpreted very rigidly. In accordance with law, HR rules try to reduce bias in hiring. The safest way to do that, the logic goes, is to allow only HR professionals, who are specially trained in complex, obscure rules and processes, to review resumes and assess candidates. Involving domain experts in any part of the process except the final step is considered risky, as they may introduce bias or fail to follow one of many safeguarding procedures. Over the years, it has become so uncommon to allow nurses to assess nurses, or data scientists to assess data scientists, for example, that many in government believe the practice to be illegal. But merit system principles do not limit candidate reviews to HR professionals. In fact, the US Digital Service and the Office of Personnel Management have successfully piloted a new hiring process that employs subject matter experts to assess candidates, resulting in higher quality hires and more satisfied hiring managers. 

The point of the legacy process is not to select the best candidate, but to be able to defend the ultimate selection from criticism through strict adherence to a process in which no judgment can be questioned, because no judgment was used. What this means is that in implementing laws written to reduce nepotism and patronage, we have created a system in which only those who know someone on the inside to guide them have a hope of getting past the first screen. (Jack did eventually get hired, but only after repeated interventions by increasingly high-level officials.) At every step down the ladder, from the high-level principles of law to public servants’ day-to-day practices, the process was drained of judgment and common sense, which were replaced with a bizarre literalism in the service of defensibility. This cascade of rigidity perverts the intentions of lawmakers.

AI Meets the Cascade

The federal government announced an AI talent surge at the time of the Executive Order on AI. In a tacit recognition of the problems with its standard hiring procedures, OPM granted agencies “direct hire authority” for these jobs, which removes some of the strictures on HR managers and should allow for greater discretion in selection on the part of hiring managers. My point in describing the effects of the cascade of rigidity on hiring is not to raise a concern about hiring unqualified people into AI roles, though that could happen for a variety of other reasons. I describe these dynamics because the cascade of rigidity is what I fear will happen with the safeguards that governments are now putting in place on the use of AI.

In the US, the primary source of these safeguards is the executive order on AI that President Biden signed in October 2023, but states and many other governmental entities are issuing similar documents. (To be clear, these documents also usually discuss government regulation of private sector use of AI, but my concern here is exclusively the use of AI within government operations.) When I read these documents, my first response is that the safeguards they’ve put in place sound eminently reasonable. But my second response is to imagine how they are going to be operationalized as lower-level government offices issue additional guidance, which will be a bit more specific than the EO, and each department, agency, sub-agency, bureau, and division in turn translates that guidance into its own memos, again a bit more specifically and ever more prescriptively. The cascade of rigidity is beginning.

This rigidity sometimes manifests as extremely narrow, literal interpretations of guidance, turning it into strict but off-base rules; at other times it expresses itself through overly broad interpretations of the same. As guidance about AI began to roll out last year, for example, gatekeepers began to get the message that AI carried risks and needed to be constrained. In one instance, a policymaker responsible for a healthcare data analysis program told researchers that certain programs that submitted data to their agency could not use “algorithms.” Either the policymaker was unaware that algorithms are core to basic mathematical analysis, necessary to the basic work of health IT, and not exclusive to AI, or the fear of not being able to distinguish between AI and non-AI algorithms led them to take the “better safe than sorry” route and attempt to ban the use of algorithms broadly. This error in judgment was ultimately resolved, but now imagine that kind of disruption to the operations of the program occurring over and over, in various forms, across government.

Now imagine this guidance applying to existing uses of AI in government where the risks are well understood and minimal and the benefits clearly established, or even where the technology is so firmly embedded and non-controversial that de-authorizing its use would be devastating. There are many such uses, as Dan Ho and Nick Bagley point out, including the Postal Service’s long-standing use of handwriting recognition.3 The reason my barely legible scrawl on an envelope arrives without delay at its destination is that the Postal Service has been using a form of AI to read addresses on envelopes since the 1960s. Is that use now subject to the rules imposed by the recent executive order from the White House?4

The set of procedures required by the executive order includes public consultation with outside groups, studies to demonstrate the equity impacts of the application of any AI-enabled technology, the creation of a mechanism to appeal the AI’s decision, and a requirement to allow individuals to opt out of any use of AI. But how – and why – would we allow members of the public to opt out of having their handwritten addresses on envelopes read by machines, or to appeal the decisions of those machines? The new guidance from the White House seems to require it, but pausing this use of AI until all the executive order’s provisions have been met would cripple the Postal Service’s ability to function.

In the abstract, these procedures are all thoughtful, reasonable, and desirable safeguards against bias and harm. In practice, they are likely to function not as safeguards but as barricades. Public consultation, for example, could in theory be conducted thoughtfully and expeditiously. But there are established models for public consultation in the federal government, and the executive order alludes to notice-and-comment rulemaking and public hearings. According to a report from the Government Accountability Office, it takes an average of four years to conduct rulemaking through a notice-and-comment process.5 There’s little reason to believe it will take less time when used to consult the public about uses of AI. It is more reasonable to assume it will take longer, both because much of civil society objects to AI’s use in government contexts and because AI’s novelty will mean extra-thorough review of the public consultation process itself by internal actors taking a “better safe than sorry” approach. In the context of AI, four (or more) years (just for one step of an approval process) is not a delay; it is a death sentence. The technology in question will be outdated within four months, let alone four years.

Even if the review could take place in a matter of weeks or months, rather than years, rigid interpretations of guidance may make the point moot. For example, the draft guidance to agencies issued by OMB regarding implementation of the AI executive order implied that “agencies must consider not deploying the AI” upon receipt of “negative feedback” from members of the public. Again, this sounds reasonable, but operationalized in a literalist, maximally risk-averse environment, it is very easy (for those who’ve lived the absurdities of this environment, at least) to imagine any negative feedback at all effectively stopping a deployment, no matter how much time and energy had gone into understanding and mitigating potential harms (not to mention documenting these mitigations and jumping through process hoops to establish the other required safeguards). A later revision of the OMB memo clarifies that negative feedback provided in consultation does not automatically require the termination of the AI system, but in a highly risk-averse culture, the threat of a vetocracy remains real. Just ask the team at Raytheon crippled by the requirement for an enterprise service bus (ESB).

The constraints we are imposing today also interact poorly with constraints that have accumulated over many decades. Take the requirement for equity studies. These studies can also take years, though exact averages are hard to calculate because so many studies are still in progress and face serious challenges to their completion. In response to a different Biden executive order from 2021, on Advancing Racial Equity6, federal agencies were required to file equity action plans. One study of these plans, conducted two years later, concluded that out of 25 agencies reviewed, 21 “noted serious data challenges to conduct the required equity assessment.” These challenges are largely the result of other guardrails put in place to prevent violations of data privacy, in the form of laws like the Privacy Act of 1974, and to reduce burden on the public, in the form of the Paperwork Reduction Act.7 Those guardrails interact with constraints on building internal capacity, like OMB Circular A-76, which required agencies to outsource wherever possible, and with constraints on hiring, as previously discussed, to create a low-capacity environment as it relates to digital technology. Low capacity and strict adjacent guardrails in turn create an environment in which what sound like reasonable constraints necessary for safe use of AI could in effect stop its use.

Beyond mandates and constraints

We need constraints on the use of AI. But we should understand the direction in which their impacts will drift: not towards cavalier attitudes but towards overly risk-averse ones; not towards irresponsible use, but towards potentially irresponsible lack of use. In guidance, it would be helpful to explicitly promote the use of judgment and discretion on the part of civil servants, and to acknowledge that zero risk effectively means stasis, and stasis has its own risks that must also be considered. But tweaking guidance just fiddles with the dials, tuning between stricter and looser controls along one narrow dimension. And loosening the controls too far is neither practical nor desirable.

There is another dimension we pay far too little attention to. Fine-tuning between strict and loose controls is like obsessing over the safety features of cars while entirely neglecting driver education and licensing. The guardrails are in place but the drivers don’t know how they work – nor how to actually drive. Responsible, effective use of AI will be a function of government’s competencies and capacities far more than of its rules.

Our digital competence and capacity deficit exists not because government technologists are bad, but because they are understaffed and overburdened. People who understand both the systems in question and the possibilities of technology are far outnumbered by lawyers, compliance officers, and oversight bodies whose default is to stop rather than to go. They must spend far more time reporting (often to an absurd level of detail) on what they will do, what they are doing, and what they have just done, and seeking approvals from sometimes dozens of stakeholders, than actually building or deploying technology. Improving government’s capacity starts with correcting these glaring imbalances between watching and doing (to borrow from Mark Schwartz), between stop energy and go energy. 

Mandates are meant to be the gas to constraints’ brakes. But telling an agency to do something doesn’t help it do it. In theory it could help the agency prioritize, but mandates aren’t priorities. You can’t have unlimited priorities, by definition. You can, and in government do, have unlimited mandates. At any given point in time, the priority may seem to be the mandates that the current Congress or party in charge cares about. But the reality is that agencies must comply with all the mandates that have piled up over the decades, all the time. The soft mandates encouraging the use of AI in the federal executive order and others like it are no more likely to result in responsible and effective use than the constraints they detail. Again, we must look to building competence and capacity.

AI competence is operational competence

Competency and capacity have not been entirely ignored. The federal government, for instance, announced an AI hiring surge along with the executive order. It does not appear to be living up to expectations. Conventional wisdom blames pay, and it’s true that the skill of building AI models garners sky-high salaries in the private sector right now. But government’s need is not primarily building models so much as using existing ones, something a far greater number of people can do. In fact, much of the novel use of AI today is done by people with relatively low technical skills. The skill these people have is in understanding a domain or problem, being able to judge where AI could uniquely add value, and availing themselves of the plethora of options now commercially or freely available to try out solutions. These successes are driven less by expertise in the inner workings of AI models than by curiosity and the desire to solve a real problem or create a real benefit.

One problem with the hiring surge is that agencies don’t know what they would use AI for. Government has long outsourced much of its operations, and often the people who run agencies don’t know how their own systems work. When I was working on the pandemic unemployment insurance crisis at the California Employment Development Department during the summer of 2020, I saw firsthand how little grasp the department had of its own operations. At the end of our engagement, I lamented to a colleague that no more than a handful of the department’s 5,000 people understood how its IT systems worked. My colleague corrected me. No, a handful of people knew how individual pieces of the systems worked, but there was no one who understood how it all worked together. There were a great many people who understood the request for proposal for a new system they were trying to procure – the department had been working on that procurement for eleven years when we arrived. This is because instead of developing digital competency, government has developed extensive processes and procedures for purchasing digital work. What that means is that when asked what they might use AI for, most government officials simply ask their vendors. The conversation then becomes about a new contract, not a new hire. It’s entirely possible that procurement expertise is more specialized and “technical” than AI expertise in a certain sense; in part because of this, we have a lot of the former and little of the latter.

The executive order mandated that each federal agency have a Chief AI Officer, so many of those positions have indeed been filled (or, in some cases, the Chief Information Officer has taken on that role). But at the level of the operations of a particular program or service, the lack of internal competence breeds lack of demand for internal competence. And the lack of AI demand at the program or service level means that much of the work of those Chief AI Officers is to be another gate through which procurements must pass and to craft additional guidance for the use of AI in their particular agencies – in other words, to be the next step in the cascade of rigidity.

There is a term often used in the context of the AI hiring surge that’s deceptively helpful: AI-enabling. Strictly speaking, AI-enabling positions are those that build the foundation for the use of AI. Anything having to do with the quality of or access to data, for instance, would fit, because of course AI is nothing without data to ingest, and access to relevant and reliable data is a huge problem across government at all levels. But positions like product manager should also be given priority under the banner of the surge. Any role that increases government’s ability to understand its own operations and spot where AI can responsibly improve outcomes will better position government to get on the innovation curve it’s been missing over the last two decades.

How to build capacity

The middling results of the AI hiring surge should tell us that we need to dig deeper to understand and address what holds government back from harnessing the power of AI. Mandates and controls can only get us so far. But legislative and executive branch leaders can learn to operate in an enablement and capacity-building framework.

I was recently asked by a Congressional office what they might do to force a particular federal agency to perform better. “I think you’re asking me what mandates and constraints you might impose on them,” I replied. “But this agency has been subject to a never-ending stream of these orders and rules for decades, and their performance isn’t improving.” The staffers agreed. Rather than assuming they’d been imposing the wrong ones, and that someone with greater digital expertise might help find the right ones, I asked them to consider asking entirely different questions. “What is keeping this agency from delivering? What constraints might we remove?” It turned out this agency had almost no flexibility in how they used their funds. What they spent on which projects was determined far in advance by processes poorly suited to understanding actual needs. A working capital fund might give them a start on that flexibility. To my delight, the Congressional staffers agreed. This move won’t solve all the agency’s problems, but it’s an important step towards enablement. 

Mandates and constraints trap us in a downward cycle: by assuming incompetence on the part of the bureaucracy, they ironically encourage incompetence, as the people responsible for delivery are held accountable for fidelity to process rather than for outcomes and are progressively stripped of the right to use their own discretion. Overuse of these controls also degrades trust between those imposing them and those being controlled. There is shockingly low trust and poor communication between the executive agencies and Congress, for example.

Enablement begins to reverse those negative spirals. In this framework, the focus shifts from greater specificity around process to accountability for outcomes. Instead of assuming incompetence, this framework revolves around asking the agency being acted on what is needed to gain the appropriate competence. Instead of asking “what is wrong with these people?” an enablement framework assumes something is wrong with the system, and that the people who understand the system are often the key to fixing it. Instead of adding control after control (in addition to those that were added decades ago, and not well understood by those seeking to add new ones), leaders edit or reduce the controls so that the agency is no longer trapped in a halting, handicapped “Mother may I?” mode of operation, in which permission is needed from Congress or another oversight body before any action is taken. Instead of constantly eroding trust, enablement builds it. 

Something as small as granting a working capital fund won’t necessarily help the agency embrace AI, but many more moves like it could begin to build the foundation. It’s not just increased flexibility for agencies. The muscles we build when we enable insights and information to flow up the hierarchy, not just down, are the muscles we’ve needed (and had too little of) in the transition to the Internet era, and the ones we really can’t do without as we make this next transition. Internet-era software has called for iterative cycles of build-measure-learn, so different from the cascade natural to hierarchies. AI, with its relentless dynamism, requires them.

Choosing competence

The past year has seen both gains and losses in the public’s faith that government can deliver on its promises. On the plus side, the IRS launched a pilot tax-filing tool for low-income people. When surveyed, 90% of respondents who’d used the tool ranked their experience as Excellent or Above Average, citing ease of use and trustworthiness as reasons for their satisfaction. Eighty-six percent of them said that their experience with Direct File increased their trust in the IRS.8 On the minus side, the Department of Education badly botched the roll-out of the new form for applying for federal student aid and lost track of 70,000 emails from undocumented parents of student applicants that contained the proof of income needed to qualify their children.9 The emails were found, but so late in the process that some schools could not issue their financial aid packages to these students in time to enroll. In other words, an agency known for taking money from the public inspired trust, while another known for giving money away broke trust. How does that happen?

The difference between these two outcomes is clear: the Department of Education (technically, the office of Federal Student Aid, or FSA, which is part of the department) was visibly focused on policies around student loans and the politics of changing them, whereas the IRS, under the leadership of newly confirmed Commissioner Danny Werfel, focused on building the capacity to deliver. FSA relied on traditional, rigid contracting mechanisms (despite having the flexibility to do otherwise, suggesting that the cascade of rigidity was at play), while the IRS assembled an internal development team, drawn from within the agency and from across government, capable of fast build-measure-learn cycles. People, not rules, build state capacity.

Outsiders to government technology blame politics for an environment hostile to building good software. But the failure to build good software has also deeply influenced our politics. When the child of immigrants is told that good grades and a little bit of paperwork can make them the first in their family to go to college, and that promise turns out to be false, that’s one more person likely to seek hope in strongman rule. Conversely, when even the interaction of collecting taxes demonstrates respect for the taxpayer through clarity and ease, our democracy may live to see another day.

To my knowledge, neither of these projects attempted to use AI, but AI is not the goal. The ability of our government to deliver on its promises is the goal. Government will ultimately need to employ AI because the magnitude and complexity of the challenges we face continue to grow, and because the public’s expectations continue to grow. Tom Loosemore, one of the founders of the UK’s Government Digital Service, defines digital as “applying the culture, processes, business models and technologies of the Internet era to respond to people’s raised expectations.” The culture, processes, business models, and technologies of the AI era will raise expectations even further. To meet those expectations with only an ever more persnickety set of rules and orders is to allow unintended consequences to dictate our future. Luckily, we have a choice.

Footnotes

  1. The author wishes to thank Dan Ho and Nick Bagley and Cass Madison. ↩︎
  2. Tooze, Adam. “Welcome to the world of the polycrisis,” The Financial Times, October 28, 2022. Available online at https://www.ft.com/content/498398e7-11b1-494b-9cd3-6d669dc3de33. ↩︎
  3. Ho, Daniel E., and Nicholas Bagley. “Runaway bureaucracy could make common uses of AI worse, even mail delivery,” The Hill, January 16, 2024. Available online at https://thehill.com/opinion/technology/4405286-runaway-bureaucracy-could-make-common-uses-of-ai-worse-even-mail-delivery/ ↩︎
  4.  “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” The White House. Available online at https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ ↩︎
  5. https://dho.stanford.edu/wp-content/uploads/OMB_Letter.pdf ↩︎
  6. Executive Order 13985 of January 20, 2021, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. Available online at https://www.federalregister.gov/documents/2021/01/25/2021-01753/advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government ↩︎
  7. King, Jennifer, Daniel Ho, Arushi Gupta, Victor Wu, and Helen Webley-Brown. “The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government,” Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, 492–505. Available online at https://dl.acm.org/doi/10.1145/3593013.3594015 ↩︎
  8. https://www.irs.gov/about-irs/strategic-plan/irs-direct-file-pilot-news ↩︎
  9. https://www.nytimes.com/2024/03/13/us/politics/fafsa-college-admissions.html ↩︎