Executive summary
The point of any hiring process is to select the most qualified candidate. Organizations use various methods — exams, interviews, case studies, and presentations — to predict job performance. When done well, these assessments clearly identify top talent and ensure informed hiring decisions. When done poorly, they slow hiring and provide little insight. Simply put, strong assessments drive good hires; weak ones lead to bad hires.1
Today, the vast majority of federal hiring actions rely on low-quality applicant self-assessments that do little to differentiate among candidates based on observable merit. This leads to frustration for hiring managers, cycles of repetition for HR professionals, and ultimately, weaker delivery of government programs as new hires don’t arrive fast enough to make a difference.
The recently passed bipartisan Chance to Compete Act (CCA) attempts to address this challenge by directing the federal government to prioritize the use of objective assessments in federal hiring to the maximum extent possible. While this is a clear step in the right direction, the CCA left many structural barriers unaddressed: new assessments, developed in the historical way by industrial and organizational psychologists, are extremely resource-intensive to produce. Generating enough of them to cover broad swaths of the federal workforce as envisioned by CCA is likely to be prohibitively expensive unless we change our approach as a government.
For the law to have its intended effect, the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) will need to lay out a strategy that enables successful implementation. This strategy should focus on assessments that:
- maintain sufficiently low unit cost to enable agencies to purchase or develop them given resource constraints;
- can efficiently handle large numbers of applicants, particularly for entry-level roles, with low-enough marginal cost to offer them to nearly all applicants that possess minimum qualifications for a given job;
- accurately differentiate applicants of varying suitability for a job and reliably predict which candidates will be high-performing in roles.
This paper proposes a strategy to tackle these challenges and drive a significant paradigm shift in how federal agencies assess candidates, in ways that are consistent with long-standing Merit System Principles but reflect the needs of a modern workforce. The strategy focuses on three pillars:
- Set Goals and Drive Accountability at Agencies: Move beyond mere encouragement, using carrots and sticks to lead agencies to substantially reduce their reliance on self-assessments and improve their strategic workforce-management functions.
- Simplify the Process for Developing Assessments: De-proceduralize building and validating assessments to significantly reduce the amount of time and money it takes to build them, and expand the role of hiring managers in the entire process.
- Build and Share Capacity Across the Entire Enterprise: Transform the role of OPM in assessments from that of compliance and oversight monitor to an engine of support for agencies that offers a robust marketplace of government and commercial-off-the-shelf solutions that agencies can share.
Additionally, OPM and OMB should consider a set of broader strategies for reforming other parts of the federal personnel system that are upstream of this transition and would make success more attainable, such as the professionalization of the HR workforce.
These actions, if taken together and with an eye towards intentional reform, can help make real, meaningful assessments a normal part of federal hiring. Failure to stick the landing on any of these items will endanger the entire bipartisan project of skills-based hiring and force hiring managers to continue to guess at the relative merit of applicants for roles that are critical to the functioning of the government. Without bold action, we will be stuck in a status quo that serves no one.
How we got here
Since the creation of the nonpartisan, merit-based civil service in the 1880s, generations of American policymakers have zeroed in on the objective assessment of candidates (e.g., civil service exams) as the fairest way to ensure that the government hires the most qualified person for a job. From rudimentary tests of literacy and penmanship in the late 19th century Customs Service2 to a computerized exam and group exercises for today’s aspiring Foreign Service officers, exams have served an important role in building a talented federal workforce and ensuring that federal employees are hired based on merit and not political or personal preference. Offering an assessment to all interested applicants is also critical to ensuring that interested members of the public get a fair shot, even if they don’t get the job in the end.
However, structural barriers and shrinking budgets have led to an erosion in the use of objective assessments—today, 90 percent of federal jobs rely instead on a candidate’s self-assessment3 of their skills and abilities rather than some type of more technical exam. This has left hiring managers frustrated that they lack the ability to objectively differentiate among candidates of varying suitability for a given job. We didn’t get here on purpose. Rather, decades of disinterest from policymakers and appropriators pushed the federal government to make do.
Quality objective assessments got too expensive
Over the course of the 20th century, as the federal government got more complex and highly specialized, it became increasingly and then prohibitively expensive to offer assessments for many jobs.
In the 1950s, when the federal workforce reached its steady-state size of approximately 2.1 million employees,4 over half of federal workers were clerks performing highly routinized, standardized work for which skills and abilities could be assessed relatively easily.5 Over the ensuing 70 years, although the overall size of the federal workforce did not change, the type of work it did shifted towards ever-more-specialized program delivery. Today, there are over 400 unique occupations and many tens or hundreds of thousands of unique roles across the white-collar federal workforce. These roles are harder to test for: they require highly specialized skills, they are less standard across organizations, and they require not just cognitive skills but also particular experience. At the same time, court cases and related guidance from the Equal Employment Opportunity Commission (EEOC) have reinforced the need for any objective assessment to be backed by clear evidence and research in order to be legally valid under equal employment statutes.6
Historically, when organizations needed to develop job assessments, they turned to industrial and organizational psychologists (I/O psychologists) to conduct a job analysis to identify the core competencies required for a given role, develop assessment methods to test or predict those competencies, and then build the assessments. This process has been largely unchanged since the federal government began testing candidates en masse during World Wars I and II amid the need to hire and deploy many thousands of Americans quickly. It is highly manual work by experts and, as a result, it can cost agencies upwards of half a million dollars and many months to produce a single new assessment. Development is also rate-limited by the scarcity of I/O psychologists available to do the work either inside the government or on contract. Further, the federal government receives tens of millions of applications for only a few hundred thousand jobs each year, so offering assessments that require human graders quickly becomes infeasible.
This increasing complexity and scale of federal job searches, combined with the highly manual process for developing assessments, has made it virtually impossible for agencies to offer high-quality candidate evaluations for most jobs. As agencies continue to experience downward pressure on budgets for management functions and have had to hold headcounts flat over the past several decades, most have chosen to economize. As a result, the vast majority of hiring actions have come to rely on applicant self-assessments that are cheap and scalable but of low quality.
What’s actually required (or not) by law
The federal government’s current approach to evaluating candidate qualifications is governed by a framework of laws and regulations dating back to the Civil Service Reform Act (CSRA) of 1978. The law abolished the Civil Service Commission and pushed agencies to make hiring decisions based on job-related criteria and the results of valid assessment tools. The law established OPM and the Merit Systems Protection Board (MSPB).7 While OPM’s role has evolved over time, it primarily serves as a centralized human resources hub for the federal government, with duties that span hiring, personnel management, and employee retirement. The MSPB was designed to safeguard federal employees and hear appeals on issues like wrongful termination. CSRA also handed the EEOC authority over the application of equal employment laws to federal agencies. In 1978 the EEOC, in collaboration with OPM, the Department of Labor, and the Department of Justice, published the Uniform Guidelines on Employee Selection Procedures.8 This document provides guidance to the public and private sectors on how to maintain compliance with federal employment law — specifically Title VII of the Civil Rights Act of 1964.9 Federally, these guidelines are operationalized through Title 5 of the Code of Federal Regulations, Parts 300–399. These rules detail how federal agencies should implement hiring procedures to ensure adherence to merit system principles and non-discrimination policies. Other OPM publications reinforce these rules, including the Delegated Examining Operations Handbook (DEOH)10 and Federal Assessment Policy Guides.11 Collectively, these guidelines provide the foundation for how agencies develop and implement candidate assessments.
The intention is to ensure that assessments are both related to the job in question and valid predictors of performance — and that they don’t have an adverse impact on protected groups.12 The EEOC guidelines outline three primary methods for validating that assessments meet these goals:
- Demonstrate a statistical relationship between assessment scores and job performance metrics (criterion validity);
- Ensure that the assessment directly measures knowledge, skills, or abilities essential to the job (content validity);
- Verify that the assessment measures an underlying trait critical to job performance (construct validity).
When it comes to administering assessments, all candidates must be tested under similar conditions, and federal agencies must monitor and address any adverse impact on protected groups. Agencies are also required to document the validation process, including the rationale for using a test, data supporting its validity, and any analysis of adverse impact. These records are maintained to show evidence of compliance and to address legal challenges. The EEOC guidelines leave a lot of room for interpretation in practice. While this gives the federal government room to exercise its best judgment, it has also raised the specter of litigation, which ironically has led to overly stringent interpretations of how to apply the rules so as to shield against legal challenge. The standard process for validating assessments is thus costly and time consuming; anecdotal reports suggest that assessments for new occupations take as long as 18 months and cost as much as $250,000. The resource-intensive nature of this approach, combined with the custom of engaging industrial and organizational psychologists, is a major impediment to timely development. While OPM has not issued guidance requiring the use of I/O psychologists in validating assessments, relying on them has become the standard modus operandi.
These experts are often pulled into expert testimony when an applicant (or set of applicants) has challenged the validity, reliability, or application of an assessment instrument. This can become problematic during litigation because an I/O psychologist might not stand behind an assessment tool that neither they, nor one of their colleagues, had a hand in developing. To mitigate this risk, hiring teams typically engage I/O psychologists in the initial stages of developing roles and candidate evaluation plans. But because the federal government employs only a small number of these experts, engaging them can slow down timelines significantly. In the years since the EEOC guidelines were issued (they have not been meaningfully updated since 1978), OPM has continued to publish guidelines and recommend best practices. While these tools are designed to help agencies build compliant mechanisms for assessing candidate qualifications, it’s hardly surprising that already-strapped hiring managers and human resource specialists opt for the path of least resistance (one that is less labor intensive, risky, or likely to be challenged), even if it means the candidates are less qualified or a posting is less likely to result in a strong hire. In short, there is no incentive to prioritize short-term pain for long-term gain.
The problem with self-assessments
The federal government receives more than 22 million applications per year on USAJOBS13 for about 250,000 to 300,000 jobs,14 with individual postings receiving several hundred or even thousands of applications. With limited time and resources at their disposal, agencies have been forced to adopt self-assessments as their primary tool for screening applicants. These assessments have some benefits: They are virtually free for agencies to offer, they are an automated part of the job application and don’t require scoring, so they scale easily, and they are generally not at risk of challenge in court. However, self-assessments are also extremely poor differentiators of merit because they rely entirely on candidates’ own view of their qualifications rather than the observation of an impartial expert, their demonstrated skill at accomplishing a task, or their objective performance on an exam.
In the self-assessment model, applicants are given a questionnaire and asked to rank their skill level on a set of competencies required for the role. Some flaws in this system are obvious: Individuals seeking a job are incentivized to overstate their qualifications or ‘pad their resume’ in ways that are familiar to hiring managers and LinkedIn users.15 Other flaws are more structural: For example, in what is known as the Dunning-Kruger Effect,16 the least-skilled individuals are more likely to rate themselves high on the competency scale, while the most competent individuals are more likely to underestimate their ability.17 There are also notable gender differences that can impact candidate slates, with men more likely to overestimate their qualifications, while women are more likely to underestimate them.18 As a result, the outputs from these assessments are functionally useless to hiring managers trying to make good decisions in hiring.
Additionally, because of this process, the first step after self-assessment is usually a screen by an HR professional for minimum qualifications — which is what the assessment was supposed to do in the first place. When administered using best practices, assessment tools provide an objective set of criteria that hiring teams can use to evaluate who is — and isn’t — qualified for a role, rather than requiring an HR specialist to scrub through every single application. However, because self-assessments don’t actually screen for observable skill, HR practitioners then have to go through the results and candidate resumes to determine qualification by attempting to match job qualifications with the candidate’s application. This typically occurs without input, guidance, or direction from the hiring manager. While HR professionals have a deep understanding of hiring systems and processes, they do not often have subject matter expertise or exposure to the day-to-day work of the teams they support. This can make it difficult for them to accurately assess who is a viable candidate for a role, particularly when that role requires specialized science, data, or tech expertise.19 This can mean disqualifying strong applicants whose resumes aren’t detailed enough to be cross-walked against job qualifications by someone who doesn’t know the work.
What does this approach to assessing qualifications mean for government in practice? People who know how the “unwritten rules” work understand they won’t make the cut if they don’t copy and paste the right keywords into their resume or rate themselves ‘highly qualified’ or ‘expert’ in all criteria on the self-assessment. So that’s exactly what they do. At the same time, highly skilled individuals who don’t understand these rules are likely to get knocked out at step one.
This result is hardly surprising. But the challenge for government is that it’s not enough to know that an approach is flawed. There must be a time- and cost-efficient alternative that human resource professionals and hiring managers can access and apply effectively. Despite a decade of efforts to develop alternative assessment approaches, buttressed by issue memos and executive orders that encourage different behavior, the big shift away from self-assessments has remained out of reach. In a bureaucratic environment that prioritizes stasis and mitigation of legal risk, behavioral change will not happen on its own.
SME-QA: Attempts to develop alternatives have shown promise but struggled to scale in a resource-efficient way
The Subject Matter Expert Qualification Assessment (SME-QA)20 is an example of a successful innovation that engaged subject matter experts at the beginning of the process of qualifying candidates for specialized tech and data roles.21 It was piloted in 2018 as part of a partnership between the U.S. Digital Service (USDS) and OPM. The first SME-QA pilots took place within the Department of Health and Human Services and the Department of the Interior. They replaced the traditional self-assessments with a structured resume review and phone interview conducted by subject matter experts (SMEs). Once the SMEs completed their assessment, HR specialists reviewed the justifications to ensure the decisions were properly documented, including the specific competencies that a rejected applicant was missing. During this process, the HR specialists deferred to the SMEs’ judgment on qualifications because they had a deep understanding of what a candidate needed to be successful in the role. It was only after the resume review and phone interview were conducted that candidates were deemed minimally qualified and veterans’ preference and category rating rules applied.22 Critical to the success of the SME-led review is a two-day job analysis workshop, which is held prior to posting the role. During this time, SMEs work directly with HR specialists and an I/O psychologist to identify the skills and competencies needed for the role. This investment of time allows teams not only to align on candidate profile and expectations, but to produce a valid assessment instrument.
This process has been endorsed by OPM, and agencies that use it can consider their assessment validated and in compliance with EEOC rules. While SME-QA requires significant upfront investment from SMEs, the goal is to increase the quality of the candidate pool and save hiring managers time by (1) providing them with a smaller list of more highly qualified applicants, and (2) increasing the likelihood they will fill the role the first time it is posted. Both of these assumptions were borne out during the 2019 pilot, and the positive results have continued. The pilot found that the time hiring managers took to make a selection after receiving a certificate attesting to a candidate’s basic qualification for the job was reduced by more than half.23 Following these successful pilots, SME-QA has been used many times to qualify applicants for jobs in other agencies, including for permanent competitive service roles such as GS-12 Grants Management Specialists and GS-13 Foreign Affairs Officers in the Department of State.24 OPM has also leveraged similar SME-driven evaluation approaches during government-wide pooled hiring actions, in which agencies looking for similar skills share an applicant pool. The application of SME-QA in pooled hiring has become a key way to expand the use of skills-based hiring approaches.25 But despite SME-QA’s promise, it has failed to scale widely, primarily because of the resource-intensive nature of executing it well: it requires many experts to divert large amounts of time from delivering programs to manually reviewing resumes and participating in interviews. For some jobs and organizations this tradeoff is one they are willing to make, but for many roles with extremely high applicant volume it is not feasible to use this type of assessment with frequency.
Many in government see the value in leveraging it for pooled hiring actions, but hiring managers have been less likely to see it as a valuable use of time for one or two hires in a given job classification. As OPM encourages more agencies to leverage SME-QA-like processes, additional pressure will be placed on the limited pool of existing SMEs for roles requiring specialized expertise. This type of assessment should always have a role to play for certain types of jobs but can’t scale to meet the broader challenges across government.
These challenges are not insurmountable, but they will require a thoughtful, measured, and sustained implementation focus inside the government to overcome.
Roadmap for successful implementation
The Chance to Compete Act, signed into law in December 2024, is the culmination of more than a decade of bipartisan efforts to modernize federal hiring and move away from outdated assessment practices. The legislation builds on previous hiring reforms, including the Competitive Service Act of 2015 and Executive Order 13932 (2020), both of which encouraged agencies to prioritize skills-based hiring. While the law represents a critical step toward modernizing, it is deeply vulnerable to implementation risk and is insufficient to get federal hiring where it needs to be.
Chance to Compete compels agencies to move to skills-based assessments, but it does not do anything to de-proceduralize or speed up their development or deployment. The same resource tensions that drove the federal government towards self-assessments will persist unless implementation takes them seriously.
In order to drive meaningful change in federal hiring, OPM and OMB should lay out and quickly execute a strategy that is focused on several core principles:
Set Goals and Drive Accountability at Agencies
- Direct agencies to prioritize resources for implementation.
- Accelerate the transition to skills-based assessments for high-priority roles.
- Mandate adaptive workforce planning and use of pooled hiring actions that leverage skills-based assessments.
Simplify the Process for Developing Assessments
- Streamline the process for developing new assessments.
- Involve hiring managers and teams in screening candidates – and in the hiring process more broadly.
- Build a cross-agency team to issue clarifying guidance to agencies on the implementation of EEOC’s Uniform Guidelines.
Build and Share Capacity Across the Entire Enterprise
- Transform OPM’s role in assessments from compliance and oversight monitor to an engine of support, innovation, and enablement for agencies.
- Invest in the usability and scope of OPM’s centralized set of hiring platforms.
- Rapidly expand the use of commercial off-the-shelf assessments and technologically advanced assessment tools.
Set Goals and Drive Accountability at Agencies
(1) Direct agencies to prioritize resources for implementation
OMB and OPM should be muscular in their approach to resourcing this transformation to ensure that agencies adequately prioritize time and money. Moving from low-cost questionnaires to high-quality objective assessments will require agencies to invest time and money in developing or acquiring new offerings. This may require agencies to reprioritize resources towards assessments, but it should result in less rework by HR professionals and hiring managers, and therefore more resource efficiency in the long run, through speedier hiring processes and fewer postings that close without a selection for lack of qualified applicants. Federal human capital functions have been starved of this type of capital investment for years, and true transformation will require at least a short-term infusion of resources to adopt more modern ways of working. Agencies may not be interested in or equipped to make these tradeoffs on their own, and OMB should be vocal in helping ensure that they do.
Additionally, OPM needs to dedicate resources and leadership attention to managing this strategic implementation and driving accountability for behavior change at agencies. The nonpartisan Congressional Budget Office estimated that OPM would have to spend $81 million over five years just to manage implementation of CCA,26 but the law itself did not include any specific appropriation to support implementation. The CBO estimate is likely overly generous, but without investment in OPM’s own capacity to serve as a center of excellence for assessment strategy, it’s hard to see the federal government being successful in implementation.
(2) Accelerate the transition to skills-based assessments for high-priority roles.
OPM and OMB should be bold, setting a goal to sunset the use of self-assessments for the top five job series in six months and use this as an opportunity for agencies to re-learn how to do skills-based hiring together. Sunsetting the use of self-assessments for the top five job series would not only move a significant number of federal roles towards an evaluation approach rooted in best practices, but would set a highly visible example that could significantly accelerate the transition for other role classifications and hiring actions. This would build on plans in the Biden administration to move the IT Management series to skills-based hiring by expanding the areas of focus and making the shift a requirement.
To accelerate in ways that touch every agency, OPM and OMB should pick positions that represent large segments of the federal workforce and have roughly the same set of duties from agency to agency, like IT Management (2210), Human Resources (0201), Contracting (1102), and Program Management (0343). These job series are ideal for this initial phase of implementation because:
- They exist across virtually all federal agencies, making them well-suited for shared development of assessments and pooled hiring actions.
- They have well-defined technical and professional competency requirements that can be objectively assessed through structured evaluations.
- Many of these roles have existing, industry-standard assessment models with established best practices that can be adapted for federal use.
(3) Mandate adaptive workforce planning and use of pooled hiring actions that leverage skills-based assessments.
OPM and OMB should mandate that agencies conduct more agency-wide, real-time, and ongoing hiring forecasting and participate in pooled hiring actions that use skills-based assessments. Rather than each program or agency independently posting positions and evaluating candidates, pooled hiring enables agencies to combine their recruitment efforts, share assessment resources, and create a common candidate pool that has been evaluated against standardized qualification criteria. This approach has proven particularly valuable for technical roles where agencies face similar specialized talent needs, such as cybersecurity specialists, data scientists, customer experience specialists, and digital service experts.
However, doing this effectively requires that agencies accurately forecast hiring needs in advance and communicate them internally within their agency and then externally to others – pooled actions work best when agencies can anticipate future needs and have roles available when the list of certified applicants is fresh. This requires building additional capacity for strong forecasting, along with mandates that help unlock economies of scale.
In the past, pooled hiring actions have been both centrally led by OPM and initiated through cross-agency coordination. Recognizing their potential to improve efficiency and to get agencies stronger talent, faster, OPM has made expanding the use of pooled hiring actions a priority goal.27 The 2024 joint memo issued by OPM and OMB on improving the federal hiring experience specifically encourages agencies to use pooled hiring.28 OPM’s goal is to lead or facilitate 28 pooled/shared hiring actions for agencies to fill critical vacancies, resulting in 700 agency hires by September 30, 2025.29 As of June 2024, OPM had held 10 pooled hiring actions, resulting in 69 hires. While this is encouraging progress, it still represents a small share of the 200,000 new federal employees hired each year.30
Simplify the Process for Developing Assessments
(4) Streamline the process for developing new assessments.
OPM should issue guidance to limit the need to engage I/O psychologists in assessment development to complex or high-risk roles – prioritizing limited expert time in the areas of greatest need. There is limited public documentation on the size and scope of I/O psychologists’ day-to-day role and when their expertise is required by either rule or convention. However, most conversations on federal hiring quickly turn to discussion of the bottlenecks that arise from the limited number of I/O psychologists that work at the federal level. According to the Bureau of Labor Statistics, the whole country has just over 1,000 working I/O psychologists, and only a fraction of them work for federal agencies.31 Given these data, it’s no surprise that engaging this specialized expertise can slow down the development of job postings and assessments.
As explained above, the Uniform Guidelines on compliance with federal employment law do not require that I/O psychologists be involved in developing assessments. Critics of the federal government’s heavy reliance on I/O psychologists cite a compliance-driven mindset that overemphasizes legal defensibility, often at the expense of efficiency and practicality. While I/O psychologists’ expertise is invaluable for high-risk or highly complex roles, not every assessment requires such advanced validation. Many assessments, particularly for less specialized roles, can be designed and implemented effectively by HR professionals and subject matter experts following clear, standardized guidance from OPM.
Chance to Compete establishes that “an individual who is determined by an examining agency to be a subject matter expert in the subject and job field of the position may develop, in partnership with human resources employees of the examining agency, a position-specific assessment that is relevant to the position, based on job analysis”32 but doesn’t describe how this requirement intersects with EEOC’s Uniform Guidelines or the need to ensure assessments are legally defensible. OPM and Chief Human Capital Officers should take a more active role in helping HR teams cross-reference common practice with what is actually required by federal law to streamline the process of building and administering assessments.
In 2025, OPM should issue guidance encouraging agencies to leverage SMEs wherever possible and clarifying under what limited set of circumstances an I/O psychologist must be consulted directly. HR and program personnel with measurement expertise should be able to create certain types of assessments based on specific evidence in job analysis. The assessments developed should have clear and documented validity, which could be evaluated by an I/O psychologist if the assessment is challenged. This could free up I/O psychologists to focus on strategic priorities, such as developing scalable validation methods, addressing systemic hiring challenges, and supporting agencies in high-stakes cases.
(5) Update guidance and change the culture to reflect greater involvement of hiring managers and teams in screening candidates – and in the hiring process more broadly.
OPM’s Delegated Examining Operations Handbook33 includes process flows that outline limited roles for the hiring manager (see diagram below). Out of 22 steps, hiring managers are involved in only six, and even in those their discretion is sharply limited; for example, they select from a certificate of qualified applicants produced by a process from which they have been largely divorced. Neither federal statute nor regulation requires this; the limitation was introduced by OPM and appears to be changeable without Congressional or regulatory action. However, it’s not enough to simply change the guidance: OPM has to work with the entire HR profession to shift the culture of hiring so that hiring managers are both involved in and satisfied with the process.
Hiring managers, supported by a skilled HR workforce, are best equipped to judge which candidates have the skill and experience to succeed in a role. OPM should update guidance wherever applicable to direct HR professionals to collaborate closely with hiring managers, leveraging their expertise and perspective much earlier in the assessment process and throughout the many steps of the hiring process. Hiring managers should have a greater voice in the type of assessment used, the way the results are interpreted, and the judgment calls that influence which candidates end up on the certificate. Even with the changes recommended here, the federal hiring process will remain rigid and hostile to manager discretion, which reduces the government’s ability to select the best candidates. Our goal is greater confidence among hiring managers that the candidates they are presented with are truly qualified to do the job.

(6) Build a cross-agency team to issue clarifying guidance to agencies on the implementation of EEOC’s Uniform Guidelines.
The Uniform Guidelines on Employee Selection Procedures, written in 1978, created a framework to ensure fairness and equity in public and private sector hiring. The world has changed dramatically since then. We live in an era where modern technology and science have given us tools to better understand and assess human potential. Yet government agencies remain tethered to vague and outdated rules that fail to account for innovations in skills-based assessments and data-driven evaluations. These limitations not only prevent agencies from tapping the full potential of their talent pool, but also make hiring slower and less precise, particularly for roles requiring specialized expertise. The lack of specificity in the rules themselves also exacerbates the existing culture of risk aversion, with agencies often taking the most restrictive position on how the rules should be applied for fear of litigation.
Today, there is an opportunity to update EEOC guidelines to bring them into alignment with best practices, clarify key areas of confusion, and emphasize flexibility, practicality, and accountability in their application. The new framework must explicitly encourage organizations to incorporate skills-based assessment practices, such as job simulations, structured interviews, and competency-based evaluations. These approaches are already well-established in the private sector and have consistently proven to reduce bias and improve hiring outcomes. The updated guidelines should also account for emerging technologies, setting clear parameters for how AI and other digital tools can be leveraged in hiring actions.
Amending EEOC guidelines requires the Chair of the Commission to bring together several federal agencies, including OPM, OMB, EEOC, MSPB, the Government Accountability Office, and the General Services Administration. Proposed rule changes would need to go through a public notification and comment process to gather input from stakeholders, including federal agencies, hiring managers, civil rights groups, and assessment experts.
Build and Share Capacity Across the Entire Enterprise
(7) Transform OPM’s role in assessments from compliance and oversight monitor to an engine of support, innovation, and enablement for agencies.
OPM should adopt an approach that treats agencies like customers and seeks to empower them to make the right decisions rather than preventing the wrong ones. OPM was created in 1978 to oversee federal workforce management, with the goal of eliminating conflicts of interest and protecting merit system principles.34 In executing this mission, OPM has taken a compliance-focused approach, often acting as a gatekeeper and ensuring that agencies followed strict rules designed to protect fairness and prevent abuse – precisely the type of unproductive oversight culture that has slowed federal hiring to a standstill.
To be fair, OPM has invested a great deal of time and resources in improving processes for developing, administering, and sharing assessments. However, there is still a lack of clarity about what is actually happening on the ground in agencies, including their key operational pain points. There is also a need for a comprehensive mapping exercise to identify which parts of today’s hiring processes are required by federal law and where there are opportunities to remove operational barriers that have accumulated over time.
OPM can complete its transformation into an innovation hub by pairing its deep background in federal human capital policy with expanded service delivery expertise. This model has worked at OPM before: prior to the termination of most or all of its staff by the current administration, OPM had a nascent Digital Services team within its Office of the Chief Information Officer to bring modern technical skills into the agency and enhance its digital services.35 Through its partnership with OMB, this team focused on high-impact service providers, including OPM’s retirement services, health insurance, and USAJobs. While this focus on customer experience and modernization in high-leverage areas of government operations represents real progress, these efforts could be expanded to address critical gaps in assessment infrastructure. OPM should re-invest in this capability and empower a digital services team to develop a roadmap for policy and process improvements that is supported by effective technical solutions, grounded in what is required by law, and informed by the pain points experienced by HR professionals and hiring managers day-to-day.
(8) Invest in the usability and scope of OPM’s centralized set of hiring platforms.
OPM should invest in USA Hire as a government-wide shared service that forms the backbone for shared assessments while opening the platform to third-party assessments from agencies and industry. OPM launched USA Hire in 2014 as part of an initiative to modernize the federal hiring process.36 It was designed to provide agencies with a standardized platform for conducting online skills-based assessments, aiming to improve the objectivity and efficiency of candidate evaluations.37 The platform emphasizes evaluating candidates’ competencies rather than relying solely on self-reported qualifications or traditional resumes.
While it has made strides in standardizing evaluations, USA Hire’s usability often falls short, leaving both applicants and hiring managers frustrated. The platform also hosts only OPM-developed assessments; as a result, it is missing assessments for many of the specialized job series that hiring managers find most important, as well as alternatives available on the commercial market. USA Hire is also a fee-for-service platform, leading some agencies to forgo its use due to limited resources. All of this makes it a more challenging vehicle for sharing best-practice assessments across agencies.
To fulfill its potential, USA Hire should be built as a government-wide enterprise service, which would allow OPM to mandate that agencies use it. Ideally this work would be embedded in a broader hiring product strategy within OPM, ensuring that the software it develops and promotes addresses pain points and meets user needs. The platform must prioritize usability — investing in intuitive design, streamlining assessment processes, and incorporating modern user-centered approaches. A platform that works seamlessly for applicants while equipping agencies with better tools to identify top talent would not only reduce the burden associated with creating and administering skills-based assessments, but could help attract the next generation of public servants in an increasingly competitive labor market.
Additionally, the platform should be open to both other federal agencies and third-party assessment vendors that meet baseline standards for legal validity and other reasonable requirements that OPM might require. This would transform USA Hire into a central repository for all assessments available across government. This centralized repository of approved hiring assessments could provide already time-strapped hiring managers with access to vetted, validated instruments for evaluating candidates based on skills and competencies. Leveraging proven assessments can reduce administrative burden and accelerate hiring timelines for agencies seeking to fill a role. Sharing assessments could also promote consistency in how candidates are evaluated across agencies, strengthening fairness and transparency across the federal government.
OPM has historically been reluctant to promote the sharing of assessments across agencies because of the risk that they will make their way into the hands of potential job candidates. If a candidate accesses an assessment prior to their screening, the assessment is invalidated and the development process has to restart. This risk can be mitigated by building controls into sharing platforms, such as role-based access, download restrictions, and audit logs that can be monitored for suspicious activity.
(9) Rapidly expand the use of commercial off-the-shelf assessments and technologically-advanced assessment tools.
OPM should move quickly to build a best-in-class marketplace for commercial offerings, leveraging the scale of government to get a good deal.
USA Hire provides a range of off-the-shelf assessments covering general and technical competencies, including cognitive ability, situational judgment, and writing skills. As agencies move to hire more specialized talent in areas like cybersecurity, data science, and AI, these general assessments will be of limited use. USA Hire currently lacks validated instruments to evaluate these specific technical skills, and building custom assessments is time- and resource-intensive for agencies. However, off-the-shelf qualification assessments for technical roles already exist and are widely used outside of government, particularly in fields like software development, data science, and cybersecurity. Agencies like the Department of Veterans Affairs and OPM itself have used off-the-shelf assessments for roles in IT, cybersecurity, and other technical areas, and professional organizations produce standardized assessments, including those that support engineering disciplines. OPM should leverage industry-standard assessments that validate competencies in areas like coding, data visualization, and machine learning to address this critical gap. Additionally, many companies now offer AI-enabled assessment tools, such as asynchronous structured interviews, that can relieve some of the resource burden on HR staff and hiring managers while still producing methodologically valid results for advancing candidates through the process. OPM should partner with such vendors to give agencies and hiring managers as many valid assessment options as possible.
Additionally, OPM should take this opportunity to use category management techniques championed by the first Trump administration38 to aggregate demand from agencies and issue a best-in-class contract vehicle for agencies to purchase both specific assessments and assessment technology like AI-enabled structured interview tools. This would enable OPM to take a proactive approach to signaling the government’s needs to industry and working with agencies to find the most cost-efficient assessment for a given job. It would also allow OPM to negotiate on behalf of the entire federal government and leverage the size of the entire market to get the best possible deal from industry.
The Chance to Compete Act is a good first step but other barriers remain
If the federal government is going to be successful in reforming federal hiring, merely implementing Chance to Compete isn’t enough. It should address several other parts of the interconnected federal personnel system that are not discussed in CCA but are nevertheless upstream of a successful assessment strategy:
Reconsider Job Classification Systems – There are more than 400 job series and thousands of individual job classifications in the federal government. Under current system and process constraints, the transition from self-assessment to skills-based assessment for even a fraction of these roles would take decades and tens of millions of dollars. Now factor in the need to adapt to the disruptions brought on by the dizzying rate of technological and societal change, as AI and other forces reshape all manner of jobs and work.
There is little hope of catching up without reducing the number of assessments the federal government needs (and thereby the total cost and difficulty of implementation). One way to do this is to reduce the total number of discrete occupations in use by agencies and of individual position types within a given workforce. This could be accomplished by collapsing similar occupations into job families and offering assessments for each entire job family. For example, there are 16 different occupations in the Accounting and Budget job family today; collapsing those into even four or five discrete job types could significantly improve the government’s ability to offer assessments at scale while giving supervisors more flexibility to compose teams of versatile talent. Chance to Compete requires that OPM remove degree requirements from its classification system, but not that it rethink the entire system to simplify it.
Encourage Agencies to Hire in Bulk – Another way to reduce the number of assessments needed is to hire in bulk through “pooled” hiring actions, in which candidates apply to a general posting (e.g., for Customer Experience Strategists), are assessed once, and become eligible for selection by any agency with a need for that role. The Competitive Service Act of 201539 allowed agencies to share candidates certified through this process, with the goal of creating a pool of qualified candidates that multiple agencies can draw on to hire quickly, but uptake has been slow across government despite significant efforts by OMB and OPM to change behavior.40
Depending on the nature of the role and the speed with which the hiring process progresses, candidates may stay certified for up to 240 days.41 Candidates who are “certified” are kept in a central repository that is searchable by hiring managers in other agencies. However, these candidates can only be shared if (1) the candidate opts into sharing during the application process, (2) the hiring manager chooses to share their information with other agencies (today they must decide with which agencies, and in what sequence, to share), and (3) a hiring manager in another agency chooses to search the database of certifications as part of their own hiring process. Even when a motivated hiring manager chooses to share certificates, agencies report that the search interface is difficult and time-consuming to use.
OPM and OMB should consider making sharing the default for all job postings, with a checkbox allowing candidates to opt out of sharing with other agencies for other available jobs. They should also require hiring managers and their HR teams to review available shared certificates before launching new hiring actions. Today, sharing certificates is also restricted to candidates who applied to a position “in the same occupational series, grade level (or equivalent), full performance level, and duty location of the position the original hiring agency was seeking to fill.”42 In practice, this means that if an agency wants to consider a candidate on another agency’s certificate for a different role or a new hiring action, the candidate has to re-apply and go through a new evaluation process, even if the new role requires the same qualifications as the original. This re-application requirement should be removed.
Envision a Professionalized, Future-Ready HR Workforce – Without clear guidance, adequate training, and appropriate resourcing, agency hiring teams default to what is familiar and cheap, even if it is ineffective. Skills-based assessments require a nuanced understanding of how to structure job analyses, design practical tests, and incorporate expert evaluations, all of which should be part of the normal skill set of highly effective human resources personnel. When these tools are not well understood or supported, agencies struggle to implement them effectively, even when policy changes explicitly encourage their use.
Lack of investment in the workforce and its training at the ground level results in inconsistent adoption — or at times, active resistance — as hiring teams may not feel confident in shifting away from the more familiar, albeit less effective, self-assessment model. To drive cultural change, agencies need structured training, accessible tools, and continuous support to embed these practices into their hiring processes. Currently, HR officials are required to have 80 hours of training over a three-year period. This bar could be raised with a requirement for specific training in innovative hiring practices. Without this investment, even well-intentioned reforms will struggle to gain traction, ultimately limiting the government’s ability to attract and hire top talent. This could also be paired with a certification program modeled on the Digital IT Acquisition Professional (DITAP) Program, which has been successful in preparing acquisition professionals for similarly complicated ways of working.
Address the culture of rework driven by unproductive oversight and compliance – While reusing a colleague’s work to save time, effort, and money is an intuitive concept for most employees in the public or private sector, the federal government has historically disincentivized agencies from doing so. Agencies have been coached never to trust work done by others unless their own lawyers and experts have had a chance to conduct a detailed analysis and confirm that it meets their understanding of a given legal standard. This culture has been further reinforced by oversight functions (e.g., the Government Accountability Office or Inspectors General) faulting agencies for process failures rather than outcome failures: there is no reward for speedy and effective program delivery, but there is plenty of potential opprobrium for failing to mitigate every risk in a given process.
This has historically meant that work is frequently redone as it passes between agencies. Security clearances, for example, were historically not reciprocal between agencies, and recent reform efforts have zeroed in on this as a major barrier to speedier, more effective security procedures.43 The same culture has prevailed for assessments: agencies have felt the need to re-validate assessments developed by other agencies for fear that a mistake upstream would reflect poorly on them. OPM and OMB need to find ways to de-risk good-faith reciprocity between agencies if the federal government is going to make use of the economies of scale and skill available to it as the largest employer in the country.
In sum, the Chance to Compete Act provides agencies with a mandate to transition to skills-based hiring, but it does not address adjacent issues that will hold back use of assessments. Without strong leadership, interagency coordination, and a commitment to overcoming bureaucratic inertia, federal agencies may struggle to implement these reforms at scale. To ensure the law achieves its intended impact, the administration must prioritize building agency capacity, establishing clear timelines for adoption, and incentivizing compliance. If executed well, this law has the potential to jumpstart a dramatic improvement in the federal hiring process, making it faster, more transparent, and better aligned with the principles of merit-based hiring.
Conclusion
Federal hiring is at an inflection point. The rules and processes meant to ensure fairness and strengthen merit-based principles have instead created a system that is slow, opaque, and misaligned with the demands of a modern workforce. While well-intentioned, these bureaucratic hurdles prevent agencies from hiring the best talent and undermine the government’s ability to deliver for the public. Yet the passage of the Chance to Compete Act and the energy of a new Administration create the opportunity to drive real change.
The reforms outlined in this paper — from eliminating outdated self-assessments to shifting OPM from a compliance watchdog to a partner in innovation — are not just theoretical fixes. They are practical, actionable steps that can be taken now. That means eliminating outdated processes, embracing skills-based hiring, and empowering hiring managers with the tools and authority to leverage best practices. The federal workforce of the future depends on the actions we take and the priorities we set today.
Footnotes
- Steven Hunt describes this succinctly: “The most important decision companies ever make about employees is the decision to hire them. All other decisions are a consequence of this initial choice.” Steven T. Hunt, Hiring Success: The Art and Science of Staffing Assessment and Employee Selection (Wiley, 2007), 3. ↩︎
- Moreira, Diana, and Santiago Pérez. “Civil Service Exams and Organizational Performance: Evidence from the Pendleton Act.” Working Paper. Working Paper Series. National Bureau of Economic Research, April 2021, https://doi.org/10.3386/w28665. ↩︎
- “Hiring Assessment and Selection Outcome Dashboard,” U.S. General Services Administration, last accessed January 2025, https://d2d.gsa.gov/report/hiring-assessment-and-selection-outcome-dashboard. ↩︎
- “Executive Branch Civilian Employment Since 1940,” United States Office of Personnel Management, accessed March 10, 2025, https://www.opm.gov/policy-data-oversight/data-analysis-documentation/federal-employment-reports/historical-tables/executive-branch-civilian-employment-since-1940/. ↩︎
- “A Fresh Start for Federal Pay: The Case for Modernization,” United States Office of Personnel Management, (April 17, 2002), https://ourpublicservice.org/wp-content/uploads/2002/04/ea4746e133b5f93e4f4086c873bd0bd9-1414080224.pdf. ↩︎
- Martinez, J. Michael. “The Luevano Consent Decree and Public Personnel Reform.” Public Personnel Management 37, no. 3 (September 1, 2008): 327–38, https://doi.org/10.1177/009102600803700305. ↩︎
- “Civil Service Reform–Where It Stands Today,” U.S. Government Accountability Office, May 13, 1980, https://www.gao.gov/products/fpcd-80-38#:~:text=The%20Civil%20Service%20Reform%20Act,from%20unfair%20or%20unwarranted%20practices. ↩︎
- “Uniform Guidelines on Employee Selection Procedures,” https://www.uniformguidelines.com/. ↩︎
- “Employment Tests and Selection Procedures,” U.S. Equal Employment Opportunity Commission, December 1, 2007, https://www.eeoc.gov/laws/guidance/employment-tests-and-selection-procedures. ↩︎
- “Delegated Examining Operations Handbook,” United States Office of Personnel Management, June 2019, https://www.opm.gov/policy-data-oversight/hiring-information/competitive-hiring/deo_handbook.pdf. ↩︎
- “Assessment and Selection,” United States Office of Personnel Management, last accessed January 2025, https://www.opm.gov/policy-data-oversight/assessment-and-selection/. ↩︎
- 29 CFR Part 1607 ↩︎
- Erich Wagner, “OPM and OMB unveil a new plan to improve the federal hiring ‘experience’ for both workers and HR managers,” GovExec, August 15, 2024, https://www.govexec.com/management/2024/08/opm-and-omb-unveil-new-plan-improve-federal-hiring-experience-both-workers-and-hr-managers/398853/. ↩︎
- “Fedscope – Accessions Trend, FY2020-FY2024; FY2015-FY2019,” Office of Personnel Management, last accessed March 2025, https://www.fedscope.opm.gov/. ↩︎
- Jamie Guillory and Jeffrey T. Hancock, “The Effect of Linkedin on Deception in Resumes,” Cyberpsychology, Behavior, and Social Networking 15, no. 3 (March 2012): 135–40, https://doi.org/10.1089/cyber.2011.0389. ↩︎
- Brian Duigan, “Dunning-Kruger Effect,” Britannica, December 2024, https://www.britannica.com/science/Dunning-Kruger-effect. ↩︎
- Justin Kruger and David Dunning, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments,” Journal of Personality and Social Psychology, 1999, https://doi.org/10.1037/0022-3514.77.6.1121. ↩︎
- David Reilly et al, “Gender Differences in Self-Estimated Intelligence: Exploring the Male Hubris, Female Humility Problem,” Frontiers in Psychology, February 7, 2022, https://pmc.ncbi.nlm.nih.gov/articles/PMC8858829/. ↩︎
- Allie Harris and Peter Bonner, “Many Chutes and Few Ladders in the Federal Hiring Process,” Federation of American Scientists, October 28, 2024, https://fas.org/publication/many-chutes-and-few-ladders-in-the-federal-hiring-process/. ↩︎
- Jennifer Pahlka, “What on earth is SME-QA and why should you care about it,” Medium, February 2021, https://pahlkadot.medium.com/what-on-earth-is-sme-qa-and-why-should-you-care-about-it-66383167387c. ↩︎
- “SME-QA Hiring Pilot Final Report,” United States Digital Service, Summer 2019, https://smeqa.usds.gov/assets/SMEQA-initial-pilots-final-report.pdf. ↩︎
- Ibid. ↩︎
- Ibid. ↩︎
- “Lessons Learned: Piloting the SME-QA Process at the State Department,” Performance.gov, February 1, 2022, https://bidenadministration.archives.performance.gov/blog/state-hiring-pilot/. ↩︎
- “Streamline the hiring experience through the use of pooled/shared hiring actions,” U.S. Office of Personnel Management, last accessed January 2025, https://bidenadministration.archives.performance.gov/agencies/opm/apg/fy-24-25/goal-2/. ↩︎
- “Cost Estimate – S. 59, Chance to Compete Act of 2024,” Congressional Budget Office, October 15, 2024, https://www.cbo.gov/publication/60824. ↩︎
- “Streamline the hiring experience through the use of pooled/shared hiring actions,” U.S. Office of Personnel Management, last accessed January 2025, https://bidenadministration.archives.performance.gov/agencies/opm/apg/fy-24-25/goal-2/. ↩︎
- “Improving the Federal Hiring Experience,” Office of Management and Budget and United States Office of Personnel Management, August 14, 2024, https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/08/M-24-16-Improving-the-Federal-Hiring-Experience.pdf. ↩︎
- “Streamline the hiring experience through the use of pooled/shared hiring actions,” United States Office of Personnel Management, last accessed January 2025, https://bidenadministration.archives.performance.gov/agencies/opm/apg/fy-24-25/goal-2/. ↩︎
- “A Profile of the 2023 Federal Workforce,” Partnership for Public Service, last accessed January 2025, https://ourpublicservice.org/fed-figures/a-profile-of-the-2023-federal-workforce/#:~:text=In%20fiscal%202023%2C%20the%20number,45%2C000%20from%20the%20previous%20year. ↩︎
- “Occupational Employment and Wage Statistics,” U.S. Bureau of Labor Statistics, last accessed January 2025, https://www.bls.gov/oes/2023/may/oes193032.htm#(1). ↩︎
- 5 U.S. Code § 3304(d) ↩︎
- “Delegated Examining Operations Handbook,” U.S. Office of Personnel Management, June 2019, https://www.opm.gov/policy-data-oversight/hiring-information/competitive-hiring/deo_handbook.pdf. ↩︎
- “Mission and History,” United States Office of Personnel Management, last accessed January 2025, https://www.opm.gov/about-us/mission-history/#:~:text=OPM%20was%20established%20as%20a,created%20the%20competitive%20civil%20service. ↩︎
- John Hewitt Jones, “OPM CIO: new digital services team proving key for IT modernization projects,” FedScoop, August 24, 2022, https://fedscoop.com/opm-cio-on-new-digital-services-team/. ↩︎
- “Success Stories: Custom Mission-Critical Assessment: U.S. Customs and Border Protection,” United States Office of Personnel Management, last accessed January 2025, https://www.opm.gov/services-for-agencies/assessment-evaluation/usa-hire/cbp-processing-time.pdf. ↩︎
- “USA Hire,” United States Office of Personnel Management, last accessed January 2025, https://www.opm.gov/services-for-agencies/assessment-evaluation/usa-hire/#url=Overview. ↩︎
- “CAP Goal Action Plan: Category Management: Leveraging Common Contracts and Best Practices to Drive Savings and Efficiencies,” Office of Management and Budget, July 2020, https://assets.performance.gov/archives/action_plans/july_2020_Category_Management.pdf. ↩︎
- “Competitive Service Act – Shared Certificates: Questions and Answers,” Chief Human Capital Officers Council, last accessed January 2025, https://www.chcoc.gov/sites/default/files/Competitive%20Service%20Act%20-%20Shared%20Certificates%20Q&As.pdf. ↩︎
- “Improving the Federal Hiring Experience,” Office of Management and Budget and United States Office of Personnel Management, August 14, 2024, https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/08/M-24-16-Improving-the-Federal-Hiring-Experience.pdf. ↩︎
- “Competitive Service Act – Shared Certificates: Questions and Answers,” Chief Human Capital Officers Council, last accessed January 2025, https://www.chcoc.gov/sites/default/files/Competitive%20Service%20Act%20-%20Shared%20Certificates%20Q&As.pdf. ↩︎
- Ibid. ↩︎
- Natalie Alms, “Reciprocity Remains a Key Clearance Issue,” Nextgov, March 13, 2023, https://www.nextgov.com/people/2023/03/reciprocity-remains-key-clearance-issue/383888/. ↩︎