In a surprise development late last week, the White House mandated that defense spending for fiscal year (FY) 2020 would be $700 billion. That is down slightly from the $716 billion approved for FY2019, as well as the $733 billion originally projected for 2020. This move had been foreshadowed in a cabinet meeting the week before, where Trump told all of his department chiefs to prepare to cut five percent of their budgets for the following year. It does raise a number of questions though: How much change does the $700 billion budget actually represent? Is the new figure likely to survive until FY2020 rolls around? And, most importantly, how should the Department of Defense respond to the White House’s mandate?
How significant a “cut” a $700 billion defense budget represents depends on the baseline. As Seamus Daniels, a defense budget researcher at the Center for Strategic and International Studies, noted on Twitter when the $700 billion figure was first raised, it would represent 2.2 percent less than the FY2019 budget.
Despite Trump’s order to his cabinet secretaries to cut five percent, $700 billion would be only 4.5 percent less than what had been projected for FY2020. It is up for debate whether it should even be considered a reduction in defense spending when reality merely falls short of projections. And seeing as the defense budget cap for FY2020 is $576 billion, it could be argued that $700 billion represents a significant increase.
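As a back-of-the-envelope check on those baselines, the percentage comparisons work out as follows (figures in billions of dollars, taken from the discussion above; the calculation itself is just a standard percent-change sketch):

```python
def pct_change(new, old):
    """Percent change moving from `old` to `new`."""
    return (new - old) / old * 100

proposal = 700          # White House FY2020 mandate, $ billions
fy2019_enacted = 716    # FY2019 approved budget
fy2020_projected = 733  # original FY2020 projection
fy2020_cap = 576        # Budget Control Act cap for FY2020

print(f"vs. FY2019 enacted:     {pct_change(proposal, fy2019_enacted):+.1f}%")
print(f"vs. FY2020 projection:  {pct_change(proposal, fy2020_projected):+.1f}%")
print(f"vs. BCA cap for FY2020: {pct_change(proposal, fy2020_cap):+.1f}%")
# -2.2%, -4.5%, and +21.5% respectively
```

Depending on the baseline chosen, the same $700 billion figure reads as a modest cut, a larger cut, or a dramatic increase.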
Whether the $700 billion figure survives the politics to follow is also up for debate. The Pentagon has already announced publicly that it will continue to plan for a $733 billion budget, alongside the $700 billion budget mandated by the White House. Speaking to Marcus Weisgerber of Defense One, former Office of Management and Budget official Gordon Adams suggested a higher figure was more likely:
“The White House may be saying that” — $700 billion — “but I’ll tell you from history, I’ve lived through a lot of White Houses that folded in December,” Adams said. “They had a tough position going in and at some point the Pentagon marshaled its political forces and went and had a conversation with the [president] and in the end it wasn’t what the original proposal was.”
With military pay and benefits already set, reductions in spending will have to come from cutting acquisition programs. General Joseph Dunford, chairman of the Joint Chiefs of Staff, recently suggested that the services’ proposals would undergo rigorous tests through wargames to determine which programs would survive.
Conducting rigorous simulations to determine acquisition priorities is a wise move, but the Pentagon can go much further in prioritizing. The Department of Defense identified great power competition as its top priority in the National Defense Strategy it released earlier this year. At the same time, the president has at times expressed skepticism about the value of America’s post-September 11 wars. While there are reasons to believe he will not, the president would be best served, from both a fiscal and strategic perspective, by mandating that the Pentagon begin winding down peripheral conflicts and focus its spending on first-order strategic priorities.
The Congressional Budget Office (CBO) released a report this week on Department of Defense spending on overseas contingency operations (OCO) that provides insight into how such spending distorts America’s strategic choices. OCO spending is non-base defense spending. Its original purpose was as an emergency supplemental to fund the War on Terror, yet it has continued to be used even as the war in Afghanistan turned 17 this month.
According to CBO, “non-base” spending since September 11 has followed a different pattern than in previous eras. As the figure below shows, previous conflicts also used contingency funds. However, that use was temporary, lasting only until the costs of a war could be incorporated into the base defense budget. Since 2001, though, CBO finds that an average of more than $50 billion (in 2019 dollars) of OCO spending each year has gone to enduring, rather than temporary, activities.
The use of an emergency supplemental after a security shock, such as a large-scale terrorist attack on U.S. soil, makes sense. Its usefulness diminished, though, as the requirements of America’s post-September 11 wars became more predictable. However, it did provide some political advantages. In recent years, it became a convenient workaround for the Pentagon to avoid the budget caps imposed in the Budget Control Act of 2011 (BCA). War spending was exempt from the caps, so there was an incentive to shift non-war spending to OCO.
The major problem with this practice is that it creates a distorted picture of future defense spending. As the CBO report notes:
A key disadvantage of excluding the cost of OCO activities from the base budget is that doing so creates an inaccurate picture of what future defense spending might be in the absence of military conflicts. As contingency operations have become the norm and [the Department of Defense (DoD)] has adjusted its allocation of resources to accommodate them, it has become increasingly difficult to distinguish between the incremental costs of military conflicts and DoD’s regular, enduring costs.
CBO provides the following figure to show just how distorted the picture is due to the use of OCO over the past seventeen years:
More pernicious than its effect on Pentagon planning, though, might be its political effect. “War funding” was exempt from the BCA for a reason: failure to approve war funding is likely to prove politically unpalatable for legislators. While the unclassified summary of the National Defense Strategy released earlier this year emphasizes great power competition as the country’s top defense priority, the continued use of contingency funding for ongoing military operations with relatively predictable requirements distorts the choice between resourcing peripheral and core needs.
But strategy is about making choices. It links military means to political ends and determines priorities and resource needs. Doing so is difficult. Adversaries react and adjust to countermeasures. Domestic priorities sometimes overtake foreign policy concerns. Exogenous shocks occur. These factors lead to a budget cycle that includes large spikes and gradual drawdowns. The need to prioritize is the essence of strategy though. The distorted picture created by the use of contingency funding for enduring needs undermines that.
Research on the economic effects of rent control goes back more than 70 years, to the publication of a monograph titled “Roofs or Ceilings?” by Milton Friedman and George Stigler. That work found that rent control could be expected to restrict the supply of rental housing and would cause inequities and market inefficiencies, despite the help it provided to some sitting tenants.
Today, rent control is back in the news. California’s Proposition 10 would expand rent control. Lawmakers in Oregon and Illinois are considering similar legislation.
A report from Brookings, written by Stanford economist Rebecca Diamond, looks at recent research on rent control, based on natural experiments that have occurred as cities have extended or narrowed the scope of rent control over the years. Her review of the recent literature suggests that Friedman and Stigler’s conclusions stand up, but also adds some new twists.
A 2014 study by David Autor, Christopher Palmer, and Parag Pathak looked at the effects of rent decontrol in Cambridge, MA. The authors found that decontrol boosted the value both of previously controlled properties and of neighboring properties. They concluded that
Rent controlled properties create substantial negative externalities on the nearby housing market, lowering the amenity value of these neighborhoods and making them less desirable places to live. In short, the policy imposed $2.0 billion in costs to local property owners, but only $300 million of that cost was transferred to renters in rent-controlled apartments.
A 2018 study conducted by Diamond with colleagues Timothy McQuade and Franklin Qian looked at the effects of an extension of rent control to previously exempt small units in San Francisco in 1994. It found that rent control benefitted sitting tenants but reduced the supply of housing. They also found that rent control led to an increase in condominium conversions and construction of high-income housing.
Taking all of these points together, it appears rent control has actually contributed to the gentrification of San Francisco, the exact opposite of the policy’s intended goal. Indeed, by simultaneously bringing in higher income residents and preventing displacement of minorities, rent control has contributed to widening income inequality of the city.
Taken together, the two studies paradoxically suggest that both expansion and removal of rent control can cause increases in the value of non-controlled real estate in the affected areas. Diamond explains that the seeming paradox occurs because rent control both increases incentives for condo conversion and at the same time reduces incentives to invest in maintenance of controlled units. Thus, expanding controls causes a spurt in conversions, while removal of control causes a spurt in catch-up improvements to previously controlled units, with beneficial effects on neighborhood quality.
A recent Kaiser Family Foundation poll finds that surprise medical bills top the list of financial fears for American families:
Surprise medical bills can be astonishingly large, as when Drew Calver, who thought he was fully insured, received a bill for $109,000 for emergency treatment of a heart attack – more than twice his annual salary as a high school teacher.
Surprise medical bills for insured patients most often occur in two situations. One (as in Calver’s case) is emergency treatment at a hospital that is not in the insurer’s network. The other is complex treatment, as for a joint replacement or transplant, which is undertaken at an in-network hospital but with the assistance of some out-of-network participants, such as a radiologist or anesthesiologist. Patients typically do not get to choose these practitioners, and often are unaware of their participation until bills arrive.
An increasing number of states have laws protecting patients from surprise medical bills, but even in those states, there are loopholes. One big loophole concerns certain employer-sponsored insurance plans that are regulated at the federal level. Federal law currently offers no protection against surprise bills.
In September, a bipartisan group of six Senators – Michael Bennet (D-CO), Tom Carper (D-DE), Bill Cassidy (R-LA), Chuck Grassley (R-IA), Claire McCaskill (D-MO), and Todd Young (R-IN) – introduced draft legislation that would offer the needed protection. An analysis by the Brookings Institution summarizes the main provisions of the draft legislation, and offers some suggestions for resolving potential problems.
If enacted, the bill would limit patient cost-sharing to the amount they would owe to an in-network provider; set a payment standard regarding what insurers owe providers in these situations; and prohibit providers from balance billing patients.
The Brookings analysis provides numerous links to further discussion of the issue of surprise medical bills.
In previous posts, I have been skeptical of work requirements for non-cash welfare programs in general, and for Medicaid in particular. One reason for my skepticism is that the majority of able-bodied recipients of Medicaid already work. Another is that past experience with work requirements, such as the exhaustively studied results of the welfare reforms of the 1990s, show that work requirements fail to move more than a tiny fraction of beneficiaries to self-supporting employment.
Now the results of the latest round of work requirements are starting to come in. New rules for Medicaid recipients in Arkansas are one of these. Writing in the New York Times, Margot Sanger-Katz emphasizes still another reason why work requirements don’t work: Those subject to the requirements are often not aware that they even exist.
Sanger-Katz reports that Medicaid authorities in Arkansas have been realistic in recognizing that many Medicaid recipients cannot be expected to work because of disability, family responsibilities, student status, and so on. Taking those factors into account, they exempted two-thirds of all beneficiaries, leaving just 20,000 people subject to work requirements. But of those, just 1,200 reported to the state that they had completed enough work-related activities (which can include job search or training in addition to actual employment) to continue their Medicaid eligibility.
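To put those figures in perspective, here is the arithmetic on the reported numbers (a rough sketch using only the counts cited above):

```python
subject_to_requirement = 20_000  # beneficiaries subject to work requirements
reported_compliance = 1_200      # those who reported sufficient work-related activity

rate = reported_compliance / subject_to_requirement * 100
print(f"Reported compliance: {rate:.0f}% of those subject to the requirement")
# Reported compliance: 6% of those subject to the requirement
```

In other words, 94 percent of those covered by the rule either did not comply or did not report, a striking gap for a policy meant to encourage work.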
Conservative supporters of work requirements might fantasize that is because the rest just sighed, gave up their benefits, and got jobs. But that is not what Sanger-Katz found.
Instead, she found that huge numbers of beneficiaries were not even aware that they were subject to work requirements.
State officials said they worked hard to get the word out — mailing letters, sending emails, placing phone calls, briefing medical providers, putting posts on social media sites and distributing fliers where Medicaid patients might find them. . . .
But it seems that not everyone opened or read their mail. Ray Hanley, the president of the Arkansas Foundation for Medical Care, which ran a call center for the state, told my colleague Robert Pear that many people never answered their phones. The state said the open rate on emails was between 20 and 30 percent.
And that is not the end of it. Even if Medicaid beneficiaries learn of the work requirements, they may find it difficult to make monthly on-line reports of their work activities, as the system requires. Arkansas has one of the lowest levels of internet penetration in the country, and the website where reports must be made can be confusing even for those with solid computer skills.
The bottom line: If success is to be measured by reducing Medicaid rolls, without worrying what happens to those who are dropped, the Arkansas approach looks like it is working. If success means getting beneficiaries into self-supporting employment, it is a dismal failure.