Apparently Kevin Warsh is in a dead heat with Janet Yellen for Fed Chair. I tried to articulate just how bad this is, but the whole thing has me shrill. Thankfully there are many other folks you can read while I gather my bearings. For my part, I’ll leave you with this: back during the crisis some of us used to say “shut up, Warsh” to indicate that the previous speaker had just made a case so incoherent it wasn’t worth addressing.
When [first] nominated by President Bush, Warsh was panned in unusually harsh terms:
Most of President Bush’s nominees to the Federal Reserve have earned accolades from across the economic and political spectrums.
And then there’s Kevin Warsh.
“Kevin Warsh is not a good idea,” said former Fed Vice Chairman Preston Martin, who was appointed by Republican President Ronald Reagan in 1982. “If I were on the Senate Banking Committee,” which must approve Fed nominees, “I would vote against him.”
Being wrong with conviction is a trademark of President Donald Trump. Perhaps that makes Kevin Warsh, his new perceived favorite to replace Janet Yellen as Federal Reserve chair, an ideal candidate.
Warsh’s mistaken policy views are especially egregious since he was brought into the Fed specifically for his purported expertise in financial markets, which were still sending panic signals even as Warsh tried to strike a more inflation-hawkish, sanguine tone.
Can we now be sure that Warsh was wrong about monetary policy during the Great Recession? I think so, but I’d also like to briefly discuss the implications of the other view, that we can’t be sure he was wrong. If that were true, then monetary economics would be useless. There would be no core of knowledge worth teaching to our students.
Former Federal Reserve Governor Kevin Warsh’s column in Tuesday’s Wall Street Journal was so riddled with errors and misperceptions that it is hard to believe he was actually a governor.
In this week’s Wall Street Journal, Michael Spence and Kevin Warsh say the Federal Reserve’s policy of bond buys, or quantitative easing (QE), is responsible for sluggish business investment in recent years.
There is no logical or factual basis for their claim. Indeed, logic and facts point strongly in the opposite direction. It is the reluctance of businesses and consumers to spend in the wake of a historic recession that is forcing the Fed and other central banks around the world to keep interest rates unusually low—not the other way around.
…Taylor and Warsh argued publicly against additional monetary stimulus in November 2010, when the unemployment rate was almost 10 percent and the inflation rate had fallen nearly to 1 percent. Their concerns about excessive inflation proved to be completely unjustified. Yellen, by contrast, supported stimulus.
My friends Mike Spence and Kevin Warsh, writing in yesterday’s Wall Street Journal, have produced what seems to me the single most confused analysis of US monetary policy that I have read this year (Brad DeLong has expressed related views). Unless I am missing something — which is certainly possible — they make a variety of assertions that are usually exposed as fallacy in introductory economics classes. [Editor’s Note: No, Larry, you are not missing a thing]
Warsh is indeed someone who has been wrong about everything; a bubble denier who spoke of strong capital markets before the crash, a hawk who has been warning about the risk of inflation for three years, an invoker of invisible bond vigilantes who somehow managed to describe the supposed threat from these vigilantes as both a certainty and unknowable.
The Sunday before last marked the beginning of the NFL season, and I attended my adopted hometown team’s opener. As I waited among a mob of people to enter the stadium, our collective eyes looked upward as we heard the roar of five U.S. Air Force fighter aircraft above our heads when the national anthem being sung inside FedEx Field came to an end. The response from the crowd waiting outside was immediate: a loud cheer followed by, “USA! USA!” At halftime, a U.S. Air Force drill team performed (always an impressive spectacle). And Air Force Chief of Staff General David Goldfein personally inducted a group of new recruits into the service to further cheers.
Not long after, President Trump made news when he said after seeing a French military parade in Paris on Bastille Day that he would like one in Washington, D.C. to mark the Fourth of July. Most pundits and national security commentators think this is a bad idea.
Boston Globe staff writer Alex Kingsbury seems to be the exception. On Wednesday, Kingsbury argued that a parade of military equipment tearing apart the asphalt on Pennsylvania Avenue could prove beneficial in better understanding the U.S. military and how much it costs:
If President Trump gets his way, a grand military parade will tromp down Pennsylvania Avenue on the Fourth of July. Tanks. Bombers. Rocket launchers.
Maybe such a display is exactly what Americans need — just not for the reasons Trump imagines. Americans should take a look — a close look — at what they’ve bought.
Tanks rolling past the White House this summer would indeed be watched in foreign capitals. But they would also be watched by a domestic audience that’s viewed actual war as a distant abstraction for far too long.
Several million Americans have served in the wars in Iraq and Afghanistan — a lengthy line to march in front of a reviewing stand. Yet the burden they’ve borne has been disproportionate: Only one-half of one percent of adults have served on active duty. Combat-style weapons sell out quickly, but most people don’t sign up for actual combat.
Color me extremely skeptical. I doubt that any more Americans will look up the cost of an M1A2 Abrams tank—or the politics surrounding its assembly line—than currently understand the per-unit price tag on a B-2 bomber just because they saw one fly over a stadium.
While far from full-blown panic mode, I have become more concerned in recent years about the disconnect between the U.S. military and American society. I worry about its implications for the use of military force and the potential to deform our politics. If I had to venture a guess, I’d say a military parade will not help shrink that gap one bit. I’d put money on the reaction from broad swaths of the American people being the same as that of those fans outside FedEx Field: USA! USA!
An article published yesterday in Nature heralds some exciting new developments in the field of genetics. For the first time, researchers in the U.K. used CRISPR technology to “cut” the gene encoding for a particular protein (OCT4) in a human zygote. The result was compromised blastocyst development—that is, excising OCT4 appeared to forestall the zygote’s ability to develop into an early-stage embryo.
The researchers note that they “cannot be certain that the early developmental arrest is associated with the loss of OCT4” and not due to some other effect resulting from the CRISPR procedure. However, they remain optimistic that this research could herald future insights and developments into our understanding about embryonic development:
In summary, we have developed an optimized approach to target OCT4 in human embryos, thus suggesting that OCT4 has a different function in humans than in mice. This proof of principle lays out a framework for future investigations that could transform our understanding of human biology, thereby leading to improvements in the establishment and therapeutic use of stem cells and in IVF treatments.
Writing for Axios this afternoon, Jennifer Berg responded to these developments by pointing to some of the questions that germline genetic editing will inevitably raise:
Genetic changes to a human embryo are “germ-line” changes that can be passed on to future generations. While that means we wouldn’t have to treat a disease in each subsequent generation, scientists are naturally wary of these types of interventions because we don’t yet understand the long-term repercussions of making a genetic change that will affect more than the original entity.
Gene editing raises the specter of parents manipulating their children, and a Gattaca-type future of haves and have-nots. But these first experiments involved only a change to a single gene in an embryo in a petri dish. Most characteristics parents are likely to want to design involve multiple genes, and environmental interactions. If we can eventually tailor those traits, should we? What things are permissible to alter? And who should control access to this new technology?
Before we implant modified embryos we’ll have to address whether this is acceptable experimentation on pregnant women or the subsequently born child.
Her bottom line? “Science does not translate immediately into practice. We won’t have ‘designer babies’ anytime soon but these are important early studies, and they raise a number of ethical questions. We should consider those questions carefully.” I agree entirely. However, as I pointed out yesterday, simply recommending that we “consider” these questions doesn’t really lead us to any solid policy conclusions. Nor does casting the specter of a “Gattaca-type future of haves and have-nots” lead us to any real solutions to substantive policy questions. Such fears all too often dominate conversations surrounding developments in genetic science—just the same as they do for advancements in AI—and distract from the many near-term promising benefits of this technology. I’m all in favor of having a conversation about the ethical implications of “designer babies.” In the meantime, however, let’s avoid doomsaying rhetoric and focus on safely and effectively advancing the science behind this marvelous and still-emerging technology.
To that end, my recommendation—echoed by policy analysts, leading genetic researchers, and hinted at by the National Academy of Sciences—is for Congress to lift its statutory ban barring the FDA from considering clinical trials of germline edited embryos. That’s a necessary first step to realizing any of the benefits from genetic modification technology, and can help steer conversations about the ethical questions towards actionable policy goals.
Otherwise, we’re doomed to forever endure the purgatory of conversation limbo.
The Graham-Cassidy healthcare bill shares many traits with earlier GOP reform efforts. It repeals key elements of the Affordable Care Act. It cuts total federal spending on healthcare. And it would, if enacted, add millions to the rolls of the uninsured. It does, however, differ in one key respect: it goes much further than its predecessors in decentralizing healthcare policy to the states. (See this Vox Explainer for details.)
In a New York Times op-ed, the Washington Examiner’s Phillip Klein tries to spin this sweeping decentralization as the one great idea in an otherwise flawed bill. As Klein puts it,
It makes sense to allow states to set their priorities and direct their resources based on the characteristics of their populations.
As states come up with innovative solutions to their health care problems, it means there are 50 opportunities to experiment. States can test solutions that worked elsewhere, or steer clear of ideas that failed. This path makes more sense than having politicians and distant regulators impose one giant experiment on the entire nation that is harder to undo if it fails.
It makes sense, perhaps, until you actually think about it. As I wrote back in July, before Graham-Cassidy was on the agenda, even the lesser healthcare decentralization in earlier bills would have had some nasty unintended consequences.
One of those consequences, as I explained then, is to further undermine the already-declining interstate mobility of American labor. Up to now, state-administered Medicaid has posed the greatest problems of interstate portability of healthcare benefits, but Graham-Cassidy would, at a stroke, subject the entire working-age population to the same problems. Ironically, the only demographic group that would still enjoy more-or-less portable benefits would be Medicare-eligible retirees.
Healthcare balkanization also creates serious macroeconomic problems. Recessions affect the U.S. economy very unevenly. In the last recession, for example, states like Florida and Arizona were hit much harder by the housing bust than those in the Midwest or Northeast. Federal spending on healthcare helps spread the budgetary pain for states hit by plunging tax revenue and rising unemployment costs. In doing so, it speeds the recovery, not just of those states, but of the whole economy. Graham-Cassidy, which would undermine both labor mobility and fiscal burden-sharing, would make recovery from recession that much harder.
No, decentralization is not the One Great Idea in Graham-Cassidy. It is the One Great Flaw.
Bryce Covert says that it’s time to “Get Rid of Equifax”:
Why should we continue to allow private companies to make money from us while ignoring our needs? Let’s nationalize Equifax and the other two major credit reporting companies, Experian and TransUnion. We could follow other countries’ example and hand the duty of tracking our financial histories over to a public registry instead of a private profiteer.
Look — I don’t have the ultimate solution to data breaches; I just know that giving the U.S. government exclusive control of our financial data is not it. On the contrary, the U.S. government “sucks at cybersecurity,” as my former Mercatus colleague Andrea O’Sullivan once put it. Despite pouring tens of billions of dollars into cybersecurity, the total number of federal information security incidents continues to climb.
O’Sullivan and Eli Dourado attribute the incompetence of the U.S. government to structural factors within bureaucracies:
The federal government’s continued failures to secure its own information networks indicate a fundamentally flawed approach to cybersecurity. Sweeping technocratic solutions are iteratively imposed every few years with little-to-no understanding or continuity with previous policies. Abstract consistencies in top-down planning break down on the human level as personnel struggle to make sense of redundancies and eventually ignore complex reporting and procedural standards. Fundamental issues of talent recruitment and personnel training go relatively unaddressed as offices struggle to keep up with the changing security checklists, which may or may not actually translate to good cybersecurity outcomes.
Equifax’s failure was not due to the profit motive, but rather because it, as a large corporate incumbent, took on a degree of bureaucratic sclerosis and opacity itself. Yet unlike a federal agency, Equifax can rearrange its internal operations with limited political controversy, up to and including being supplanted by a competitor. Indeed, for all the errors Equifax made leading up to its hack—from hosting an encryption key on the same server as the encrypted data, to failing to patch a known vulnerability—it will be made to suffer for its mistake. Already, Equifax has lost roughly $6 billion in market value, a third of its total market cap, and is facing at least 23 class action lawsuits. In turn, the credit monitoring industry as a whole will become stronger through feedback mechanisms, including profit and loss, that the U.S. federal government simply lacks.
Large, centralized databases will always be vulnerable to attack. For the companies charged with protecting our data to succeed, they must be allowed to fail.
I recently came across this old 2013 post at the Technology Liberation Front. In it, Adam Thierer points to an old XKCD comic that perfectly encapsulates a particular sentiment often showcased in policy discussions surrounding emerging technology regulations.
The idea that the best way to address concerns with new technologies is to “consider the consequences” is all-too common in policy conversations. But what does that actually mean in wonk-speak? Adam has some thoughts:
[A]fter conjuring up a long parade of horribles and suggesting “we need to have a conversation” about new technologies, authors of such essays almost never finish their thought. There’s no conclusion or clear alternative offered. I suppose that in some cases it is because there aren’t any easy answers. Other times, however, I get the feeling that they have an answer in mind — comprehensive regulation of new technologies in question — but that they don’t want to come out and say it because they think they’ll sound like Luddites. Hell, I don’t know and, again, I don’t want to guess as to motive. I just find it interesting that so much of the writing being done in this arena these days follows that exact model.
Like Adam, I’m increasingly wondering what the “let’s have a conversation” crowd is actually aiming for. What’s the end goal they expect from these conversations? Calls for a “broad societal consensus” or an approach that embraces the “consent of the governed” or “opening doors to conversation” don’t illuminate clear, articulable, and actionable policy recommendations. Even when a forum for dialogue is convened (such as a multistakeholder process or soft law proceeding that embraces ongoing collaboration), someone always claims that there remains an elusive “other conversation” that is not being had. Thus, real solutions remain forever beyond our grasp.
I don’t want to impugn or question anyone’s motives—doing so is seldom productive, and certainly not conducive to arriving at anything approaching a reasonable compromise. Unfortunately, the vagueness with which the “conversation” advocates address emerging technology issues leaves little room to conclude that their positions are the product of anything other than a disguised antipathy toward technological progress. If that’s the case, they should state it clearly. If not, they should state clearly what they actually want. In his book A Dangerous Master, Wendell Wallach does at least that much for us:
[T]he cavalier adoption of technologies whose impact will be far-reaching and uncertain is the sign of a society that has lost its way. Moderating the adoption of technology should not be done for ideological reasons. Rather, it provides a means to fortify the safety of people affected by unpredictable disruptions. A moderate pace allows us to effectively monitor risks and recognize inflection points before they disappear. (p. 262)
I disagree with this perspective, but at least I know what his opinion is and where we disagree. I can’t say the same for those championing the “let’s have a conversation” mentality.
Is it indeed a crypto-luddite mentality driving these perspectives? I can’t say for certain, but one thing is clear: these types of recommendations lack substance. And in the absence of substance, a policy recommendation becomes an empty vessel that can be filled with pretty much any idea under the sun. Any proposal that can mean all things to all people warrants skepticism, especially if it takes the form of a recommendation for policymakers.
So the next time you hear someone advocate a conversation as a solution to the concerns associated with a new technology, beware—it may simply be the hue and cry of a crypto-luddite.
Tyler Cowen makes a persuasive case that American federalism is essentially broken:
One study found that when it comes to votes for the state legislature, the most important factor was the popularity of the sitting president and the president’s party. How well the state’s economy was doing was relatively unimportant. Again, that hardly creates strong incentives for good practical performance. Many state and local issues are more about competence than ideology, including road maintenance, running the prison system and helping to fund K-12 education.
This is a hard nut to crack. One idea occurred to me, though, that combines wish lists from both sides of the aisle.
First, on most big-ticket spending items, eliminate the states’ contribution altogether. That means Medicaid, highway construction, and even education would be funded through federal taxes. However, the money for each of those would go into 50 Block Funds, one for each state. Allow the Governor of each state to appoint the administrator of his or her state’s fund.
What am I trying to do here?
I am trying to focus the election for Governor not on ideology but on competence. The Governor doesn’t have the power under this arrangement to determine the size of spending or the level of taxes. Those are determined nationally. Of course, each state could raise additional taxes and spend additional money on its own, but these would likely be very small in comparison and could not be easily combined with the large fund.
Poor performance could easily be traced back to the Governor’s managerial decisions. Equally important, it would be harder for the media and partisan activists to tar-and-feather a Governor for being ideologically flexible.
Legislatures, on the other hand, could focus more on criminal law and the regulatory environment.
Naomi Klein is not fond of neoliberalism, that much is clear. From a recent interview with Husk:
What I’m tremendously inspired by is that, since 2008, there really aren’t any true believers in neoliberalism anymore.
So it’s been this zombie ideology – it’s still upright, it still staggers around, it has its own momentum – but it’s without a soul. It doesn’t have that animating force.
Yet in the same article she outlines a perspective that is strikingly Hayekian. First, on the ability of well-meaning outside experts to judge the efficacy of a project:
So we can say that [the Occupy Wall Street Movement] failed because it didn’t have demands, but I think it just shows a really short-term view of the way movements actually work.
There are periods when they are obvious to the media and then they go underground into a gestation period – a hibernation period where they learn from their mistakes – and then re-emerge as people who have a fully articulated political platform. Which is what the Sanders campaign had.
I tend to never believe a movement’s obituary because I don’t think our media understands social movements. They’re constantly declaring our movements dead and over – and are perennially surprised when they re-emerge.
Next, on the spontaneous order of social movements she offers imagery reminiscent of Peter Boettke, Hayek scholar and author of Living Economics.
I see movements as this flowing river – it sort of ebbs and flows and we learn from our failures. But we don’t have a lot of time to fail right now.
It is just one of these moments. There’s this quote from Samuel Beckett: ‘Try again. Fail again. Fail Better.’ But I don’t think we’re in a fail better moment. I actually think we’re in a win moment. That’s what we have to try and do.
The obvious difference is that Ms. Klein applies her Hayekian insights to the dynamics of social movements rather than to the economy and society writ large. Indeed, she misconstrues neoliberal efforts that express exactly the same reasoning she espouses. She says:
Milton Friedman famously said in a letter to Pinochet, when he was advising the Chilean dictator, that the major mistake happened when people thought they could do good with other people’s money.
The idea that by pooling resources, i.e. taxes, we could do something good, like have public healthcare or free education – that was the fundamental error in his view.
Yet, Friedman’s point about bureaucrats and public education was exactly the same as Klein’s on media and social movements. The dynamics are invisible to those with a 30,000 foot view. Instead, Friedman argued that government taxes—pooled resources—should be allocated by parents using vouchers. Independent schools would then be free to: Try again. Fail again. Fail Better. To learn from their mistakes without having to please overseers who would perennially declare reforms dead, only to see the momentum for them rise again.
Believers in different systems—Montessori, Open Education, The Three Rs—could coexist together in a dynamic system, not unlike a flowing river, learning from their own mistakes as well as one another’s. It would allow the natural ebb and flow to lead us towards equilibria that we may not have been able to imagine before we reached them, and indeed, may not even fully understand after they’ve passed.
Writing for the Washington Post’s Fact Checker, Nicole Lewis accuses House Speaker Paul Ryan of using “fuzzy math” in defense of corporate tax cuts. Ryan emphasizes statutory corporate tax rates, says Lewis, but the effective tax rate paid by corporations is far lower—a point often made by progressives. But effective tax rates are themselves misleading.
For example, Lewis notes that the U.S. statutory corporate tax rate is 35 percent, compared to just 19 percent in the UK. Yet the U.S. effective rate, after taking into account all deductions and exclusions, is actually lower than the British rate—18.6 percent vs. 18.7 percent. The implication is that American corporations have nothing to whine about.
But Lewis misses a key point: The effective rate paid significantly understates the actual burden of the corporate tax because it ignores the very real costs of qualifying for those lucrative loopholes. Here is an example I used in a recent post, “The Progressive Case for Abolishing the Corporate Income Tax”:
Suppose that if your corporation paid taxes at the full statutory rate, you would owe $35 million in tax on $100 million in pre-tax profit, for a net of $65 million. By changing your product line, moving your corporate headquarters, and using more tax-deductible debt instead of equity financing, suppose you can cut your taxes to $10 million. Unfortunately, taking those measures incurs administrative costs of $12 million and cuts your revenue by $8 million. You end up with after tax income of just $70 million, even though the government only gets $10 million in revenue.
Yes, that’s still worthwhile in the sense that you are $5 million better off than if you had just paid your taxes. But what is the accurate measure of the burden that the corporate tax system places on your company? Is it the $10 million, or 12.5 percent, that you pay on your remaining $80 million in before-tax profits, or is it 30 percent—the bite that $10 million in taxes plus $20 million in tax avoidance costs takes out of your original $100 million? Obviously, it is the latter.
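For readers who want to check the arithmetic, here is a minimal script using the hypothetical figures from the example above (all amounts in millions of dollars):

```python
# Hypothetical figures from the example, in millions of dollars.
pretax_profit = 100   # profit before any tax-avoidance measures
full_rate_tax = 35    # tax owed at the full 35% statutory rate

# Costs of qualifying for the loopholes:
tax_paid = 10         # tax actually paid after avoidance
admin_costs = 12      # administrative cost of the restructuring
lost_revenue = 8      # revenue given up by changing the product line

# After-tax income with avoidance vs. simply paying in full.
income_with_avoidance = pretax_profit - lost_revenue - admin_costs - tax_paid  # 70
income_paying_in_full = pretax_profit - full_rate_tax                          # 65

# Measured effective rate: tax paid over remaining before-tax profit.
remaining_pretax = pretax_profit - lost_revenue - admin_costs  # 80
effective_rate = tax_paid / remaining_pretax                   # 0.125

# True burden: taxes plus avoidance costs over the original profit.
true_burden = (tax_paid + admin_costs + lost_revenue) / pretax_profit  # 0.30

print(income_with_avoidance, income_paying_in_full, effective_rate, true_burden)
```

The gap between the 12.5 percent effective rate and the 30 percent true burden is exactly the $20 million spent on avoidance, which shows up nowhere in the effective-rate statistic.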
The specific numbers here are hypothetical, but the point stands: a corporate tax system like that of the United States, with a high statutory rate and lots of loopholes, is far more burdensome than a system like that of the UK, with a low statutory rate and almost no loopholes, even if the effective rates are essentially the same.
It’s a pity that Ryan himself didn’t think to make that point. Had he done so, a progressive newspaper like the Washington Post might be supporting efforts to cut the top corporate rate and close loopholes, instead of assigning Pinocchios for fuzzy math.
Ken Burns and Lynn Novick’s 18-hour documentary series on the Vietnam War started this week. So far, it is well worth watching. It is also well worth discussing one of the most persistent, and pernicious, myths to emerge from that war: the military lost because civilian leaders tied its hands.
The myth takes different forms, but the gist of it is that feckless civilian politicians prevented the U.S. military from applying sufficient firepower in Vietnam to win the war. Journalist Arnold Isaacs explains why this argument is wrong in the first of a series of essays about Vietnam for War on the Rocks:
During the Vietnam War the United States dropped approximately twice as many tons of bombs in Southeast Asia as the Allied forces combined used against both Germany and Japan in World War II. Between 1964 and 1973, U.S. aircraft expended over seven million tons of bombs in Vietnam, Laos, and Cambodia, compared to 3.4 million tons dropped by the United States and its allies in all of World War II. There were restrictions on some targets, particularly in areas of North Vietnam that were close to China and where U.S. leaders were concerned that American airstrikes might provoke a Chinese response. But those do not change the fact that the American air campaign in the Vietnam war was the heaviest in the history of war, by a very large margin.
Similarly, in his 2002 book on wartime civil-military relations, Supreme Command, Eliot Cohen of the Johns Hopkins University’s School of Advanced International Studies explains why the civilian restrictions made sense:
Consider the prime example of overweening civilian control—[President Lyndon] Johnson’s control of target selection. The most careful study of the conduct of the air war over Vietnam notes that in fact Johnson ended up approving most of the targets submitted by the Joint Chiefs of Staff. To be sure, the process of approval meant a drawn-out air campaign rather than the sudden shock that is (in theory, at any rate) critical in order for air power directed against a national economic and political entity to work. Undoubtedly too, the exclusion of certain areas from bombing (early on, antiaircraft sites, but also targets in Hanoi proper and the port of Haiphong) sharply reduced the U.S. air war’s effectiveness, both as a way of bringing pressure to bear on the North Vietnamese government and in support of the nominal operational mission: cutting off supplies coming in to feed Hanoi’s aggression in the South. There are, however, two mitigating arguments.
First and most important, Johnson and his advisers feared and sought to avoid an extension of the war by Chinese intervention in it. As we now know, this was no idle fear, for in fact the Chinese sent over 300,000 troops into Vietnam and lost over a thousand killed in action. At the time too, it must be remembered, the Korean war was less than fifteen years in the past, and the Cuban Missile crisis less than five. Both events taught the American decision makers that the threat of escalation by the major Communist powers was real. Korea seemingly taught the lesson that pressing too far—as the Americans had when they advanced to the Yalu River, in particular—could indeed widen the war, while restrictions on the use of military power (e.g. refraining from bombing Chinese and Soviet installations supporting Communist units in Korea) could confine it. The Cuban missile crisis demonstrated the artfully restrained use of force—while providing evidence that in some military quarters the urge to use massive violence required civilian restraint. Today historians might qualify or object to these readings of what occurred in 1950-1953 and 1962, but at the time the lessons seemed altogether clear.
But it wasn’t just that civilians had understandable reasons for the restrictions they did place on the military. The U.S. Army also chose a strategy that fit with its organizational preferences rather than one appropriate for fighting an insurgency. Andrew Krepinevich, a retired Army colonel and defense analyst, summed up the problem: “Simply stated, the United States Army was neither trained nor organized to fight effectively in an insurgency conflict.”
I briefly touched on this issue in my latest appearance on The Secure Line podcast (available here). The idea that the involvement of civilian politicians impedes military victory still very much affects American politics today, as Donald Trump’s references to loosening the rules of engagement in his recent speech on Afghanistan demonstrated. But war, as Clausewitz told us, is the continuation of politics with other means. Unless civilian leaders choose achievable political objectives in America’s wars, the continuing application of firepower matters little.
Ed Crooks has a nice primer in the Financial Times this morning on the market outlook for coal. As you might expect, it’s not good. Natural gas and renewables are not only taking an increasing share of the U.S. power market, but look to be edging out coal in China as well. That makes India the only global growth market. There are a number of points in Crooks’s piece, but I wanted to pick up one in particular. He quotes from an IHS Markit report:
The only practicable way to stop that threatened loss of coal-fired and nuclear plants will be intervention in the market to provide them with additional support. IHS Markit argues that those moves should not be seen as market-distorting subsidies, but as ways to correct distortions already caused by mandates and tax breaks for renewable energy.
First, we can discuss whether mandates and tax breaks are the way to go about it, but clearly they are intended to correct for the climate change effects of burning fossil fuels. The only logic for undoing that correction is that you explicitly want coal in the mix. Indeed, that’s what IHS Markit seems to be arguing in a quote from earlier in the piece:
IHS Markit argues in a new report on Tuesday morning that the loss of diversity in the fuel mix for power generation threatens to make electricity supplies more expensive and less resilient to shocks such as extreme cold weather.
Now on the face of it, it doesn’t make sense that switching to cheaper natural gas powered electricity generation should lead to more expensive electricity supplies, but I am pretty sure I know what IHS Markit is getting at. Back in the olden days, as in pre-fracking, natural gas supplies were fundamentally limited by the number of fields in operation. Moreover, the largest supplies were overseas, and the U.S. consumed more oil and natural gas than it produced.
In winter, residential customers burned natural gas for heat, using up the excess Canadian production stored over the summer. When summer came again, there would be little gas left to burn and a lot of demand for electricity to run air conditioners. This could cause large spikes in natural gas prices and in the cost of electricity generation. Coal and nuclear helped spread that risk out.
Yet today’s world is very different. Since modern wells have to be hydraulically fractured before the gas will rise to the surface, it is possible to drill wells and leave them in reserve. Once fracked, the wells then produce at much higher rates than traditional wells. This makes it possible for drillers to respond to winter drawdowns in supply and still have extra gas available for the summer. Thus, the concern about cold snaps is not nearly as important as it used to be.
There are two different stories contained in this New York Times article from this morning:
Trump administration officials, under pressure from the White House to provide a rationale for reducing the number of refugees allowed into the United States next year, rejected a study by the Department of Health and Human Services that found that refugees brought in $63 billion more in government revenues over the past decade than they cost.
First is the appalling story of an administration ordering that evidence be generated to confirm conclusions it had already drawn. The administration rejected a report that had the audacity to look at both the costs and benefits of refugee resettlement instead of merely the costs.
John Graham, the acting assistant secretary for planning and evaluation at the health department…noted that Mr. Trump’s memorandum “seeks an analysis related to the cost of refugee programs. Therefore, the only analysis in the scope of H.H.S.’s response to the memo would be on refugee-related expenditures from data within H.H.S. programs.”
Mr. Miller personally intervened in the discussions on the refugee cap to ensure that only the costs — not any fiscal benefit — of the program were considered, according to two people familiar with the talks.
If one were inclined to assume intent, one could be forgiven for inferring that administration officials opposed to refugee resettlement expected an estimate of the net effect to be positive—that they already know, at some level, that refugees contribute more than they take, and adamantly oppose refugee resettlement for other reasons.
The second story is in the conclusions of the suppressed report itself. The report found a sizable fiscal benefit from accepting refugees, and was presumably produced with access to administrative data not publicly available. Since the report was never made public, we cannot say precisely how its conclusions were reached or what data were used. But the fact remains that an internal government report found such benefits, confirming the conclusions of papers like this one, which have found similar positive results—although without access to all of the data that HHS presumably has.
Robin Harding has a provocative piece arguing that Warren Buffett might not have caused the Great Stagnation himself, but that “Buffett-ism” has a lot to answer for:
Mr Buffett is completely honest about his desire to reduce competition. He just calls it by a folksy name — “widening the moat”. “I don’t want a business that’s easy for competitors.”
He tells Berkshire Hathaway managers to widen their moat every year. The Buffett definition of good management is therefore clear. If you have effective competitors, you are doing it wrong.
This is a big deal because, as Harding points out, a rash of new papers suggests that increased market power—and possibly, by extension, reduced competition—is behind almost all of the disturbing trends in the U.S. economy over the last 30 years, most notably the slowdowns in productivity growth and in median wages.
Harding argues that Buffett’s aversion to competition contributes to this. Not only because Buffett himself chooses companies that have little competition and then drains the profit from them, but because when other managers copy his techniques, a sort of Buffett equilibrium can set in which no one invests or competes.
I don’t think this kind of outcome should be dismissed out of hand. It has a certain puckish plausibility to it. However, it doesn’t answer the central question of how one goes about building an effective moat in the first place. If it’s in developing a particular niche or brand that appeals to customers, then Buffett isn’t so much causing a Cowen-style Great Stagnation as he intuitively recognizes the Great Variation.
In particular, his injunction to starve some of those organizations of investment looks like an attempt to avoid confusing the ability to build a great brand with the ability to deploy capital. For example, Elon Musk, whom Harding praises, clearly has a talent for deploying capital. Just listening to him talk about why a Hyperloop might be cost effective shows his keen sense for cutting a project down to its bare minimum.
Musk didn’t develop the idea of rockets, tube trains, or even electric cars. They’ve been around for generations. What Musk may have developed are ways to deploy these science fiction technologies on the cheap. That requires a very different skill set – one imagines – than creating a Berkshire Hathaway subsidiary or See’s Candies.
Looking for yet another broken federal program in need of market-based reform? Put the National Flood Insurance Program (NFIP) near the top of your list. It is a mess, and time is running out to fix it. As sea levels rise and extreme weather events trigger inland flooding, NFIP offers property owners insurance against flood damage at rates that do not come close to reflecting the true risk of losses. It compounds the problem by insisting that money it pays out in claims can be used only to rebuild in the same flood-prone locations—not for moving to higher ground.
There are lots of ideas for a makeover of NFIP. One obvious one would be to charge property owners full risk-based premiums. However, owners resist that measure because it would crash the value of their properties. Another reform would let owners use claims to rebuild in other, safer, areas. However, local governments where the flood-prone properties are located resist that idea because they would lose part of their tax base. Still another idea is to buy out whole communities at fair, pre-flood prices and rebuild them elsewhere. However, powerful realtor and builder lobbies resist all these reforms.
Congressional committees have been working on promising fixes. Reform proposals have progressed to the point of being ready for a vote. But—did I mention?—Congress has less than two weeks to do something. NFIP expires at the end of September. The pressure to reauthorize it without substantive changes will be overwhelming.
Here is some background reading if you want to pursue the cause of building a market-based National Flood Insurance Program:
SmarterSafer.org is a coalition that promotes risk-based insurance and risk mitigation efforts. Its website is a trove of information and links.
The Natural Resources Defense Council has a great, short backgrounder on the need for flood insurance reform.
An excellent article in The Atlantic by Michelle Cottle outlines the politics of flood insurance reform.
Writing for Project Syndicate, economist Fabrizio Coricelli takes on the perennial issue of Germany’s trade surplus. Long a simmering issue in Europe, the surplus has recently been criticized sharply by President Trump.
Coricelli reviews the usual suspects: high savings rate, productive workers, and chronically tight fiscal policy. He concludes that all play a role. However, he dismisses as “bizarre” the claim that Germany engages in currency manipulation, since it does not have its own currency.
But Coricelli is not entirely correct. Although Germany does not have its own nominal exchange rate, it is possible to calculate its real effective exchange rate (REER) separately from that of the euro. A country’s REER is its exchange rate, weighted by the share of trade to each trading partner, and adjusted both for movements in nominal rates vs. other currencies and for changes in rates of inflation (methodology here). The following is a chart of real effective exchange rates for the United States, Germany, and the euro since 2000.
Two things stand out in this chart.
First, from 2002 to 2014, the REER of the euro was persistently strong relative to the dollar. By itself, that reduced the competitiveness of exports from all Eurozone countries, Germany included, relative to that of exports from the United States. Since 2014, however, the dollar has appreciated sharply against the euro, so that the relationship between the REERs of the U.S. and the EZ is now close to where it was in 2000.
Second, we see that throughout the period, Germany’s own REER has tracked well below that of the EZ as a whole. That must be attributed to lower inflation, since Germany has the same nominal exchange rate as all other EZ countries. In that sense, it is fair, after all, to attribute a significant part of the German trade surplus to exchange rates.
Is it currency manipulation? Not in the classic sense, but, as I argued in this earlier post, the structure of the Eurozone does allow Germany to play a persistent free rider role when it comes to trade.
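For concreteness, the REER calculation described above can be sketched in a few lines of code. This is an illustrative toy, not the linked methodology; the function name and the construction (a trade-weighted geometric mean of bilateral real rates, the standard approach) are my own:

```python
import math

def reer(nominal_rates, home_price_level, partner_price_levels, trade_weights):
    """Illustrative real effective exchange rate.

    nominal_rates[i]: price of the home currency in partner i's currency.
    trade_weights should sum to 1. A higher REER means home exports are
    less price-competitive.
    """
    log_reer = 0.0
    for rate, p_partner, w in zip(nominal_rates, partner_price_levels, trade_weights):
        # Bilateral real rate: nominal rate adjusted for relative price levels.
        real_rate = rate * home_price_level / p_partner
        log_reer += w * math.log(real_rate)
    return math.exp(log_reer)  # weighted geometric mean
```

On this definition, two countries sharing a nominal exchange rate, as Germany and the rest of the Eurozone do, can still have diverging REERs if their inflation rates differ, which is exactly the point of the chart.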
Senator Bernie Sanders and sixteen colleagues have introduced a new healthcare plan that they call “Medicare For All” (MFA), but what does “For All” really mean? Would MFA be a system through which everyone gets all of their medical care, or a system that provides basic care at the government’s expense, while those who can pay their own way seek something better?
The practice of medicine outside the framework of MFA would be permitted under Title III of the bill, “Provider Participation,” especially Sec. 303, “Use of Private Contracts.” These provisions give providers the choice of signing a contract with MFA in accordance with its rates and conditions, or opting out and contracting directly with patients. The only restriction seems to be that private contracts must be all or nothing. A provider could not accept the standard reimbursement for a treatment from MFA and then collect an additional fee from the patient.
The attractiveness of private contracts to providers and patients would depend, in large part, on how generously or tight-fistedly other provisions of MFA are implemented. In particular:
In short, Medicare for All could easily turn into Medicare for All Except Those Who Can Afford Something Better. Is that good or bad? Expect to hear more about this as the debate over the Sanders plan unfolds.
One of the earliest proposals for a basic income guarantee can be found in Thomas More’s 1516 book, Utopia, in which the character Raphael Nonsenso, recounting a conversation with the English cardinal John Morton, argues that a basic provision for livelihood would help prevent petty crime:
I forgot how the subject came up, but he was speaking with great enthusiasm about the stern measures that were then being taken against thieves. ‘We’re hanging them all over the place’, he said. ‘I’ve seen as many as twenty on a single gallows. And that’s what I find so odd. Considering how few of them get away with it, how come we are still plagued with so many robbers?’ ‘What’s odd about it?’, I asked – for I never hesitated to speak freely in front of the Cardinal. ‘This method of dealing with thieves is both unjust and undesirable. As a punishment, it’s too severe, and as a deterrent, it’s quite ineffective. Petty larceny isn’t bad enough to deserve the death penalty. And no penalty on earth will stop people from stealing, if it’s their only way of getting food. In this respect, you English, like most other nations, remind me of these incompetent schoolmasters, who prefer caning their pupils to teaching them. Instead of inflicting these horrible punishments, it would be far more to the point to provide everyone with some means of livelihood, so that nobody’s under the frightful necessity of becoming, first a thief, and then a corpse.
Five hundred and one years later, we now have robust empirical verification of that hypothesis. This is from “SNAP Benefits and Crime: Evidence from Changing Disbursement Schedules,” a 2017 working paper by Analisa Packham and Jillian Carr for Miami University’s Department of Economics:
We find that staggering SNAP benefits throughout the month leads to a 32 percent decrease in grocery store theft and reduces monthly cyclicity in grocery store crimes. Moreover, we find that the relationship between time since SNAP issuance and crime is nonlinear.
Periodic payments are powerful. To be eligible for SNAP, a household generally has to be at or below 130 percent of the poverty line. These are households that are incredibly resource-constrained, so it shouldn’t be that surprising that spreading grocery benefits across the month better matches household consumption patterns. We also know that SNAP benefits are routinely (and illegally) exchanged for cash at 50 percent of their face value. You can’t pay your phone bill with a grocery voucher, so that makes me wonder: What would happen to other forms of petty theft and missed bill payments if SNAP recipients were allowed to use their card to pay for more than just groceries? That seems like a pilot study worth running.
In general, I view the social insurance state as partly existing to address the problem of incomplete credit markets for low income folk. Due to an adverse selection problem, the poor are unable to access credit to smooth consumption both between and within months, at least without paying very high rates of interest. This is why lump-sum transfer programs like the EITC are accompanied by high-cost borrowing on the part of the recipient. Unconditional cash transfers can lessen those credit constraints and help substitute folks away from loan sharks and desperate behavior, like shoplifting. This is how I put it in a recent memo I wrote for the National Academies of Sciences on cash-based approaches to child poverty:
Some have argued against periodic payments by noting how the lump-sum nature of tax credits may be useful as a forced savings mechanism. The evidence for this view comes from expenditure surveys which show as many as 84 percent of EITC recipients use some portion of their refund to pay down debts. However, other research reveals that credit card usage by EITC recipients increases in anticipation of the refund, indicating that debt-paying behavior is largely a byproduct of credit-based consumption smoothing. Periodic payments would allow households to smooth consumption without incurring interest costs, thus relaxing credit constraints more efficiently.
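The interest-cost point lends itself to a back-of-the-envelope calculation. Here is a hypothetical sketch (the benefit size and APR below are illustrative assumptions of mine, not figures from the memo) of what a household pays to smooth a year-end lump sum by carrying credit-card debt through the year:

```python
def carrying_cost(annual_benefit, apr, months=12):
    # The household spends the benefit evenly across the year but only
    # receives it as a lump sum at year's end, so it carries a growing
    # credit-card balance in the meantime.
    monthly_spend = annual_benefit / months
    balance = 0.0
    interest = 0.0
    for _ in range(months):
        balance += monthly_spend          # borrow this month's consumption
        interest += balance * (apr / 12)  # simple monthly interest
    return interest
```

For a $3,000 refund smoothed at a 20 percent APR, that is roughly $325 in interest over the year; twelve monthly payments of $250 would incur none. That gap is the sense in which periodic payments relax credit constraints more efficiently.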
Hat-tip: Alexander Berger
Politico reports this morning that “Senate Democrats are angling to take on Ivanka Trump and the Trump administration on one of her signature issues,” affordable child care. Their bold and transformative idea?
Joshua McCabe provides some important historical context for why that gambit is folly — in short, every time Dems have tried to outflank Republicans on big “pro-family” policy initiatives like day care subsidies or Universal Pre-K they get, well, schooled. Indeed, the “Child Care for Working Families Act,” as they’re calling it, is eerily similar to Senator Christopher Dodd’s Better Child Care Services Act from 1988, which got dismissed as unworkable in favor of an EITC expansion. As McCabe points out, if history repeats itself the Democrats’ strategy will only lock them out of the current negotiations over the First Daughter’s Child Tax Credit expansion, when they could be an important voice for making the proposal even more pro-poor by pushing for a larger credit or greater refundability.
But what about the idea of a national daycare program on the merits? I wrote on that topic in The Hill last year:
In 1997, the Canadian province of Quebec (which has traditionally been the North American protégé of European-style welfare states) took up the experiment. With the help of aggressive government subsidies, the cost of daycare was reduced to a flat $5 a day for all children aged four and under. The first program of its kind, it fueled a national debate about the merits of a federal day care scheme along the same lines.
But as the early results of the experiment came in, it quickly became clear that Quebec had made a grave mistake. A comprehensive 2005 study revealed that the program caused a disturbing deterioration in outcomes relative to the rest of Canada. Child aggressiveness and anxiety jumped, parental mental health declined, and the home environment became more hostile. Most alarmingly, a follow-up study published last year found the damage persisted well into adolescence, with the teenagers who used the program exhibiting higher crime rates and lower overall life satisfaction.
At the center of this disastrous experiment was a government that thought it knew the needs of children better than their parents did.
You can find the NBER digest of the research on Quebec here, subtly titled “Canada’s Universal Childcare Hurt Children and Families,” and the follow-up research here, co-authored by Jonathan Gruber of MIT. From the abstract:
We first confirm earlier findings showing reduced contemporaneous noncognitive development following the program introduction in Quebec, with little impact on cognitive test scores. We then show these non-cognitive deficits persisted to school ages, and also that cohorts with increased child care access subsequently had worse health, lower life satisfaction, and higher crime rates later in life.
Note that Quebec’s program has substantial similarity to proposals like the ones put out by Center for American Progress, in that it works by capping the cost of daycare for parents. But even if you think a national day care subsidy in the U.S. would turn out better, there are of course many other reasons to favor simple cash payments to families instead, including cost-effectiveness and neutrality to different ways of life. That, in so many words, is the argument for a child allowance.
Even Hillary Clinton has come around to the case for universal cash transfers. So why is Democratic Leadership stuck in 1988?
The Trump administration’s enthusiastic push to double the Child Tax Credit (CTC) and institute first-dollar refundability remains one of the most under-appreciated developments in tax reform. As I noted Friday, the silence has been particularly deafening among progressive and anti-poverty groups. Unsurprisingly: to the extent that they’ve been shut out from reform negotiations, they are probably less than eager to buck the powerful institutional inertia pushing them to maintain a monotone chorus of resistance.
That said, by any reckoning, Trump’s CTC proposal has a shot at being incredibly pro-poor. How pro-poor? My favorite sociologist Joshua McCabe charted a comparison of the status quo against a Trump CTC expansion that also eliminates the dependent exemption:
Distribution of existing CTC and dependent exemption versus GOP/Ivanka/Rubio proposal funded by consolidating CTC and dependent exemption. pic.twitter.com/eFeTXmX37z
— Josh McCabe (@JoshuaTMcCabe) September 11, 2017
This is for a family with one child. Because the Trump proposal links refundability to the 15.3% payroll tax, you can think of it as zeroing out the payroll tax on the first $13,000 in earnings for single-child households, the payroll tax on the first $26,000 in earnings for two-child households, and so on. As the chart shows, even with the simultaneous consolidation of the dependent exemption, a recommendation McCabe and I have both made in the past, all but the wealthiest households would be made better off. For single-child households, this will have the largest relative impact in the $0 to $13,000 range, since this is the range where eliminating the $3,000 minimum earning requirement makes a big difference in credit size relative to income. These are minimum-wage and part-time working parents for whom $1,000 extra is a significant boost.
To illustrate, consider a two-parent, one-child household that earned $3,000 in a given year. They are currently neglected by the CTC altogether. However under Trump’s proposal, this family would instead receive a $459 credit, or a full 15.3 percentage-point increase in take-home pay. A one-child household with $13,000 in earnings is already eligible for the full $1,000 credit under the status quo. Doubling the credit and beginning refundability at the first dollar means this family would experience a straightforward doubling in its credit, equating to a seven percentage point increase in after-tax income.
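To make the arithmetic concrete, here is a minimal sketch of the two credit formulas as I read them. The function names are mine; the parameters (a $1,000 vs. $2,000 credit, the $3,000 earnings floor, and the 15 percent vs. 15.3 percent phase-in rates) come from the description above:

```python
def ctc_current(earnings, credit=1000.0, floor=3000.0, rate=0.15):
    # Status quo: the refundable portion phases in at 15% of earnings
    # above the $3,000 floor, capped at the credit amount.
    return min(credit, max(0.0, rate * (earnings - floor)))

def ctc_trump_proposal(earnings, credit=2000.0, rate=0.153):
    # Proposal as described above: the credit is doubled and refundable
    # from the first dollar at the 15.3% payroll-tax rate.
    return min(credit, rate * earnings)
```

On this sketch, the $3,000 earner gets nothing today but about $459 under the proposal, and the $13,000 earner goes from the full $1,000 credit to roughly double that.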
But what if the Trump administration doesn’t get the full CTC expansion they’ve asked for? Earlier today, the Tax Policy Center released a slate of new estimates for CTC reform options, focused on the creation of a new refundable tax credit for young children. This could be a potential Plan B for the administration, since restricting expenditures to children under five would substantially reduce total cost while remaining a large enough credit increase to have a meaningful impact.
In any case, my only point is that a) the CTC debate has moved from “should we increase the CTC?” to “how large and progressive an increase can we afford given competing priorities?”, and b) this is a big deal. Think back to the original Trump child tax deduction proposal from 2016. It was attacked in every corner of the media for being designed to benefit the rich, criticism to which I contributed as well. But now the CTC proposal being pushed by the First Daughter flips the distributional impact on its head. And the media’s response? Crickets.
Of course, there are many other important pieces of tax reform that bear on the lives of poor people, like the ludicrous proposal to preemptively audit EITC recipients. But let’s acknowledge the good ideas as well. If not out of intellectual consistency, then at least out of a belief in the psychological power of positive reinforcement.
In The Times, Matt Ridley writes a familiar post-massive-hurricane storyline. I would summarize it in three parts: there is little to no evidence that climate change has altered the frequency or character of hurricanes or tropical cyclones; the wealthy weather storms better than the poor; and adaptation to climate risks has advantages over putting downward pressure on emissions.
Let’s take them in reverse.
That adaptation has advantages over mitigation is probably right. But mitigation has advantages over adaptation, too. They do different things and work in different ways. Adaptation is local, ongoing, and mostly private. Mitigation is global, permanent, and cooperative. Their disparate characters are important, because mitigation is what satisfies our obligations to future generations. People in Houston and Florida, or those caught in future storms, have done little to add to climate change, so expecting them to spend to adapt to higher risks imposed by historical emissions is unjust.
The idea that we can enhance the adaptive capacity of people in the future by growing wealth is solid, but has serious limitations if you want to argue that mitigation policies aren’t worth it. Large hurricanes have historically led to prolonged decreases in economic growth. So allowing their impacts to grow under climate change is potentially very costly.
In Ridley’s piece, the key passage on the climate effect on hurricanes is here:
An analysis published last month by the American government’s Geophysical Fluid Dynamics Laboratory stated: “It is premature to conclude that human activities, and particularly greenhouse gas emissions that cause global warming, have already had a detectable impact on Atlantic hurricane or global tropical cyclone activity.”
Interested readers should check out the analysis Ridley quotes, which reviews the state of the science on how climate change will affect hurricane frequency and characteristics over the 21st century. It cites climate models that show intensity increasing by 2-11 percent on average, and potential damage increasing by 30 percent, by the end of the 21st century. However, it explains that we have not yet observed such changes with high statistical confidence, because the predicted effects are still small and observations from before the satellite era are too spotty to beat out statistical noise. While strong, wet storms fit expectations under global warming, we won’t be able to make confident claims about trends in hurricanes until mid-century. Global trends, however, are not the only interesting question to ask.
We are sure to see studies of Harvey and Irma in the coming months and years that look at these storms as individual events, and the meteorological conditions they developed in, to suss out any connections to climate change. My bet is those studies will find a small contribution to rainfall totals and intensity, but we’ll see. We already have evidence showing that climate change increases impacts of major storms because sea level rise gives flooding a boost (h/t @PeterFrumhoff).
So while there is much correct about Ridley’s story, I think it misses the point. Climate change is just a part of every story now. Storms and extreme weather events will probably become more familiar, and adaptation will lessen the blow as wealth continues to accumulate. The question is, how much change do we want to countenance—and there, Ridley has little answer.