At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism about the future of technology and innovation. From solar power to our place among the stars, humanity’s future looks pretty bright, according to Musk. But he was particularly dour about one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.

Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It’s not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he’s called for such stringent controls on the technology. And he’s not alone.

In the preface to his book Superintelligence, Nick Bostrom contends that developing AI “is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.” Even Stephen Hawking has jumped on the panic wagon.

These concerns aren’t confined to innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.

All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.

Luckily, if history is any guide, the height of this hysteria means we’re probably on the cusp of a period of deflating dread. Emerging technologies often stoke frenzied fears over worst-case scenarios—at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.

Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there’s something the government could do to minimize the costs of AI while maximizing its benefits, then policymakers should be all over it. So what’s the solution?

Gov. Doug Ducey (R-AZ) asked that very question: “You’ve given some of these examples of how AI can be an existential threat, but I still don’t understand, as policymakers, what type of regulations, beyond slow down, which typically policymakers don’t get in front of entrepreneurs or innovators” should be enacted. Musk’s response? First, government needs to “gain insight” by “standing up an agency to make sure the situation is understood.” Then “put in place regulations to protect public safety. That’s it.” Well, not quite.

The government has, in fact, already taken a stab at assessing whether such an approach would be the right treatment for this technology. Last year, the Obama administration’s Office of Science and Technology Policy released a report on the future of AI, drawing on hundreds of comments from industry, civil society, technical experts, academics, and researchers.

While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign—and it certainly didn’t call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was “very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years.”

In short, put off those end-of-the-world parties, because AI isn’t going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.

Whatever Musk claims, firms facing such rules will simply take their research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and falling costs of computer processing—to say nothing of the rise of quantum computing.

Musk’s “solution” to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren’t going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.

Musk of all people should know the future is always rife with uncertainty—after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the “public good” demanded even more stringent regulations for commercial space launch or autopilot features? That’s unlikely—and, notwithstanding Musk’s apprehensions, the same is probably true for AI.