This week, I’ll be discussing another issue that Congress and the Administration should prioritize in 2017: artificial intelligence (AI), robotics, and automation.
Like many people, I’ve recently become a huge fan of HBO’s Westworld, a new sci-fi series that mashes up a lot of Blade Runner with a bit of Jurassic Park. The gist of the story revolves around an amusement park-style operation where customers (the guests) experience a Wild West setting populated by highly advanced robots (the hosts). AI, robotics, and automation are at the core of the series, but one of its central themes is the nature of “consciousness.” This, I think, is one of the key factors driving confusion about the power of AI in the real world: the distinction (or lack thereof) between “intelligence” and “consciousness.”
Separating out the concerns over an impending Skynet future is no small task. Popular media has enshrined the bipedal Terminators of James Cameron’s films as the inevitable outcome of developments in AI, and many see this as the future the technology will engender. For others, the concerns are simply about decision trees and algorithmic transparency—that is, if we don’t know how it works, we can’t know how it will fail. Intuition matters in how humans interpret information; for AI, intuition is scarce and even more inscrutable than it is among humans. And because AI, in its narrowest form, is attuned to data sets rather than facts about the broader world, extracting general rules about how neural networks and machine learning algorithms operate is difficult.
Yet many fail to realize that every time we use Google’s search, we’re using a very narrow, task-specific form of AI. “As soon as it works,” the famed computer scientist John McCarthy once quipped, “no one calls it AI anymore.”
The need for algorithmic accountability will likely be compounded as this technology becomes more and more a part of everyday life, and everyday decisions. How precisely we achieve that level of transparency and accountability is still unknown, however. At this point in its development, AI is still a “black box.” Until we can figure out how to get these systems to “talk” to us—that is, more effectively disclose the rationale behind the outcomes they report—accountability will remain elusive. Nonetheless, we should not let that hurdle blind us to the many benefits this technology will deliver to society.
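To make the “black box” problem concrete, consider a toy contrast (a minimal sketch of my own, assuming Python with scikit-learn; it refers to no particular company’s systems): a small decision tree can print the exact rules behind its predictions, while a neural network trained on the same data offers only opaque weight matrices.

```python
# A minimal sketch of the transparency gap, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# The interpretable model: its learned decision rules can be printed verbatim.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=data.feature_names))

# The "black box": comparable predictions, but the rationale lives in
# weight matrices that no human can read off as rules.
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X, y)
print(f"Neural net rationale: {sum(w.size for w in net.coefs_)} opaque weights")
```

The contrast is the point: for some model families, disclosing the rationale is trivial; for others, it remains an open research problem, which is why mandating “transparency” in the abstract is harder than it sounds.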
As discussed in a recent report from the Center for Data Innovation:
Because AI will continue to evolve and work its way into a wide variety of applications, it is difficult to predict just how much value AI will generate. The International Data Corporation estimates that in the United States the market for AI technologies that analyze unstructured data will reach $40 billion by 2020, and will generate more than $60 billion worth of productivity improvements for businesses in the United States per year. Investors in the United States are increasingly recognizing the potential value of AI, investing $757 million in venture capital in AI start-ups in 2013, $2.18 billion in 2014, and $2.39 billion in 2015. The McKinsey Global Institute estimates that by 2025 automating knowledge work with AI will generate between $5.2 trillion and $6.77 trillion, advanced robotics relying on AI will generate between $1.7 trillion and $4.5 trillion, and autonomous and semi-autonomous vehicles will generate between $0.2 trillion and $1.9 trillion. A report from Accenture examining the potential impact of AI on economic growth found that by 2035, AI could increase the annual growth rate of the U.S. and Finnish economies by 2 percentage points, the Japanese economy by 1.9 points, and the German economy by 1.6 points. The report also found that, for the 12 countries surveyed, AI would boost labor productivity rates by 11 to 37 percent.
The economic benefits from this technology are clearly poised to be enormous. Of course, the possibility of an AI-driven economy leads to other, less Terminator-style concerns. The more realistic concerns center neither on killer robots nor on algorithmic transparency, but on the prospect of an economy of ghosts in the machine. The popular video game series Metal Gear Solid actually touches on this.
The series revolves around a central cadre of characters fighting a complex web of foes tied together by unseen forces that are slowly revealed to be advanced AIs. Originally constructed to unite the nations of the world, the systems end up becoming the very reasons for which war is waged. Over the course of many decades, these AIs eventually transform the world by inducing a “war economy.” The global economic structure, reengineered by algorithmic optimization, turns conflict into a lucrative business that drives whole state economies to become dependent on never-ending war. By creating everlasting enemies, the AIs are able to accomplish the goals set forth by their progenitors: a world without ideological priors, principles, or ideals, united in a common pursuit of perpetual warfare.
This might lead us to conclude that the real concern, then, is not so much a future of Terminators, but a future of ghosts in the machine, a la Metal Gear Solid. Realistically, however, neither scenario appears likely for humankind’s near-term future. Nevertheless, policymakers will soon have to confront these issues; and when they do, they should begin with regulatory forbearance, while avoiding doomsaying language. The AIs of science fiction and Hollywood are a far cry from the reality of the day. Fanciful portents of a dystopian future make for good clickbait and even better blockbuster hits, but they shouldn’t serve as the basis for informed policies governing developing technologies.
Safeguarding Advancements in Artificial Intelligence
Depending on the particular use and application of AI, there are many recommendations to be offered. For the time being, however, the 115th Congress and incoming Administration should embrace regulatory forbearance as the ideal starting point for any future conversation about federal rulemaking. I argued as much in a July comment filing to the Office of Science and Technology Policy (OSTP) on the matter of artificial intelligence (see the full comments here), urging that OSTP and federal regulators should:
- Avoid invoking hyperbolic, hypothetical doomsday scenarios; and
- Embrace a policy of regulatory restraint, intervening in the development and use of AI technology only when and if the prospect of harm becomes realistic enough to merit government intervention.
There are a number of other recommendations worth considering, however, given the policy fissures likely to emerge around perennial issues like privacy and cybersecurity. The following should serve as a starting point for regulators and policymakers thinking about this issue:
- Agencies considering regulations specific to AI (or, more broadly, to any emerging technology that presents a unique regulatory knowledge gap) should focus on tangible, known costs and benefits, not hypotheticals;
- Government can be a positive contributor to developments in AI by increasing public funding for interdisciplinary research into the economic and social ramifications of future developments;
- Avoid singular, mandatory development standards for safety (such as the Food and Drug Administration’s regime for medical software of “high consequence”); and
- Promote industry self-regulation as a first step towards developing best practices and standards in AI research. This would ideally materialize in the form of independent, third-party validators and open-source developers (such as OpenAI) or industry consortiums (such as The Partnership on Artificial Intelligence to Benefit People and Society), not as legislative fiat or prescriptive mandates imposed by regulators or other government actors.
In short, the best policy recommendations for promoting innovation in AI will rely on flexible regulatory frameworks. Government can be a valuable ally in helping to accelerate the development and deployment of this technology, but it should not stand as a gatekeeper to the future. As noted in the first report from Stanford University’s One Hundred Year Study on Artificial Intelligence initiative, policies can and should “strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance.” Additionally, any policy developments should be continually reevaluated as new challenges emerge. A one-size-fits-all ex ante methodology of precautionary regulation will neither foster this industry’s development nor maximize beneficial outcomes for society.
Accelerating the Autonomous Roadway
Perhaps the most pressing application of AI—and one of the leading technological applications driving public fears of this field—is already unfolding around us. Driverless cars are hitting the roads in Pittsburgh, on the streets of Mountain View, and even around the world. If the United States wishes to maintain its lead in this emerging space, and to reap the benefits of ongoing innovations, it needs to create an environment of regulatory certainty. In upcoming comments to the National Highway Traffic Safety Administration (NHTSA) on its Federal Automated Vehicles Policy guidelines, I argue against some of the guidelines’ newly proposed authorities and expanded interpretations of existing authorities.
NHTSA itself notes that: “Substitution of pre-market approval for all standards for which manufacturers currently self-certify would be a wholesale structural change in the way NHTSA regulates motor vehicle safety and would require both fundamental statutory changes and a large increase in Agency resources.” This is problematic. While it is understandable that the agency would argue for structural changes to assist its efforts to address concerns raised by a new technology like this, it is unclear that a “large increase in Agency resources” would yield better outcomes. Alternatively, this is an area where the 115th Congress can potentially lead, reasserting its legislative authority to address the issue of autonomous vehicles and federally preempting any disconcerting legislation coming from states and localities.
Promoting an expeditious and effective deployment of this technology is important in order to actualize the social and economic benefits associated with autonomous vehicles, to say nothing of the tens of thousands of lives that could be saved.
Why the Time for Action Is Now
Autonomous vehicles are but one case study in how advancements in AI have helped usher in a new era of technological progress. As AI is incorporated into headline-grabbing technologies, public attention is naturally becoming more focused on this space. We all walk around with task-specific AI in our phones (Siri, Cortana, etc.) and use virtual private assistants, voice recognition technology, Google Translate, and a host of other AI-driven conveniences. It’s important, now more than ever, that legislators and Administration officials take the lead in continuing to examine developments in this technology while forgoing ex ante regulatory measures. As AI continues to drive developments in other technologies, bad regulatory action could stymie not only autonomous vehicles but numerous other technologies—some not yet even imagined—that hold the potential to improve the lives of millions of Americans.
Through 2017 and Beyond
Philosophers’ dilemmas and apocalyptic rhetoric should not be mistaken for the real-world dilemmas facing policymakers. The reality is that AI does not pose an existential threat to humanity, and likely will not for many, many decades to come. When it comes to AI research, let a thousand strategies bloom. For legislators and regulators, however, let but one strategy be the guiding light: embrace forbearance and let the private sector lead.
Stay tuned the week after next for the third issue policymakers need to focus on in 2017: the Internet of Things.