The Parallel Fears Driving Perceptions of AI and Genomics
In a recent statement, Food and Drug Administration (FDA) Commissioner Scott Gottlieb announced that the Agency would be releasing a new framework addressing regenerative medicine sometime before the end of 2017. This is a heartening development, especially given the Commissioner’s focus on developing “clearer lines” regarding the Agency’s authorities and oversight mechanisms for new and innovative medical technologies. As Adam Thierer and Jordan Reimschisel of the Mercatus Center point out in a recent article, this innovation-friendly perspective will be especially important in helping to usher in an age of more personalized medicine made possible by advancements in gene therapy and genetic modification treatments. Unfortunately, the road ahead is likely fraught with fear.
People are often skeptical that emerging technologies will deliver on their purported promises. That healthy skepticism, however, quickly turns to anxiety once notable development milestones are reached. At that point, the public discourse inevitably shifts away from "uncertainty" towards "rising panic," the initial stage of the techno-panic cycle. Indeed, as I've discussed previously, we're already at the peak hysteria phase of this cycle with regard to perceptions of artificial intelligence (AI), even with potentially momentous economic gains within reach. While a similar hysteria hasn't yet materialized around genetic modification technology, its ascent is likely close at hand and will only grow with each new research milestone. So how does the current AI techno-panic compare to what's likely coming down the road for genetic modification?
To start, public attitudes towards AI and genetics are primarily defined by narrative themes in popular culture and media. Terminator has been a driving source of fears over AI, portending a war-torn post-apocalyptic hellscape in which human civilization gives way to an onslaught of killer robots. Likewise, the movie Gattaca echoes concerns that advanced genetic manipulation will lead to a society of perpetual class stratification between the genetic haves and have-nots, resulting in an everlasting eugenic dystopia. Both visions are representative of the worst kind of apocalyptic doomsaying, and neither is grounded in a realistic assessment of what these technologies are actually capable of.
But the similarities don’t stop at blockbuster hits.
At a technical level, the core of an AI system is a process by which seemingly inscrutable algorithms make decisions. In the same way that AI is something of a "black box," so too are the cells in the human body. There's a great deal we still don't understand about how (or why) certain processes within cells unfold the way they do. This is especially true of the limitations of CRISPR technology. It's unclear why some guide RNAs work better than others at targeting nucleotides along the DNA helix, nor do we fully understand why certain types of cells prefer one DNA repair pathway over another. (For a detailed look at these limitations, I highly recommend this TED talk by Dr. Ellen Jorgensen.)
In short, our limited understanding of both AI decision-making and molecular biology significantly constrains what we can achieve. Just as we cannot create conscious synthetic systems capable of human-level intelligence, neither can we design therapies or treatments that give us full control over genetic expression. The complexities of molecular biology are at least the equal of those involved in AI systems. (Ironically, the recognition of these shared complexities has driven a great deal of applied AI research into "genetic algorithms": programs that borrow properties of biological evolution to find near-optimal solutions to complex problems.)
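To make the parenthetical concrete, here is a minimal, illustrative sketch of a genetic algorithm on the classic "OneMax" toy problem (evolving bit strings toward all ones). The population size, mutation rate, and fitness function are arbitrary choices for demonstration, not a reference to any particular research system.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.02

def fitness(genome):
    # Fitness = number of 1 bits; a real application plugs in a domain score.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover, loosely analogous to genetic recombination.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with small probability, analogous to a point mutation.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Random initial population of bit strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Selection: the fitter half become parents of the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # near-optimal, typically close to GENOME_LEN
```

The loop mirrors the evolutionary vocabulary the field borrows from biology: selection, recombination, and mutation, iterated until a near-optimal solution emerges.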
Edward Dickson discussed many of these parallels more than three decades ago in an article for AI Magazine. Even back in 1984 he recognized that "[t]here are many striking similarities that suggest that examination of the commercialization of genetic engineering contains lessons valuable to the budding artificial intelligence industry." Among other insights into the then-emerging fields of AI and genetics, he was keenly aware of the potential public backlash against the normalization of these two fields, noting that:
AI and genetic engineering are both names with which the general public cannot feel immediately comfortable. These futuristic names carry within them the hint of meddling in areas best left alone.
As the techno-panic surrounding AI reaches levels of peak hysteria, we should be prepared to encounter those same anxieties in the context of genetic modification. Policymakers and regulators would do well to take a balanced and measured approach in separating outlandish fears from scientific realities. We cannot anticipate precisely how the age of genomics will materialize, but research milestones are not likely to suddenly and unpredictably open the doors to "designer babies" or induce an overnight Gattaca-style transformation of our society. To that end, as we inch ever closer towards the inevitable genetic modification scare, policymakers should embrace policies that promote, rather than inhibit, new scientific innovations. "The main lesson," according to Dickson, "is that the future will be good, but its evolution will be different than [we] imagine."