The recent hack on the Democratic National Committee has caused a media firestorm. At the Democratic National Convention, angry Bernie Sanders supporters railed against emails leaked by WikiLeaks revealing that now-former DNC chair Debbie Wasserman Schultz had been involved in a conspiracy to thwart Sanders’ nomination in favor of Hillary Clinton. The resulting brouhaha at the convention appears to have been precisely what WikiLeaks founder Julian Assange intended, given his strong and avowed opposition to Mrs. Clinton on both policy and personal grounds. Though ostensibly targeting the Democrats, Assange’s leaks are reverberating beyond the Philadelphia convention hall.

There is now a great deal of discussion about the potential involvement of the Russian government as the source of the email leaks passed to WikiLeaks. This, some have suggested, is evidence of a larger Russian master plan to ensure Donald Trump wins the White House in November. However, although intelligence officials seem certain the Russians were involved, there is currently no evidence to suggest the hack was part of a grander international, Manchurian Candidate-style conspiracy.

This is yet another black eye for the government’s cybersecurity practices. With last year’s cornucopia of cybersecurity breaches, one might have expected 2016 to grant us a respite. (After all, wasn’t the passage of the Cybersecurity Information Sharing Act supposed to start solving many of these issues?) Instead, with the election in full swing, expect more of the same.

In the meantime, there may be an opportunity for Congress to do something positive in promoting good cybersecurity practices: refining the practice and process for disclosing “zero day” vulnerabilities.

Zero day vulnerabilities are software flaws that developers have had “zero days” to patch and users have had “zero days” to install a fix for. These weaknesses are discovered under a wide variety of circumstances: some by white hat hackers probing software to reveal vulnerabilities for the public good, others by hackers intent on selling the weaknesses on black and gray markets, and others still by government agents intent on discovering new avenues for surveilling targets of interest. It is in this last category that the government can most effectively reform its current practices to promote better cybersecurity for American networks.
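To make the bug class concrete, here is a minimal, deliberately simplified sketch of the kind of flaw that often becomes a zero day: an unchecked copy into a fixed-size buffer. The scenario and function names are hypothetical, chosen purely for illustration.

```c
/* A minimal sketch of a classic exploitable flaw: an unchecked
 * copy into a fixed-size buffer. Hypothetical, for illustration. */
#include <stdio.h>
#include <string.h>

/* Parses a username field from an untrusted network message. */
void parse_username(const char *untrusted_input)
{
    char username[32];

    /* THE BUG: strcpy() copies until it hits a NUL byte, with no
     * regard for the 32-byte buffer. Input longer than 31 bytes
     * overwrites adjacent stack memory, including the saved return
     * address, which an attacker can aim at code of their choosing. */
    strcpy(username, untrusted_input);

    printf("hello, %s\n", username);
}

/* The patched version bounds the copy to the buffer's actual size. */
void parse_username_fixed(const char *untrusted_input)
{
    char username[32];
    snprintf(username, sizeof username, "%s", untrusted_input);
    printf("hello, %s\n", username);
}

int main(void)
{
    parse_username_fixed("alice"); /* the safe variant */
    return 0;
}
```

Until the developer learns of such a flaw, ships a patch, and users install it, anyone who knows about the bug can exploit it. That window is exactly what the “zero day” label describes.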

So how does the government currently treat the discovery of such vulnerabilities?

In a 2014 White House blog post, Michael Daniel, the White House Cybersecurity Coordinator, outlined Administration policy on when to exploit and when to disclose these zero day vulnerabilities. In particular, Daniel’s post revealed how the government’s Vulnerabilities Equities Process (VEP) actually works in practice: when a government agency learns of a zero day, the VEP determines whether and when the vulnerability should be retained for law enforcement and intelligence use or disclosed for patching.

The high-level overview conveyed the key trade-off in any decision to disclose or withhold a vulnerability: making digital communications networks as secure as possible versus preserving intelligence-gathering tools for national security purposes. Daniel correctly notes that:

We rely on the Internet and connected systems for much of our daily lives. Our economy would not function without them. Our ability to project power abroad would be crippled if we could not depend on them. For these reasons, disclosing vulnerabilities usually makes sense. We need these systems to be secure as much as, if not more so, than everyone else.

However, he goes on:

there are legitimate pros and cons to the decision to disclose, and the trade-offs between prompt disclosure and withholding knowledge of some vulnerabilities for a limited time can have significant consequences. Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.

Daniel’s post was originally a response to public conjecture about the NSA’s role in the infamous Heartbleed vulnerability, a flaw in OpenSSL’s implementation of TLS (Transport Layer Security, the successor to SSL that encrypts much of the Internet’s traffic) that went unnoticed until its discovery in 2014. Some surmised that the intelligence agency had known about the vulnerability and intentionally “hoarded” knowledge of the bug in order to exploit the weakness for surveillance purposes. Hence, the piece was meant to allay the public’s fears by more openly discussing the questions weighed in the disclosure-retention trade-off.
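For readers curious what such a bug actually looks like, the sketch below is a heavily simplified, hypothetical reconstruction of the Heartbleed pattern, not OpenSSL’s real code. The names (heartbeat_reply, struct record) are invented, and the real message format is more involved, but the core mistake, trusting an attacker-supplied length field, is the documented one (CVE-2014-0160).

```c
/* A simplified sketch of the Heartbleed-style flaw (CVE-2014-0160).
 * Illustrative reconstruction only; names and layout are invented. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct record {
    const uint8_t *data; /* attacker-controlled heartbeat message */
    size_t length;       /* number of bytes actually received     */
};

/* Build a heartbeat response echoing back the claimed payload. */
uint8_t *heartbeat_reply(const struct record *rec, size_t *out_len)
{
    /* The message starts with a 1-byte type and a 2-byte length
     * field claiming how large the payload is. */
    uint16_t payload_len = (uint16_t)((rec->data[1] << 8) | rec->data[2]);

    /* THE BUG: the claimed length is trusted without checking it
     * against rec->length, the size of what was really received.
     * A tiny message claiming a 64KB payload makes memcpy() read
     * far past the buffer, leaking adjacent heap memory (keys,
     * passwords, session data) back to the attacker.
     *
     * THE FIX is essentially one bounds check:
     *   if ((size_t)3 + payload_len > rec->length) return NULL;  */

    uint8_t *reply = malloc(payload_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, rec->data + 3, payload_len); /* over-read happens here */
    *out_len = payload_len;
    return reply;
}

int main(void)
{
    /* A 3-byte message that *claims* a 0x4000-byte (16KB) payload.
     * Calling heartbeat_reply() with the fix commented out over-reads
     * ~16KB of adjacent memory: undefined behavior, shown here only
     * to make the shape of the attack visible. */
    uint8_t msg[3] = { 0x01, 0x40, 0x00 };
    struct record rec = { msg, sizeof msg };
    size_t n = 0;
    uint8_t *reply = heartbeat_reply(&rec, &n);
    free(reply);
    return 0;
}
```

OpenSSL’s actual patch amounted to adding essentially the bounds check shown in the comment and discarding malformed messages. The policy question Daniel’s post addressed is what an agency that discovers such a flaw first should do with that knowledge.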

However, as Jack Goldsmith argued—not unfairly—over at Lawfare, there is also a trade-off between more transparency and appropriate intelligence collection practices:

At some point … more transparency will affirmatively harm intelligence collection in ways that outweigh the public confidence and related benefits of further disclosure. … This is a very tricky tradeoff to manage. The tradeoff is tricky not just because transparency aids our adversaries. It is also tricky because disclosure invariably begets further disclosure, and because disclosures of the sort Daniel made—which reveal a lot about what the [government] is up to—will (contrary to Daniel’s stated aim) diminish trust in the [government] in many quarters, especially since no other country makes disclosures of this type.

To be fair, Goldsmith caveats his point by admitting he is not entirely certain we have reached the point at which transparency’s harms to intelligence collection outweigh its public-confidence benefits. Until we reach that breaking point, however, the default presumption ought to favor ever more transparency from government institutions. Trust is a key component of any well-functioning liberal democracy, and maintaining it has understandably become more of a concern, and more of a difficulty, for officials in post-Snowden America.

So what can be done to help reestablish that trust while balancing the needs of national security?

In a recent paper from the Belfer Center at Harvard University, Ari Schwartz and Rob Knake provide some recommendations for improving the VEP. Among other things, the authors argue the Obama Administration should make the high-level criteria for determining disclosure or retention publicly available, clearly define the process involved, ensure retention decisions are subject to periodic review, and expand oversight of the process both in Congress and within the Executive Branch.

While I agree with much of what they propose, their recommendation that these reforms be implemented through an executive order would not be stable in the long term.

Instead, Congress should pass legislation formalizing the VEP. Executive orders can be rescinded at any time, raising a real concern that any headway made toward a better, more transparent disclosure process could be lost once the Obama Administration leaves office. Enshrining the VEP in law would avoid that risk, and by taking the lead on such legislation, Congress can help shape better, more transparent, and more stable cybersecurity practices.