If you follow government cybersecurity activity, you recognize that this October has been more than just Cybersecurity Awareness Month; it has been Cybersecurity Policy Proposal Month. Most prominently, Deputy National Security Adviser for Cyber & Emerging Tech Anne Neuberger made significant waves in mid-October when she announced the administration’s aim to use existing regulatory frameworks to raise cybersecurity requirements for the rail, aviation, health, and water sectors, as well as for aspects of early warning communications. Neuberger also indicated a desire to place increased requirements on critical manufacturing, emergency services, and information technology critical infrastructure companies in the future.
Neuberger’s proposals, echoed by department and agency leaders with regulatory authority, arrive as the Cybersecurity and Infrastructure Security Agency (CISA) hosts listening sessions on implementing the new incident reporting authorities Congress passed earlier this year. At the same time, CISA is asking for feedback on a list of proposed performance goals (https://www.cisa.gov/sites/default/files/publications/Common_Baseline_v2_Controls_List_508c.pdf) for critical infrastructure cybersecurity. And Congress is debating whether to designate some critical infrastructure companies as “systemically important.”
It is not just sector-based critical infrastructure entities, however, that face a renewed push to demonstrate that strong cybersecurity practices are in place. Hardware and software manufacturers, developers, and purveyors of key enabling technologies are also receiving additional attention. The administration has proposed an “Energy Star”-like designation for the resilience of Internet of Things devices; the Office of Management and Budget just published guidance (https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf) on software development practices for critical software sold to the federal government; and the European Commission is working on cyber resilience legislation (https://www.european-cyber-resilience-act.com/) that, among other things, makes software developers liable for security flaws.
So, there is a lot to keep up with and, viewed cumulatively, it is fair to ask what is going on and why such a policy push is underway.
From my perspective, it is because policymakers clearly believe that the current approach to cybersecurity is not achieving the desired results. And, while leaders here in the United States continue to talk about the importance of collaborative information sharing and partnership between the public and private sectors, certain pockets – particularly the National Security Council – seem increasingly comfortable saying that such an approach isn’t enough.
One of the reasons for that is what we at CISA used to call the “national security risk delta.” The term conveys that there is a natural delta between what organizations, including critical infrastructure companies, will do on their own to manage risk to their cyber systems – given their available resources, capabilities, and incentives – and what national security demands. Thought about logically, this makes sense. Companies were not established to defend themselves against nation-states, and that burden should not fall solely on corporate leadership and their network defenders because of national security concerns. Indeed, the U.S. government is a natural backstop to industry as part of a national cybersecurity strategy.
The question for policymakers has always been how to fill that delta and how aggressive to be in mandating change. The White House is clearly moving toward a more aggressive path and, for the first time since 2013, there is the very real specter of a regulation-centric approach to critical infrastructure and enabling technologies in the United States, whether through existing or additional authorities or the use of federal purchasing power.
I have always been somewhat skeptical of such an approach because of the difficulty of implementation – preferring one driven more by positive incentives than by punitive outcomes. (For more on potential cybersecurity incentives, see https://www.cisa.gov/publication/analytic-report-executive-order-13636-cybersecurity-incentives-study)
Leading with requirements only works if the requirements make sense, can be linked to measurable outcomes, and adapt to emerging threats. Furthermore, if requirements lead critical infrastructure and technology companies to spend more on compliance than on security, or if they chill government and industry collaboration and information sharing, they can be a net negative.
That being said, I understand the urge to raise requirements in the name of national security given the current state of cybersecurity risk, an expanding threat environment, and the ubiquity of certain technologies. If placing additional requirements is the path the administration is going to take – and it certainly looks like it is – then it is important to do so with a measured approach. In that vein, here are a few thoughts on what such an approach looks like:
- Get rid of the rhetorical device of cybersecurity “sprints.” Every few months, it seems, some national cyber leader or another is calling for a “sprint” to fill a cybersecurity gap. I get the imperative to create urgency, but that is absolutely the wrong analogy for building long-term cybersecurity, and it incentivizes the government to prioritize outputs and activity at the expense of enduring, collaborative approaches to risk mitigation. Good rules and processes need to be developed collaboratively and benefit from broad feedback. Sprints suggest a myopic focus on a finish line and a declaration that the work is done. I am unaware of any cybersecurity problem that has been solved by a “sprint.”
- Relatedly, ensure that rules are developed collaboratively. While there is always the concern that industry will try to dilute requirements placed on it, the reality is that rules need to be tested in realistic operating environments, and the downstream impacts on commerce and competitiveness need to be considered. A robust engagement process – hopefully as transparent as possible – is important. CISA appears to be taking such an approach with its incident reporting rules.
- Be careful about rules that benefit big incumbent organizations at the expense of smaller entrants and future competition. Spend any time in Washington and you start to recognize the level of investment big firms can make in the rule-making process and in influencing government. When rules and regulations are meant to shape behavior across ecosystems, it is important to seek out diverse perspectives. The government needs to formally allow for, and solicit, diversity of input as a requirement for rule setting.
- Balance attention to developing rules with developing effective and efficient processes for implementation. It is the process for implementing a new set of requirements that often imposes the most significant regulatory burden – and that leads to wasteful government programs. No rule should be established without a requisite process designed to promote efficient and clear monitoring and enforcement. The Pentagon’s Cybersecurity Maturity Model Certification (CMMC) program is a good example of a potentially good idea lost in cumbersome implementation.
- Wherever possible, harmonize requirements. All rulemaking processes should begin with a look at generally accepted practices already being adopted for other reasons that can be leveraged. They should also follow an interagency process – and, if necessary, an international one – with governance in place to avoid duplicative and conflicting requirements. The reality is that jurisdictional issues make this difficult, but it seems like an obvious role for the National Cyber Director domestically and the new State Department Bureau of Cyberspace and Digital Policy internationally.
- Finally, demand an evaluation of regulatory effectiveness every five years and make the continuity of requirements subject to proof of need. Rules and programs get put in place to close urgent gaps. Often, the initial burst of attention is enough to fill a security gap, and it isn’t necessary to make the rules permanent. Too often, however, government policies seem to assume that, without government oversight, security won’t continue. That assumption needs to be regularly tested to ensure a more dynamic response to risks – and to avoid stifling opportunities.
As my newly branded Cybersecurity Policy Proposal Month winds to an end, there is much left for the Executive Branch to do, and hopefully it will be done in a measured, collaborative way. It’s easy to call for new rules. Ensuring they lead to security outcomes takes more than a sprint.