It’s been a challenging time for managing cyber risk at federal agencies. In July, the Government Accountability Office (GAO) issued a report on its investigation of 23 agencies and their progress toward implementing cyber risk programs. All but one were given a failing grade. Among the findings:
- At the Department of the Treasury, cyber risk needed to be reported and measured the same way as the other risks managed by the Enterprise Risk Management (ERM) office.
- At the Department of State, vulnerability data was difficult to interpret without accompanying data on the mission impact that would materialize if someone exploited those vulnerabilities.
- Health and Human Services needed to supplement its National Institute of Standards and Technology (NIST) program with metrics for risk management governance.
- Lastly, the Environmental Protection Agency (EPA) was struggling to identify an actionable risk tolerance level that takes into account the EPA mission and public perception.
To make matters worse, the Inspector General at the State Department reported in August that cybersecurity hiring freezes had kept key roles unfilled and likely put critical systems at risk. The hiring freeze at State lasted longer than the government-wide freeze, but similar situations likely existed at other agencies.
Faced with these kinds of failings, many federal cybersecurity organizations will fall back on trying to increase their program maturity using frameworks like Capability Maturity Model Integration (CMMI) or similar vendor-supplied models. These models are used to justify increased control spending to implement new control programs or upgrade existing ones. The trouble with this approach, however, is that organizations cannot fund every cybersecurity maturity project. In fact, they cannot fund all of their other organizational projects, either. As a result, cybersecurity spending tends to be viewed as a wish list with no clear relevance to the organization’s mission.
The Root of the Problem: Lack of Actionable Data for Decision Makers
For many cybersecurity executives, figuring out what to do to support an organization’s risk posture is often just as complicated as convincing the organization to invest in it once you’ve figured it out. Neither is doable without a clear, defensible data set that’s shared and accepted among the stakeholders.
Fortunately, there is a solution. Many organizations have discovered the benefits of using Cyber Risk Quantification (CRQ) as a cornerstone of their cyber risk management programs. A true risk-based methodology helps them rationalize controls, establish risk thresholds and appetites, develop and justify strategic priorities, and defend those priorities when audit findings would otherwise push them to spend organizational resources differently.
Frameworks like the NIST Cybersecurity Framework (CSF) are helpful because they establish a foundation for what good security looks like. But many who set out to implement the CSF are confronted with a voluminous catalog of controls and a limited budget to spend on them. It’s for this reason that the recently published NIST CSF Informative Reference Catalog included cross-references to the Factor Analysis of Information Risk (FAIR) standard. The open FAIR standard is the de facto CRQ methodology: it gives organizations a common language for talking about cyber risk that is flexible enough to engage front-line cybersecurity technology professionals while linking back to ERM standards such as ISO 31000.
FAIR and CRQ give organizations the ability to speak a common risk language, which serves as the foundation for the additional risk management tasks the GAO found missing. Once that foundation for communication is in place, values for cyber risk quantification can be derived: the probability and frequency of cyber incidents affecting the organization’s mission, and an understanding of what that mission impact would look like. It’s for reasons like this that the Department of Energy is employing FAIR to gather the foundational data it needs to aid its cloud migration efforts.
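To make those quantification values concrete, here is a minimal sketch, with entirely hypothetical frequency and loss ranges, of how a FAIR-style estimate combines loss event frequency (events per year) and loss magnitude (dollars per event) in a Monte Carlo simulation:

```python
import random

def simulate_annual_loss(lef_min, lef_max, loss_min, loss_max,
                         trials=10_000, seed=42):
    """FAIR-style Monte Carlo sketch of annualized loss.

    lef_min/lef_max   -- range for loss event frequency (events per year)
    loss_min/loss_max -- range for loss magnitude per event (dollars)
    Uniform ranges are a simplification; real analyses typically use
    calibrated distributions (e.g., PERT) elicited from experts.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Draw how many loss events occur in this simulated year
        events = round(rng.uniform(lef_min, lef_max))
        # Sum a separately drawn loss magnitude for each event
        totals.append(sum(rng.uniform(loss_min, loss_max)
                          for _ in range(events)))
    totals.sort()
    return {
        "mean": sum(totals) / trials,          # expected annual loss
        "p90": totals[int(0.9 * trials)],      # 1-in-10 "bad year" loss
    }

# Hypothetical inputs: 0-4 incidents/year, $50k-$500k per incident
result = simulate_annual_loss(0, 4, 50_000, 500_000)
```

Outputs like these are what make the conversation with leadership actionable: instead of a "high" rating, a team can state an expected annual loss and a 1-in-10 bad-year figure in dollars, which ERM offices can compare directly against other enterprise risks.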
FAIR gives practitioners the ability to walk through an activity-based costing process to understand the types of loss that may materialize if a cyber incident occurs. This allows agencies to think about loss in terms of specific activities and their corresponding costs when assessing mission impact. FAIR encourages practitioners to leverage existing processes for thinking about breach or availability concerns, for example, in terms of hours of delay or apportioned hours, hourly cost, and additional costs. For public-sector risk practitioners, this process should also include an economic assessment of their constituencies, covering lost or delayed wages and tax revenues, healthcare costs, loss of life, relocations, and quality of life.
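As a sketch of that activity-based costing process, the following uses hypothetical activities and figures for an availability incident at a public-facing service; each activity carries its hours, hourly cost, and any additional fixed costs:

```python
def activity_based_loss(activities):
    """Sum per-activity costs for a single incident.

    Each activity is (name, hours, hourly_cost, additional_costs),
    so its cost is hours * hourly_cost + additional_costs.
    """
    return sum(hours * rate + extra
               for _name, hours, rate, extra in activities)

# Hypothetical outage at a public-facing benefits portal
outage = [
    ("response team labor",        40, 120, 0),       # 40 staff-hours at $120/hr
    ("call-center overflow surge", 200, 35, 5_000),   # surge staffing + vendor fee
    ("delayed benefit payments",   0,   0,  25_000),  # economic impact on constituents
]
total = activity_based_loss(outage)
# 40*120 + 200*35 + 5_000 + 25_000 = 41_800
```

Itemizing loss this way keeps the estimate defensible: each line traces back to an activity the agency already understands, rather than a single opaque dollar figure.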
Implementing and operating an information risk management program is a complicated endeavor and requires thoughtful and thorough approaches to ensure the right things are being prioritized. Because that’s what good risk management does: it prioritizes the efforts of the organization to avoid the worst outcomes and, where it can’t avoid them entirely, allows the organization to understand what it would need to weather them.
Having these conversations is challenging, because prioritizing and allocating limited resources is an emotional activity. But tempering emotion with logical discussion helps ensure a fair distribution of resources against the biggest cyber risks in your organization. Doing this poorly could mean the misallocation of critical resources at best, and devastating impacts to vulnerable populations at worst. If your risk management program can’t help you prioritize your top risk items, then your biggest risk may be your risk management program.