In recent weeks there have been a number of announcements from the United States government on plans to improve cybersecurity: Department of Homeland Security and Department of Education five-year plans, the Office of Management and Budget Federal Cybersecurity Risk Determination and Report, etc. Central to these plans has been a newfound emphasis on better visibility into cyber risk. This should be great news because the sole value proposition of any security measure is, or should be, to help manage how much risk an organization faces. By focusing on which parts of the cyber risk landscape represent the most risk, and the risk reduction benefit of cybersecurity efforts, the government should be able to focus more effectively on what matters most and apply its limited resources more cost-effectively.
That being said, there exists a very real possibility that the additional focus on risk will result in no meaningful improvements. In some cases, it could result in poorer cybersecurity and more risk. This possibility exists because there is so much confusion and inconsistency in how risk is defined, measured, and communicated in the cybersecurity industry.
Risk Confusion
Every senior executive I’ve ever spoken with – whether in the private or public sector – expects risk to be measured as the likelihood and impact of adverse events. It seems logical, then, that “risks” are these potential adverse events. After all, those are the only things to which you can apply a likelihood and impact value. Unfortunately, if you were to examine the risk register within almost any organization, or ask almost any information security professional to cite his or her organization’s top risks, you would almost certainly see something like the following:
- Cyber criminals
- Disgruntled insiders
- The cloud
- Sensitive customer information
- Reputation
- Weak passwords
- Patching (or unpatched systems)
- Lack of awareness
- Phishing
- Ransomware
Although these are all part of the cyber risk landscape, you may notice that none of them are, in and of themselves, adverse events (i.e., they aren’t risks), which means you can’t logically assign likelihood and impact values to them. Yet that is exactly what takes place in the vast majority of organizations. When pressed to defend those values, however, you’ll usually discover they don’t hold up and that the organization is not prioritizing effectively.
Examples of actual cyber-related risks include:
- Disclosure of secret military strategies by a disgruntled insider with unauthorized access levels
- Compromise of sensitive political information by a nation-state intelligence organization through a successful phishing attack
- Disruption of critical infrastructure operations by a terrorist organization through the use of a logic bomb
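One way to see the difference is to treat each risk as a structured scenario rather than a bare label. The sketch below (Python, with illustrative field names and values of my own choosing rather than any standard schema) shows how a well-formed scenario gives likelihood and impact estimates something concrete to attach to, whereas a label like “the cloud” does not.

```python
# A minimal sketch of the difference between a landscape element and a risk.
# Field names and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """A well-formed risk: an adverse event you can attach estimates to."""
    threat: str              # who or what acts (e.g., a disgruntled insider)
    asset: str               # what is affected (e.g., secret military strategies)
    effect: str              # the adverse outcome (e.g., disclosure, disruption)
    annual_frequency: tuple  # (low, high) estimated events per year
    loss_magnitude: tuple    # (low, high) estimated loss per event, in dollars

# A bare label such as "the cloud" gives likelihood and impact nothing to bind
# to; a scenario like this one does.
insider_leak = RiskScenario(
    threat="disgruntled insider with unauthorized access",
    asset="secret military strategies",
    effect="disclosure",
    annual_frequency=(0.05, 0.5),
    loss_magnitude=(1e6, 5e7),
)
```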
If government organizations carefully define the cyber risks they face, they’ll have laid a strong foundation for measuring and managing their risk landscapes. If they don’t take this step, the odds of being able to identify and focus on the things that matter most are dramatically reduced.
However, a poor understanding of what constitutes “a risk” is only part of the problem.
The Cost of Zero-Cost Risk Measurement
There’s an interesting dichotomy in the cybersecurity field. I doubt you’ll find many cybersecurity professionals who will deny that the cyber landscape is complex and dynamic. Yet most cyber risk measurements I witness being performed are based on nothing more than a proverbial wet finger in the air and a proclamation of high/medium/low risk. You have to ask: if the landscape is complex and dynamic, how reliable can a risk measurement be when little or no analytic effort underpins it?
The apparent advantages of this approach are that it imposes no up-front costs in terms of analytic effort, and anyone can do it. The bad news is that many of these risk ratings are inaccurate. In fact, in every organization I’ve examined over the past several years, 70 percent to 90 percent of the “high risk” ratings did not represent high risk. This poor signal-to-noise ratio means organizations are unable to focus on the things that matter most.
The downstream costs associated with poor prioritization can be boiled down to two things:
- Resources misallocated on things that shouldn’t be a priority
- Increased potential for significant losses, as mitigation of truly high-risk problems is delayed while the organization focuses on lower risk issues
The second of these is the scarier, and I suspect that if you examined any of the major breaches of the past several years, you would find that an inability to prioritize effectively was a meaningful contributing factor.
Of course, there’s also an argument that cybersecurity professionals simply don’t have time to get into the analytic weeds, especially given the complex and dynamic nature of the problem space. This is a valid concern but, as in many analytic disciplines, deeper analyses often have diminishing returns in terms of their information value. In other words, organizations can eliminate a significant percentage of measurement noise through simple methods and relatively minimal effort. They can choose to get deep into the analytic weeds only when there’s a reason to do so.
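As a hedged illustration of what such a “simple method” might look like, the sketch below ranks a handful of hypothetical scenarios by a coarse annualized-exposure estimate built from order-of-magnitude ranges. The scenarios, the ranges, and the midpoint shortcut are all illustrative assumptions; the point is only that a rough first pass can separate the plausible top risks from the rest before anyone commits to deeper analysis.

```python
# A rough triage sketch: rank scenarios by a coarse annualized-exposure estimate
# so deeper analysis is reserved for the few that plausibly matter most.
# The scenarios and ranges below are illustrative assumptions.

scenarios = {
    "phishing -> compromise of sensitive political information": ((0.5, 2.0), (5e5, 1e7)),
    "insider -> disclosure of secret military strategies":       ((0.05, 0.5), (1e6, 5e7)),
    "logic bomb -> disruption of critical infrastructure":       ((0.01, 0.1), (1e7, 5e8)),
}

def coarse_exposure(freq_range, loss_range):
    """Midpoint-of-range estimate of annualized exposure (events/yr * $/event)."""
    freq_mid = sum(freq_range) / 2
    loss_mid = sum(loss_range) / 2
    return freq_mid * loss_mid

ranked = sorted(scenarios.items(),
                key=lambda kv: coarse_exposure(*kv[1]),
                reverse=True)
for name, (freq, loss) in ranked:
    print(f"{name}: ~${coarse_exposure(freq, loss):,.0f}/yr")
```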
Which ‘High Risk’ Is Highest?
Even if an organization defines its risks well and applies an appropriate level of analytic effort to measure those risks, it can still struggle to manage cyber risk effectively. This is due to inherent limitations of commonly used qualitative and ordinal measurement scales. In other words, although you might be able to put your risks in the right low/medium/high, or 1 through 5 buckets, you won’t be able to reliably differentiate the risks within those buckets.
Another limitation of qualitative and ordinal scales is that you can’t do meaningful cost-benefit analyses on your risk-management options. What does it really mean when you say you’ve gone from “high” to “medium”? Where exactly is that boundary, and how far have you really moved?
A final limitation of qualitative and ordinal risk scales is that they aren’t inherently meaningful to executive management. How are executives supposed to compare a “high cyber risk” to the other things that demand their attention and resources?
Measuring cyber risk in economic terms and/or its effect on an organization’s mission would seem to be the obvious answer. That answer, however, is also likely to raise the ire of many cybersecurity professionals who believe that cyber risk can’t be measured quantitatively. Fortunately, they’re mistaken. Cyber risk is not a special snowflake afflicted with intractable measurement challenges. In fact, there are well-established models, such as Factor Analysis of Information Risk (FAIR), and methods, such as Monte Carlo simulation, that can be leveraged to reliably quantify cyber risk. These models and methods have been used successfully for several years by many organizations, have been adopted as an international standard by The Open Group, and are taught in courses at major universities such as Carnegie Mellon University. There is even a professional certification in FAIR offered by The Open Group.
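To make that concrete, here is a minimal Monte Carlo sketch in the spirit of FAIR-style quantification. It is not the full FAIR ontology, and the distributions, parameters, and assumed effect of the control are illustrative assumptions rather than calibrated estimates. The point is that the output is a distribution of annual loss in dollars, which supports exactly the kind of cost-benefit comparison that ordinal scales cannot.

```python
# A minimal Monte Carlo sketch of quantified cyber risk. Distributions and
# parameters are illustrative assumptions, not calibrated estimates.
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 50_000

def annual_loss(freq_low, freq_likely, freq_high, loss_median, loss_sigma):
    """Simulate total loss for one year, per trial.

    Event frequency: a rate drawn from a triangular range (a stand-in for a
    calibrated estimate), then a Poisson count of events for the year.
    Loss magnitude: lognormal dollars per event.
    """
    rates = rng.triangular(freq_low, freq_likely, freq_high, TRIALS)
    counts = rng.poisson(rates)
    return np.array([
        rng.lognormal(np.log(loss_median), loss_sigma, n).sum() for n in counts
    ])

# Hypothetical scenario: phishing-driven compromise of sensitive information.
baseline = annual_loss(0.5, 1.0, 3.0, loss_median=2e6, loss_sigma=1.0)
# Same scenario after a control that (by assumption) halves event frequency.
with_control = annual_loss(0.25, 0.5, 1.5, loss_median=2e6, loss_sigma=1.0)

for label, sim in [("baseline", baseline), ("with control", with_control)]:
    p50, p90 = np.percentile(sim, [50, 90])
    print(f"{label}: mean ${sim.mean():,.0f}, median ${p50:,.0f}, 90th pct ${p90:,.0f}")

# The difference in expected annual loss is a dollar figure a control's cost
# can be weighed against -- something "high" versus "medium" cannot provide.
print(f"estimated annual risk reduction: ${baseline.mean() - with_control.mean():,.0f}")
```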
The Bottom Line
Focusing on risk should help the U.S. government significantly improve its ability to prioritize within the cybersecurity landscape, and to spend its limited resources more cost-effectively. However, that can only happen if it avoids the problems outlined above. Otherwise, the danger exists that a false sense of improved security will creep in, which could lead to even more misguided decisions.