
HSTRisk: Accurately Scoring Cybersecurity Threat in a Maze of Vulnerabilities

Alice felt dreadfully puzzled. The Hatter’s remark seemed to her to have no sort of meaning in it, and yet it was certainly English. “I don’t quite understand you,” she said, as politely as she could.

–from Alice’s Adventures in Wonderland by Lewis Carroll


Many practitioners think of cybersecurity as a unique snowflake. A commonly held view is that because these are complicated technological issues, cyber risk must be evaluated differently from other forms of risk. As a result, many practitioners try to communicate cyber risk to their department and agency heads in terms of how many vulnerable systems and missing patches exist.

They’re both right and wrong

These practitioners aren’t entirely wrong — the cyber landscape is complicated and highly technical, and any analysis of it needs to account for those characteristics. Where they’re mistaken is in thinking that cyber risk must be expressed differently from other forms of risk.

In mature risk disciplines, risk is expressed in terms of the probable frequency of loss events and their impact, typically in economic terms like annualized loss exposure, or in terms of the effect on an organization’s mission. Risk is expressed in these terms because:

  • That’s how the damage ultimately materializes
  • It’s what leadership understands and cares about
  • It enables decision-makers to make apples-to-apples comparisons among the many risk, opportunity, and operational cost concerns on their plates
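
To make the arithmetic behind annualized loss exposure concrete, here is a minimal sketch. The frequency and magnitude figures are hypothetical, chosen only for illustration.

```python
# Annualized loss exposure (ALE) from two estimates:
#   - loss event frequency: expected events per year
#   - loss magnitude: expected cost per event
# Both figures below are hypothetical, for illustration only.

loss_event_frequency = 0.25   # events per year (about one every four years)
loss_magnitude = 2_000_000    # dollars per event

ale = loss_event_frequency * loss_magnitude
print(f"Annualized loss exposure: ${ale:,.0f}")  # -> $500,000
```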

Fortunately, there is no inherent reason why cyber risk can’t be expressed similarly if you approach it properly.

It begins (and ends) with “So what?”

Vulnerabilities only matter because they, to some degree, increase the potential frequency and/or magnitude of future loss events. Consequently, if we want to understand and communicate their relevance to decision-makers, we must analyze vulnerabilities within that context by defining the specific loss event scenarios to which they contribute for the organization in question, e.g., a power grid outage.

With that specific starting point, we can estimate the frequency and magnitude of the loss event scenario(s) both with the vulnerability in place, and with the vulnerability corrected. The resulting difference in loss event frequency and/or magnitude provides the “So what?” that decision-makers need.
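
As a sketch of that comparison (the scenario and every estimate below are invented for illustration, not taken from any real analysis):

```python
# Compare annualized loss exposure (ALE) for one loss event scenario
# (e.g., a power grid outage) with and without a given vulnerability.
# All estimates are hypothetical placeholders.

def ale(frequency_per_year: float, magnitude_usd: float) -> float:
    """Annualized loss exposure = loss event frequency x loss magnitude."""
    return frequency_per_year * magnitude_usd

with_vuln = ale(frequency_per_year=0.30, magnitude_usd=5_000_000)
without_vuln = ale(frequency_per_year=0.05, magnitude_usd=5_000_000)

print(f"ALE with the vulnerability:    ${with_vuln:,.0f}")
print(f"ALE without the vulnerability: ${without_vuln:,.0f}")
print(f"'So what?' for leadership:     ${with_vuln - without_vuln:,.0f}/year avoided")
```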

Challenges

Even as I write this, I can imagine the protests from some cybersecurity practitioners. “It’s not that simple” and “You can’t know the frequency of attacks or their impact” are two of the most commonly expressed concerns. Again, here is why they’re both right and wrong.

The approach I described in the previous section does involve more effort than what takes place today, where we simply toss vulnerability data over the wall and expect decision-makers to somehow intuitively understand its significance. The truth of the matter is, if we want to be effective in communicating relevance to leaders so they can be effective in their decision-making, then a greater level of effort is required than is commonly applied today. There is no “easy button” for this.

As for knowing the frequency and/or magnitude of future loss events – it’s the future, so of course there will be uncertainty. The key is to reduce uncertainty as much as possible using available data and vetted models – for instance, Factor Analysis of Information Risk (FAIR), an open, international model for measuring risk – and to faithfully represent the uncertainty that remains through ranges and distributions rather than point estimates. Combined with well-established stochastic methods like Monte Carlo simulation, this lets us provide reliable analysis results.
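
Here is a minimal Monte Carlo sketch of that idea. It is not the FAIR model itself; it only illustrates expressing uncertain inputs as ranges (triangular distributions with hypothetical low/most-likely/high values) and simulating annualized loss exposure.

```python
import random
import statistics

# Monte Carlo sketch: express uncertain inputs as ranges rather than
# point estimates, then simulate. The triangular distributions and all
# parameter values are illustrative assumptions, not FAIR outputs.
# (Simplification: a fuller simulation would draw an event count per
# year, e.g., from a Poisson distribution, and sum per-event losses.)

TRIALS = 100_000
random.seed(42)  # reproducible example output

annual_losses = []
for _ in range(TRIALS):
    frequency = random.triangular(0.05, 0.60, 0.20)  # events/year (low, high, mode)
    magnitude = random.triangular(500_000, 10_000_000, 2_000_000)  # $ per event
    annual_losses.append(frequency * magnitude)

annual_losses.sort()
print(f"Mean annualized loss exposure: ${statistics.mean(annual_losses):,.0f}")
print(f"Median:          ${annual_losses[TRIALS // 2]:,.0f}")
print(f"90th percentile: ${annual_losses[int(TRIALS * 0.9)]:,.0f}")
```

Reporting the result as a distribution (median, 90th percentile, and so on) rather than a single number is what lets the analysis honestly carry its uncertainty forward to decision-makers.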

Another complaint some practitioners might raise is that today’s vulnerability scanning tools already provide severity scores. It’s true, they do. Unfortunately, those scores are rarely well correlated with risk, and they’re ordinal values, e.g., a vulnerability might be rated an 8 out of 10. As a result, they misinform far more often than not, and senior officials still don’t know what an 8, for example, means relative to the other things on their plates. In fact, the common approach to vulnerability scoring is perhaps one of the most significant contributors to poor cybersecurity management today. This deserves deeper discussion, so stay tuned for more in a future article.
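
To see why an ordinal score by itself can misinform, consider this hypothetical comparison (both the scores and the dollar figures are invented for illustration):

```python
# Two hypothetical vulnerabilities with identical ordinal severity
# scores but very different risk once frequency and magnitude are
# considered. All values are invented for illustration.

vulns = [
    # (name, ordinal score, est. loss events/year, est. loss per event)
    ("Internet-facing server flaw", 8, 0.50, 4_000_000),
    ("Isolated lab system flaw",    8, 0.01,    50_000),
]

for name, score, frequency, magnitude in vulns:
    ale = frequency * magnitude
    print(f"{name}: severity {score}/10, annualized loss exposure ${ale:,.0f}")

# Identical "8 out of 10" ratings; roughly a 4,000x difference in exposure.
```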

The bottom line

When cyber risk is expressed as vulnerabilities or with severity scores that don’t accurately represent risk, decision-makers are placed at a huge disadvantage because they’re forced to compare organizational apples – opportunities, operational costs, and other forms of risk – to cybersecurity oranges. This is a no-win situation. Fortunately, it also isn’t necessary.

The question is whether we’re willing to spend the additional time and effort to provide executives with better information. Given the critical nature of cybersecurity today, that seems like a rhetorical question.

Jack Jones
Jack Jones is one of the foremost authorities in the field of information risk management. As the Chairman of the FAIR Institute and Executive VP of Research and Development for RiskLens, he continues to lead the way in developing effective and pragmatic ways to manage and quantify information risk. A veteran in the security industry, Jack is a three-time Chief Information Security Officer (CISO) with forward-thinking financial institutions such as Nationwide Insurance, Huntington Bank, and CBC Innovis. He has received numerous recognitions for his work, including: the ISSA Excellence in the Field of Security Practices award in 2006; a finalist award for the Information Security Executive of the Year, Central US in 2007; and the CSO Compass Award in 2012, for advancing risk management within the profession. Prior to that, his career included assignments in the military, government intelligence, and consulting, as well as in the financial and insurance industries.
