Tuesday, April 23, 2024

HSTRisk: Addressing Cyber Risk Under the New DHS Directive

The new DHS Binding Operational Directive 19-02 has been released and has made news for mandating an increased control posture. It contains two new required actions. The first involves ensuring that the scope of the vulnerability scans is correct, but it’s the second that has made waves: an accelerated patching schedule for government agencies of 15 days for a critical security flaw and 30 days for a vulnerability with a high severity rating.

I’d argue there is a third hidden requirement that agencies are likely to run into when they try to comply with the new directive, and it stems from some basic limitations in the way the federal frameworks address cyber risk.

Closing the gap from vulnerability discovery to closure is indisputably a net positive. In the parlance of risk management, what is being done is not only reducing attack surface but also limiting the window of vulnerability (WOV). Zero-day exploits are most potent before a vendor patch is made available. This period, after a flaw has been discovered but before it is disclosed, is so valuable that it draws nation-state interest as a potential cyber weapon. The government’s Vulnerabilities Equities Process (VEP) even suggests that a stockpile of zero-day exploits held in reserve is a national security interest. So, encouraging our government organizations to patch quickly helps close a clear gap among Internet-accessible systems.

Here’s where some underlying assumptions may hinder the way in which agencies could approach cyber risk more effectively. Directive 19-02 continues the practice of relying on the Common Vulnerability Scoring System (CVSS) framework to determine vulnerability criticality, and it suggests that agencies use the base score by default. Where practical, however, the CVSS framework allows the base score to be modified using environmental metrics that localize the scores. CVSS uses this additional variable to weight the score in favor of highly critical systems. Doing so requires that an agency has already prioritized its systems using the CIA triad: confidentiality, integrity, and availability. Effectively, this means having a risk rating for each IT asset in your environment. Skipping this additional scoring has the effect of treating all the systems on your network as equal.
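To make that weighting concrete, here is a Python sketch of just the Modified Impact Sub-Score (MISS) piece of the CVSS v3.1 environmental formula. The requirement and impact constants are taken from the specification, but the full environmental score involves additional terms (exploitability, scope) omitted here; this fragment only shows how per-asset CIA requirements push the same vulnerability's score up or down.

```python
# Simplified illustration of how CVSS v3.1 environmental metrics weight
# the impact sub-score by per-asset CIA requirements. This is only the
# Modified Impact Sub-Score (MISS), not the complete environmental score.

WEIGHTS = {"L": 0.5, "M": 1.0, "H": 1.5}   # CR/IR/AR requirement weights
IMPACT = {"N": 0.0, "L": 0.22, "H": 0.56}  # base C/I/A impact values

def miss(conf, integ, avail, cr="M", ir="M", ar="M"):
    """Modified Impact Sub-Score: base impacts scaled by asset requirements."""
    return min(
        1 - (1 - WEIGHTS[cr] * IMPACT[conf])
          * (1 - WEIGHTS[ir] * IMPACT[integ])
          * (1 - WEIGHTS[ar] * IMPACT[avail]),
        0.915,
    )

# The same vulnerability (C:H/I:H/A:H) scored against two assets:
commodity = miss("H", "H", "H")                     # default (Medium) requirements
crown_jewel = miss("H", "H", "H", cr="H", ir="H")   # high C and I requirements
```

The point is that `crown_jewel` scores at least as high as `commodity` for an identical flaw, which is exactly the per-asset prioritization the base score alone cannot express.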

If you are attempting to patch all critical vulnerabilities on external systems in 15 days (per the new directive), for any reasonably sized organization this will likely mean automating the patching process. The previous guidance allowed 30 days and, within that timeline, manual processes could likely manage the load, even for moderately sized organizations. Automated patching probably means acquiring a new tool, reconfiguring one you already have, or designing and building an entire vulnerability management process (VMP), inclusive of the necessary tools and the temporary staffing uplift to put it all into practice. There is very clearly a cost to comply with this, and the cost of not accomplishing this task needs to be evaluated as well.
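As an illustration of what the new timelines imply for tracking, here is a minimal Python sketch that flags findings whose remediation window has closed. The field names, severity labels, and sample CVE identifiers are hypothetical, not drawn from any particular scanner or the directive's own data formats.

```python
# Minimal sketch of deadline tracking under the directive's timelines:
# critical findings get 15 days from detection, high-severity get 30.
from datetime import date, timedelta

REMEDIATION_DAYS = {"critical": 15, "high": 30}

def due_date(detected, severity):
    """Remediation deadline for a finding detected on a given date."""
    return detected + timedelta(days=REMEDIATION_DAYS[severity])

def overdue(findings, today):
    """Return findings whose remediation window has already closed."""
    return [f for f in findings
            if due_date(f["detected"], f["severity"]) < today]

findings = [
    {"id": "CVE-EXAMPLE-1", "severity": "critical", "detected": date(2019, 5, 1)},
    {"id": "CVE-EXAMPLE-2", "severity": "high", "detected": date(2019, 5, 1)},
]
# By May 20, the critical finding is past its 15-day window; the high one is not.
late = [f["id"] for f in overdue(findings, date(2019, 5, 20))]
```

Even this toy version makes the operational point: at 15 days, the clock runs out fast enough that discovery, ticketing, and patch deployment need to be wired together rather than handled by hand.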

This is where federal standards and practices on cyber risk fall short. Determining risk ratings for your IT assets and evaluating economic impact of noncompliance with 19-02 can’t effectively be done without using a Cyber Risk Quantification (CRQ) model.

Non-CRQ models rely on flawed ordinal-number-based systems or direct selection of the risk variable labels. You know you are in the realm of bad modeling when you find yourself debating whether likelihood is High or Medium-High instead of estimating how often these critical vulnerabilities will be attacked in a year.

Indeed, your model is likely flawed if it cannot withstand scrutiny over the selection of the variables. If, when being questioned about why an impact rating is High, you are unable to articulate the economic impact to the organization, then your model is not really aiding decision-making. A CRQ model will respond to such scrutiny with direct financial numbers, such as: organizational reputation impact would be between $1 million and $5 million, lost revenue between $100,000 and $500,000, and repair costs between $10,000 and $300,000.
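A toy sketch of how such a model turns those figures into a defensible number: combine the per-category loss ranges above with an estimate of how many loss events occur per year, and simulate. Real CRQ models such as FAIR use calibrated estimates and richer distributions; the uniform draws and crude frequency model here are purely illustrative assumptions.

```python
# Toy Monte Carlo sketch: per-category loss ranges plus an annual event
# frequency estimate yield an annualized loss distribution. Uniform draws
# are illustrative only; real CRQ models use calibrated distributions.
import random

random.seed(7)  # reproducible illustration

LOSS_RANGES = {                 # (low, high) in dollars, from the text above
    "reputation": (1_000_000, 5_000_000),
    "lost_revenue": (100_000, 500_000),
    "repair": (10_000, 300_000),
}

def one_event_loss():
    """Total loss for a single event: one draw per loss category."""
    return sum(random.uniform(lo, hi) for lo, hi in LOSS_RANGES.values())

def simulate_annual_loss(events_per_year=2, trials=10_000):
    """Simulate total loss for many trial years; return sorted totals."""
    totals = []
    for _ in range(trials):
        n = random.randint(0, 2 * events_per_year)  # crude frequency draw
        totals.append(sum(one_event_loss() for _ in range(n)))
    totals.sort()
    return totals

losses = simulate_annual_loss()
median = losses[len(losses) // 2]        # "expected" year
p90 = losses[int(len(losses) * 0.9)]     # bad year
```

The output is a distribution you can defend under questioning ("in nine years out of ten, annual losses stay below the 90th percentile figure") rather than a label you can only restate.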

It may seem like a challenge to utilize CRQ for public-sector concerns; however, it just requires understanding more about the public service your organization is providing. Like any business, there are customers (internal and external). Gaining a sense of what the impact of that service failing means to those constituencies allows you to have a better understanding of the economic impact.

Don’t be fooled into thinking that private sector concerns have some greater level of precision in their revenue models than the public sector. Most find it challenging to articulate what one hour of downtime means. Don’t be afraid to drive that kind of maturity into the organizations you are supporting. Collaborate better with internal clients to come up with agreed-upon ranges of impact. Many government organizations have value of a statistical life (VSL) calculations that they use to help drive priorities; this can be done for cyber concerns as well.

Other variables for measuring impact for public sector organizations can be calculated by thinking about the wider economic impact of service failure. How many lives are going to be impacted by an outage or hack caused by noncompliance with the new patching standard? To what extent and in what way? Does your organization have an hourly cost for its citizen clients’ time? Such variables, even for missions that do not support safety-critical systems, can still improve prioritization of investment. Mature cyber organizations in the private sector are often driving their businesses to think about these factors to not only solve their own problems, but to improve the overall organization.

It’s only with an approach like CRQ that an organization can accomplish the two critical components of complying with Directive 19-02: first, determining the technology uplift required to comply with the new 15-day requirement and, second, understanding what happens if an agency cannot come up with the funding necessary to support that guideline. It’s a far better argument to say that by not complying we are putting an average of $10 million a day at risk than to say it’s of high impact to our constituencies.


Jack Freund
Dr. Jack Freund is a leading voice in cyber risk measurement and management. He is an expert at building relationships across the business to collaborate, persuade, and sell information risk and security programs. Jack is currently serving as Director, Risk Science at RiskLens and previously worked for TIAA as Director, Cyber Risk. Jack holds a PhD in Information Systems, master’s degrees in Telecom and Project Management, and a BS in CIS. He holds the CISSP, CISA, CISM, CRISC, CIPP, and PMP designations. Jack has been named a Senior Member of the ISSA, IEEE, and ACM, a Visiting Professor, Academic Advisory Board member, and IAPP Fellow of Information Privacy. He is the 2018 recipient of ISACA’s John W. Lainhart IV Common Body of Knowledge Award and the FAIR Institute’s 2018 FAIR Champion Award, and was presented with Nova Southeastern University’s Distinguished Alumni Award. Jack’s book on quantifying risk, Measuring and Managing Information Risk: A FAIR Approach, was inducted into the Cybersecurity Canon in 2016. Jack’s writings have appeared in the ISSA Journal and he currently writes a column for the ISACA newsletter.
