
Re-defining Vulnerability Remediation Prioritization

 

This blog post is the first in a series of articles (read the second in the series here) describing our approach to vulnerability prioritization in detail. In this article, we introduce general concepts of contextual vulnerability prioritization, how it relates to current risk evaluation methods, and why we think a more modern approach is necessary.

Despite significant investments and global market growth in cybersecurity over the last 20 years, the high technological velocity of the IT market forces understaffed cybersecurity teams to handle a growing list of processes & responsibilities. Organizations today still struggle to efficiently and effectively address all foundational elements of industry best practices, such as vulnerability management.

Vulnerability management is a well-established pillar of basic cybersecurity hygiene. Yet cybersecurity incidents stemming from already-known vulnerabilities at large organizations with well-funded, well-equipped cybersecurity teams demonstrate that effectively remediating vulnerabilities on attackers' most valuable targets remains a struggle.

In practice, vulnerability management requires most organizations to use one or more tools to regularly survey IT assets & their respective vulnerabilities on a global scale. This activity is the starting point of a continuous process that ultimately aims at remediating these vulnerabilities.

Basic Vulnerability Management Process

With global corporate networks that can sometimes number in the hundreds of thousands of assets compounded by the ever-increasing number of reported vulnerabilities, the output of these scanning products can quickly become overwhelming, with the largest networks featuring millions of vulnerabilities to consider for remediation.

Faced with the near-impossible task of remediating all of these vulnerabilities, organizations have to devise strategies to balance resource allocation against remediation coverage: organizations must prioritize.

Prioritization is a crucial prerequisite to remediation, as organizations have to assess the risk that each and every finding represents and decide what should be addressed first. Organizations use several methods to do this today.

Should I Prioritize Vulnerabilities using the CVSS?

 

One very basic method to prioritize vulnerabilities is to use the severity ratings provided with the vulnerability definition (CVSSv2/CVSSv3) as a proxy value for risk, and attempt to address all vulnerabilities over a certain threshold. This approach is recommended (and sometimes mandated) as a minimum by some industry standards, such as PCI DSS (requirements 6.1/6.2) and the (now retired) NIST SP 800-117, as well as other governmental entities.

Though this approach might seem efficient at first, using the CVSS score as an indicator of risk comes with critical limitations, especially in large environments.

Quick recap on the CVSS

The score aims to measure the severity of IT vulnerabilities based on their intrinsic properties. It is computed from a series of fixed, ordinal metric values expressed as qualitative ratings (low, medium, high) meant to be human-interpretable.
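To make this concrete, below is a minimal sketch (in Python, for the scope-unchanged case only) of how a CVSS v3.1 base score is assembled from those fixed metric values. The constants come from the FIRST specification; the simplified roundup is an approximation of the spec's integer-based Roundup function:

```python
import math

# CVSS v3.1 fixed metric values for the scope-unchanged case (FIRST spec).
ATTACK_VECTOR = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
ATTACK_COMPLEXITY = {"L": 0.77, "H": 0.44}
PRIVILEGES_REQUIRED = {"N": 0.85, "L": 0.62, "H": 0.27}
USER_INTERACTION = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}  # confidentiality / integrity / availability

def roundup(x: float) -> float:
    """Smallest number with one decimal >= x (simplified CVSS Roundup)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, scope unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                      * PRIVILEGES_REQUIRED[pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-exploitable, low-complexity, no-privilege, high-impact flaw:
# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # -> 9.8
```

Note how every input is one of a handful of fixed constants: the formula can only ever produce a limited set of outputs.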

By design, it does not allow for a fine-grained analysis of vulnerabilities. In fact, beyond their total absence of context, one of the main drawbacks of using published CVSS scores as a proxy value for risk-based prioritization is their lack of granularity & diversity.

As others have noted, there are only a few hundred "paths" available when using the CVSS system [1], and whether you use V2 or V3 criteria, every final score has to land on one of the 101 values the scale allows (0.0 to 10.0).

A quick investigation of real data shows that of these 101 possible values, NIST has only ever assigned about 73, and just 6 scores account for 67% of all vulnerabilities present in the NVD.

This is reflected in enterprise networks, where we usually see fewer than 60 different scores, with about 12 scores accounting for up to 90% of instantiated vulnerabilities.

Distribution of CVSS scores in enterprise networks

These trends confirm past publications demonstrating that using baseline-only published CVSS scores, without any contextual information, as a risk measure leads to a substantial waste of remediation effort.
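This kind of distribution analysis is easy to reproduce. The sketch below counts distinct CVSSv3 base scores in a locally downloaded NVD JSON feed; the filename is a placeholder, and the layout shown is the legacy NVD 1.1 feed format:

```python
import json
from collections import Counter

# Placeholder filename for a locally downloaded legacy NVD 1.1 JSON feed.
with open("nvdcve-1.1-2023.json") as f:
    feed = json.load(f)

scores = Counter()
for item in feed["CVE_Items"]:
    metrics = item.get("impact", {}).get("baseMetricV3")
    if metrics:
        scores[metrics["cvssV3"]["baseScore"]] += 1

total = sum(scores.values())
print(f"{len(scores)} distinct scores across {total} CVEs")
for score, count in scores.most_common(6):
    print(f"  {score:>4}: {count / total:.1%} of entries")
```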

How about the Temporal and Environmental Components of the CVSS?

The baseline CVSS can be supplemented with more context-aware elements through its temporal and environmental scoring provisions, which could well be considered in order to gain granularity and contextual information density.

CVSS Evaluation Process

Unfortunately, this approach comes with well-established limitations: first, it is an error-prone activity that hardly scales with modern network complexity; second, it suffers from great variability in judgement, even among the most seasoned security experts.

Does this approach give us more granularity? Even taking into account all possible variations of the temporal and environmental factors, which would increase the potential number of "paths" to the final score to a few thousand, the system still maps this analysis onto the same scale of 101 possible values from 0.0 to 10.0. This is still insufficient to prioritize vulnerabilities in environments with even a few thousand assets.
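For reference, the temporal adjustment itself is just three more fixed multipliers applied to the base score. A minimal sketch using the CVSS v3.1 temporal constants:

```python
import math

# CVSS v3.1 temporal metric values (FIRST spec); "X" means Not Defined.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """Temporal score = Roundup(Base * E * RL * RC)."""
    raw = base * EXPLOIT_CODE_MATURITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc]
    return math.ceil(raw * 10) / 10

# A 9.8 with proof-of-concept exploit code, an official fix available,
# and reasonable report confidence:
print(temporal_score(9.8, "P", "O", "R"))  # -> 8.5
```

Even fully qualified, the result still lands on the same 101-value scale.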

Can Risk Frameworks help Prioritize Vulnerabilities?

More mature organizations will usually put the CVSS aside as a proxy value for risk and may prefer more elaborate risk management frameworks, such as NIST, ISO, FAIR, Risk IT, or JGERR, which provide more granularity and flexibility for risk evaluation.

Quick recap on risk management frameworks

Across risk management frameworks, risk (R) is generally defined as the product of the likelihood (L) that an incident occurs (here, the probability that a specific vulnerability is exploited by malicious adversaries) and the impact (I) of that incident on the organization.

R = L × I

Pretty straightforward, right?
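In code, the formula is trivial; everything interesting hides in how the two factors are estimated. A toy sketch with made-up numbers:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    likelihood: float  # estimated probability of exploitation, 0..1
    impact: float      # estimated organizational impact, 0..1

# Illustrative values only; producing L and I at scale is the hard part.
findings = [
    Finding("CVE-A", likelihood=0.9, impact=0.2),
    Finding("CVE-B", likelihood=0.3, impact=0.9),
    Finding("CVE-C", likelihood=0.7, impact=0.7),
]

# R = L x I, remediate the highest risk first.
for f in sorted(findings, key=lambda f: f.likelihood * f.impact, reverse=True):
    print(f"{f.cve}: risk = {f.likelihood * f.impact:.2f}")
```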

Over the years, quite a few papers have been published on refining the L part of risk with different approaches, for instance VULCON, a system for vulnerability prioritization, mitigation, and management [2].

But very little research has been published on automating the estimation of the impact (I) at scale, especially as it relates to the specifics of the organization itself.

How can we approximate the Impact (I) measure of risk?

Measuring the impact portion of the risk a vulnerability represents theoretically requires "near-perfect knowledge" of the operational context in which it is exposed. Why? Because in practice there are virtually infinite factors to consider in evaluating the impact (I) to obtain a near-perfect measure.

As such, most risk frameworks aim for an approximate measure.

One of the most direct methods is to maintain a business-specific quantitative (a precise dollar amount) or qualitative (low/medium/high/critical) value score for each concerned asset, as assumed by the likes of FAIR, Risk IT, or NIST SP 800-30:

Looking at the above graph, which represents one of these risk assessment methodologies, it is pretty obvious that this approach comes with scalability issues. Networks with even a few thousand assets can't afford to go through all of these steps for most of them.
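To illustrate the bookkeeping involved, here is a minimal sketch of per-asset qualitative valuation in the spirit of those frameworks. The labels, weights, and asset names are illustrative assumptions, not values prescribed by FAIR, Risk IT, or NIST SP 800-30:

```python
# Illustrative qualitative-to-numeric mapping (an assumption, not a standard).
ASSET_VALUE = {"low": 0.25, "medium": 0.5, "high": 0.75, "critical": 1.0}

# Someone has to assign and maintain one label per asset; this is the part
# that does not scale to hundreds of thousands of assets.
inventory = {
    "payroll-db-01": "critical",
    "intranet-wiki": "low",
    "build-runner-7": "medium",
}

def impact(asset: str) -> float:
    """Approximate the impact (I) from the asset's business-value label."""
    return ASSET_VALUE[inventory[asset]]

print(impact("payroll-db-01"))  # -> 1.0
```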

To improve on that, other models offer substantial speed improvements for impact evaluation by relying more heavily on the input of experienced practitioners, who try to distill & peer-review their organization's tribal knowledge.

While these approaches are theoretically sound, in practice they have to be limited in scope to a subset of the assets, for reasons of scalability & repeatability:

  • Strenuous work is required to maintain the organization's cultural & historical knowledge over time, which quickly becomes a labour-intensive task, incompatible with the high dynamism and continuous stream of vulnerabilities of modern IT environments [3].
  • The human-driven estimation at the core of these systems is hardly repeatable over time: it is potentially influenced by fatigue, depression, or resentment, or simply by undue influence from internal corporate politics.

A more modern approach should include a continuous, automated & repeatable evaluation of all vulnerabilities within their organizational context.

Note that this approach should not discard work that has already been done. Rather, it should build on the manual labour required by existing frameworks & risk evaluation methods, and provide a single risk-based, granular metric for vulnerability prioritization.

Introducing Contextual Prioritization

Through this series of blog posts, we're going to propose a new risk-based data aggregation model called Contextual Prioritization, which brings together both the likelihood (L) of a vulnerability being successfully exploited and its estimated impact (I) for the organization.

This scalable and flexible model that we will explore in upcoming blog posts works in multiple ways:

  • By continuously extracting & learning from a large number of expert-analyst signals in a non-intrusive way, from both internal, organization-specific activity and external open-source intelligence (OSINT) & private feeds. This passive signal extraction makes it capable of supplementing existing risk calculation techniques and of understanding organizational priorities by transferring them to machine learning models.
  • By being supplemented with elements of unsupervised machine learning to statistically characterize the different assets on subnets, as a way to simulate an attacker's intuition & behavior when targeting specific systems in a defined network (see the sketch after this list).
  • By being flexible enough to be complemented with simpler heuristics that take pre-existing risk categorization techniques into account.
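As a sketch of the second point, the snippet below clusters assets on a subnet from a few hypothetical features; the features, numbers, and choice of k-means are illustrative assumptions, not our production pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-asset features: open ports, exposed admin services,
# days since last patch. Real feature engineering would be far richer.
assets = np.array([
    [3, 0, 12],    # workstation-like
    [2, 0, 9],
    [14, 2, 90],   # server-like, stale patching
    [15, 3, 120],
    [4, 1, 30],
])

# Group statistically similar assets; an attacker scanning the subnet
# would likewise single out the server-like outliers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(assets)
for asset, label in zip(assets, labels):
    print(asset, "-> cluster", label)
```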

This statistical & machine-learning-driven approach makes the model:

  • Much more scalable than traditional methods, as the technique can be applied continuously to the entire network, not just a subset of assets.
  • Able to work with incomplete knowledge (partial information), naturally improving as the data set does.
  • Able to evaluate risk in a fault-tolerant way through the principle of the Wisdom of the Crowd: with more non-competing elements to evaluate, the risk of making wrong judgements decreases.

On the importance of the Wisdom of the Crowd

This model relies on the principle of Wisdom of the Crowd for risk computation through statistical analysis and machine-learning techniques:

From the automated collection and aggregation of numerous proxy values into a single risk score emerges a representation of the organizational risk that is as close as possible to the “true” (contextual) risk.
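A toy simulation shows why aggregation helps. Here each proxy signal is modeled as an independent noisy estimate of an assumed, made-up "true" risk, and the aggregate converges toward it as signals accumulate:

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_RISK = 0.62  # the "true" contextual risk we are trying to approximate

def aggregated_estimate(n_signals: int) -> float:
    # Each signal = truth + independent noise (analyst activity, OSINT, ...).
    signals = TRUE_RISK + rng.normal(0, 0.2, size=n_signals)
    return float(signals.mean())

for n in (1, 10, 100, 1000):
    print(f"{n:>4} signals -> aggregated risk {aggregated_estimate(n):.3f}")
```

With one signal the estimate can be far off; with a thousand it typically sits within a percent or two of the true value.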

We’ll explore the concepts behind the Wisdom of the Crowd in the next blog post.

How does this approach help with vulnerability prioritization?

Once put in place, this global & continuous risk scoring of all vulnerabilities in their organization-specific context (taking into account vulnerability properties, asset properties, network properties, and organization-specific information, as well as external factors of influence) can be used to provide security analysts with an ordered list of priorities to address: hence, contextual vulnerability prioritization.

The way this list of priorities is produced will be explored further in the next blog post. Stay tuned!

[1] Scarfone, K., & Mell, P. (2009). An Analysis of CVSS Version 2 Vulnerability Scoring. 2009 3rd International Symposium on Empirical Software Engineering and Measurement. https://doi.org/10.1109/ESEM.2009.5314220

[2] Farris, K. A., Shah, A., Cybenko, G., Ganesan, R., & Jajodia, S. (2018). VULCON: A System for Vulnerability Prioritization, Mitigation, and Management. https://doi.org/10.1145/3196884

[3] Pennington, R., & Tuttle, B. (2007). The Effects of Information Overload on Software Project Risk Assessment. https://doi.org/10.1111/j.1540-5915.2007.00167.x
