
Contextual Prioritization: An Introduction to Ranking a Vulnerability’s Priority Based on its Unique Network Context

Introduction

There have been several attempts in the security community to quantify the risk of individual vulnerabilities, most notably the CVSS score published for each CVE in the National Vulnerability Database. Others have tried to assign a vulnerability risk score based on factors independent of – or external to – an organization’s unique network architecture. Seasoned IT and security professionals, however, understand that a vulnerability’s criticality can’t be meaningfully assessed without accounting for its context: where it lies in the network, what machine it’s running on, how it’s connected to other services and devices, whether it hosts any web applications (and how large and vulnerable they are), whether it’s exposed to the Internet, its importance to the business, and so on. Building remediation priorities through the lens of not only external factors but, more importantly, the specific environment in which the vulnerabilities reside is called Contextual Prioritization.

This article introduces a modern approach to vulnerability risk scoring that accounts for this non-negotiable element of meaningful risk analysis, and examines its ramifications for setting priorities in the remediation process.

Exploitability Prediction is Something; Context-Based Prediction is Everything

Often, the sheer complexity of IT systems, rapid changes in technology, understaffing, and business decisions made without input from IT and cyber security teams make it very difficult for IT teams to keep up. The result can be a large number of exposed application services and an array of convoluted, misconfigured infrastructure, leaving doors open for attackers to exploit.

Robust vulnerability management is certainly a key element of any serious cyber security program, and it is one of the first steps small and medium-sized businesses (SMBs) should consider when building a security strategy. In recent years, following the disclosure of major breaches across public and private organizations alike, vulnerability management has become an increasingly prominent part of the cyber security landscape.

This growing awareness of the risk posed by exploitable network assets has grabbed the attention of major vendors, driven the focus on so-called zero-day vulnerabilities, and prompted the development of tools that predict the exploitation of these newly disclosed vulnerabilities. Yet there is an argument to be made that this conventional approach is not the most effective one for the vast majority of organizations.

Predictive exploitability – projecting which recently discovered vulnerabilities attackers are most likely to exploit and assigning a risk score on that factor alone – has been the topic of many published papers. It therefore comes as no surprise that it is the current state of the art in legacy vulnerability management tools. But most of the time, it is not through zero-day vulnerabilities that companies are breached. Rather, it is older, forgotten, and poorly configured resources – so-called “N-day” vulnerabilities – that are exploited using well-established and easily accessible techniques:

“N-day vulnerabilities are a goldmine for attackers because the hard work has already been done. In certain cases, active exploits may already exist and be readily available from public disclosure documents. Compare this with zero-days, which are time-consuming and expensive to find and exploit — the reason why their use is declining among criminal groups.”

Understanding Context: Scrutinize, Recognize, Prioritize

As discussed previously, most companies today face the same challenges: too many assets, and not enough resources and people to practically address every identified security issue. Moreover, traditional vulnerability assessment tools generate too much data and are often inaccurate or simply wrong (e.g., false positives). Reporting vulnerabilities is inherently of little value if we can’t put those vulnerabilities into context. Sadly, inferring context is notoriously difficult and seldom done correctly, if done at all.

But that shouldn’t be the case. Understanding context is a very human and intuitive thing to do. It is also the Holy Grail of artificial intelligence. Understanding the context of an object means inferring the situation in which the object exists. More than that, it means being able to usefully relate the object and its environment.

In vulnerability management, this is much more than an academic exercise. Here, context means not only finding a security issue, but understanding its causes and implications. Context in vulnerability management includes how the underlying asset is used, the potential attack vectors that could reach the weakness, the asset’s surroundings, and the business line affected by a potential breach, among other factors. To put it bluntly, it is impossible to meaningfully quantify the risk of a given vulnerability without accounting for its context.
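As a rough illustration of what accounting for context could look like in code, the short sketch below adjusts a vulnerability’s base CVSS score with a handful of contextual factors – Internet exposure, hosted web applications, business criticality, and connectivity. The field names, weights, and formula are illustrative assumptions, not any vendor’s or standard’s scoring algorithm.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    """Illustrative context attributes for one vulnerability (all fields are assumptions)."""
    base_cvss: float             # base severity, e.g. the CVSS score from NVD (0.0-10.0)
    internet_facing: bool        # is the affected asset reachable from the Internet?
    business_criticality: float  # 0.0 (disposable lab box) to 1.0 (revenue-critical system)
    connected_services: int      # services/devices reachable from the asset (lateral movement)
    hosts_webapps: bool          # does the asset host web applications?

def contextual_priority(ctx: VulnContext) -> float:
    """Blend base severity with network and business context into a 0-100 priority.

    The weights below are made up purely to illustrate the idea, not a published formula.
    """
    score = ctx.base_cvss * 10.0                     # start from base severity, scaled to 0-100
    if ctx.internet_facing:
        score *= 1.3                                 # exposed assets attract far more attention
    if ctx.hosts_webapps:
        score *= 1.1                                 # web applications widen the attack surface
    score *= 0.6 + 0.4 * ctx.business_criticality    # dampen low-value assets, boost critical ones
    score *= 1.0 + min(ctx.connected_services, 20) / 100.0  # account for lateral-movement potential
    return round(min(score, 100.0), 1)

# A medium-severity finding on an Internet-facing, business-critical web server
print(contextual_priority(VulnContext(6.5, True, 0.9, 12, True)))  # 99.9 – near the top of the queue
```

In this (entirely hypothetical) example, a 6.5 “medium” CVE ends up near the top of the queue because of where it lives, which is exactly the kind of reordering a context-blind score can never produce.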

Learning Context

Unfortunately, inferring all of the specific attributes at each level of abstraction throughout an organization is a practically impossible task. But what if we could approximate this with a sufficient level of accuracy? An approach accurate enough to enable an efficient and risk-centric remediation prioritization strategy? One executed systematically, without the need for expensive consultants or a team of security experts? The solution to this problem is to combine three areas of insight:

1. Cross-sectional Data Aggregation
2. Automated Statistical Analysis
3. Domain Expertise
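To make the combination concrete, here is a deliberately simplified sketch of how these three ingredients might fit together: aggregate raw signals per asset, reduce them to a statistical importance estimate, then let expert-written rules adjust the result. Every function, field, and weight below is a hypothetical illustration rather than a description of any specific product.

```python
from statistics import mean

def aggregate_signals(asset: dict) -> dict:
    """1. Cross-sectional data aggregation: collect heterogeneous facts into one record."""
    return {
        "exposure": 1.0 if asset.get("internet_facing") else 0.2,
        "traffic": min(asset.get("daily_connections", 0) / 10_000, 1.0),
        "patch_lag": min(asset.get("days_since_last_patch", 0) / 365, 1.0),
    }

def statistical_importance(signals: dict) -> float:
    """2. Automated statistical analysis: a plain average here; a trained model in practice."""
    return mean(signals.values())

def apply_expert_rules(asset: dict, importance: float) -> float:
    """3. Domain expertise: hand-written overrides encoding things a model may miss."""
    if "domain-controller" in asset.get("tags", []):
        importance = max(importance, 0.9)   # a domain controller is always a high-value target
    return importance

asset = {"internet_facing": True, "daily_connections": 4_000,
         "days_since_last_patch": 200, "tags": ["web", "prod"]}
print(round(apply_expert_rules(asset, statistical_importance(aggregate_signals(asset))), 2))  # 0.65
```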

One useful analogy is painting. Take, for example, Vincent van Gogh’s “The Starry Night.” Even though no single brushstroke depicts a perfectly accurate element of reality, when we take in the whole picture, the scene comes into perspective. Contextual prediction works the same way. By aggregating specific facts from individual models – remediation behaviors, naming schemes, website content, network pattern complexity, asset and tool usage – it is possible to paint a realistic picture of what should matter most to a given organization. And with today’s technology, all of this can be accomplished without active human interaction.
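Naming schemes are a good example of one such individual signal. The snippet below is a deliberately naive illustration of inferring an asset’s likely role and importance from its hostname alone; the patterns and weights are assumptions that would differ for every organization.

```python
import re

# Illustrative hostname patterns only: every organization's naming conventions differ.
ROLE_HINTS = [
    (re.compile(r"\b(dc|ad)\d*\b"), "domain controller", 0.9),
    (re.compile(r"\b(db|sql|ora)\d*\b"), "database server", 0.8),
    (re.compile(r"\bprod\b"), "production system", 0.7),
    (re.compile(r"\b(test|dev|lab)\d*\b"), "non-production", 0.2),
]

def guess_importance(hostname: str) -> tuple[str, float]:
    """Guess an asset's likely role and a rough importance weight from its hostname alone."""
    name = hostname.lower()
    for pattern, role, weight in ROLE_HINTS:
        if pattern.search(name):
            return role, weight
    return "unknown", 0.5   # no naming hint: fall back to a neutral prior

print(guess_importance("sql01-prod.corp.example.com"))  # ('database server', 0.8)
```

No single hint like this is reliable on its own; the value comes from combining many such weak signals, as in the aggregation sketch above.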

Putting it all Together

After the aggregation step, we can correlate this knowledge with industry-wide practices for similar organizations, anonymized cloud-based remediation data, and human security expertise. Once this is done, we can, for example, distinguish an interesting target from one that surely is not, identify which assets are underestimated or overestimated, and spot those that have most probably been forgotten or intentionally left unresolved. We can infer elements such as network context, business line priorities, and likely or unlikely attack scenarios. It even enables an improvement in detection reliability.

With such knowledge of the vulnerabilities and business context in hand, the path to a risk-based, efficient remediation program becomes a lot shorter.
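As a final, hypothetical illustration of that payoff, a remediation queue built this way might simply sort findings by a context-adjusted score instead of by raw CVSS – and the order can change dramatically (the CVE identifiers and multipliers below are placeholders):

```python
# Hypothetical findings: (CVE id, base CVSS score, context multiplier inferred as sketched above)
findings = [
    ("CVE-2023-0001", 9.8, 0.3),   # critical CVSS, but on an isolated lab machine
    ("CVE-2021-0002", 6.5, 1.8),   # medium CVSS, on an Internet-facing revenue system
    ("CVE-2022-0003", 7.5, 1.0),   # high CVSS, on a typical internal server
]

# A CVSS-only queue puts the lab machine first...
by_cvss = sorted(findings, key=lambda f: f[1], reverse=True)
# ...while a context-adjusted queue promotes the exposed, business-critical asset.
by_context = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

print([f[0] for f in by_cvss])     # ['CVE-2023-0001', 'CVE-2022-0003', 'CVE-2021-0002']
print([f[0] for f in by_context])  # ['CVE-2021-0002', 'CVE-2022-0003', 'CVE-2023-0001']
```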
