
What is Vulnerability Scanning?

Why do Enterprises Need Vulnerability Scanning?

All modern enterprises have countless software applications running on their information system assets, including servers, laptops, workstations, routers, firewalls, and connected devices like printers. Each of these assets runs multiple software applications, from the core operating system to everyday programs like web browsers and word processors. Each application is complex software and, even when initially built and tested, can be delivered with built-in, inadvertent flaws that make it susceptible to exploitation by bad actors with nefarious intent. These built-in software flaws are known as vulnerabilities. Not all software applications have vulnerabilities, but many do.

Closely related to vulnerabilities is another class of security gaps: inadvertent data exposures. These typically result from misconfigured servers that accidentally make data available.

It’s important to note as well that a software application is not a static entity. Software applications go through countless versions over the course of their life cycles, and each new version can introduce new features...and new vulnerabilities. The only way to determine whether software on a given enterprise network has exposed vulnerabilities is to run a vulnerability scan. A vulnerability scan is like a home inspection in that it generates a list of issues that need to be fixed. It is important, but it is only one part of the vulnerability management process: much like a home inspection, if the results aren’t acted on, the scan adds little or no value.

How Does Vulnerability Scanning Work?

Vulnerability scanning works by making a series of inquiries to software and gauging the reaction. Scan templates for different vulnerabilities are built manually or generated automatically, and they are run against network assets and their software. Based on how the software responds to those inquiries, a vulnerability can be identified. Asset “discovery,” closely tied to scanning, probes machines for the services they might be running (e.g., HTTP or SQL). Scanning can either be agentless or require agents. Agents are small software programs, or “bots,” that are installed on assets. Some vulnerability management professionals believe agents are required to achieve a high-quality scan, but newer scanning technologies are increasingly obviating the need for them. In addition, the installation and maintenance of agents is time-consuming, and the presence of an agent, by definition, increases the attack surface of the asset. Some scanning vendors have an agentless philosophy, while others take just the opposite approach. Whether your enterprise should use agents depends on your scanning requirements.
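To make the “inquiry and reaction” idea concrete, here is a minimal sketch of the discovery step: it probes a few common ports on a host and records any banner the service volunteers. The target address and port list are hypothetical, and real scanners use far more sophisticated fingerprinting; only probe hosts you are authorized to test.

```python
import socket

# Hypothetical target (a reserved documentation address) and a few
# common service ports -- purely for illustration.
TARGET = "192.0.2.10"
COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def probe(host: str, port: int, timeout: float = 2.0) -> str | None:
    """Attempt a TCP connection; return the service banner, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                banner = sock.recv(256)  # many services announce themselves
            except TimeoutError:
                banner = b""             # open, but silent until queried
            return banner.decode(errors="replace").strip()
    except OSError:
        return None                      # closed, filtered, or unreachable

for port, name in COMMON_PORTS.items():
    result = probe(TARGET, port)
    if result is not None:
        print(f"{name} ({port}) is open; banner: {result!r}")
```

A real scanner would follow discovery with version fingerprinting, then compare the identified software and versions against a catalog of known vulnerabilities.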

How are Vulnerabilities Discovered?

Vulnerabilities can be discovered by multiple entities, including:

  • The companies that develop the software
  • “White hat” hackers, typically cybersecurity researchers at universities and elsewhere, who identify vulnerabilities and report them to the vendor so they can be fixed
  • Bad actors who discover vulnerabilities and exploit them to gain access to a network
  • “Bug bounty” program participants, who are incentivized to find vulnerabilities and report them to the organization sponsoring the program; sponsors include Microsoft, Facebook, and even government entities like the US Department of Defense

How are Vulnerabilities Identified and Cataloged for Scanners?

The most prominent vulnerability database is maintained by the MITRE Corporation, a non-profit company that serves as a think tank and research operation for the US Federal Government. MITRE maintains the Common Vulnerabilities and Exposures (CVE) database, and its work on the CVE is funded by the US Department of Homeland Security.

The National Institute of Standards and Technology (NIST) maintains the National Vulnerability Database (NVD), which pulls vulnerability data from the MITRE CVE database and adds a severity score from 0 to 10 for each vulnerability. This Common Vulnerability Scoring System (CVSS) score is the primary measure vulnerability teams use to rank the severity of vulnerabilities. As we’ll discuss later, the CVSS score is a good starting point, but a given vulnerability’s score is the same everywhere, irrespective of its network context.

MITRE and NIST make the CVE and CVSS data available to the public free of charge.
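Both databases are queryable programmatically. As a hedged sketch, the snippet below pulls a single CVE record from the public NVD REST API; the endpoint and field names follow the NVD API 2.0 schema as of this writing, so verify them against the current NVD documentation before relying on them.

```python
import requests

# Public NVD API (v2.0). No API key is required for light usage,
# though NVD rate-limits unauthenticated clients.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(NVD_URL, params={"cveId": "CVE-2021-44228"}, timeout=30)
resp.raise_for_status()

record = resp.json()["vulnerabilities"][0]["cve"]
# CVSS v3.1 base metrics, if the NVD has published them for this CVE
cvss = record.get("metrics", {}).get("cvssMetricV31", [{}])[0].get("cvssData", {})
print(record["id"], cvss.get("baseScore"), cvss.get("baseSeverity"))
```

For CVE-2021-44228 (Log4Shell), this prints the 10.0 CRITICAL base score the NVD assigns it.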

What are False Positives?

False positives occur when a scanning engine identifies a vulnerability that is actually not a vulnerability; a false alarm, if you will. False positives present a substantial challenge to vulnerability management practitioners. In a typical enterprise division of labor, security teams are responsible for scanning and vulnerability identification, while IT operations teams are responsible for fixing (or “patching”) vulnerabilities. So, if the security team sends the IT operations team a large quantity of false positives, it not only wastes that team’s precious resources, but also ruins the security team’s credibility in the eyes of the IT operations people. Moreover, most enterprises have tens of thousands of legitimate vulnerabilities, so adding false positives to that overwhelming number just compounds an already intractable challenge. Scanning products that include techniques for minimizing false positives offer significantly more value. As will be discussed below, the real value of a vulnerability scanner is in prioritization, the process of identifying the vulnerabilities that pose the most risk to the organization. And step 0 in the prioritization process is to de-prioritize false positive “vulnerabilities,” as sketched below.
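A minimal sketch of that “step 0,” assuming an invented tracking convention in which confirmed false positives are keyed by CVE and host:

```python
# Findings confirmed as false positives in earlier review cycles,
# keyed "CVE@host" (a hypothetical convention for this sketch).
known_false_positives = {"CVE-2019-1234@web01"}

findings = [
    {"cve": "CVE-2019-1234", "host": "web01"},   # previously ruled out
    {"cve": "CVE-2021-44228", "host": "app02"},  # Log4Shell: very real
]

# Drop known false positives before any prioritization work begins,
# so the IT operations team only ever sees actionable findings.
actionable = [
    f for f in findings
    if f"{f['cve']}@{f['host']}" not in known_false_positives
]
print(actionable)  # only the Log4Shell finding remains
```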

How do Vulnerability Scanners Differentiate Themselves from One Another?

There are many basic scanning products on the market, including some open source, free products. Some commercial products, however, have distinguished themselves by employing more sophisticated technology, for example, artificial intelligence or machine learning. One of the vulnerability scanning challenges these newer technologies attempt to address is the negative impact of the scan itself on the asset being scanned. For the quantum physics buffs out there, it’s the equivalent of the Heisenberg Uncertainty Principle, the notion that observing an atomic particle affects the particle. If an asset is scanned too aggressively, the asset, especially a poorly configured or heavily used one, can crash or suffer significant performance degradation.

Technology can thus be employed to greatly reduce or virtually eliminate the risk of compromising an asset during a scan. One machine learning technique uses historical asset data to determine the best times to scan certain assets based on their usage. The data is derived from previous scan attempts: a scan is attempted, and the asset’s response time to inquiries is collected. The asset is then scanned at different times, and the window with the best response times becomes the asset’s baseline scan time. Given the machine learning foundation of the technique, this optimal scan time automatically adjusts as the software “learns” and improves over time.
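The sketch below illustrates the underlying idea with a deliberately simple statistic; the sample data is invented, and a production system would use an actual learned model rather than a fixed median over hours.

```python
from collections import defaultdict
from statistics import median

# Hypothetical history: (hour_of_day, response_time_ms) pairs collected
# from previous probe attempts against one asset.
samples = [
    (2, 40), (2, 35), (2, 38),
    (9, 210), (9, 250), (9, 230),
    (14, 180), (14, 160),
    (20, 90), (20, 75),
]

def best_scan_hour(history: list[tuple[int, float]]) -> int:
    """Pick the hour whose median response time is lowest, i.e. the
    window in which the asset historically has the most spare capacity."""
    by_hour: dict[int, list[float]] = defaultdict(list)
    for hour, latency_ms in history:
        by_hour[hour].append(latency_ms)
    return min(by_hour, key=lambda h: median(by_hour[h]))

print(f"Schedule scans for {best_scan_hour(samples):02d}:00")  # -> 02:00
```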

How Else Can AI or Machine Learning be Applied to Vulnerability Scanning?

We previously discussed how machine learning can be employed to optimize scan schedules based on an asset’s usage and how its query load varies at different times. Similarly, a real-time evaluation of an asset’s response time during a scan can be used to optimize the scan’s execution. Specifically, the rate at which inquiries are made to an asset is adjustable in real time, and can be increased if the asset’s response time is fast. Alternatively, if the asset being scanned responds slowly, its workload is presumed to be high at the time of the scan, and the query rate can be reduced, all but assuring that the scanning process will not compromise the integrity of the asset’s operation.
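One common way to realize this kind of feedback loop is an additive-increase/multiplicative-decrease controller, sketched below. The thresholds and rates are hypothetical, not taken from any particular product.

```python
import time

# Hypothetical tuning parameters for an adaptive scan-rate controller
TARGET_MS = 100.0               # response time considered "healthy"
MIN_RATE, MAX_RATE = 1.0, 50.0  # queries per second

def adjust_rate(rate: float, response_ms: float) -> float:
    """Additive increase while the asset responds quickly,
    multiplicative decrease the moment it appears loaded."""
    if response_ms < TARGET_MS:
        return min(rate + 1.0, MAX_RATE)  # headroom available: speed up
    return max(rate * 0.5, MIN_RATE)      # asset is busy: back off sharply

def scan(queries, send_query):
    """Issue each inquiry, pacing the next one by the current rate."""
    rate = MIN_RATE
    for q in queries:
        start = time.monotonic()
        send_query(q)  # one inquiry to the asset (probe, request, etc.)
        elapsed_ms = (time.monotonic() - start) * 1000
        rate = adjust_rate(rate, elapsed_ms)
        time.sleep(1.0 / rate)
```

Backing off multiplicatively but speeding up only additively errs on the side of the asset’s stability, which matches the goal of never letting the scan itself degrade production workloads.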

What About Web Applications and Vulnerability Scanning?

Some of the most critical “assets” on any enterprise network are not tangible assets at all, but are, rather, websites or web applications. Web applications are particularly vulnerable to threat actors because, by definition, they can be accessed, and exploited, from any location. Some vulnerabilities are only “locally exploitable,” meaning they pose a risk only if the bad actor has physical access to the device on which the vulnerability resides. Since web applications are accessible from anywhere, all of the vulnerabilities associated with a web application are “remotely exploitable.” Many vulnerability scanning products scan only devices, and rely on third-party software to conduct “web application security testing,” the process of evaluating web applications for their susceptibility to common attack vectors like SQL injection or cross-site scripting. However, it’s important to analyze device and web vulnerabilities collectively, as the risk they pose in combination can exceed their independent risks. For example, if a vulnerability is only locally exploitable but resides on an asset that also hosts one or two websites, that vulnerability becomes, effectively, remotely exploitable. Thus, an ostensibly low-risk vulnerability could represent a much higher risk to the organization than it appears to at first glance.
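To make the combined-risk point concrete, the toy rule below escalates a locally exploitable vulnerability when it lives on a host that also serves a web application. The data model, host names, and CVE ID are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    remotely_exploitable: bool
    host: str

# Hypothetical inventory: hosts known to serve web applications
WEB_APP_HOSTS = {"web01", "web02"}

def effective_exposure(vuln: Vulnerability) -> str:
    """A locally exploitable flaw on a web-serving host is treated as
    effectively remote: the web app offers attackers a path onto the host."""
    if vuln.remotely_exploitable or vuln.host in WEB_APP_HOSTS:
        return "remote"
    return "local"

# Placeholder CVE ID; on paper this flaw is local-only, but its host matters
v = Vulnerability("CVE-2024-0001", remotely_exploitable=False, host="web01")
print(effective_exposure(v))  # -> "remote"
```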

What is Vulnerability Scanning Prioritization?

Irrespective of their sophistication, all vulnerability scanning products will identify thousands, tens of thousands, or even millions of vulnerabilities.  Some of those vulnerabilities will pose a high cyber security risk to the enterprise, while others are significantly less critical.  Knowing which vulnerabilities are critical and which are not is perhaps the most important element of vulnerability management. The process of determining which vulnerabilities are high risk and which are low risk is known as vulnerability prioritization.

What is Vulnerability Patching?

Fixing a vulnerability is called “remediation.” There are a few ways to remediate a vulnerability, for example, isolating the affected asset or putting other compensating controls in place. By far, however, the most common form of vulnerability remediation is “patching.” Patching is simple in concept but can be challenging in reality. Put simply, patching is installing a new version of the software in which the vulnerability or vulnerabilities were identified. The challenge is that installing a new version of software can sometimes break the system on which it runs, or adjacent systems. Further, even when the installation goes smoothly, the asset may have to be taken offline for a period of time, and in either scenario, resources are required to complete the patch.

Why is Vulnerability Scanning Prioritization so Important?

Since vulnerabilities are so voluminous on the typical corporate network, and because remediation of those vulnerabilities is resource-intensive and can be risky, knowing which vulnerabilities pose the most risk to the organization and which can be either ignored or generally de-prioritized is crucial to reducing vulnerability risk. Prioritization is what transforms a basic vulnerability scan from a report into actionable information that can drive intelligent remediation activities and ultimately reduce vulnerability risk.

What are the Best Methods of Vulnerability Prioritization?

Until recently, most organizations prioritized vulnerabilities using the only means available: the CVSS (Common Vulnerability Scoring System) score. Each vulnerability discovered and cataloged is assigned a CVSS score from 0 to 10 based on a number of factors. Generally, vulnerabilities scoring 9.0 or above are rated “critical,” and those from 7.0 to 8.9 “high.” The CVSS score serves as a solid starting point for the prioritization of vulnerabilities, but its primary deficiency is that it remains constant irrespective of the vulnerability’s context.

A more recent approach to prioritizing scanned vulnerabilities is predictive exploitation. This technique attempts to predict whether an exploit for a given vulnerability will be developed and deployed by threat actors to compromise enterprise networks. Predictive exploitation is an improvement over the CVSS score, to be sure, but its primary issue is that it buckets vulnerabilities into those that are likely to be exploited and those that aren’t. This approach still leaves thousands of vulnerabilities in the “likely to be exploited” category, so remediation teams don’t know which of those remaining thousands are the highest priority.

The most recent and technologically advanced technique is contextual prioritization. Using this technique, all vulnerabilities on a network are ranked from 1 to n in order of priority, providing a prescriptive remediation list for IT teams with no ambiguity. The list is developed using more than three dozen factors and advanced machine learning, and is recalculated every 5 minutes. Contextual prioritization accounts for not only the CVSS score and the likelihood of exploitation, but many other factors like the vulnerability’s network environment, its proximity to web applications, and the importance of the asset to the business, all automated and requiring no human effort.
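The hedged sketch below shows the shape of such a ranking using a fixed linear formula over just four factors; a real contextual engine would weigh dozens of learned factors, but the contrast with pure CVSS ordering is already visible: the highest CVSS score does not come out on top.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                # 0-10 base score from the NVD
    exploit_likelihood: float  # 0-1, e.g. from a predictive model
    asset_criticality: float   # 0-1, business importance of the host
    near_web_app: bool         # host also serves a web application

# Hypothetical weights, invented for this illustration
def contextual_score(f: Finding) -> float:
    return (
        0.4 * (f.cvss / 10)
        + 0.3 * f.exploit_likelihood
        + 0.2 * f.asset_criticality
        + 0.1 * (1.0 if f.near_web_app else 0.0)
    )

findings = [
    Finding("CVE-A", cvss=9.8, exploit_likelihood=0.10,
            asset_criticality=0.2, near_web_app=False),
    Finding("CVE-B", cvss=7.5, exploit_likelihood=0.90,
            asset_criticality=0.9, near_web_app=True),
    Finding("CVE-C", cvss=8.1, exploit_likelihood=0.40,
            asset_criticality=0.5, near_web_app=False),
]

# Rank 1..n, highest contextual risk first: CVE-B outranks the
# higher-CVSS CVE-A because of its context.
for rank, f in enumerate(sorted(findings, key=contextual_score, reverse=True), 1):
    print(rank, f.cve_id, round(contextual_score(f), 3))
```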
