
Vulnerability Hype Considered Harmful

 

For all the disagreements that exist in cybersecurity, the one constant that unites most of us is our curiosity about, and interest in, “new things”.

There’s no better example of this than newly published vulnerabilities & exploits, one of the rare things that draws high interest from marketing/sales functions and technical experts alike: the former see new business opportunities, while the latter see new knowledge to acquire and potentially new techniques to add to their arsenal.

Like clockwork, every substantial vulnerability that gets published triggers a slew of mostly redundant articles on one hand, echoing the new acronyms, logos and cool™ names™ delivered on the other.

Our industry thrives on hype: budgets have increased 35x since 2004, with a growth rate that closely matches the increase in published vulnerabilities. That hype is delivered with an abundance of new concepts & acronyms, quickly formalized by analysts into our common industry lingo, all competing to consume their share of these growing capex budgets.

Are we secure yet?

This abundance of hype is probably not exclusive to the cybersecurity industry (after all, Gartner publishes hype cycle updates for many technology areas), and though it may help alert us to new concepts and keep us up to date on industry trends, hype is a very powerful enemy of repeatability and predictability: two key elements of mature security programs.

If there’s one sub-field of cybersecurity where hype can be a particular hindrance to operational effectiveness, it’s vulnerability management.

Vulnerability management has been a core element of any cybersecurity program for many years now, so the uninitiated might feel it’s strongly established, commoditized in most companies, and a less trendy field of cybersecurity. As such, it’s often relegated to opex budgets with less visibility, and must work within established resource boundaries to address an ever-increasing number of issues (newly published vulnerabilities).

It is within this context that vulnerability hype competes for experts’ time and attention with a theoretically mature & repeatable process that has to work efficiently with scarce resources.

But what happens in practice?

Let’s look at this through the lens of a real-life scenario. Last month, the NSA and Microsoft announced the release of a security fix for a core crypto library affecting almost all Windows versions: CVE-2020-0601.

Predictably, the hype machine was quickly spun up to relay the news about this new vulnerability. Speaking with an industry expert in charge of a VM program, we learned that as soon as Microsoft announced their patch, this expert came under substantial pressure from the company's executive team to produce more-than-daily updates on the patching status of an environment of hundreds of thousands of machines.

This patch was announced very publicly and delivered as part of a “Patch Tuesday” release before any exploit was available, for an OS that has an established and somewhat reliable enterprise patch delivery mechanism. Microsoft delivers patches like this every month, and each batch carries a high potential of including a very critical vulnerability. In short, vulnerability discoveries are not man-bites-dog stories. Quite the opposite: they’re routine.

See any problem with this situation?

Sound familiar?

This scenario could very well mean that for every new CVE that receives the infosec hype treatment, there will be demand for special measures that interrupt the established vulnerability management process (if any) to ensure these new (perceived) risks are managed in an expedited way, so everyone can sleep at night...

And that’s the crux of the issue at hand.

Humans are not particularly efficient at multitasking, a fact confirmed by a 2007 Microsoft study showing that people took up to 25 minutes to return to their previous task after responding to an interruption. This piles up over a typical work day, and some experts say it can add up to 6 hours of lost productive time per day.

Interruptions are not just a time sink; they also cause substantial employee demotivation. In another study of 20,500 employees, conducted in partnership with Harvard Business School, the results clearly showed that the lack of opportunity to focus in an absorbed way on a specific task was a major demotivator in a given job.

More recent studies even suggest that some types of multitasking may negatively affect certain brain functions and structures.

The key takeaway here is that these interruptions do not just affect people; most importantly for companies, they interrupt established processes.

By side-tracking planned activities and asking employees to focus on another process before returning to their regular work, we make it extremely hard for people to focus on delivering the original process and optimizing it toward a higher maturity level.

CMM Maturity Levels

So, what can be done?

Well, there have been some theoretical advances in the field of Machine Learning aimed at helping us deal with multitasking more efficiently and driving towards what some call “augmented intelligence,” but in the immediate future there are more pedestrian approaches we can use while we wait for the science to be implemented.

Going back to my introductory example, it’s clear that having an established remediation process is one of the core tenets of any VM program, but a key element of that remediation process is realizing that the company cannot patch everything at once. 

As the SANS literature on VM puts it, “Vulnerability management is the process in which vulnerabilities in IT are identified and the risks of these vulnerabilities are evaluated.”

That’s right: with limited resources and an ever-increasing pool of issues to address, the process should already have provisions for prioritizing remediation. And this prioritization should certainly not be based on the current level of hype for a specific vulnerability (which would mean you are effectively outsourcing your risk assessments to cybersecurity journalists), but rather on the company’s own corporate context, taking into account the underlying asset severity, business tolerance to risk, exposure, surrounding assets, etc. In an era of vulnerability overload, this type of context-based vulnerability risk prioritization is key to an effective vulnerability management program.

Estimating the risk a new vulnerability represents is a complex task (a science in and of itself), and as such there are multiple risk assessment methodologies for new vulnerabilities (using the CVSS is absolutely not a good one). But with an ever-increasing number of vulnerabilities being reported, the main objective of your risk rating methodology should be to give the best approximation as quickly as possible and, ideally, in a way that can be automated at scale.

One efficient methodology for doing so is JGERR (Just Good Enough Risk Rating), and with machine learning technology and data analytics at scale there are now other options available, for example Delve’s Contextual Prioritization.
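To make the idea concrete, here is a minimal sketch of what an automatable, context-aware prioritization pass could look like. The field names, weights, and scoring formula below are illustrative assumptions only; they are not JGERR’s or Delve’s actual models.

```python
# Hypothetical sketch of context-based prioritization. Field names, weights,
# and the scoring formula are illustrative only -- not any vendor's real model.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    business_criticality: float   # 0.0 (lab box) to 1.0 (revenue-critical)
    internet_exposed: bool
    compensating_controls: bool   # e.g. segmentation, WAF, virtual patching

@dataclass
class Vulnerability:
    cve_id: str
    base_severity: float          # 0.0-10.0, e.g. from the vendor advisory
    exploit_available: bool

def contextual_priority(vuln: Vulnerability, asset: Asset) -> float:
    """Combine vulnerability severity with the asset's own context (0-100)."""
    score = vuln.base_severity * 10                 # raw severity, 0-100
    score *= 0.5 + asset.business_criticality / 2   # scale by business impact
    if asset.internet_exposed:
        score *= 1.3                                # reachable assets first
    if vuln.exploit_available:
        score *= 1.2                                # weaponized issues jump the queue
    if asset.compensating_controls:
        score *= 0.8                                # mitigations buy time
    return min(score, 100.0)

# Rank the remediation queue by context, not by hype.
if __name__ == "__main__":
    queue = [
        (Vulnerability("CVE-2020-0601", 8.1, False),
         Asset("build-server-01", 0.9, False, True)),
        (Vulnerability("CVE-EXAMPLE-0001", 7.5, True),
         Asset("public-web-01", 1.0, True, False)),
    ]
    for vuln, asset in sorted(queue, key=lambda pair: -contextual_priority(*pair)):
        print(f"{vuln.cve_id:>17} on {asset.name:<15} "
              f"priority = {contextual_priority(vuln, asset):5.1f}")
```

The point of the sketch is that the inputs are the company’s own context (business criticality, exposure, compensating controls), so the ranking can be recomputed automatically every time a new CVE lands, instead of being renegotiated every time one trends on social media.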

But what should be driving the optimization of the entire VM process is making sure that limited resources are patching the right thing at the right time, and not getting sidetracked by artificially risk-inflated vulnerabilities.

Worst-case scenarios should be discussed in advance, and response strategies should aim to minimize interruption of the established process. This also means making sure that most equipment can be remediated within a timeframe for which management accepts the risk. Management shouldn’t be breathing down the security team’s neck because of the hype echo chamber a certain vulnerability suffers from, while other, legitimately higher-risk vulnerabilities are discovered and go unaddressed.
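As a rough illustration of that idea, here is a minimal sketch of how the agreed timeframe could be tracked automatically, so the conversation with management is anchored in pre-agreed remediation windows rather than in the news cycle. The windows, field names, and data layout are assumptions for the example, not a standard.

```python
# Hypothetical sketch: flag findings that have exceeded the remediation window
# management agreed to accept. Windows and field names are illustrative only.
from datetime import date, timedelta

# Agreed remediation windows (days) per severity tier -- set these with
# management up front, not in the middle of a hype cycle.
REMEDIATION_SLA = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue_findings(findings, today=None):
    """Return open findings that are past their agreed remediation window."""
    today = today or date.today()
    overdue = []
    for f in findings:
        deadline = f["detected_on"] + timedelta(days=REMEDIATION_SLA[f["severity"]])
        if today > deadline:
            overdue.append({**f, "days_overdue": (today - deadline).days})
    return overdue

if __name__ == "__main__":
    sample = [
        {"asset": "hr-db-02", "cve": "CVE-2020-0601", "severity": "high",
         "detected_on": date(2020, 1, 14)},
        {"asset": "kiosk-17", "cve": "CVE-EXAMPLE-0002", "severity": "low",
         "detected_on": date(2020, 1, 2)},
    ]
    for f in overdue_findings(sample, today=date(2020, 3, 1)):
        print(f"{f['asset']}: {f['cve']} is {f['days_overdue']} days past SLA")
```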

TL;DR

  • Hype interrupts your VM process, creates distractions, slows it down, and demotivates employees.
  • Your VM process should have provisions for these out-of-band vulnerabilities, and provide the ability to quickly evaluate risks for newly discovered vulnerabilities across the enterprise.
  • Try your best to ignore the hype and focus on continuous delivery and optimization of the VM process.
  • Drive towards optimization: it’s important to validate what works over time by following practical improvement KPIs:
    • Did you fix the right things on the right assets first, with regard to what the business considers most important?
    • How has the landscape evolved since I patched my machines? Have more PoCs/exploits been released? Was it useful to patch quickly, or should I have waited for the hype to die out?
    • Which CVEs ended up being used by malware during the year (this information is sometimes available in yearly reports)? Did we patch according to these CVEs, or were we focused on something else? A quick cross-reference like the sketch below can answer this.
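For that last KPI, here is a minimal sketch of the cross-reference, assuming you can export the CVEs you remediated during the year and the CVEs a yearly threat report flagged as used by malware into simple text files (one CVE ID per line). The file names and format are assumptions for illustration.

```python
# Hypothetical sketch of the last KPI: compare what we actually patched against
# CVEs a yearly report flagged as used by malware. File names/format assumed.
def load_cves(path: str) -> set[str]:
    with open(path) as fh:
        return {line.strip().upper() for line in fh if line.strip()}

def kpi_exploited_coverage(patched_path: str, exploited_path: str) -> None:
    patched = load_cves(patched_path)       # CVEs we remediated this year
    exploited = load_cves(exploited_path)   # CVEs the report says malware used
    covered = patched & exploited
    missed = exploited - patched
    coverage = len(covered) / len(exploited) if exploited else 1.0
    print(f"Exploited CVEs covered by our patching: {coverage:.0%}")
    for cve in sorted(missed):
        print(f"  still exposed to reported-exploited CVE: {cve}")

if __name__ == "__main__":
    kpi_exploited_coverage("patched_cves_2020.txt", "report_exploited_cves_2020.txt")
```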

Fun fact: It took me 17 attempts to write this blogpost over the span of 2 weeks because of interruptions. 😉

 
