
Removing the Mystery from AI: Easier Said Than Done

One of the challenges encountered by all software companies that have built AI or machine learning into their products is eliminating “black box” syndrome, or the perception that inputs are consumed by the AI black box, and magically, reliable results are spit out. The results are valid, useful, and can be relied upon to make critical business decisions...trust us. Some time ago, we at Delve set out to provide a window into our prioritization “wizardware” for our customers that was informative, understandable, and user-friendly. We discovered that it wasn’t as straightforward as we might have expected.

Even though it was only introduced a few months ago, I can't imagine Delve without our contextual prioritization graph anymore. There are many things to love about it. It's easy to understand, yet it accurately reflects a very complex process. It looks like an obvious and natural decision, and yet it was not our first attempt at explainability.

From the moment we began introducing contextualized vulnerability ranking in the product, we wanted to explain why things were the way they were. A number is not enough. To be trusted, the factors impacting the result had to be transparent. We began with a list of factors and threw in a few bar graphs. It was OK with a few factors, but it was never great. As we kept improving the engine and adding more prioritization factors, limitations became more apparent... for those who could venture deep enough in the application to even see them.

We wanted the explanations to take center stage, not be hidden away. I would love to say that we tasked our team with creating the dashboard widget shown in the figure here.

[Figure: early sketches of the contextual prioritization dashboard widget]
If that were the case, this would have been a much shorter story, but these drawings actually came quite late in our thinking.

Digging into this blog's history, you may find some posts by Serge-Olivier with early attempts at explaining the score calculation and how specific factors impact it. It was the topic of ongoing whiteboard discussions. Eventually the “flamethrower” graph surfaced. I believe it began as a somewhat unrelated attempt to validate that our score distributions were working as expected. The idea was attractive from the start, but the early renderings were rough and raised more questions than they answered.

At the time, calculated scores were unbounded. Looking back, it was obviously a problem. No one knew if 1300 was a high score. The truth is that it varied over time as our factors evolved. In every attempt to graph this flamethrower on a whiteboard, the only way it made sense was if the scales on both sides were the same. However, that did not make much sense mathematically, and wasn’t consistent with the way we calculated the aggregate scores.

This is about when the above sketch came up. We had a valid solution for the groupings on the horizontal scale, a solution for the too-large number of lines to render, and a real consensus between research, user experience, engineering, customer success, and marketing - if only we could normalize those values. It was time to rip off the band-aid. Our unbounded score was wrong. It had to be out of 10. That meant changing much of the aggregation pipeline, giving up on the dream of commutative factors, and patching up a nearly-infinite number of user interface details.

We figured out the normalization, did the work, altered the pipeline, created a brand new graph component, and clarified the product interactions iteratively - and the rest is history. The graph looks like an obvious and natural fit for the data not because it was, but because we reframed the data to fit our goals. Our product today includes an interactive version of the contextual prioritization graph, which gives our customers graphical and text-based detail on how the prioritization score for each vulnerability was calculated and which factors impacted the score most prominently. Goodbye, black box. Hello, AI transparency. When the goal is worthwhile, nothing should be kept off the table.
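To make the idea of normalizing an unbounded aggregate onto a 0-to-10 scale concrete, here is a minimal sketch in Python. It is an illustration only: the factor names, the logistic squashing curve, and its parameters are assumptions chosen for this example, not Delve's actual aggregation pipeline.

    import math

    # Hypothetical per-vulnerability factor values; the names are illustrative
    # only and are not Delve's actual prioritization factors.
    factors = {
        "cvss_base": 8.8,
        "exploit_available": 9.5,
        "asset_exposure": 6.0,
        "asset_criticality": 4.0,
    }

    def normalize(raw_score: float, midpoint: float = 15.0, steepness: float = 0.15) -> float:
        """Squash an unbounded aggregate score onto a 0-10 scale.

        A logistic curve preserves the ordering of scores while guaranteeing
        the result stays between 0 and 10, so "out of 10" always means the
        same thing even as factors are added over time.
        """
        return 10.0 / (1.0 + math.exp(-steepness * (raw_score - midpoint)))

    raw = sum(factors.values())   # unbounded aggregate; grows as factors are added
    score = normalize(raw)        # bounded, comparable across vulnerabilities and over time

    # Attribute the normalized score back to each factor proportionally -
    # the kind of breakdown a drill-down view can display.
    contributions = {name: value / raw * score for name, value in factors.items()}

    print(f"raw={raw:.1f}  normalized={score:.1f}/10")
    for name, part in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: +{part:.1f}")

The specific curve is not the point; any monotonic, bounded mapping would do, as long as every surface in the product agrees that a score is always out of 10.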

How Does AI Help Vulnerability Management?


We spent the first part of this blog describing the process the Delve Labs team went through to make the AI in its product transparent to customers. But that discussion raises a more basic question: why is AI needed in vulnerability management at all? There are a number of ways AI - more specifically, machine learning - can be leveraged to automate and improve vulnerability management operations. One of our white papers walks through 7 examples: 7 Ways AI Can Automate and Improve Vulnerability Management Operations. Two additional examples built into the Delve product are the Exploit Publication Prediction Score (EPPS) and the Vulnerability Trend Score (VTS), both developed using machine learning, and both provided publicly for free on Delve’s Vulnerability Threat Intelligence Feed webpage. There are many additional examples, but they all have one thing in common: 1) they either eliminate or greatly reduce the manual effort required to complete traditional vulnerability management tasks, or 2) they provide insights into very large data sets that would otherwise be impossible for a human being to discern - insights that lead to better decisions and ultimately to lower vulnerability risk for the same or less manual effort.
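As a concrete illustration of that second point, the sketch below frames exploit-publication prediction as an ordinary supervised-learning problem in Python with scikit-learn. The features, training rows, and model choice are assumptions made up for the example; they are not Delve's actual EPPS methodology.

    # A toy framing of exploit-publication prediction as supervised learning.
    # Features, data, and model choice are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical training data: one row per historical vulnerability.
    # Columns: CVSS base score, number of public references, days since disclosure.
    X_train = np.array([
        [9.8, 12,  30],
        [5.3,  2, 400],
        [7.5,  8,  90],
        [3.1,  1, 700],
        [8.1, 10,  60],
        [4.4,  3, 365],
    ])
    # Label: 1 if a public exploit was eventually published, 0 otherwise.
    y_train = np.array([1, 0, 1, 0, 1, 0])

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Estimated probability that an exploit will be published for a new vulnerability.
    new_vuln = np.array([[8.8, 6, 14]])
    print(f"exploit publication probability: {model.predict_proba(new_vuln)[0, 1]:.2f}")

In practice a model like this would be trained on far larger historical feeds and many more features, but the shape of the problem - known outcomes for past vulnerabilities used to score new ones - is the same.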

Should I Trust AI in Vulnerability Management?


No. That may sound like a peculiar answer coming from a blog post written by an AI vulnerability management company, but black box AI systems are not only dangerous, they also hurt the credibility of legitimate organizations delivering legitimate AI-based solutions, especially in vulnerability management. Thus, the key to trusting AI in vulnerability management is to look for transparency. Some questions you should ask to help address the trust/transparency question include:

Can the final results of the AI analysis be drilled into for insight into how the conclusion was reached? 
Can an AI layperson easily understand the rationale behind the AI-based results?
Is the vendor forthcoming about the details of its AI methodology, or are they defensive when questioned?  

Any AI vendor worth its salt will be able to address these issues forthrightly, openly, and honestly. If they won't or can't, consider that a red flag for the product.


Why Is Vulnerability Management Required?


The short answer to this question is that a significant portion of enterprise breaches can be tied to unpatched vulnerabilities. Although a definitive percentage is not available, and most organizations are reluctant to detail the nuts and bolts of the breaches they have encountered, surveys consistently point to unpatched vulnerabilities as a leading cause:

60% of Breaches in 2019 Involved Unpatched Vulnerabilities - “key to these findings was that 60% of breaches were linked to a vulnerability where a patch was available, but not applied, reminiscent of the Equifax mega breach in late 2017, and other high-profile security incidents.”

Cybersecurity: One in three breaches are caused by unpatched vulnerabilities - “Forget the stealthy hacker deploying a never-before-seen zero day to bring down your network. IT security professionals admit that one in three breaches are the result of vulnerabilities that they should have already patched.”

Unpatched Vulnerabilities the Source of Most Data Breaches - “Nearly 60% of organizations that suffered a data breach in the past two years cite as the culprit a known vulnerability for which they had not yet patched...Patching software security flaws by now should seem like a no-brainer for organizations, yet most organizations still struggle to keep up with and manage the process of applying software updates. ‘Detecting and prioritizing and getting vulnerabilities solved seems to be the most significant thing an organization can do [to prevent] getting breached,’ says Piero DePaoli, senior director of marketing at ServiceNow, of the report.”

63% of organizations face security breaches due to hardware vulnerabilities - “Hardware-level breaches are one of the latest modes of attack by cybercriminals, according to a Dell report released on Wednesday. The majority (63%) of organizations said they experienced at least one data breach in the past year due to a hardware security vulnerability.”

In a nutshell, vulnerability management is required to reduce the likelihood your enterprise will be breached. No vulnerability management program is perfect, but a good one can materially reduce the chances of a breach.
