Vulnerability Management Blog

What's that Machine Learning Thing Going to do for Me?

Building Trust through Understandability & Interpretability

For many years, people have been drowning in AI marketing lingo, to the point where it has become very difficult to understand exactly what AI, or more precisely machine learning, can effectively do for them. Many AI-based solutions work as a black box, offering very little explanation to users and almost no feedback loop that could make the platform better, or help supplement users instead of working to replace them.

At Delve, we’re convinced that ML has to serve one purpose: helping users be even better at what they do, just like an exoskeleton that lets people do more, in less time, with substantially less effort.

Machine learning systems never work in a vacuum, especially when expert knowledge is necessary. As such, building trust with users is a pivotal element of any ML-driven solution, and that trust can only be built through communication that helps users understand and interpret how the machine views the world, so that they can correct it and trust that it will remain useful over time.

To address this issue in our product, we needed to bring transparency to what Delve’s use of ML, aptly called DelveAI™, can bring to the VM industry.

The road to figuring out how to bring transparency to our use of ML wasn’t exactly a straightforward one; that adventure is covered in a previous article by our CTO.

As that article introduces, to provide this transparency to our customers, we’ve released not one but three versions of our contextual prioritization scoring (CPS) diagrams. They visually demonstrate how the underlying ML algorithms analyze multiple layers of contextual information around a vulnerability, processing 40+ prioritization factors over thousands of data points, re-ordering our customers’ vulnerabilities by priority, and bringing a new era of remediation effectiveness through data analytics at scale.

These diagrams allow our customers to follow not only how their entire environment is re-prioritized and how every single vulnerability is re-ranked, but also how this plays out on every asset, for a complete understanding of the platform.

We published three different versions of the CPS diagrams. They follow the same general layout, comprising three main parts and an optional one, each explained in its own section below.

1. The Factor Category Selector

Our contextual prioritization algorithm analyzes 40+ individual factors belonging to five big families of contextual information: vulnerability properties, asset properties, network influence, organizational factors, and external factors. To narrow the analysis down to a specific layer of contextual influence, the factor category selector can be used to highlight the category of interest.

With a category highlighted, the CPS diagram shows which factors had the most influence, along with the specific metrics that drove that influence, for the visible or selected vulnerabilities.

This allows our customers to better understand each underlying factor’s influence across the whole organization, as well as its relative impact on a vulnerability’s prioritization: maybe you’re looking at an asset that hosts far more large websites than the rest of your organization? Maybe this asset appears more often in simulated attack paths, with a vulnerability that has lots of published exploits and shares characteristics with other vulnerabilities being discussed online right now?

2. The Main CPS Graph

The main CPS graph sits at the center of it all, there to explain exactly what happened during CPS scoring. Each line displayed represents a single vulnerability (or a group of vulnerabilities) as it is scored and re-ranked from its initial CVSS score to its final CPS score. If you want to learn more about how ML is used to construct the CPS score, you can read our whitepapers here.

Every stage of the graph represents a contextual factor category and visually shows how much that category increased or decreased the score of the vulnerability (or group of vulnerabilities).
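To make the staged re-scoring concrete, here is a minimal sketch of how a score could move through the stages, assuming (purely for illustration) that each factor category contributes an additive, clamped adjustment. The category names, the delta values, and the `cps_from_cvss` helper are hypothetical, not Delve’s actual algorithm:

```python
# Illustrative per-category adjustments for one vulnerability, in the
# order the graph displays them (invented values, not real data).
adjustments = {
    "vulnerability_properties": +0.8,
    "asset_context":            -0.4,
    "network_context":          +1.1,
    "organizational_context":   -0.2,
    "external_context":         +0.6,
}

def cps_from_cvss(cvss, adjustments, lo=0.0, hi=10.0):
    """Apply each category's adjustment in turn, clamping to the score range.

    Returns the final score and the per-stage trajectory, i.e. one point
    per stage of the main CPS graph.
    """
    score = cvss
    trajectory = [score]
    for delta in adjustments.values():
        score = min(hi, max(lo, score + delta))
        trajectory.append(score)
    return score, trajectory

final, path = cps_from_cvss(7.5, adjustments)
print(round(final, 2))  # 9.4
```

The trajectory list is exactly the kind of data such a stage-by-stage line could be drawn from: the starting CVSS score followed by the score after each factor category is applied.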

In versions of this graph that display multiple vulnerabilities (or groups of vulnerabilities), you can select one or multiple sections of the CVSS or CPS scale on either side to “drill down” into the specifics of those vulnerabilities.

3. The CPS Radar Chart

The CPS radar chart on the right shows a combined, relative view of the increase or decrease in CPS for all visible (selected) vulnerabilities in the main CPS graph, with each axis representing a family of factors: vulnerability properties, asset context, network context, organizational context, and external context.

The inner pentagon-shaped line marks the zero point (no increase or decrease for that category of factors), the outer edge represents the maximum possible increase, and the center of the chart represents the maximum possible decrease.
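One way to read that scale: each axis maps a family’s score delta onto a radius between the center and the outer edge. Here is a minimal sketch of such a mapping, with the `radar_radius` function and the maximum-delta bounds assumed for illustration (not taken from Delve’s implementation):

```python
def radar_radius(delta, max_decrease=3.0, max_increase=3.0):
    """Map a factor family's score delta to a radius in [0, 1]:
    0.0 = chart center (maximum decrease),
    0.5 = inner pentagon (no change),
    1.0 = outer edge (maximum increase).
    Bounds are illustrative; deltas beyond them are clipped."""
    if delta >= 0:
        return 0.5 + 0.5 * min(delta / max_increase, 1.0)
    return 0.5 - 0.5 * min(-delta / max_decrease, 1.0)

print(radar_radius(0.0))   # 0.5: no change sits on the inner pentagon
print(radar_radius(3.0))   # 1.0: maximum increase reaches the outer edge
print(radar_radius(-1.5))  # 0.25: halfway between center and the pentagon
```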

The objective here is to see at a glance which category of factors had the most influence on the final CPS score, and whether it increased or decreased it.

For instance, this can quickly reveal whether the specifics of a local network helped attenuate the risk of the vulnerabilities within it (say, a regularly patched network with few peers, little exposure, and a small total attack surface).

4. Additional Information Section

Underneath these diagrams, an additional section can provide more precise statistical information about the visible (selected) vulnerabilities: for instance, the number of vulnerabilities whose CPS increased or decreased for a given factor family, or the average CPS increase or decrease in absolute numbers. This makes it easy to get to the bottom of the CPS score and understand exactly how much each prioritization factor family has influenced it. No more blindly trusting numbers!
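As a rough illustration of the kind of statistics this section can surface, here is a sketch computing them from hypothetical per-vulnerability CPS deltas attributed to one factor family (the data and variable names are invented for the example):

```python
# Hypothetical per-vulnerability CPS deltas attributed to one factor family.
deltas = [1.2, -0.3, 0.8, 0.0, -1.1, 0.4]

increased = sum(1 for d in deltas if d > 0)   # vulnerabilities this family pushed up
decreased = sum(1 for d in deltas if d < 0)   # vulnerabilities it pulled down
avg_change = sum(deltas) / len(deltas)        # average CPS change, in absolute numbers

print(f"{increased} increased, {decreased} decreased, avg change {avg_change:+.2f}")
```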

Delve provides three versions of these contextual prioritization diagrams throughout the product, each presenting a specific subset of information that lets our customers better understand how the machine views and analyzes every single vulnerability. We’ll cover each version in subsequent articles.

As our CTO said in a previous blog: Goodbye black box. Hello AI transparency.
