AI in Cyber Security is Dead...Long Live AI in Cyber Security (Part 1)
May 12, 2020
This blog is a transcription of a previous Delve webinar. Click here to view the original on-demand webinar video.
Why is the market skeptical of AI in vulnerability management?
Delve started hitting the market around 2016 with a brand new approach to vulnerability management driven by machine learning, and we got pushback on two very different fronts. The first is what we like to call the "show me the AI" question. We were, of course, selling to security experts, and security experts are curious: they wanted to make sure their dollars were spent on real things, they wanted to understand what the system does, and they wanted to see in a visual way what decisions the AI is making and why. The second was a general doubt about the concept of AI and what it brings to the table. Those two points are tied, but some analysts now even go as far as to say you shouldn't talk about AI at all.
We felt there was too much FUD (fear, uncertainty, and doubt) about machine learning and not enough rational discussion. So this presentation's objective is to address the situation more fairly: how can we start to have a rational discussion about this, especially with vendors? That's why the presentation is laid out in three parts. In the first part, we'll see how AI has evolved in recent years, look at examples of how it has met its potential and changed society, and see why it seems inevitable in some domains. In the second part, we'll talk about the hype that surrounds AI.
We'll see how this hype plays out in cybersecurity and how we can potentially move beyond it, in the hope of having a more constructive discussion about real use cases. Then finally, in the third and last part, I'll present the four questions that I think you should ask any cybersecurity vendor, just to make sure they're really using what they claim to be using, and to start a more serious discussion on how they use their data to learn and improve their product.
AI or Machine Learning?
Let's begin with part one and the inevitable nature of AI. And actually, we're going to stop using "AI" and start using "machine learning" from now on. Part of the objective today is to help shift the discussion from marketing talk to a more factual one, and I think moving away from concepts as generic and broad as AI, and trying to be a bit more precise, is a step in the right direction.
There's a popular saying among data scientists: if it's in PowerPoint, it's AI; if it's in Python, it's machine learning; and if it's in R, it's data science. So, for once, let's have a PowerPoint that is not AI. Why are we even talking about this? The first thing we have to recognize is that we're at a technological crossroads. There's greater pressure on production efficiency than ever: having automated most physical work, we now need to automate brain power. At the same time, we've reached a level of computing power that has made some algorithms more accessible and more feasible, combined with an explosion of data sets generated by machine sensors and users. So we're at a point where machine learning is considered a cornerstone of what the World Economic Forum calls the fourth industrial revolution. They call it a fourth industrial revolution because it's really a new industrial era, not just an extension of the third one that came with the birth of automation and computing. It's different not only because of the speed at which changes are happening, but because of the scale at which they're impacting almost every means of production and social model.
The Rise of Machine Learning
Let's look at the rise of machine learning for a few moments. When we think of the rise of machine learning, it's always good to refer to specific points in time where progress was publicly demonstrated, and these demonstrations often took place in games. Why games? Because they're a good benchmark: there are lots of competitive human players to play against, and games have a clear set of rules and objectives that you can optimize for. One of the most famous examples is the 1997 chess match between Kasparov and IBM's Deep Blue, where Deep Blue prevailed in the end, three and a half to two and a half. This sent a message that machines were catching up to humans on certain specific tasks. And yet it was still what's now called good old fashioned AI, or symbolic AI, where the machine simply acted on a set of rules and an opening library of moves that were then tweaked by grandmasters. It was a very basic form of artificial intelligence...or machine learning. And yet there were still some strange, human-like, seemingly intelligent moves. There is a famous one, Move 36, where at the time it wasn't clear whether it was intelligent or whether IBM had potentially cheated. And this was more than 20 years ago.
More recently, we saw a very different approach when IBM did another public demonstration of machine learning with Watson. Watson won a game of Jeopardy! against two of the top historical players, Ken Jennings and Brad Rutter. What's very different about this use of machine learning is that, for the first time, it was data-heavy training: an immense data set was used to train it. IBM was also able to combine multiple natural language processing algorithms at speed to produce responses in seconds. Speed is part of the game, and Watson couldn't cheat: it couldn't press the buzzer before actually having the answer, the way human players often do. So it was a fair use of heavy computing power, and at the time it was the first public demonstration of what some people call cognitive computing.
Can AI be used for social good?
This raises the question: can we use machine learning for social good? IBM immediately talked about using Watson to diagnose patients more quickly. Can we sift through tons of medical studies and drug interactions? Can we help doctors make more informed decisions? More recently, we got an even more impressive demo of how a specific class of machine learning algorithms can process large data sets efficiently. In that case, it was deep neural networks, which were able not only to master a task to the point of easily performing better than humans, but also to learn the task on their own and even devise strategies that humans can't easily understand.
Can AI beat humans?
In 2016, after having defeated the European Go champion, the AlphaGo team from Google faced the world Go champion, Lee Sedol. Lee Sedol is a bit like Kasparov in chess: experts agree he's in a league of his own, and he's been playing Go since he was about six years old. Why did the AlphaGo team choose Go? Go is much more difficult for computers to address with typical machine learning algorithms: its search space is vastly larger than that of chess, so a brute-force approach was simply not possible. The AlphaGo team had to devise specific strategies to address this. In the end, they combined two neural networks: one that narrowed the search down to a limited set of interesting positions, and one that estimated the chance of winning the game from each position, continuously re-evaluating throughout the game.
In the end, AlphaGo won four to one. During this face-off, AlphaGo displayed an ability to make very powerful, traditional Go moves, but also very interesting moves that were called slack moves: they didn't seem particularly smart at the time, and this really baffled the analysts watching the game live. But some of these moves were later deemed incredibly beautiful. We'll revisit this a bit later on in the presentation, because what's important to get here is that, even though AlphaGo prevailed, the team continued improving these algorithms. They released a version called AlphaGo Master that defeated world experts 60 to zero during an online series. Then they built AlphaGo Zero, which taught itself to play from scratch and beat AlphaGo Lee, the version that faced Lee Sedol, one hundred to zero. And finally, after AlphaGo Zero, they built AlphaZero, which won against AlphaGo Zero 60 to 40.
These are big numbers, and AlphaZero even learned to play chess and shogi. The point is, for these specific tasks, we have a machine learning solution that works better than humans. And I think it was just a few days before this webinar that Lee Sedol said he wouldn't play Go competitively anymore; he basically gave up. So we're at a point where we're having discussions such as the one from a big McKinsey study done in 2018. This study was interesting because it started from a simple question: we have these machine learning capabilities, so what can we do with them, and how could we use them in the future? They mapped about 160 potential use cases against the United Nations' 17 Sustainable Development Goals, and it laid out pretty straightforward examples of concrete uses of machine learning for social good.
AI hype headlines
For instance, the study considered using machine learning to detect wildfires earlier by looking at satellite imagery, to identify chainsaw sounds in rainforest sections where there is potential illegal logging activity, or to identify poachers from drone video feeds, giving the people who protect wildlife access to these kinds of technologies. But, as with everything, alongside the rise of this hope in new technologies there is an inevitable, less positive side, where we see a rise of what I like to call shocking headlines: headlines that attribute to machine learning social capabilities it doesn't have, or try to have it predict social outcomes. For instance: automation is coming for your job. Or: we will merge with AI. That was actually Elon Musk, who said that we'll get technology that effectively merges humans with AI. Or even that an AI god will emerge and rule the human species. Those headlines are really out there, and they have to be contrasted with a harsh reality check about machine learning's actual capabilities.
Limitations of AI
And it comes in multiple forms. There are the slightly clumsy failures, like when Microsoft released a machine-learning-driven millennial chatbot that, after a day, ended up spewing garbage and racist rants over Twitter. And there was the case where Google's main image recognition algorithm was tricked with a few adversarial pixels and ended up recognizing a picture of a cat as guacamole.
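That guacamole mix-up illustrates a general weakness of many classifiers: tiny, targeted perturbations can flip a prediction even though the input looks unchanged to a human. Here is a minimal sketch of the idea using a toy linear classifier in pure Python. The weights, labels, and the 0.02 step size are illustrative assumptions, not a real vision model; the point is just that nudging every "pixel" slightly against the gradient of the "cat" score is enough to flip the label.

```python
import random

# Toy linear "image classifier": score = sum(w_i * x_i); positive => "cat".
# The weights are arbitrary illustrative values, not a trained model.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(100)]

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "cat" if score > 0 else "guacamole"

# A clean input the model confidently labels "cat" (aligned with w).
x = [0.01 * wi for wi in w]

# FGSM-style perturbation: move every "pixel" a tiny step eps in the
# direction that lowers the cat score. For a linear model the gradient
# of the score with respect to x is just w, so we step against sign(w).
eps = 0.02
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x), "->", predict(x_adv))  # the label flips
```

Because the perturbation is tiny per feature but accumulates across all of them, the score swings from positive to negative while the input barely changes; real attacks on deep networks exploit the same effect in much higher dimensions.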
Certainly, there are some clear limitations of machine learning, but there are also more dangerous things that can happen; consider two recent public examples. The first is when the ACLU tested Amazon's Rekognition facial recognition technology: using the default settings offered by the platform, they were able to falsely match 28 members of Congress with mugshots of criminals, because the algorithm had been trained on data that skewed the results against certain skin tones. The second example is Watson itself. After being deployed in a hospital, it was supposed to help diagnose cancers, but at the end of the trial the lead oncologist said it wasn't good: it was recommending unsafe and incorrect treatments. It turns out Watson had been trained on synthetic cancer patient data.
AI and Gartner Hype Cycle
So there are these consecutive spikes of hope followed by an inevitable reality check about new technology, and this is what Gartner calls the hype cycle. If you don't know the hype cycle, let me introduce it briefly. It's nothing terribly scientific; it was popularized by Gartner as a conceptual rendering of the steps new technologies usually go through. What's important to understand is that not all technologies go through all these steps, and most don't even survive the full cycle. After the initial technology trigger, when a new technology first emerges, there is usually a peak of inflated expectations, where we think this new technology will solve X, Y, and Z and revolutionize society. Then, inevitably, there's a downfall Gartner calls the trough of disillusionment, where we realize the tech actually isn't that revolutionary. Hopefully this precedes the next steps, where the tech slowly begins to find specific use cases and eventually reaches a plateau of productivity.
So how does this hype cycle relate to AI and cybersecurity? That's what we're going to discuss in part two: moving beyond the hype. The first thing we need to do to move beyond the hype curve in cybersecurity is to recognize that the cybersecurity industry absolutely thrives on hype. Cybersecurity spending has exploded, growing 35x in 15 years. Ironically enough, the yearly studies on the cost of cyber attacks and data breaches mention that those costs have substantially increased as well. It's unclear whether those two points are linked, but I think there's no better demonstration of the hype than the RSA conference vendor floor plan.
This is RSA 2011, eight years ago, at the San Francisco Moscone Center, and this is RSA last year. You can see RSA 2011 on the left in orange: in roughly eight years, the floor grew more than threefold. All these vendors sell products and services for a growing number of technologies, and if you're an executive or decision maker in cybersecurity, the number of technology buzzwords and trends being hyped, which you have to follow, keeps on growing. In the 90s, you'd have anti-virus everywhere and a good firewall, and you were good. Now you have to know about containers, zero-trust networks, SOAR, blockchain, SecOps, breach and attack simulation. And what's true for executives is also true for technical people: in the first quarter of this year, there were more vulnerabilities disclosed than in any previous three-month period. And, collectively, the industry decided it was a good idea to advertise vulnerabilities even more heavily, even branding them with their own logos.
AI in cyber security takes off
Now, there are just too many vulnerabilities coming out, and we don't have enough graphic designers to produce these logos at a decent rate. But the question is: how does the hype affect AI in cybersecurity? Well, the use of AI and machine learning in cybersecurity has been a Gartner top trend for years. I went back through the CIO agendas to 2016, and the use of AI as a disruptive technology is there every year: 2016, 2017, 2018, 2019, and 2020.
Even the very latest one mentions AI among the top five technologies CIOs have to worry about. Curiously enough, the RSA conference reported that AI and machine learning-related submissions actually peaked in 2018. AI seems to have been dethroned in 2019, when the RSA organizing committee said they saw more enterprises talk about tangible ways machine learning and AI could be used, instead of "fairy dust" submissions that sprinkle it everywhere. That's consistent with the pushback we're seeing about AI marketing hype in cybersecurity. We saw headlines in multiple newspapers denouncing AI hype, and some vendors even went as far as to use the doubt about AI to aggressively market the idea that the hype was putting businesses at risk. This is also consistent with what we're seeing from analysts: the Forrester AI predictions for 2019 open with the sentence, "AI washing has reached such a point where you can purportedly solve [fill-in-the-blank] problem with AI."
Where is AI in cyber security today on Gartner hype cycle?
So, with these headlines in mind, where does AI in cybersecurity sit on the hype cycle? Where are we? Clearly, there's pushback from buyers, so I think we're well past the peak of inflated expectations. Judging by the analysts, from Forrester to Gartner, we seem to be down in the trough of disillusionment, where the recommendation is to stop the AI marketing altogether. From there, the logical question is: how do we get to the plateau of productivity, and can we even reach it with machine learning in cybersecurity?
In Part 2 of this blog, we'll answer that question, look at some practical applications of AI, introduce Tesler's Theorem, and suggest 4 questions everyone should ask their AI vendor before making a purchase.