AI and Cyber Security

Research Topics

  • Explore how artificial intelligence (AI) is currently being used in the context of information security. This will include an analysis of the current models and the specifics surrounding them. Additionally, AI will be examined and defined so as to differentiate it from other systems purported to be AI.
  • Look at the strengths and weaknesses of AI at a high level, across multiple industries. Using this data, examine the strengths and weaknesses emerging in its adoption for information security. Examining this area will help reveal the areas in which AI excels, as well as those that need further development before AI can become an effective tool in information security.
  • Analyze threat psychology and see how it is being applied to the field of information security. As emphasis has shifted from traditional preventative security measures to hunting driven by user behavior, the psychology that undergirds attacker behavior has become a critical part of successfully thwarting attacks early in the cyber kill chain.
  • Research studies in which AI was benchmarked against human defenders to see which performed better. This will provide insight into how AI needs to be improved to defend effectively against cyber attacks. Additionally, it will help answer the question of whether, and when, artificial intelligence will be able to replace humans in the field of information security.

What is AI and how does it apply to cyber security?

Cyber security and artificial intelligence (AI) are two very common topics of conversation among modern technologists (Harel, Gal, & Elovici, 2017). AI is increasingly being discussed as it relates to cyber security (Harel, Gal, & Elovici, 2017). As such, products purported to utilize artificial intelligence, machine learning, and deep learning algorithms are becoming increasingly popular in the industry (Harel, Gal, & Elovici, 2017). This has unfortunately led to some confusion surrounding what exactly those fields represent in the context of cyber security (Harel, Gal, & Elovici, 2017). Vendors, news outlets, and even cyber security practitioners can often be heard using the terms rather loosely.

Artificial intelligence was originally established as a discipline focused on machines mimicking the cognitive reasoning abilities inherent to human beings (Harel, Gal, & Elovici, 2017). This includes functions such as one’s ability to learn, reason, and solve complex problems (Harel, Gal, & Elovici, 2017). As this technology has advanced in the past several years, a focus on leveraging it in the field of cyber security has become increasingly prevalent, with a large amount of emphasis on machine learning (Harel, Gal, & Elovici, 2017). There is no shortage of research and resources being allocated to the study of this field, yet the term is often used in a manner which mischaracterizes its true nature (Harel, Gal, & Elovici, 2017). It is important to establish that there are distinct differences between expert systems and machine learning (Harel, Gal, & Elovici, 2017). In an expert system, a machine is loaded with a large amount of data specific to a task or function which the system is expected to perform (Harel, Gal, & Elovici, 2017). An example of such a system would be anti-virus software (Harel, Gal, & Elovici, 2017). While the software may operate using a complex logic set, the boundaries of that logic are pre-programmed (Harel, Gal, & Elovici, 2017). Machine learning, however, operates in a semi-autonomous fashion, in which the machine is able to optimize its algorithmic function automatically (Harel, Gal, & Elovici, 2017). This means that it is able to take on new capabilities without the intervention of a human programmer (Harel, Gal, & Elovici, 2017).

There are two primary classes of machine learning: unsupervised and supervised (Harel, Gal, & Elovici, 2017). In an unsupervised machine learning application, the system is fed large data sets without any benchmark against which to compare the data (Harel, Gal, & Elovici, 2017). The goal in the case of unsupervised learning is for the system to autonomously identify the trends present within the data, so that outliers can be identified for processing (Harel, Gal, & Elovici, 2017). These anomalies can include both “positive” and “negative” deviations from the norm (Harel, Gal, & Elovici, 2017). Within the realm of cyber security, the outliers would then be processed by taking such actions as blocking an IP address with suspicious deviations from normal network traffic, or blacklisting a website (Harel, Gal, & Elovici, 2017).
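
The unsupervised approach can be sketched in a few lines. In this illustrative example (the traffic figures and the simple z-score rule are invented for demonstration, not drawn from the cited study), the system receives unlabeled per-session byte counts and flags sessions that deviate from the statistics of the data itself, in either direction:

```python
# Hypothetical sketch of unsupervised anomaly detection: no labels are given,
# so the "model" is just the statistics of the data, and sessions far from
# the norm are flagged as outliers for further processing.
import statistics

def find_outliers(byte_counts, threshold=2.0):
    """Flag sessions whose byte count deviates from the mean by more than
    `threshold` standard deviations (positive or negative deviations)."""
    mean = statistics.mean(byte_counts)
    stdev = statistics.pstdev(byte_counts)
    return [i for i, b in enumerate(byte_counts)
            if stdev > 0 and abs(b - mean) / stdev > threshold]

# Mostly-uniform traffic with one abnormally large session.
traffic = [500, 520, 480, 510, 495, 505, 50_000]
print(find_outliers(traffic))  # → [6]: the large session stands out
```

A defender could then act on the flagged indices, for example by blocking the source IP of the anomalous session.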

In supervised machine learning, the data set is fed into the machine, with a corresponding desired output (Harel, Gal, & Elovici, 2017). The desired output serves as a calibration tool, against which the system can measure its results for quality (Harel, Gal, & Elovici, 2017). Put another way, the machine is told what the operator wishes to accomplish with the data, and the machine then autonomously works toward that goal, in (ideally) the most efficient way possible (Harel, Gal, & Elovici, 2017).
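
To make the contrast with the unsupervised case concrete, the following minimal sketch shows supervised learning in miniature. The labeled samples, the byte-count feature, and the single-threshold decision rule are all invented for illustration; the point is only that the operator supplies the desired output (the labels), and the machine searches for the rule that best reproduces it:

```python
# Minimal supervised-learning sketch: the operator provides labeled examples
# (the "desired output"), and the machine calibrates its rule against them.

def train_threshold(samples):
    """samples: list of (byte_count, label) with label 1 = malicious.
    Tries each observed value as a candidate threshold and keeps the one
    that misclassifies the fewest training examples."""
    best_t, best_errors = None, float("inf")
    for t, _ in samples:
        errors = sum((b >= t) != bool(label) for b, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

training = [(300, 0), (450, 0), (500, 0), (9_000, 1), (12_000, 1)]
t = train_threshold(training)
print(t, 700 >= t, 10_000 >= t)  # → 9000 False True
```

Once trained, the learned threshold generalizes to sessions the machine has never seen, which is the essence of the supervised approach.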

Within supervised machine learning, sub-disciplines exist. Two primary areas of focus right now are artificial neural networks and deep learning (Harel, Gal, & Elovici, 2017). Artificial neural networks seek to emulate the function of the human brain by using various nodes to represent neurons (Harel, Gal, & Elovici, 2017). These nodes are then interconnected to form a network (Harel, Gal, & Elovici, 2017). Artificial neural networks have made tremendous advances in recent years, and shown promise (Harel, Gal, & Elovici, 2017). Out of this sub-discipline of artificial intelligence, has come another field, known as deep learning (Harel, Gal, & Elovici, 2017). Deep learning has experienced tremendous success, specifically in the area of signals processing (Harel, Gal, & Elovici, 2017). Researchers have found that deep learning performs exceptionally well in the areas of image and language processing (Harel, Gal, & Elovici, 2017).

With an understanding of what artificial intelligence is capable of, it is apparent that this technology could have significant implications in the field of cyber security. One application in which machine learning has shown effectiveness is in identifying and classifying malicious network traffic (Harel, Gal, & Elovici, 2017). A study was performed by researchers at Ben-Gurion University of the Negev which focused on identifying malicious network traffic flowing to and from internet-of-things (IoT) devices (Harel, Gal, & Elovici, 2017). Data was collected from several internet-of-things devices and labelled manually (Harel, Gal, & Elovici, 2017). The data was fed into a supervised machine learning algorithm to develop a white list of known-good IoT traffic within the network (Harel, Gal, & Elovici, 2017). The system was successful at autonomously extracting 274 features from each network session, to form an accurate profile of safe, non-malicious network traffic (Harel, Gal, & Elovici, 2017). The system was then tested, and was able to successfully classify devices as trusted or untrusted, based on the profile it had built using its machine learning algorithm (Harel, Gal, & Elovici, 2017).
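
The whitelist idea described above can be sketched as follows. This is not the authors' system: the two feature names and their values are invented for illustration (the actual study extracted 274 features per session), but the shape of the approach is the same, in that a profile of known-good traffic is learned from labeled data and anything outside it is untrusted:

```python
# Hedged sketch of a traffic whitelist: learn per-feature ranges from
# manually labelled benign IoT sessions, then trust only sessions whose
# features fall inside every learned range.

def build_whitelist_profile(good_sessions):
    """Learn per-feature (min, max) ranges from labelled benign sessions."""
    keys = good_sessions[0].keys()
    return {k: (min(s[k] for s in good_sessions),
                max(s[k] for s in good_sessions)) for k in keys}

def is_trusted(profile, session):
    """A session is trusted only if every feature is inside its learned range."""
    return all(lo <= session[k] <= hi for k, (lo, hi) in profile.items())

benign = [{"pkts_per_min": 10, "avg_bytes": 200},
          {"pkts_per_min": 14, "avg_bytes": 260},
          {"pkts_per_min": 12, "avg_bytes": 230}]
profile = build_whitelist_profile(benign)
print(is_trusted(profile, {"pkts_per_min": 11, "avg_bytes": 240}))   # → True
print(is_trusted(profile, {"pkts_per_min": 900, "avg_bytes": 240}))  # → False
```

A device generating sessions like the second one (an implausible packet rate) would be classified as untrusted, mirroring the trusted/untrusted verdicts in the study.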

Why is cyber security so hard?

With the field of artificial intelligence experiencing so much success, it is hard to imagine why it is not being used more in the field of cyber security. One of the difficulties with getting a machine to successfully defend an environment against a human adversary is the fact that the machine lacks much of the cognitive reasoning that is native to human beings. While a machine may be able to train itself on the intricacies of IoT network traffic, and subsequently identify malicious traffic with a high degree of accuracy, a human actor may choose to compromise a different area of the network, without ever moving laterally to an IoT device. This simple example serves to show that there are many factors to consider when analyzing a threat actor. As such, it is important to consider the psychology of the various threat actors, and how they might behave.

When the psychology of a threat actor is taken into consideration, a large number of variables have to be considered (Dutt, 2013). In one attack, the threat actor may expose many threats at one time, in an effort to expedite the kill chain (Dutt, 2013). In another, the attacker may move slowly, so as to avoid detection (Dutt, 2013). The overarching challenge for the defender (whether human or computer) is maintaining appropriate situational awareness (Dutt, 2013). A human actor may be affected by their previous exposure to cyber threats (Dutt, 2013). A seasoned defender may have a great deal of exposure to previous threats from their years of experience. This exposure can lead to a high threat tolerance, which in turn reduces their efficiency in responding to cyber attacks (Dutt, 2013). A low threat tolerance, however, may lead to more efficient detection of and response to cyber attacks (Dutt, 2013). Additionally, the amount of time between observed threats may affect the response of the defender in a significant manner (Dutt, 2013). Research supports the hypothesis that a large time disparity between observed threats negatively affects a defender’s ability to perceive them (Dutt, 2013). It can then be reasoned that a machine may have no such concept of time disparity, if programmed not to. Herein lies one example of how a machine may be able to augment the efforts of a human defender, based on research.

Where machine learning and game theory intersect

One field that has been used to study the effectiveness of defenders using a simulated environment is instance-based learning theory (IBLT) (Dutt, 2013). IBLT is a theory which studies the decision-making patterns of individuals, based on their experiences in dynamic environments (Dutt, 2013). IBLT has been used to generate accurate, computerized models of human behavior, and specifically, those of network defenders (Dutt, 2013). IBLT assumes that a decision maker will progress through five mental stages: recognition, judgment, choice, execution, and feedback (Dutt, 2013). These five phases summarize the feedback loop utilized by the human mind to reinforce one’s decision-making abilities (Dutt, 2013). An experiment conducted by researchers utilized a simulated defender, built using Matlab, to test the defender’s ability to identify threats (Dutt, 2013). The attack simulation ran at different speeds, to simulate different attacker methodologies (Dutt, 2013). The researchers ran the simulation against two different models of defenders: one that had high sensitivity to threats (threat-prone) and one that had low sensitivity to threats (nonthreat-prone) (Dutt, 2013). The goal was to see whether or not the simulated defender would identify behavior as a cyber attack, based on the number of threats that were in its recent memory; the more threats that were observed recently, the more threat-prone the defender (Dutt, 2013). Each simulation presented 25 events to a pool of simulated defenders, spanning the range of threat-prone to nonthreat-prone (Dutt, 2013). The results revealed a much higher success rate when the attacker was “impatient” in their methodology, exposing multiple threats at once (Dutt, 2013). This was shown to be true for the threat-prone defender, as well as the nonthreat-prone defender (Dutt, 2013).
Additionally, it was revealed that for both the threat-prone defender and the nonthreat-prone defender, detection was low for the “patient” attacker (Dutt, 2013). This data underscores the weaknesses in human psychology as it relates to cyber defense. Additionally, this demonstrates an area in which a machine could potentially perform better, were it programmed to ignore time disparities between threats. What is especially interesting about IBLT in this scenario, is that it shows an area where fields such as game theory and machine learning can be coupled together to enhance security.
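
The memory effect at the heart of this finding can be illustrated with a toy model. This is not the authors' Matlab simulation; the window size, threshold, and event timings are invented. It only demonstrates the mechanism: a defender whose attack judgment depends on how many threats sit in recent memory will catch an "impatient" attacker but miss a "patient" one who spaces the same threats out in time:

```python
# Toy illustration (not the cited study's model) of memory-based detection:
# the defender declares an attack when enough threats fall inside its
# recent-memory window, so widely spaced threats slip underneath the threshold.

def detects_attack(threat_times, memory_window, threshold):
    """Return True if, at any observed threat, `threshold` or more threats
    fall within the defender's recent-memory window ending at that time."""
    for t in threat_times:
        recent = [x for x in threat_times if t - memory_window < x <= t]
        if len(recent) >= threshold:
            return True
    return False

impatient = [1, 2, 3, 4]     # many threats exposed close together
patient = [1, 20, 40, 60]    # the same threats, widely spaced in time
print(detects_attack(impatient, memory_window=10, threshold=3))  # → True
print(detects_attack(patient, memory_window=10, threshold=3))    # → False
```

A machine defender with no decay in its memory window (or an arbitrarily large one) would treat both sequences identically, which is precisely the augmentation opportunity described above.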

Game theory has long been used to quantify human psychology in a manner that helps predict human behavior (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010). In game theory, individual players, actions, payoffs, and strategies are identified and studied to predict the outcome of various situations, or games (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010). Some of these elements are used to classify the game (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010). Among the various classifications of games are static and dynamic games, complete and perfect information games, incomplete and imperfect information games, as well as any combination of the above (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010). The results of these games can also be classified as follows:

  • Nash equilibrium – a Nash equilibrium is achieved when both players have chosen the best possible action, taking into consideration the actions of the other players (Matusitz, 2009).
  • Zero-sum game – This occurs when one player’s gain is exactly equal to the loss of another player (Matusitz, 2009).
  • Positive-sum game – A positive-sum game occurs when the net result of the game is positive, albeit not necessarily for all players (Matusitz, 2009).
  • Negative-sum game – A negative-sum game occurs when the net result of the game is negative, albeit not necessarily for all players (Matusitz, 2009).

Game theory has been studied as it applies to information security, although much of the research has centered around games that are static or possess perfect information (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010). Now, researchers are beginning to model games that are dynamic, with incomplete and imperfect information (Roy, Ellis, Shiva, Dasgupta, Sandilya, & Wu, 2010).
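
These concepts can be made concrete with a small example. The following sketch models a hypothetical 2x2 attacker/defender game with invented payoffs (the action names and numbers are illustrative, not drawn from the cited surveys), finds its pure-strategy Nash equilibria by brute force, and checks the zero-sum property defined above:

```python
# Illustrative attacker/defender game: each cell holds (attacker_payoff,
# defender_payoff). A cell is a pure-strategy Nash equilibrium when neither
# player can improve by deviating unilaterally.

def pure_nash(payoffs):
    """Brute-force search for pure-strategy Nash equilibria."""
    actions_a = {a for a, _ in payoffs}
    actions_d = {d for _, d in payoffs}
    equilibria = []
    for a in actions_a:
        for d in actions_d:
            pa, pd = payoffs[(a, d)]
            best_a = all(payoffs[(a2, d)][0] <= pa for a2 in actions_a)
            best_d = all(payoffs[(a, d2)][1] <= pd for d2 in actions_d)
            if best_a and best_d:
                equilibria.append((a, d))
    return equilibria

# Attacker chooses an attack tempo; defender chooses a monitoring posture.
game = {("fast", "alert"):   (-2, 2),
        ("fast", "relaxed"): (3, -3),
        ("slow", "alert"):   (1, -1),
        ("slow", "relaxed"): (2, -2)}
print(pure_nash(game))                                # → [('slow', 'alert')]
print(all(pa + pd == 0 for pa, pd in game.values()))  # → True: zero-sum game
```

Notably, the equilibrium of this toy game has the attacker moving slowly against an alert defender, which echoes the "patient attacker" advantage observed in the IBLT experiments discussed earlier.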

Conclusions

Artificial intelligence has gained a tremendous amount of attention in recent years, and rightfully so (Crosby, 2016). It has proven to be an exceptionally robust tool for companies such as Netflix and Google, across varying industries (Crosby, 2016). Currently, companies such as Crowdstrike are using machine learning to enhance their endpoint detection and response capabilities, by using it to identify potentially malicious behavior (Crowdstrike, 2017). While artificial intelligence is certainly a fascinating technology with great potential, much of the hype surrounding its use in cyber security appears to be overstated. When considering this complex technology, it is important to note that implementations of artificial intelligence have failed from time to time (Crosby, 2016). In one such case, Google attempted to identify flu epidemics using artificial intelligence, with terrible results (Crosby, 2016). This underscores the need for organizations to exercise discernment when evaluating solutions that purport to be based on machine learning. As seen earlier, such a solution is often nothing more than an expert system re-branded as artificial intelligence.

Human defenders have their weaknesses too. The studies presented here have shown that an attacker who simply slows down the rate at which they inject threats is capable of defeating even seasoned defenders. This is one area that presents an opportunity for artificial intelligence to augment the actions of a network defender, by bridging the security gaps inherent to human psychology. Firewalls, intrusion detection systems, e-mail filters, and other perimeter defenses serve as great examples of where machines excel (Crosby, 2016). This is due to the fact that computers are perfect logic machines that will proceed down their pre-programmed path, no matter how flawed it may be. As such, an artificial intelligence system that can leverage this perfect logic to make up for the inefficiencies in human behavior would serve the cyber security industry well.

The question still remains, though, as to whether or not a human defender can be replaced by artificial intelligence. The answer, at the moment, appears to be no (Crosby, 2016). As has been underscored, computers do many things well, but their ability to reason and pivot in a dynamic scenario is still very limited for the purposes of cyber security and robust network defense. It can be deduced from the data examined here that this is largely due to the fact that a machine simply lacks sentience. As such, a machine cannot reason based on empathetic or intuitive feelings. This is where humans stand out as defenders. A human’s ability to study an individual, and make decisions based on intuition derived from empathy, experience, knowledge, or a combination of all three, is something that has yet to be reproduced by an algorithm. Additionally, there are numerous variables in human psychology that are yet to be understood by scientists, much less reproduced in a machine. The psyche of an attacker can be complex and arcane, especially to a defender observing their actions from a distance. The result is a sometimes-unpredictable adversary, who may change tactics, techniques, and procedures based on feelings such as fear, sadness, happiness, confidence, and a number of other variables. Machines do a fantastic job of protecting the defender’s environment from standard viruses, password spray attacks, and other common tactics. However, it is important to remember that this is the result of a well-developed expert system, rather than robust artificial intelligence.

References

Harel, Y., Ben Gal, I., & Elovici, Y. (2017, July). Cyber security and the role of intelligent systems in addressing its challenges. ACM Transactions on Intelligent Systems and Technology, 8(4). doi: 10.1145/3057729

Dutt, V. (2013, June). Cyber situation awareness: modeling detection of cyber attacks with instance-based learning theory. Human Factors, 55(3).

Roy, S., Ellis, S., Shiva, D., Dasgupta, D., Sandilya, V., & Wu, Q. (2010). A survey of game theory as applied to network security. 2010 43rd Hawaii International Conference on System Sciences, pp. 1-10. doi: 10.1109/HICSS.2010.35

Matusitz, J. (2009). A Postmodern Theory of Cyberterrorism: Game Theory. Information Security Journal: A Global Perspective, 18(6), 273-281. doi: 10.1080/19393550903200474

Crosby, S. (2016, January). Machine learning is cybersecurity’s latest pipe dream. Software World, January 2016.

Crowdstrike. (2017, April 28). A primer on machine learning in endpoint security. Retrieved from https://www.crowdstrike.com/blog/a-primer-on-machine-learning-in-endpoint-security/
