




Open access | Published: 10 August 2020

Using deep learning to solve computer security challenges: a survey

Yoon-Ho Choi 1,2, Peng Liu 1, Zitong Shang 1, Haizhou Wang 1, Zhilong Wang 1, Lan Zhang 1, Junwei Zhou 3 and Qingtian Zou 1

Cybersecurity, volume 3, Article number: 15 (2020)


Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interests in the computer security community. This paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending return-oriented programming (ROP) attacks, achieving control-flow integrity (CFI), defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security.

Introduction

Using machine learning techniques to solve computer security challenges is not a new idea. For example, in 1998, Ghosh and others (Ghosh et al. 1998) proposed training a (traditional) neural network based anomaly detection scheme (i.e., detecting anomalous and unknown intrusions against programs); in 2003, Hu and others (Hu et al. 2003) and Heller and others (Heller et al. 2003) applied Support Vector Machines to build anomaly detection schemes (e.g., detecting anomalous Windows registry accesses).

The machine-learning-based computer security research investigations during 1990-2010, however, have not been very impactful. For example, to the best of our knowledge, none of the machine learning applications proposed in (Ghosh et al. 1998; Hu et al. 2003; Heller et al. 2003) has been incorporated into a widely deployed commercial intrusion-detection product.

Regarding why they were not very impactful, although researchers in the computer security community seem to hold different opinions, the following remarks by Sommer and Paxson (Sommer and Paxson 2010) (in the context of intrusion detection) have resonated with many researchers:

Remark A: “It is crucial to have a clear picture of what problem a system targets: what specifically are the attacks to be detected? The more narrowly one can define the target activity, the better one can tailor a detector to its specifics and reduce the potential for misclassifications.” ( Sommer and Paxson 2010 )

Remark B: “If one cannot make a solid argument for the relation of the features to the attacks of interest, the resulting study risks foundering on serious flaws.” ( Sommer and Paxson 2010 )

These insightful remarks, though well aligned with the machine learning techniques used by security researchers during 1990-2010, raise concerns that could become less significant with Deep Learning (DL), a rapidly emerging machine learning technology, due to the following observations. First, Remark A implies that even if the same machine learning method is used, an algorithm employing a cost function based on a more specifically defined target attack activity could perform substantially better than an algorithm deploying a less specifically defined cost function. This concern could be less significant with DL, since a few recent studies have shown that even if the target attack activity is not narrowly defined, a DL model could still achieve very high classification accuracy. Second, Remark B implies that if feature engineering is not done properly, the trained machine learning models could be plagued by serious flaws. This concern could also be less significant with DL, since many deep learning neural networks require less feature engineering than conventional machine learning techniques.

As stated in the NSCAI Interim Report for Congress (2019), “DL is a statistical technique that exploits large quantities of data as training sets for a network with multiple hidden layers, called a deep neural network (DNN). A DNN is trained on a dataset, generating outputs, calculating errors, and adjusting its internal parameters. Then the process is repeated hundreds of thousands of times until the network achieves an acceptable level of performance. It has proven to be an effective technique for image classification, object detection, speech recognition, and natural language processing–problems that challenged researchers for decades. By learning from data, DNNs can solve some problems much more effectively, and also solve problems that were never solvable before.”
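
To make the training process described in the quotation concrete, the following minimal PyTorch sketch iterates the generate-outputs, calculate-error, adjust-parameters loop; the network size, synthetic data, and hyperparameters are illustrative assumptions rather than settings from any surveyed work.

```python
# Minimal sketch of the DNN training loop described above (illustrative
# hyperparameters; not taken from any surveyed paper).
import torch
import torch.nn as nn

model = nn.Sequential(          # a small DNN with two hidden layers
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),           # two output classes, e.g., benign / malicious
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for a labeled training set.
x = torch.randn(1024, 20)
y = torch.randint(0, 2, (1024,))

for epoch in range(100):            # repeat until performance is acceptable
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)     # generate outputs and calculate the error
    loss.backward()                 # propagate the error
    optimizer.step()                # adjust the internal parameters
```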

Now let’s take a high-level look at how DL could make it substantially easier to overcome the challenges identified by Sommer and Paxson ( Sommer and Paxson 2010 ). First, one major advantage of DL is that it makes learning algorithms less dependent on feature engineering. This characteristic of DL makes it easier to overcome the challenge indicated by Remark B. Second, another major advantage of DL is that it could achieve high classification accuracy with minimum domain knowledge. This characteristic of DL makes it easier to overcome the challenge indicated by Remark A.

Key observation. The above discussion indicates that DL could be a game changer in applying machine learning techniques to solving computer security challenges.

Motivated by this observation, this paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. It should be noted that since this paper aims to provide a dedicated review, non-deep-learning techniques and their security applications are outside the scope of this paper.

The remainder of the paper is organized as follows. In the “A four-phase workflow framework can summarize the existing works in a unified manner” section, we present a four-phase workflow framework which we use to summarize the existing works in a unified manner. In the “A closer look at applications of deep learning in solving security-oriented program analysis challenges” through “A closer look at applications of deep learning in security-oriented fuzzing” sections, we review the eight computer security problems being solved by applications of Deep Learning, respectively. In the “Discussion” section, we discuss certain similarities and dissimilarities among the existing works. In the “Further areas of investigation” section, we mention four further areas of investigation. In the “Conclusion” section, we conclude the paper.

A four-phase workflow framework can summarize the existing works in a unified manner

We found that a four-phase workflow framework can provide a unified way to summarize all the research works surveyed by us. In particular, each work surveyed by us employs a particular workflow when using machine learning techniques to solve a computer security challenge, and each workflow consists of two or more phases. By “a unified way”, we mean that every workflow surveyed by us is essentially an instantiation of the common workflow pattern shown in Fig. 1.

Figure 1. Overview of the four-phase workflow

Definitions of the four phases

The four phases, shown in Fig. 1, are defined as follows. To make the definitions of the four phases more tangible, we use a running example to illustrate each of the four phases.

Phase I. (Obtaining the raw data)

In this phase, certain raw data are collected. Running Example: When Deep Learning is used to detect suspicious events in a Hadoop distributed file system (HDFS), the raw data are usually the events (e.g., a block is allocated, read, written, replicated, or deleted) that have happened to each block. Since these events are recorded in Hadoop logs, the log files hold the raw data. Since each event is uniquely identified by a particular (block ID, timestamp) tuple, we could simply view the raw data as n event sequences. Here n is the total number of blocks in the HDFS. For example, the raw data collected in Xu et al. (2009) in total consists of 11,197,954 events. Since 575,139 blocks were in the HDFS, there were 575,139 event sequences in the raw data, and on average each event sequence had 19 events. One such event sequence is shown as follows:

[Figure: an example raw event sequence extracted from the HDFS logs]

Phase II. (Data preprocessing)

Both Phase II and Phase III aim to properly extract and represent the useful information held in the raw data collected in Phase I. Both Phase II and Phase III are closely related to feature engineering. A key difference between Phase II and Phase III is that Phase III is completely dedicated to representation learning, while Phase II is focused on all the information extraction and data processing operations that are not based on representation learning. Running Example: Let’s revisit the aforementioned HDFS. Each recorded event is described by unstructured text. In Phase II, the unstructured text is parsed to a data structure that shows the event type and a list of event variables in (name, value) pairs. Since there are 29 types of events in the HDFS, each event is represented by an integer from 1 to 29 according to its type. In this way, the aforementioned example event sequence can be transformed to:

22, 5, 5, 7
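
To illustrate how this Phase II step might be implemented for the running example, the sketch below groups raw HDFS log lines by block ID and maps each line to an event-type integer. The regular expression and the keyword-to-ID table are hypothetical stand-ins; real systems recover the 29 event templates with a dedicated log parser.

```python
# Simplified sketch of Phases I/II for the HDFS running example: group raw
# log lines by block ID and encode each event by its type ID.
import re
from collections import defaultdict

BLOCK_ID = re.compile(r"blk_-?\d+")

# Hypothetical mapping from a log-template keyword to its event-type ID (1..29).
TEMPLATE_TO_ID = {"Allocated": 22, "Receiving": 5, "Deleted": 7}

def encode_logs(log_lines):
    """Group events by block ID and encode each event by its type ID."""
    sequences = defaultdict(list)          # block ID -> sequence of event-type IDs
    for line in log_lines:
        match = BLOCK_ID.search(line)
        if not match:
            continue
        event_id = next((i for kw, i in TEMPLATE_TO_ID.items() if kw in line), None)
        if event_id is not None:
            sequences[match.group(0)].append(event_id)
    return dict(sequences)

logs = [
    "INFO Allocated block blk_123",
    "INFO Receiving block blk_123 from /10.0.0.2",
    "INFO Receiving block blk_123 from /10.0.0.3",
    "INFO Deleted block blk_123",
]
print(encode_logs(logs))   # {'blk_123': [22, 5, 5, 7]}
```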

Phase III. (Representation learning)

As stated in Bengio et al. (2013), representation learning is about “learning representations of the data that make it easier to extract useful information when building classifiers or other predictors.” Running Example: Let’s revisit the same HDFS. Although DeepLog (Du et al. 2017) directly employed one-hot vectors to represent the event types without representation learning, if we view an event type as a word in a structured language, one may actually use the word embedding technique to represent each event type. It should be noted that word embedding is a representation learning technique.

Phase IV. (Classifier learning)

This phase aims to build specific classifiers or other predictors through Deep Learning. Running Example: Let’s revisit the same HDFS. DeepLog ( Du et al. 2017 ) used Deep Learning to build a stacked LSTM neural network for anomaly detection. For example, let’s consider event sequence {22,5,5,5,11,9,11,9,11,9,26,26,26} in which each integer represents the event type of the corresponding event in the event sequence. Given a window size h = 4, the input sample and the output label pairs to train DeepLog will be: {22,5,5,5 → 11 }, {5,5,5,11 → 9 }, {5,5,11,9 → 11 }, and so forth. In the detection stage, DeepLog examines each individual event. It determines if an event is treated as normal or abnormal according to whether the event’s type is predicted by the LSTM neural network, given the history of event types. If the event’s type is among the top g predicted types, the event is treated as normal; otherwise, it is treated as abnormal.
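
The sliding-window construction and the top-g check described above can be sketched as follows; `predict_topg` is a hypothetical stand-in for the trained stacked LSTM, and the defaults h = 4 and g = 9 are illustrative.

```python
# Sketch of the DeepLog-style Phase IV usage described above: build
# (window -> next event) training pairs and flag an event as abnormal if its
# type is not among the model's top-g predictions.
def make_training_pairs(event_types, h=4):
    """Slide a window of size h to build (history -> next event) pairs."""
    return [(event_types[i:i + h], event_types[i + h])
            for i in range(len(event_types) - h)]

def is_abnormal(history, observed_type, predict_topg, g=9):
    """Abnormal if the observed type is not among the model's top-g predictions."""
    return observed_type not in predict_topg(history, g)

sequence = [22, 5, 5, 5, 11, 9, 11, 9, 11, 9, 26, 26, 26]
print(make_training_pairs(sequence)[:3])
# [([22, 5, 5, 5], 11), ([5, 5, 5, 11], 9), ([5, 5, 11, 9], 11)]
```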

Using the four-phase workflow framework to summarize some representative research works

In this subsection, we use the four-phase workflow framework to summarize two representative works for each security problem. System security includes many research sub-topics; however, not every topic is suitable for deep-learning-based methods due to its intrinsic characteristics. Among the security research subjects that can be combined with deep learning, some have undergone intensive research in recent years, while others are just emerging. We notice that there are five mainstream research directions in system security. Since this paper mainly focuses on system security, other mainstream research directions (e.g., deepfake) are out of scope. Therefore, we choose these five widely noticed research directions and three emerging research directions in our survey:

In security-oriented program analysis, malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and defending network attacks, deep learning based methods have already undergone intensive research.

In defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), and fuzzing, deep learning based methods are emerging research topics.

We select two representative works for each research topic in our survey. Our criteria for selecting papers mainly include: 1) Pioneer (one of the first papers in the field); 2) Top (published at a top conference or in a top journal); 3) Novelty; 4) Citation (the paper is highly cited); 5) Effectiveness (the paper reports strong results); 6) Representative (the paper is a representative work for a branch of the research direction). Table 1 lists the reasons why we chose each paper, ordered according to their importance.

The summary is shown in Table 2. There are three columns in the table. In the first column, we list the eight security problems: security-oriented program analysis, defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), defending network attacks (NA), malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and fuzzing for software security. In the second column, we list the two very recent representative works for each security problem. In the “Summary” column, we sequentially describe how the four phases are deployed in each work and then list the evaluation results for each work in terms of accuracy (ACC), precision (PRC), recall (REC), F1 score (F1), false-positive rate (FPR), and false-negative rate (FNR).

Methodology for reviewing the existing works

Data representation (or feature engineering) plays an important role in solving security problems with Deep Learning. This is because data representation is a way to take advantage of human ingenuity and prior knowledge to extract and organize the discriminative information from the data. Much of the effort in deploying machine learning algorithms in the security domain actually goes into the design of preprocessing pipelines and data transformations that result in a representation of the data able to support effective machine learning.

In order to expand the scope and ease of applicability of machine learning in the security domain, it would be highly desirable to find a proper way to represent the data, one that disentangles, rather than entangles and hides, the different explanatory factors of variation behind the data. To let this survey adequately reflect the important role played by data representation, our review focuses on how the following three questions are answered by the existing works:

Question 1: Is Phase II pervasively done in the literature? When Phase II is skipped in a work, are there any particular reasons?

Question 2: Is Phase III employed in the literature? When Phase III is skipped in a work, are there any particular reasons?

Question 3: When solving different security problems, is there any commonality in terms of the (types of) classifiers learned in Phase IV? Among the works solving the same security problem, is there dissimilarity in terms of classifiers learned in Phase IV?

To group the Phase III methods across different applications of Deep Learning in solving the same security problem, we introduce a classification tree, as shown in Fig. 2. The classification tree categorizes the Phase III methods in our selected survey works into four classes. First, class 1 includes the Phase III methods which do not consider representation learning. Second, class 2 includes the Phase III methods which consider representation learning but do not adopt it. Third, class 3 includes the Phase III methods which consider and adopt representation learning but do not compare the performance with other methods. Finally, class 4 includes the Phase III methods which consider and adopt representation learning and compare the performance with other methods.

Figure 2. Classification tree for different Phase III methods. Here, consideration, adoption, and comparison indicate that a work considers Phase III, adopts Phase III, and makes a comparison with other methods, respectively.
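
The decision logic of this classification tree can be written compactly as below; the three boolean inputs correspond to the consideration, adoption, and comparison questions in Fig. 2.

```python
# Minimal sketch of the Fig. 2 taxonomy: assign a surveyed work to one of the
# four Phase III classes based on three yes/no questions.
def phase3_class(considers: bool, adopts: bool, compares: bool) -> int:
    if not considers:
        return 1                      # representation learning not considered
    if not adopts:
        return 2                      # considered but not adopted
    return 4 if compares else 3       # adopted, with or without a comparison

print(phase3_class(True, True, True))   # 4, e.g., EKLAVYA or DEEPVSA
```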

In the remainder of this paper, we take a closer look at how each of the eight security problems is being solved by applications of Deep Learning in the literature.

A closer look at applications of deep learning in solving security-oriented program analysis challenges

In recent years, security-oriented program analysis has been widely used in software security. For example, symbolic execution and taint analysis are used to discover, detect, and analyze vulnerabilities in programs. Control flow analysis, data flow analysis, and pointer/alias analysis are important components when enforcing many security strategies, such as control flow integrity, data flow integrity, and dangling pointer elimination. Reverse engineering is used by defenders and attackers to understand the logic of a program without source code.

In security-oriented program analysis, there are many open problems, such as precise pointer/alias analysis, accurate and complete reverse engineering, complex constraint solving, program de-obfuscation, and so on. Some problems have been theoretically proven to be NP-hard, and others still need a great deal of human effort to solve. Each of them requires considerable domain knowledge and expert experience to develop better solutions. Essentially, the main challenge in solving them through traditional approaches lies in the sophisticated rules between the features and labels, which may change in different contexts. Therefore, on the one hand, it takes a large quantity of human effort to develop rules to solve the problems; on the other hand, even the most experienced expert cannot guarantee completeness. Fortunately, deep learning is adept at finding relations between features and labels when given a large amount of training data. It can quickly and comprehensively find all the relations if the training samples are representative and effectively encoded.

In this section, we review the four very recent representative works that use Deep Learning for security-oriented program analysis. We observed that they focused on different goals. Shin et al. designed a model (Shin et al. 2015) to identify function boundaries. EKLAVYA (Chua et al. 2017) was developed to learn function types. Gemini (Xu et al. 2017) was proposed to detect similarity among functions. DEEPVSA (Guo et al. 2019) was designed to learn the memory region of an indirect addressing from the code sequence. Among these works, we select two representative works (Shin et al. 2015; Chua et al. 2017) and summarize the analysis results in Table 2 in detail.

Our review is centered around the three questions described in the “Methodology for reviewing the existing works” section. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

Key findings from a closer look

From a close look at the very recent applications using Deep Learning for solving security-oriented program analysis challenges, we observed the following:

Observation 3.1: All of the works in our survey used binary files as their raw data. Phase II in these works had one similar and straightforward goal: extracting code sequences from the binary. The difference among them was that the code sequence was extracted directly from the binary file when solving static program analysis problems, whereas it was extracted from the program execution when solving dynamic program analysis problems.

Observation 3.2: Most data representation methods took the domain knowledge into account.

Most data representation methods took the domain knowledge into account, i.e., what kind of information the authors wanted to preserve when processing their data. Note that feature selection has a wide influence on Phase II and Phase III, for example, on embedding granularities and representation learning methods. Gemini (Xu et al. 2017) selected function-level features, while the other works in our survey selected instruction-level features. Specifically, all the works except Gemini (Xu et al. 2017) vectorized code sequences at the instruction level.

Observation 3.3: To better support data representation for high performance, some works adopted representation learning.

For instance, DEEPVSA (Guo et al. 2019) employed a representation learning method, i.e., a bi-directional LSTM, to learn data dependency within instructions. EKLAVYA (Chua et al. 2017) adopted a representation learning method, i.e., the word2vec technique, to extract inter-instruction information. It is worth noting that Gemini (Xu et al. 2017) adopts the Structure2vec embedding network in its siamese architecture in Phase IV (see details in Observation 3.7). The Structure2vec embedding network learned information from an attributed control flow graph.
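
As a hedged illustration of the word2vec-style Phase III step attributed to EKLAVYA, the sketch below learns an embedding for each instruction from tokenized instruction sequences; the tokenization, corpus, and hyperparameters are assumptions, and gensim 4 or later is assumed.

```python
# Sketch of instruction-level word2vec embeddings: each "sentence" is one
# instruction sequence and each "word" is one instruction.
from gensim.models import Word2Vec

instruction_sequences = [
    ["push ebp", "mov ebp,esp", "sub esp,0x10", "ret"],
    ["push ebp", "mov ebp,esp", "xor eax,eax", "ret"],
]

model = Word2Vec(instruction_sequences, vector_size=64, window=5,
                 min_count=1, sg=1)          # skip-gram embeddings
vec = model.wv["mov ebp,esp"]                # 64-dimensional instruction vector
print(vec.shape)                             # (64,)
```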

Observation 3.4: According to our taxonomy, most works in our survey were classified into class 4.

To compare the Phase III methods, we introduced a classification tree with three layers, as shown in Fig. 2, to group different works into four categories. The classification tree grouped our surveyed works into four classes according to whether they considered representation learning, whether they adopted representation learning, and whether they compared their methods with others', respectively, when designing their frameworks. According to our taxonomy, EKLAVYA (Chua et al. 2017) and DEEPVSA (Guo et al. 2019) were grouped into class 4 shown in Fig. 2. Also, Gemini's work (Xu et al. 2017) and Shin et al.'s work (Shin et al. 2015) belonged to class 1 and class 2 shown in Fig. 2, respectively.

Observation 3.5: All the works in our survey explain why they adopted or did not adopt a representation learning algorithm.

Two works in our survey adopted representation learning for different reasons: to enhance the model's ability to generalize (Chua et al. 2017), and to learn the dependency within instructions (Guo et al. 2019). It is worth noting that Shin et al. did not adopt representation learning because they wanted to preserve the "attractive" feature of neural networks over other machine learning methods: simplicity. As they stated, "first, neural networks can learn directly from the original representation with minimal preprocessing (or 'feature engineering') needed" and "second, neural networks can learn end-to-end, where each of its constituent stages are trained simultaneously in order to best solve the end goal." Although Gemini (Xu et al. 2017) did not adopt representation learning when processing its raw data, its Deep Learning model in a siamese structure consisted of two graph embedding networks and one cosine function.

Observation 3.6: The analysis results showed that a suitable representation learning method could improve the accuracy of Deep Learning models.

DEEPVSA (Guo et al. 2019) designed a series of experiments to evaluate the effectiveness of its representation learning method. By combining it with domain knowledge, EKLAVYA (Chua et al. 2017) employed t-SNE plots and analogical reasoning to explain the effectiveness of its representation learning method in an intuitive way.

Observation 3.7: Various Phase IV methods were used.

In Phase IV, Gemini (Xu et al. 2017) adopted a siamese architecture model which consisted of two Structure2vec embedding networks and one cosine function. The siamese architecture took two functions as its input and produced a similarity score as the output. The other three works (Shin et al. 2015; Chua et al. 2017; Guo et al. 2019) adopted a bi-directional RNN, an RNN, and a bi-directional LSTM, respectively. Shin et al. adopted a bi-directional RNN because they wanted to combine both past and future information when making a prediction for the present instruction (Shin et al. 2015). DEEPVSA (Guo et al. 2019) adopted a bi-directional LSTM to enable its model to infer memory regions in both forward and backward directions.
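
The following PyTorch sketch shows the general shape of a bi-directional recurrent model of the kind described for Shin et al.: it reads raw bytes and emits a per-byte boundary decision using both past and future context. The GRU cell, layer sizes, and byte-level embedding are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a bi-directional RNN for function boundary identification.
import torch
import torch.nn as nn

class BoundaryRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(256, 16)               # one embedding per byte value
        self.rnn = nn.GRU(16, hidden, batch_first=True,
                          bidirectional=True)            # past + future context
        self.out = nn.Linear(2 * hidden, 2)              # boundary / not boundary

    def forward(self, byte_ids):                         # (batch, seq_len) of 0..255
        h, _ = self.rnn(self.embed(byte_ids))
        return self.out(h)                               # per-byte logits

model = BoundaryRNN()
logits = model(torch.randint(0, 256, (1, 100)))          # one 100-byte code snippet
print(logits.shape)                                      # torch.Size([1, 100, 2])
```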

The above observations seem to indicate the following indications:

Indication 3.1: Phase III is not always necessary.

Not all authors regard representation learning as a good choice, even though some experiments show that representation learning can improve the final results. They value the simplicity of Deep Learning methods more and consider that adopting representation learning weakens this simplicity.

Indication 3.2: Even though the ultimate objective of Phase III in the four surveyed works is to train a model with better accuracy, they have different specific motivations as described in Observation 3.5.

When authors choose representation learning, they usually try to convince readers of the effectiveness of their choice through empirical or theoretical analysis.

Indication 3.3: Observation 3.7 indicates that authors usually refer to domain knowledge when designing the architecture of a Deep Learning model.

For instance, the works we reviewed commonly adopt a bi-directional RNN when their prediction is partly based on future information in the data sequence.

Despite the effectiveness and agility of deep-learning-based methods, there are still challenges in developing a scheme with high accuracy, due to the hierarchical data structure, large amounts of noise, and unbalanced data composition in program analysis. For instance, an instruction sequence, a typical data sample in program analysis, contains a three-level hierarchy: sequence–instruction–opcode/operand. To make things worse, each level may contain many different structures, e.g., one-operand instructions and multi-operand instructions, which makes it harder to encode the training data.

A closer look at applications of deep learning in defending ROP attacks

The return-oriented programming (ROP) attack is one of the most dangerous code reuse attacks; it allows attackers to launch control-flow hijacking attacks without injecting any malicious code. Rather, it leverages particular instruction sequences (called "gadgets") widely existing in the program space to achieve Turing-complete attacks (Shacham et al. 2007). Gadgets are instruction sequences that end with a RET instruction. Therefore, they can be chained together by specifying the return addresses on the program stack. Many traditional techniques could be used to detect ROP attacks, such as control-flow integrity (CFI) (Abadi et al. 2009), but many of them either have low detection rates or high runtime overheads. ROP payloads do not contain any code. In other words, analyzing a ROP payload without the context of the program's memory dump is meaningless. Thus, the most popular way of detecting and preventing ROP attacks is control-flow integrity. The challenge, after acquiring the instruction sequences, is that it is hard to recognize whether the control flow is normal. Traditional methods use the control flow graph (CFG) to identify whether the control flow is normal, but attackers can design instruction sequences that follow the normal control flow defined by the CFG. In essence, it is very hard to design a CFG that excludes every single possible combination of instructions that can be used to launch ROP attacks. Therefore, using data-driven methods could help eliminate such problems.

In this section, we will review the very recent three representative works that use Deep Learning for defending ROP attacks: ROPNN ( Li et al. 2018 ), HeNet ( Chen et al. 2018 ) and DeepCheck ( Zhang et al. 2019 ). ROPNN ( Li et al. 2018 ) aims to detect ROP attacks, HeNet ( Chen et al. 2018 ) aims to detect malware using CFI, and DeepCheck ( Zhang et al. 2019 ) aims at detecting all kinds of code reuse attacks.

Specifically, ROPNN is designed to protect one single program at a time, and its training data are generated from real-world programs along with their executions. First, it generates benign and malicious data by "chaining up" normally executed instruction sequences and "chaining up" gadgets with the help of a gadget generation tool, respectively, after the memory dumps of programs are created. Each data sample is a byte-level instruction sequence labeled as "benign" or "malicious". Second, ROPNN is trained using both malicious and benign data. Third, the trained model is deployed to a target machine. After the protected program starts, the executed instruction sequences are traced and fed into the trained model; the protected program is terminated once the model finds that the instruction sequences are likely to be malicious.

HeNet is also proposed to protect a single program. Its malicious data and benign data are generated by collecting trace data through Intel PT from malware and normal software, respectively. Besides, HeNet preprocesses its dataset and shapes each data sample in the format of an image, so that it can apply transfer learning from a model pre-trained on ImageNet. Then, HeNet is trained and deployed on machines supporting Intel PT to collect and classify the program's execution trace online.

The training data for DeepCheck are acquired from CFGs, which are constructed by disassembling the programs and using information from Intel PT. After the CFG for a protected program is constructed, the authors sample benign instruction sequences by chaining up basic blocks that are connected by edges, and sample malicious instruction sequences by chaining up basic blocks that are not connected by edges. Although a CFG is needed during training, there is no need to construct a CFG after the training phase. After deployment, instruction sequences are constructed by leveraging Intel PT on the protected program. Then the trained model classifies whether the instruction sequences are malicious or benign.

We observed that none of the works considered Phase III, so all of them belong to class 1 according to our taxonomy as shown in Fig.  2 . The analysis results of ROPNN ( Li et al. 2018 ) and HeNet ( Chen et al. 2018 ) are shown in Table  2 . Also, we observed that three works had different goals.

From a close look at the very recent applications using Deep Learning for defending return-oriented programming attacks, we observed the following:

Observation 4.1: All the works ( Li et al. 2018 ; Zhang et al. 2019 ; Chen et al. 2018 ) in this survey focused on data generation and acquisition.

In ROPNN (Li et al. 2018), malicious samples (gadget chains) were generated using an automated gadget generator (i.e., ROPGadget (Salwant 2015)) and a CPU emulator (i.e., Unicorn (Unicorn-The ultimate CPU emulator 2015)). ROPGadget was used to extract instruction sequences that could be used as gadgets from a program, and Unicorn was used to validate the instruction sequences. The corresponding benign samples (gadget-chain-like instruction sequences) were generated by disassembling a set of programs. DeepCheck (Zhang et al. 2019) refers to the key idea of control-flow integrity (Abadi et al. 2009). It generates a program's run-time control flow through a new feature of Intel CPUs (Intel Processor Tracing), then compares the run-time control flow with the program's control-flow graph (CFG) generated through static analysis. Benign instruction sequences are those within the program's CFG, and malicious ones are those outside it. In HeNet (Chen et al. 2018), the program's execution trace was extracted in a similar way as in DeepCheck. Then, each byte was transformed into a pixel with an intensity between 0 and 255. Known malware samples and benign software samples were used to generate malicious data and benign data, respectively.

Observation 4.2: None of the ROP works in this survey deployed Phase III.

Both ROPNN (Li et al. 2018) and DeepCheck (Zhang et al. 2019) used binary instruction sequences for training. In ROPNN (Li et al. 2018), one byte was used as the basic element for data pre-processing. Bytes were formed into one-hot matrices and flattened for a 1-dimensional convolutional layer. In DeepCheck (Zhang et al. 2019), a half-byte was used as the basic unit. Each half-byte (4 bits) was transformed to its decimal form, ranging from 0 to 15, as the basic element of the input vector, and then fed into a fully-connected input layer. On the other hand, HeNet (Chen et al. 2018) used a different kind of data. By the time this survey was drafted, the source code of HeNet was not available to the public, and thus the details of the data pre-processing could not be investigated. However, it is still clear that HeNet used binary branch information collected from Intel PT rather than binary instructions. In HeNet, each byte was converted to one decimal number ranging from 0 to 255. Byte sequences were sliced and formed into image sequences (each pixel representing one byte) for a fully-connected input layer.
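
A hedged sketch of the byte-level pre-processing and 1-dimensional convolution described for ROPNN is given below; the sequence length, filter sizes, and pooling choice are illustrative assumptions rather than the published configuration.

```python
# Sketch: encode an instruction-byte sequence as a one-hot matrix and feed it
# to a small 1-D CNN that classifies benign vs. gadget-chain sequences.
import torch
import torch.nn as nn
import torch.nn.functional as F

def bytes_to_onehot(byte_seq, length=128):
    """Pad/trim to `length` bytes and one-hot encode: (256, length) matrix."""
    ids = torch.zeros(length, dtype=torch.long)          # padding byte assumed to be 0x00
    ids[:min(len(byte_seq), length)] = torch.tensor(byte_seq[:length])
    return F.one_hot(ids, num_classes=256).T.float()     # channels-first for Conv1d

class RopCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(256, 32, kernel_size=5)
        self.fc = nn.Linear(32, 2)                        # benign vs. gadget chain

    def forward(self, x):                                 # x: (batch, 256, length)
        h = F.relu(self.conv(x)).mean(dim=2)              # global average pooling
        return self.fc(h)

sample = bytes_to_onehot([0x55, 0x89, 0xe5, 0xc3])        # push ebp; mov ebp,esp; ret
print(RopCNN()(sample.unsqueeze(0)).shape)                # torch.Size([1, 2])
```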

Observation 4.3: Fully-connected neural network was widely used.

Only ROPNN (Li et al. 2018) used a 1-dimensional convolutional neural network (CNN) to extract features. Both HeNet (Chen et al. 2018) and DeepCheck (Zhang et al. 2019) used fully-connected neural networks (FCN). None of the works used recurrent neural networks (RNN) or their variants.

Indication 4.1: It seems that one of the most important factors in the ROP problem is feature selection and data generation.

All three works use very different methods to collect/generate data, and all the authors provide very strong evidence and/or arguments to justify their approaches. ROPNN (Li et al. 2018) was trained on malicious and benign instruction sequences. However, there is no clear boundary between benign instruction sequences and malicious gadget chains. This weakness may impair the performance when applying ROPNN to real-world ROP attacks. As opposed to ROPNN, DeepCheck (Zhang et al. 2019) utilizes the CFG to generate training basic-block sequences. However, since the malicious basic-block sequences are generated by randomly connecting nodes without edges, it is not guaranteed that all the malicious basic-block sequences are executable. HeNet (Chen et al. 2018) generates its training data from malware. Technically, HeNet could be used to detect any binary exploit, but its experiment focuses on ROP attacks and achieves 100% accuracy. This shows that the source of data in the ROP problem does not need to be related to ROP attacks to produce very impressive results.

Indication 4.2: Representation learning does not seem critical when solving ROP problems using Deep Learning.

Minimal processing of data in binary form seems to be enough to transform the data into a representation suitable for neural networks. Certainly, it is also possible to represent the binary instructions at a higher level, such as opcodes, or to use embedding learning. However, as stated in (Li et al. 2018), it appears that the performance would not change much by doing so. The only benefit of representing the input data at a higher level is to reduce irrelevant information, but it seems that the neural network by itself is good enough at extracting features.

Indication 4.3: Different neural network architectures do not have much influence on the effectiveness of defending against ROP attacks.

Both HeNet (Chen et al. 2018) and DeepCheck (Zhang et al. 2019) utilize standard DNNs and achieve comparable results on ROP problems. One can infer that the input data can be easily processed by neural networks, and that the features can be easily detected after proper pre-processing.

It is not surprising that researchers are not very interested in representation learning for ROP problems, as stated in Observation 4.2. Since ROP attacks focus on gadget chains, it is straightforward for researchers to choose the gadgets directly as their training data. It is easy to map such data into a numerical representation with minimal processing. For example, one can map a binary executable to its hexadecimal ASCII representation, which could be a good representation for a neural network.

Instead, researchers focus more on data acquisition and generation. In ROP problems, the amount of data is very limited. Unlike malware and logs, ROP payloads normally only contain addresses rather than code, and addresses carry no information without the instructions at the corresponding addresses. It is thus meaningless to collect the payloads alone. To the best of our knowledge, all the previous works use picked instruction sequences rather than payloads as their training data, even though they are hard to collect.

Even though Deep Learning based methods no longer face the challenge of designing a very complex, fine-grained CFG, they suffer from the limited number of data sources. Generally, Deep Learning based methods require a lot of training data. However, real-world malicious data for ROP attacks are very hard to find, because, compared with benign data, malicious data need to be carefully crafted, and there is no existing database collecting all the ROP attacks. Without a sufficiently representative training set, the accuracy of the trained model cannot be guaranteed.

A closer look at applications of deep learning in achieving CFI

The basic ideas of control-flow integrity (CFI) techniques, proposed by Abadi in 2005 (Abadi et al. 2009), can be traced back to 2002, when Vladimir Kiriansky and his fellow researchers proposed an idea called program shepherding (Kiriansky et al. 2002), a method of monitoring the execution flow of a program at run time and enforcing security policies. The goal of CFI is to detect and prevent control-flow hijacking attacks by restricting every critical control flow transfer to a set that can only appear in correct program executions, according to a pre-built CFG. Traditional CFI techniques typically leverage some knowledge, gained from either dynamic or static analysis of the target program, combined with some code instrumentation methods, to ensure that the program runs on a correct track.

However, the problems with traditional CFI are: (1) existing CFI implementations are not compatible with some important code features (Xu et al. 2019); (2) CFGs generated by static, dynamic, or combined analysis cannot always be precise and complete due to some open problems (Horwitz 1997); (3) there always exists a certain level of compromise between accuracy, performance overhead, and other important properties (Tan and Jaeger 2017; Wang and Liu 2019). Recent research has proposed applying Deep Learning to detecting control flow violations. The results show that, compared with traditional CFI implementations, security coverage and scalability are enhanced in such a fashion (Yagemann et al. 2019). Therefore, we argue that Deep Learning could be another approach that deserves more attention from CFI researchers who aim at achieving control-flow integrity more efficiently and accurately.

In this section, we review the three very recent representative papers that use Deep Learning for achieving CFI. Among the three, two representative papers (Yagemann et al. 2019; Phan et al. 2017) are already summarized phase-by-phase in Table 2. We refer interested readers to Table 2 for a concise overview of those two papers.

Our review is centered around the three questions described in the "Methodology for reviewing the existing works" section. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

From a close look at the very recent applications using Deep Learning for achieving control-flow integrity, we observed the following:

Observation 5.1: None of the related works realizes preventive (i.e., Just-In-Time) detection of control flow violations.

After a thorough literature search, we observed that security researchers are quite behind the trend of applying Deep Learning techniques to solve security problems. Only one paper was found by us that uses Deep Learning techniques to directly enhance the performance of CFI (Yagemann et al. 2019). That paper leveraged Deep Learning to detect document malware by checking the program's execution traces generated by hardware. Specifically, the CFI violations were checked in an offline mode. So far, no works have realized Just-In-Time checking of a program's control flow.

In order to provide more insightful results, in this section we do not narrow our focus to CFI that detects attacks at run time, but extend our scope to papers that make good use of control flow related data combined with Deep Learning techniques (Phan et al. 2017; Nguyen et al. 2018). In one work, researchers used a self-constructed instruction-level CFG to detect program defects (Phan et al. 2017). In another work, researchers used a lazy-binding CFG to detect sophisticated malware (Nguyen et al. 2018).

Observation 5.2: Diverse raw data were used for evaluating CFI solutions.

In all surveyed papers, two kinds of control flow related data are used: program instruction sequences and CFGs. Barnum et al. (Yagemann et al. 2019) employed statically and dynamically generated instruction sequences acquired by program disassembly and Intel Processor Trace. CNNoverCFG (Phan et al. 2017) used a self-designed algorithm to construct an instruction-level control-flow graph. Nguyen et al. (Nguyen et al. 2018) used the proposed lazy-binding CFG to reflect the behavior of malware DEC.

Observation 5.3: All the papers in our survey adopted Phase II.

All the related papers in our survey employed Phase II to process their raw data before feeding them into the subsequent phases. In Barnum (Yagemann et al. 2019), the instruction sequences from program run-time tracing were sliced into basic blocks. Then, each basic block was assigned a unique basic-block ID (BBID). Finally, due to the nature of control-flow hijacking attacks, the sequences ending with an indirect branch instruction (e.g., indirect call/jump, return, and so on) were selected as the training data. In CNNoverCFG (Phan et al. 2017), each instruction in the CFG was labeled with its attributes from multiple perspectives, such as opcode, operands, and the function it belongs to. The training data are sequences generated by traversing the attributed control-flow graph. Nguyen et al. (Nguyen et al. 2018) converted the lazy-binding CFG to the corresponding adjacency matrix and treated the matrix as an image, which serves as their training data.
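
The basic-block slicing and BBID assignment described for Barnum can be sketched as follows; the instruction mnemonics used to mark block ends and indirect branches, as well as the window length, are simplified assumptions.

```python
# Sketch: slice a traced instruction stream into basic blocks, assign each
# distinct block a BBID, and keep BBID windows ending in an indirect branch.
INDIRECT_BRANCHES = {"call eax", "jmp rax", "ret"}        # simplified examples
BLOCK_ENDS = INDIRECT_BRANCHES | {"jz label", "jmp label"}

def to_bbid_sequences(instructions, window=8):
    block_ids, blocks, current = {}, [], []
    for ins in instructions:
        current.append(ins)
        if ins in BLOCK_ENDS:                             # basic-block terminator
            key = tuple(current)
            block_ids.setdefault(key, len(block_ids))     # unique BBID per block
            blocks.append((block_ids[key], ins))
            current = []
    # keep BBID windows whose last block ends in an indirect branch
    seq = [bbid for bbid, _ in blocks]
    ends = [i for i, (_, ins) in enumerate(blocks) if ins in INDIRECT_BRANCHES]
    return [seq[max(0, i - window + 1): i + 1] for i in ends]

trace = ["mov eax,1", "jz label", "xor eax,eax", "ret"]
print(to_bbid_sequences(trace))                           # [[0, 1]]
```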

Observation 5.4: None of the papers in our survey adopted Phase III. Instead, they directly used numerical representations as their training data. Specifically, Barnum (Yagemann et al. 2019) grouped instructions into basic blocks and then represented each basic block with a uniquely assigned ID. In CNNoverCFG (Phan et al. 2017), each instruction in the CFG was represented by a vector associated with its attributes. Nguyen et al. directly used the hashed value of the bit string representation.

Observation 5.5: Various Phase IV models were used. Barnum (Yagemann et al. 2019) utilized the BBID sequence to monitor the execution flow of the target program, which is sequence-type data. Therefore, the authors chose an LSTM architecture to better learn the relationship between instructions. In the other two papers (Phan et al. 2017; Nguyen et al. 2018), the authors trained a CNN and a directed-graph-based CNN to extract information from the control-flow graph and the image, respectively.

Indication 5.1: All the existing works did not achieve Just-In-Time CFI violation detection.

It is still a challenge to tightly embed a Deep Learning model in program execution. All existing works adopted lazy checking, i.e., checking the program's execution trace after the program has executed.

Indication 5.2: There is no unified opinion on how to generate malicious sample.

Data are hard to collect for control-flow hijacking attacks. Researchers must carefully craft malicious samples, and it is not clear whether such "handcrafted" samples can reflect the nature of control-flow hijacking attacks.

Indication 5.3: The choice of methods in Phase II is based on researchers' security domain knowledge.

The strength of using deep learning to solve CFI problems is that it can avoid the complicated process of developing algorithms to build acceptable CFGs for the protected programs. Compared with traditional approaches, DL based methods free CFI designers from studying the language features of the targeted program and also avoid the open problem (pointer analysis) in control flow analysis. Therefore, DL based CFI provides a more generalized, scalable, and secure solution. However, since using DL for the CFI problem is still at an early stage, it is not yet clear which kinds of control-flow related data are more effective in this research area. Additionally, applying DL to real-time control-flow violation detection remains an untouched area and needs further research.

A closer look at applications of deep learning in defending network attacks

Network security is becoming more and more important as we depend more and more on networks for our daily lives, work, and research. Some common network attack types include probing, denial of service (DoS), remote-to-local (R2L), etc. Traditionally, people try to detect those attacks using signatures, rules, and unsupervised anomaly detection algorithms. However, signature-based methods can be easily fooled by slightly changing the attack payload; rule-based methods need experts to regularly update rules; and unsupervised anomaly detection algorithms tend to raise many false positives. Recently, people have been trying to apply Deep Learning methods to network attack detection.

In this section, we review the seven very recent representative works that use Deep Learning for defending against network attacks. Millar et al. (2018); Varenne et al. (2019); Ustebay et al. (2019) build neural networks for multi-class classification, whose class labels include one benign label and multiple malicious labels for different attack types. Zhang et al. (2019) ignores normal network activities and proposes a parallel cross convolutional neural network (PCCN) to classify the types of malicious network activities. Yuan et al. (2017) applies Deep Learning to detecting a specific attack type, the distributed denial of service (DDoS) attack. Yin et al. (2017); Faker and Dogdu (2019) explore both binary classification and multi-class classification of benign and malicious activities. Among these seven works, we select two representative works (Millar et al. 2018; Zhang et al. 2019) and summarize the main aspects of their approaches regarding whether the four phases exist in their works and what exactly they do in each phase if it exists. We direct interested readers to Table 2 for a concise overview of these two works.

From a close look at the very recent applications using Deep Learning for solving network attack challenges, we observed the following:

Observation 6.1: All the seven works in our survey used public datasets, such as UNSW-NB15 ( Moustafa and Slay 2015 ) and CICIDS2017 ( IDS 2017 Datasets 2019 ).

The public datasets were all generated in test-bed environments, with unbalanced simulated benign and attack activities. For attack activities, the dataset providers launched multiple types of attacks, and the numbers of malicious data for those attack activities were also unbalanced.

Observation 6.2: The public datasets were provided in one of two data formats, i.e., PCAP and CSV.

One was raw PCAP or parsed CSV format, containing network packet level features; the other was also CSV format, containing network flow level features, which summarize the statistical information of many network packets. Of the seven works, (Yuan et al. 2017; Varenne et al. 2019) used packet information as raw inputs, (Yin et al. 2017; Zhang et al. 2019; Ustebay et al. 2019; Faker and Dogdu 2019) used flow information as raw inputs, and (Millar et al. 2018) explored both cases.

Observation 6.3: In order to parse the raw inputs, preprocessing methods, including one-hot vectors for categorical texts, normalization on numeric data, and removal of unused features/data samples, were commonly used.

Commonly removed features include IP addresses and timestamps. Faker and Dogdu (2019) also removed port numbers from the used features. By doing this, they claimed that they could "avoid over-fitting and let the neural network learn characteristics of packets themselves". One outlier was that, when using packet level features in one experiment, (Millar et al. 2018) blindly chose the first 50 bytes of each network packet without any feature extraction process and fed them into the neural network.
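
A minimal sketch of such Phase II preprocessing on flow-level CSV data is shown below, assuming hypothetical column names: identifying fields are dropped, categorical fields are one-hot encoded, and numeric fields are normalized with scikit-learn.

```python
# Sketch of Observation 6.3: drop identifying columns, one-hot encode
# categorical fields, and normalize numeric fields. Column names are
# illustrative and do not correspond to any specific dataset.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

df = pd.DataFrame({
    "src_ip": ["10.0.0.1", "10.0.0.2"], "timestamp": [1, 2],   # to be removed
    "protocol": ["tcp", "udp"],                                 # categorical
    "duration": [0.3, 12.5], "bytes": [512, 20480],             # numeric
    "label": [0, 1],
})
X = df.drop(columns=["src_ip", "timestamp", "label"])           # remove unused features
y = df["label"]

preprocess = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["protocol"]),
    ("scale", MinMaxScaler(), ["duration", "bytes"]),
])
print(preprocess.fit_transform(X))       # ready to feed into a neural network
```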

Observation 6.4: Using image representation improved the performance of security solutions using Deep Learning.

After preprocessing the raw data, (Zhang et al. 2019) transformed the data into an image representation, while (Yuan et al. 2017; Varenne et al. 2019; Faker and Dogdu 2019; Ustebay et al. 2019; Yin et al. 2017) directly used the original vectors as input data. Also, (Millar et al. 2018) explored both cases and reported better performance using the image representation.

Observation 6.5: None of the seven surveyed works considered representation learning.

All the seven surveyed works belong to class 1 shown in Fig. 2. They either directly used the processed vectors as inputs to the neural networks, or changed the representation without explanation. One work (Millar et al. 2018) provided a comparison of two different representations (vectors and images) for the same type of raw input. However, the other works applied different preprocessing methods in Phase II; since the different preprocessing methods generated different feature spaces, it was difficult to compare the experimental results.

Observation 6.6: Binary classification models showed better results in most experiments.

Among all the seven surveyed works, (Yuan et al. 2017) focused on one specific attack type and only did binary classification to classify whether the network traffic was benign or malicious. (Millar et al. 2018; Ustebay et al. 2019; Zhang et al. 2019; Varenne et al. 2019) included more attack types and did multi-class classification to classify the type of malicious activities, and (Yin et al. 2017; Faker and Dogdu 2019) explored both cases. As for multi-class classification, the accuracy for selective classes was good, while the accuracy for other classes, usually classes with far fewer data samples, suffered degradations of up to 20%.

Observation 6.7: Data representation influenced the choice of the neural network model.

Indication 6.1: All the works in our survey adopt some kind of preprocessing method in Phase II, because the raw data provided in the public datasets are either not ready for neural networks or of too low quality to be directly used as data samples.

Preprocessing methods can help increase neural network performance by improving the quality of the data samples. Furthermore, by reducing the feature space, pre-processing can also improve the efficiency of neural network training and testing. Thus, Phase II should not be skipped; if it is, the performance of the neural network is expected to go down considerably.

Indication 6.2: Although Phase III is not employed in any of the seven surveyed works, none of them explains the reason for it; they all leave representation learning out of consideration.

Indication 6.3: Because no work uses representation learning, the effectiveness are not well-studied.

Among the various factors, the choice of preprocessing methods seems to have the largest impact, because it directly affects the data samples fed to the neural network.

Indication 6.4: There is no guarantee that CNN also works well on images converted from network features.

Some works that use an image data representation adopt CNN in Phase IV. Although CNN has proven to work well on image classification in recent years, images converted from network features differ substantially from natural images, so comparable performance cannot be assumed.

From the observations and indications above, we present two recommendations. (1) Researchers can generate their own datasets for the specific network attack they want to detect. As stated, the public datasets have highly imbalanced numbers of samples across classes. Such imbalance doubtless reflects real-world network environments, in which normal activities are the majority, but it is not ideal for Deep Learning. (Varenne et al. 2019) tries to address this problem by oversampling the malicious data, but it is better to start with a balanced dataset. (2) Representation learning should be taken into consideration. Possible ways to apply representation learning include: (a) applying word2vec to packet binaries and to categorical numbers and texts; (b) using K-means clustering as a one-hot vector representation instead of randomly encoding texts. We suggest that any change of data representation be justified by explanations or comparison experiments.
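As an illustration of recommendation (2a), the sketch below learns small embeddings for categorical protocol/service tokens with word2vec instead of arbitrary one-hot codes. It assumes the gensim library is available; the token streams and vector size are made-up placeholders, not data from any surveyed work.

```python
# Hedged sketch: learning embeddings for categorical tokens with word2vec
# (gensim), instead of arbitrary one-hot codes.  Token streams are illustrative.
from gensim.models import Word2Vec

token_streams = [
    ["tcp", "http", "SF", "GET"],
    ["udp", "dns", "SF", "query"],
    ["tcp", "ftp_data", "REJ", "PUT"],
]

w2v = Word2Vec(sentences=token_streams, vector_size=16, window=3,
               min_count=1, workers=2, epochs=50)

protocol_vec = w2v.wv["tcp"]    # 16-dim embedding replacing a one-hot code
```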

One critical challenge in this field is the lack of high-quality datasets suitable for deep learning. Also, there is no agreement on how to apply domain knowledge to training deep learning models for network security problems. Researchers have been using different preprocessing methods, data representations and model types, but few of them adequately explain why those methods, representations and models were chosen, especially the data representation.

A closer look at applications of deep learning in malware classification

The goal of malware classification is to identify malicious behaviors in software using static and dynamic features such as control-flow graphs and system API calls. Malware and benign programs can be collected from open datasets and online websites. Both industry and academia have provided approaches to detect malware with static and dynamic analyses. Traditional methods such as behavior-based signatures, dynamic taint tracking, and static data flow analysis require experts to manually investigate unknown files. However, such hand-crafted signatures are not sufficiently effective because attackers can rewrite and reorder the malware. Fortunately, neural networks can automatically detect large-scale malware variants with superior classification accuracy.

In this section, we review twelve very recent representative works that use Deep Learning for malware classification (De La Rosa et al. 2018; Saxe and Berlin 2015; Kolosnjaji et al. 2017; McLaughlin et al. 2017; Tobiyama et al. 2016; Dahl et al. 2013; Nix and Zhang 2017; Kalash et al. 2018; Cui et al. 2018; David and Netanyahu 2015; Rosenberg et al. 2018; Xu et al. 2018). De La Rosa et al. (2018) select three different kinds of static features to classify malware. Saxe and Berlin (2015), Kolosnjaji et al. (2017) and McLaughlin et al. (2017) also use static features from PE files to classify programs. Tobiyama et al. (2016) extract behavioral feature images using an RNN to represent the behaviors of the original programs. Dahl et al. (2013) transform malicious behaviors using representation learning without a neural network. Nix and Zhang (2017) explore an RNN model with API call sequences as program features. Cui et al. (2018) and Kalash et al. (2018) skip Phase II by directly transforming the binary file into an image to classify the file. David and Netanyahu (2015) and Rosenberg et al. (2018) apply dynamic features to analyze malicious behavior. Xu et al. (2018) combine static and dynamic features to represent programs. Among these works, we select two representative ones (De La Rosa et al. 2018; Rosenberg et al. 2018) and identify the four phases in their approaches, as shown in Table 2.

From a close look at the very recent applications using Deep Learning to solve malware classification challenges, we observed the following:

Observation 7.1: Features selected in malware classification were grouped into three categories: static features, dynamic features, and hybrid features.

Typical static features include metadata, PE import features, byte/entropy histograms, strings, and assembly opcode features derived from the PE files (Kolosnjaji et al. 2017; McLaughlin et al. 2017; Saxe and Berlin 2015). De La Rosa et al. (2018) took three kinds of static features: byte-level, basic-level (strings in the file, the metadata table, and the import table of the PE header), and assembly-level features. Some works directly considered binary code as static features (Cui et al. 2018; Kalash et al. 2018).

Different from static features, dynamic features were extracted by executing the files and recording their behaviors during execution. The behaviors of programs, including API function calls and their parameters, files created or deleted, websites and ports accessed, etc., were recorded by a sandbox as dynamic features (David and Netanyahu 2015). Process behaviors, including operation names and their result codes, were extracted in (Tobiyama et al. 2016). Process memory, tri-grams of system API calls and one corresponding input parameter were chosen as dynamic features in (Dahl et al. 2013). An API call sequence for an APK file was another representation of dynamic features (Nix and Zhang 2017; Rosenberg et al. 2018).

Static features and dynamic features were combined as hybrid features ( Xu et al. 2018 ). For static features, Xu and others in ( Xu et al. 2018 ) used permissions, networks, calls, and providers, etc. For dynamic features, they used system call sequences.

Observation 7.2: In most works, Phase II was inevitable because extracted features needed to be vectorized for Deep Learning models.

The one-hot encoding approach was frequently used to vectorize features (Kolosnjaji et al. 2017; McLaughlin et al. 2017; Rosenberg et al. 2018; Tobiyama et al. 2016; Nix and Zhang 2017). Bag-of-words (BoW) and n-gram representations were also considered (Nix and Zhang 2017). Some works borrowed the concept of word frequency from NLP to convert the sandbox file into fixed-size inputs (David and Netanyahu 2015). Hashing features into a fixed-size vector was used as an effective representation method (Saxe and Berlin 2015). Byte histograms and byte-entropy histograms with a sliding-window method were also considered (De La Rosa et al. 2018). In (De La Rosa et al. 2018), De La Rosa and others embedded strings by hashing the ASCII strings to a fixed-size feature vector. For assembly features, they extracted four different levels of granularity: operation level (instruction-flow-graph), block level (control-flow-graph), function level (call-graph), and global level (summarized graphs). Bigram, trigram and four-gram vectors and n-gram graphs were used for the hybrid features (Xu et al. 2018).
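A minimal sketch of two of these vectorization strategies, n-gram bag-of-words and the hashing trick, applied to API-call traces is given below. It assumes scikit-learn; the API names and dimensions are illustrative placeholders, not features from any surveyed work.

```python
# Sketch of two Phase II vectorization strategies for API-call traces:
# (1) n-gram bag-of-words, (2) feature hashing into a fixed-size vector.
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

traces = [
    "CreateFileW WriteFile RegSetValueW CreateRemoteThread",
    "CreateFileW ReadFile CloseHandle",
]

ngram_bow = CountVectorizer(analyzer="word", ngram_range=(1, 3))
X_ngrams = ngram_bow.fit_transform(traces)              # sparse count matrix

hasher = HashingVectorizer(analyzer="word", ngram_range=(1, 3),
                           n_features=2 ** 12, alternate_sign=False)
X_hashed = hasher.transform(traces)                     # fixed 4096-dim sparse vectors
```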

Observation 7.3: With respect to Phase III, most works fell into class 1.

Following the classification tree shown in Fig. 2, most works were classified into class 1, except two works (Dahl et al. 2013; Tobiyama et al. 2016), which belonged to class 3. To reduce the input dimension, Dahl et al. (2013) performed feature selection using mutual information and random projection. Tobiyama et al. (2016) generated behavioral feature images using an RNN.

Observation 7.4: After extracting features, two kinds of neural network architectures were used: a single neural network, or multiple neural networks with a combined loss function.

Hierarchical structures, like convolutional layers, fully connected layers and classification layers, were used to classify programs ( McLaughlin et al. 2017 ; Dahl et al. 2013 ; Nix and Zhang 2017 ; Saxe and Berlin 2015 ; Tobiyama et al. 2016 ; Cui et al. 2018 ; Kalash et al. 2018 ). A deep stack of denoising autoencoders was also introduced to learn programs’ behaviors ( David and Netanyahu 2015 ). De La Rosa and others ( De La Rosa et al. 2018 ) trained three different models with different features to compare which static features are relevant for the classification model. Some works investigated LSTM models for sequential features ( Nix and Zhang 2017 ; Rosenberg et al. 2018 ).

Two networks with different features as inputs were used for malware classification by combining their outputs with a dropout layer and an output layer (Kolosnjaji et al. 2017). In (Kolosnjaji et al. 2017), one network transformed PE metadata and import features using feedforward neurons, while the other applied convolutional layers to opcode sequences. Xu et al. (2018) constructed several networks and combined them using a two-level multiple kernel learning algorithm.
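The sketch below illustrates this two-branch pattern in TensorFlow/Keras: a feed-forward branch for tabular PE metadata and a convolutional branch over opcode ID sequences, merged before a shared softmax head. All dimensions, vocabulary sizes and the number of classes are placeholders, and the sketch is not the exact architecture of any surveyed work.

```python
# Hedged sketch of a two-branch malware classifier: a feed-forward branch for
# PE metadata features and a convolutional branch over opcode ID sequences,
# merged before a shared classification head.  All sizes are placeholders.
from tensorflow.keras import layers, Model

meta_in = layers.Input(shape=(64,), name="pe_metadata")          # tabular features
meta_branch = layers.Dense(128, activation="relu")(meta_in)

opcode_in = layers.Input(shape=(500,), name="opcode_ids")        # opcode sequence
x = layers.Embedding(input_dim=300, output_dim=32)(opcode_in)
x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
opcode_branch = layers.GlobalMaxPooling1D()(x)

merged = layers.concatenate([meta_branch, opcode_branch])
merged = layers.Dropout(0.5)(merged)
output = layers.Dense(10, activation="softmax", name="family")(merged)  # 10 families

model = Model(inputs=[meta_in, opcode_in], outputs=output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```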

Indication 7.1: Except for two works that transform binaries into images (Cui et al. 2018; Kalash et al. 2018), the surveyed works need methods to vectorize the extracted features.

The vectorization methods should not only preserve the syntactic and semantic information in the features, but also match the input requirements of the Deep Learning model.

Indication 7.2: Only a few works have shown how to transform features using representation learning.

Because some works assume that dynamic and static sequences, such as API calls and instructions, have syntactic and semantic structure similar to natural language, representation learning techniques such as word2vec may be useful in malware detection. In addition, for control-flow graphs, call graphs and other graph representations, graph embedding is a potential method for transforming those features.

Though several pieces of research have applied Deep Learning to malware detection, it is hard to compare their methods and performance because of two uncertainties in their approaches. First, the Deep Learning model is a black box: researchers cannot detail which features the model learned or explain why their model works. Second, feature selection and representation affect the model's performance. Because the works do not use the same datasets, researchers cannot show that their approaches, including the selected features and the Deep Learning model, are better than others. The reason why few researchers use open datasets is that existing open malware datasets are out of date and limited. Also, researchers need to crawl benign programs from app stores so that their raw programs are diverse.

A closer look at applications of Deep Learning in system-event-based anomaly detection

System logs record significant events at various critical points, which can be used to debug performance issues and failures. Moreover, log data are available in almost all computer systems and are a valuable resource for understanding system status. There are several challenges in anomaly detection based on system logs. Firstly, the raw log data are unstructured, and their formats and semantics can vary significantly. Secondly, logs are produced by concurrently running tasks; such concurrency makes it hard to apply workflow-based anomaly detection methods. Thirdly, logs contain rich information of diverse types, including text, real values, IP addresses, timestamps, and so on, and the information contained in each log entry also varies. Finally, systems produce massive numbers of logs, and each anomalous event usually involves a large number of logs generated over a long period.

Recently, a large number of scholars have employed deep learning techniques (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019; Bertero et al. 2017) to detect anomalous events in system logs and diagnose system failures. Since the raw log data are unstructured and their formats and semantics vary significantly, the raw logs usually need to be parsed into structured data; the parsed data can then be transformed into a representation that supports an effective deep learning model. Finally, anomalous events can be detected by a deep-learning-based classifier or predictor.

In this section, we review six very recent representative papers that use deep learning for system-event-based anomaly detection (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019; Bertero et al. 2017). DeepLog (Du et al. 2017) utilizes an LSTM to model the system log as a natural language sequence, automatically learning log patterns from normal events and detecting anomalies when log patterns deviate from the trained model. LogAnom (Meng et al. 2019) employs word2vec to extract semantic and syntactic information from log templates, and uses sequential and quantitative features simultaneously. Das et al. (2018) use an LSTM to predict node failures in supercomputing systems from HPC logs. Brown et al. (2018) presented RNN language models augmented with attention for anomaly detection in system logs. LogRobust (Zhang et al. 2019) uses FastText to represent the semantic information of log events, which enables it to identify and handle unstable log events and sequences. Bertero et al. (2017) map log words to a high-dimensional metric space using Google's word2vec algorithm and use them as features for classification. Among these six papers, we select two representative works (Du et al. 2017; Meng et al. 2019) and summarize the four phases of their approaches. We direct interested readers to Table 2 for a concise overview of these two works.

From a close look at the very recent applications using deep learning to solve system-event-based anomaly detection challenges, we observed the following:

Observation 8.1: Most of the surveyed works evaluated their performance using public datasets.

At the time of our survey, only two works (Das et al. 2018; Bertero et al. 2017) used private datasets.

Observation 8.2: Most works in this survey adopted Phase II when parsing the raw log data.

After reviewing the six recently proposed works, we found that five (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019) employed parsing techniques, while only one (Bertero et al. 2017) did not.

DeepLog (Du et al. 2017) parsed the raw logs into different log types using Spell (Du and Li 2016), which is based on the longest common subsequence. Desh (Das et al. 2018) parsed the raw logs into constant messages and variable components. LogAnom (Meng et al. 2019) parsed the raw logs into different log templates using FT-Tree (Zhang et al. 2017) according to frequent combinations of log words. Brown et al. (2018) parsed the raw logs into word and character tokens. LogRobust (Zhang et al. 2019) extracted log events by abstracting away the parameters in each message. Bertero et al. (2017) treated logs as regular text without parsing.
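As a minimal illustration of this parsing step, the sketch below abstracts away variable fields (block IDs, IP addresses, paths, numbers) with regular expressions so that each raw message collapses to a template, which is then mapped to an integer log key. This is far simpler than Spell or FT-Tree, and the example log line and patterns are made up.

```python
# Minimal illustration of Phase II log parsing: abstract away variable fields
# so each raw message maps to a template key.  Patterns and log line are made up.
import re

def to_template(message: str) -> str:
    message = re.sub(r"blk_-?\d+", "<BLK>", message)                  # block ids
    message = re.sub(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", "<IP>", message)
    message = re.sub(r"/[^ ]*", "<PATH>", message)
    message = re.sub(r"\b\d+\b", "<NUM>", message)
    return message

raw = "Received block blk_3587508140051953248 of size 67108864 from 10.251.42.84"
print(to_template(raw))       # -> "Received block <BLK> of size <NUM> from <IP>"

# Each distinct template is then assigned an integer log key for Phase III/IV.
templates = {}
def log_key(message: str) -> int:
    return templates.setdefault(to_template(message), len(templates))
```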

Observation 8.3: Most works have considered and adopted Phase III.

Among these six works, only DeepLog represented the parsed data using one-hot vectors without learning. Moreover, LogAnom (Meng et al. 2019) compared its results with DeepLog. That is, DeepLog belongs to class 1 and LogAnom belongs to class 4 in Fig. 2, while the other four works fall into class 3.

The four works ( Meng et al. 2019 ; Das et al. 2018 ; Zhang et al. 2019 ; Bertero et al. 2017 ) used word embedding techniques to represent the log data. Andy Brown et al. ( Brown et al. 2018 ) employed attention vectors to represent the log messages.

DeepLog (Du et al. 2017) employed one-hot vectors to represent the log keys without learning. We conducted an experiment replacing the one-hot vectors with trained word embeddings.

Observation 8.4: Evaluation results were not compared using the same dataset.

DeepLog (Du et al. 2017) employed one-hot vectors to represent the log keys without learning, i.e., it used Phase II without Phase III. In contrast, Bertero et al. (2017) treated logs as regular text without parsing and used Phase III without Phase II. The precision of both methods is very high, exceeding 95%. Unfortunately, the two methods were evaluated on different datasets.

Observation 8.5: Most works employed LSTM in Phase IV.

Five works (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019) employed LSTM in Phase IV, while Bertero et al. (2017) tried different classifiers, including naive Bayes, neural networks and random forests.

Indication 8.1: Phase II has a positive effect on accuracy if it is well designed.

Since Bertero et al. (2017) treat logs as regular text without parsing, we can say that Phase II is not strictly required. However, most of the works employed parsing techniques to extract structural information and remove useless noise.

Indication 8.2: Most of the recent works use trained representation to represent parsed data.

As shown in Table 3, Phase III is very useful and can improve detection accuracy.

Indication 8.3: Phase II and Phase III cannot be skipped simultaneously.

Neither Phase II nor Phase III is individually required; however, every surveyed method employed at least one of them.

Indication 8.4: Observation 8.3 indicates that the trained word embedding format can improve the anomaly detection accuracy as shown in Table  3 .

Indication 8.5: Observation 8.5 indicates that most of the works adopt LSTM to detect anomaly events.

Most of the works adopt LSTM to detect anomalous events, since log data can be treated as a sequence and there can be lags of unknown duration between important events in a time series. LSTM has feedback connections, so it can process not only single data points but also entire sequences of data.
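The sketch below shows, in TensorFlow/Keras, the kind of LSTM usage described here: a model trained on windows of normal log keys to predict the next key, flagging a key as anomalous if it is not among the top-k predictions. Window size, layer sizes and k are placeholders, not the configuration of any surveyed work.

```python
# Hedged sketch of LSTM-based next-log-key prediction for anomaly detection.
# Window size, layer sizes and top-k are placeholders.
import numpy as np
from tensorflow.keras import layers, Model

NUM_KEYS, WINDOW, TOP_K = 30, 10, 5

inputs = layers.Input(shape=(WINDOW,))
x = layers.Embedding(input_dim=NUM_KEYS, output_dim=16)(inputs)
x = layers.LSTM(64)(x)
outputs = layers.Dense(NUM_KEYS, activation="softmax")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(normal_windows, normal_next_keys, epochs=...)   # train on normal logs only

def is_anomalous(window, next_key):
    """Flag next_key as anomalous if it is not among the top-k predicted keys."""
    probs = model.predict(np.array([window]), verbose=0)[0]
    return next_key not in np.argsort(probs)[-TOP_K:]
```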

In our view, neither Phase II nor Phase III is individually required in system-event-based anomaly detection. However, Phase II can remove noise from the raw data, and Phase III can learn a proper representation of the data; both have a positive effect on anomaly detection accuracy. Since event logs are text data that cannot be fed into a deep learning model directly, Phase II and Phase III cannot be skipped simultaneously.

Deep learning can capture the potentially nonlinear and high-dimensional dependencies among log entries that correspond to abnormal events in the training data. In that way, it can alleviate the challenges mentioned above. However, it still faces several difficulties, for example, how to represent unstructured data accurately and automatically without human knowledge.

A closer look at applications of deep learning in solving memory forensics challenges

In the field of computer security, memory forensics is security-oriented forensic analysis of a computer's memory dump. Memory forensics can be conducted against OS kernels, user-level applications, as well as mobile devices. Memory forensics outperforms traditional disk-based forensics because, although stealthy attacks can erase their footprints on disk, they still have to appear in memory (Song et al. 2018). The memory dump can be considered a sequence of bytes, so memory forensics usually needs to extract security-relevant semantic information from raw memory dumps to find attack traces.

Traditional memory forensic tools fall into two categories: signature scanning and data structure traversal. These methods have several limitations. Firstly, they need expert knowledge of the related data structures to create signatures or traversal rules. Secondly, attackers may directly manipulate data and pointer values in kernel objects to evade detection, and it then becomes even more challenging to create signatures and traversal rules that cannot be easily violated by malicious manipulation, system updates, and random noise. Finally, the high-efficiency requirement often sacrifices robustness; for example, an efficient signature scan tool usually skips large memory regions that are unlikely to contain the relevant objects and relies on simple but easily tampered string constants, yet an important clue may hide in those ignored regions.

In this section, we review four very recent representative works that use Deep Learning for memory forensics (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018). DeepMem (Song et al. 2018) recognizes kernel objects in raw memory dumps by generating abstract representations of kernel objects with a graph-based Deep Learning approach. MDMF (Petrik et al. 2018) detects OS- and architecture-independent malware from memory snapshots with several pre-processing techniques, domain-unaware feature selection, and a suite of machine learning algorithms. MemTri (Michalas and Murray 2017) predicts the likelihood of criminal activity in a memory image using a Bayesian network, based on evidence artefacts generated by several applications. Dai et al. (2018) monitor process memory and classify malware from memory dumps by transforming each dump into a grayscale image and adopting a multi-layer perceptron as the classifier.

Among these four works (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018), two representative works (Song et al. 2018; Petrik et al. 2018) are already summarized phase-by-phase in Table 2. We direct interested readers to Table 2 for a concise overview of these two works.

Our review is centered around the three questions raised in Section 3. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

From a close look at the very recent applications using Deep Learning to solve memory forensics challenges, we observed the following:

Observation 9.1: Most methods used their own datasets for performance evaluation, while none of them used a public dataset.

DeepMem was evaluated on a dataset generated by its authors, who collected a large number of diverse memory dumps and labeled the kernel objects in them using existing memory forensics tools like Volatility. MDMF employed the MalRec dataset by Georgia Tech to generate malicious snapshots, while it created its own dataset of benign memory snapshots running normal software. MemTri ran several Windows 7 virtual machine instances with self-designed suspect activity scenarios to gather memory images. Dai et al. used the Procdump program in the Cuckoo sandbox to extract malware memory dumps. We found that each of the four works generated its own dataset, and none was evaluated on a public dataset.

Observation 9.2: Among the four works (Song et al. 2018; Michalas and Murray 2017; Petrik et al. 2018; Dai et al. 2018), two works (Song et al. 2018; Michalas and Murray 2017) employed Phase II while the other two (Petrik et al. 2018; Dai et al. 2018) did not.

DeepMem (Song et al. 2018) devised a graph representation for a sequence of bytes, taking into account both adjacency and points-to relations, to better model the contextual information in memory dumps. MemTri (Michalas and Murray 2017) first identified the running processes within the memory image that match the target applications, and then employed regular expressions to locate evidence artefacts. MDMF (Petrik et al. 2018) and Dai et al. (2018) transformed the memory dump directly into an image.
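The sketch below illustrates the dump-to-image idea with numpy: the raw bytes of a memory dump are laid out row by row as pixels of a grayscale image of fixed width. The width and the dump file name are illustrative placeholders.

```python
# Hedged sketch: turning a raw memory dump into a grayscale image, in the
# spirit of approaches that feed memory snapshots to a CNN.  Width and file
# name are illustrative choices.
import numpy as np

def dump_to_image(dump_bytes: bytes, width: int = 256) -> np.ndarray:
    buf = np.frombuffer(dump_bytes, dtype=np.uint8)
    height = int(np.ceil(len(buf) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(height, width)        # each byte becomes one pixel

with open("process.dmp", "rb") as f:            # hypothetical dump file
    image = dump_to_image(f.read())
```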

Observation 9.3: Among four works ( Song et al. 2018 ; Michalas and Murray 2017 ; Petrik et al. 2018 ; Dai et al. 2018 ), only DeepMem ( Song et al. 2018 ) employed Phase III for which it used an embedding method to represent a memory graph.

MDMF (Petrik et al. 2018) directly fed the generated memory images into the training of a CNN model. Dai et al. (2018) used the HOG feature descriptor for detecting objects, while MemTri (Michalas and Murray 2017) extracted evidence artefacts as the input of a Bayesian network. In summary, DeepMem belonged to class 3 shown in Fig. 2, while the other three works belonged to class 1.

Observation 9.4: All four works (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018) employed different classifiers, even when the types of input data were the same.

DeepMem chose a fully connected network (FCN) model with multiple hidden layers and ReLU activation functions, followed by a softmax layer as the last layer. MDMF (Petrik et al. 2018) evaluated its performance with both traditional machine learning algorithms and Deep Learning approaches, including CNN and LSTM; the results showed that the accuracy of the different classifiers did not differ significantly. MemTri employed a Bayesian network model designed with three layers, i.e., a hypothesis layer, a sub-hypothesis layer, and an evidence layer. Dai et al. used a multi-layer perceptron with an input layer, a hidden layer and an output layer as the classifier.
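For concreteness, the sketch below shows a Phase IV fully connected classifier of this kind in TensorFlow/Keras: ReLU hidden layers followed by a softmax output. The input dimension, layer widths and number of classes are placeholders rather than the configuration of any surveyed work.

```python
# Hedged sketch of a Phase IV fully connected classifier: ReLU hidden layers
# followed by a softmax output.  All sizes are placeholders.
from tensorflow.keras import layers, models

fcn = models.Sequential([
    layers.Input(shape=(1024,)),                # e.g., flattened memory features
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),      # e.g., target object vs. not
])
fcn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```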

Indication 9.1: Public datasets for evaluating the performance of different Deep Learning methods in memory forensics are lacking.

From Observation 9.1, we find that none of the four works surveyed was evaluated on public datasets.

Indication 9.2: From Observation 9.2, we find that it is disputable whether one should employ Phase II when solving memory forensics problems.

Since both (Petrik et al. 2018) and (Dai et al. 2018) directly transformed a memory dump into an image, Phase II is not required in these two works. However, since a memory dump contains a large amount of useless information, we argue that appropriate preprocessing could improve the accuracy of the trained models.

Indication 9.3: From Observation 9.3, we find that Phase III has received little attention in memory forensics.

Most works did not employ Phase III. Among the four works, only DeepMem ( Song et al. 2018 ) employed Phase III during which it used embeddings to represent a memory graph. The other three works ( Petrik et al. 2018 ; Michalas and Murray 2017 ; Dai et al. 2018 ) did not learn any representations before training a Deep Learning model.

Indication 9.4: For Phase IV in memory forensics, different classifiers can be employed.

Which kind of classifier to use seems to be determined by the features used and their data structures. From Observation 9.4, we find that the four works employed different kinds of classifiers even when the types of input data were the same. It is interesting that MDMF obtained similar results with different classifiers, including traditional machine learning and Deep Learning models. However, the other three works did not discuss why they chose a particular kind of classifier.

Since a memory dump can be considered a sequence of bytes, the data structure of a training example is straightforward. If the memory dump is transformed into a simple form in Phase II, it can be fed directly into the training of a Deep Learning model, and as a result Phase III can be skipped. However, if the memory dump is transformed into a complicated form in Phase II, Phase III could be quite useful in memory forensics.

Regarding the answer to Question 3 in the "Methodology for reviewing the existing works" section, it is interesting that different classifiers can be employed in Phase IV of memory forensics. Moreover, MDMF (Petrik et al. 2018) showed that similar results can be obtained with different kinds of classifiers, although the authors admit that with a larger amount of training data the performance could be improved by Deep Learning.

An end-to-end deep learning model can learn a precise representation of the memory dump automatically, reducing the need for expert knowledge. However, expert knowledge is still needed to represent the data and attacker behavior, and attackers may also directly manipulate data and pointer values in kernel objects to evade detection.

A closer look at applications of deep learning in security-oriented fuzzing

Fuzzing is one of the state-of-the-art techniques used to detect software vulnerabilities. The goal of fuzzing is to find the vulnerabilities that exist in a program by testing as much program code as possible. Due to its nature, fuzzing works best at finding vulnerabilities in programs that take input files, like PDF viewers (Godefroid et al. 2017) or web browsers. A typical fuzzing workflow can be summarized as follows: given several seed input files, the fuzzer mutates (fuzzes) the seed inputs to obtain more input files, with the aim of expanding the overall code coverage of the target program as it executes the mutated files. Although various popular fuzzers already exist (Li et al. 2018), fuzzing still suffers from redundantly testing input files that cannot improve the code coverage rate (Shi and Pei 2019; Rajpal et al. 2017), and some input files mutated by the fuzzer cannot even pass the well-formed file structure test (Godefroid et al. 2017). Recent research has proposed applying Deep Learning in the fuzzing process to solve these problems.

In this section, we review three very recent representative works that use Deep Learning for security-oriented fuzzing. Among the three, two representative works (Godefroid et al. 2017; Shi and Pei 2019) are already summarized phase-by-phase in Table 2. We direct interested readers to Table 2 for a concise overview of those two works.

Observation 10.1: Deep Learning has only been applied in mutation-based fuzzing.

Even though various fuzzing techniques, including symbolic-execution-based fuzzing (Stephens et al. 2016), taint-analysis-based fuzzing (Bekrar et al. 2012) and hybrid fuzzing (Yun et al. 2018), have been proposed, we observed that all the surveyed works employed Deep Learning to assist the most primitive form of fuzzing, namely mutation-based fuzzing. Specifically, they adopted Deep Learning to assist the fuzzing tool's input mutation, commonly in two ways: 1) training Deep Learning models to indicate how to efficiently mutate the input to trigger more execution paths (Shi and Pei 2019; Rajpal et al. 2017); 2) training Deep Learning models to keep the mutated files compliant with the program's basic semantic requirements (Godefroid et al. 2017). Besides, all three works trained different Deep Learning models for different programs, which means that knowledge learned from one program cannot be applied to other programs.

Observation 10.2: All the works in our survey chose training samples in a similar way in Phase I.

The works in this survey had a common practice, i.e., using the input files directly as training samples for the Deep Learning model. Learn&Fuzz (Godefroid et al. 2017) used character-level PDF object sequences as training samples. Neuzz (Shi and Pei 2019) regarded input files directly as byte sequences and fed them into the neural network model. Rajpal et al. (2017) also used byte-level representations of input files as training samples.

Observation 10.3: The works in our survey differed in how they assigned training labels in Phase I.

Despite the similarity of the training samples, there was a large difference in the training labels each work chose to use. Learn&Fuzz (Godefroid et al. 2017) directly used the character sequences of PDF objects as labels, identical to the training samples but shifted by one position, which is a common generative-model technique broadly used in speech and handwriting recognition. Unlike Learn&Fuzz, Neuzz (Shi and Pei 2019) and Rajpal's work (Rajpal et al. 2017) used a bitmap and a heatmap, respectively, as training labels, with the bitmap describing the code coverage status of a certain input and the heatmap describing the efficacy of flipping one or more bytes of the input file. The bitmap, a term well known among fuzzing researchers, was gathered directly from the results of AFL. The heatmap used by Rajpal et al. was generated by comparing the code coverage captured by the bitmap of one seed file with the code coverage captured by the bitmaps of the mutated seed files. If executing the mutated seed files expands code coverage to an acceptable degree, indicated by more "1"s instead of "0"s in the corresponding bitmaps, the byte-level differences between the original seed file and the mutated seed files are highlighted. Since those bytes should be the focus of later mutation, the heatmap was used to denote their locations.

The different label usage in each work was due to the different kinds of knowledge each work wanted to learn. For a better understanding, note that we can simply regard a Deep Learning model as the simulation of a "function". Learn&Fuzz (Godefroid et al. 2017) wanted to learn valid mutations of a PDF file that are compliant with the syntax and semantic requirements of PDF objects. Their model can be seen as a simulation of f(x, θ) = y, where x denotes a sequence of characters in PDF objects and y represents the sequence obtained by shifting the input sequence by one position. Once the model was trained, they generated new PDF object character sequences given a starting prefix. In Neuzz (Shi and Pei 2019), an NN (neural network) model was used for program smoothing, simulating a smooth surrogate function that approximates the discrete branching behaviors of the target program: f(x, θ) = y, where x denotes the program's byte-level input and y represents the corresponding edge-coverage bitmap. In this way, the gradient of the surrogate function is easy to compute, thanks to NNs' support for efficient computation of gradients and higher-order derivatives; gradients can then guide the direction of mutation in order to obtain greater code coverage. In Rajpal and others' work (Rajpal et al. 2017), the model predicts good (and bad) locations to mutate in input files based on past mutations and the corresponding code coverage information. Here, x also denotes the program's byte-level input, but y represents the corresponding heatmap.
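To make the surrogate-gradient idea concrete, the sketch below (PyTorch) shows only the mechanics: a surrogate network maps input bytes to per-edge coverage probabilities, and the gradient of one target edge with respect to the input bytes ranks the byte positions most worth mutating. The sizes and the helper function are illustrative assumptions, not Neuzz's implementation.

```python
# Hedged PyTorch sketch of gradient-guided mutation: a surrogate NN maps input
# bytes to an edge-coverage bitmap; the gradient of one target edge w.r.t. the
# input indicates which byte positions are most worth mutating.
import torch
import torch.nn as nn

INPUT_LEN, NUM_EDGES = 1024, 512

surrogate = nn.Sequential(
    nn.Linear(INPUT_LEN, 2048), nn.ReLU(),
    nn.Linear(2048, NUM_EDGES), nn.Sigmoid(),   # per-edge coverage probability
)
# ... train surrogate on (input bytes / 255, coverage bitmap) pairs ...

def bytes_to_mutate(seed: bytes, target_edge: int, k: int = 16):
    data = list(seed[:INPUT_LEN]) + [0] * max(0, INPUT_LEN - len(seed))
    x = torch.tensor(data, dtype=torch.float32) / 255.0
    x.requires_grad_(True)
    surrogate(x)[target_edge].backward()        # gradient of one edge's output
    return torch.topk(x.grad.abs(), k).indices.tolist()   # most influential bytes
```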

Observation 10.4: Various lengths of input files were handled in Phase II.

Deep Learning models typically accept fixed-length input, whereas the input files for fuzzers often have different lengths. Two different approaches were used among the three works we surveyed: splitting and padding. Learn&Fuzz (Godefroid et al. 2017) dealt with this mismatch by concatenating all the PDF object character sequences together and then splitting the resulting large character sequence into multiple training samples of a fixed size. Neuzz (Shi and Pei 2019) solved this problem by setting a maximum input file size threshold and then padding smaller input files with null bytes. From additional experiments, they also found that a modest threshold gave the best results and that enlarging the input file size did not grant additional accuracy. Aside from preprocessing training samples, Neuzz also preprocessed training labels and reduced the label dimension by merging edges that always appear together into one edge, in order to prevent the multicollinearity problem, which could keep the model from converging to a small loss value. Rajpal and others (Rajpal et al. 2017) used a splitting mechanism similar to Learn&Fuzz to split their input files into either 64-bit or 128-bit chunks. Their chunk size was determined empirically and treated as a tunable parameter of their approach, which did not require sequence concatenation at the beginning.
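The two length-normalization strategies described above reduce to a few lines of Python; the sketch below shows padding to a threshold with null bytes and splitting into fixed-size chunks. The threshold and chunk size are placeholder values.

```python
# Simple helpers for the two length-normalization strategies: pad short inputs
# with null bytes up to a threshold, or split a long sequence into fixed-size
# chunks.  Threshold and chunk size are placeholders.
def pad_to_threshold(data: bytes, threshold: int = 4096) -> bytes:
    return data[:threshold].ljust(threshold, b"\x00")

def split_into_chunks(data: bytes, chunk_size: int = 64):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    if chunks and len(chunks[-1]) < chunk_size:          # pad the last chunk
        chunks[-1] = chunks[-1].ljust(chunk_size, b"\x00")
    return chunks
```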

Observation 10.5: All the works in our survey skipped Phase III.

According to our definition of Phase III, none of the works in our survey considered representation learning; therefore, all three works (Godefroid et al. 2017; Shi and Pei 2019; Rajpal et al. 2017) fall into class 1 shown in Fig. 2. That said, Rajpal and others did consider the numerical representation of byte sequences: they claimed that since one byte of binary data does not always represent a magnitude but sometimes a state, representing a byte as a value ranging from 0 to 255 could be suboptimal, and they used a lower-level 8-bit representation instead.

Indication 10.1: Not altering the input files seems to be the correct approach. In our view, this is due to the nature of fuzzing: since every bit of an input file matters, any slight alteration could either lose important information or add redundant information for the neural network model to learn.

Indication 10.2: Evaluation criteria should be chosen carefully when judging mutation.

When applying Deep Learning to fuzzing, input files are always used as training samples. Through this common practice, researchers share the desire to let the neural network model learn what mutated input files should look like. But the criterion for judging an input file actually has two levels: on the one hand, a good input file should be correct in syntax and semantics; on the other hand, a good input file should be the product of a useful mutation, one that triggers the program to behave differently from previous execution paths. The observation that a fuzzer able to generate semantically correct input files could still be bad at triggering new execution paths was first brought up in Learn&Fuzz (Godefroid et al. 2017). Later works tried to address this problem by using different training labels (Rajpal et al. 2017) or by using a neural network for program smoothing (Shi and Pei 2019). We encourage fuzzing researchers, when using Deep Learning techniques, to keep this problem in mind in order to obtain better fuzzing results.

Indication 10.3: The works in our survey only focus on local knowledge. In brief, some of the existing works (Shi and Pei 2019; Rajpal et al. 2017) leveraged Deep Learning models to learn the relation between a program's input and its behavior, and used the knowledge learned from history to guide future mutation. For clarity, we define knowledge that applies only to one program as local knowledge; in other words, local knowledge cannot direct fuzzing on other programs.

Corresponding to the problems of conventional fuzzing, the advantages of applying DL to fuzzing are that DL's learning ability can make mutated input files follow the designated grammar rules better, and that the ways input files are generated are more directed, helping the fuzzer increase its code coverage with each mutation. However, even if these advantages are clearly demonstrated by the two papers discussed above, some challenges remain, including the mutation-judgment challenge faced both by traditional fuzzing and by fuzzing with DL, and the scalability of fuzzing approaches.

We would like to raise several interesting questions for future researchers: 1) Can the knowledge learned from the fuzzing history of one program be applied to direct testing on other programs? 2) If so, can we suppose that global knowledge across different programs exists, and can we train a model to extract that global knowledge? 3) Is it possible to combine global knowledge and local knowledge when fuzzing programs?

Using high-quality data in Deep Learning is as important as using well-structured deep neural network architectures. That is, obtaining quality data is an important step that should not be skipped, even when resolving security problems with Deep Learning. So far, this study has demonstrated how recent security papers using Deep Learning have adopted data conversion (Phase II) and data representation (Phase III) for different security problems. Our observations and indications provide a clear picture of how security experts generate quality data when using Deep Learning.

Since we did not review all existing security papers using Deep Learning, the generality of our observations and indications is somewhat limited. Note that the selected papers have been published recently at prestigious security and reliability conferences such as USENIX Security and ACM CCS (Shin et al. 2015)-(Das et al. 2018), (Brown et al. 2018; Zhang et al. 2019), (Song et al. 2018; Petrik et al. 2018), (Wang et al. 2019)-(Rajpal et al. 2017). Thus, our observations and indications help to understand how most security experts have used Deep Learning to solve the eight well-known security problems, from program analysis to fuzzing.

Our observations show that raw data should be transformed, through data cleaning, data augmentation and so on, into formats ready for resolving security problems using Deep Learning. Specifically, we observe that Phase II and III methods have mainly been used for the following purposes:

To clean the raw data to make the neural network (NN) models easier to interpret

To reduce the dimensionality of data (e.g., principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE))

To scale input data (e.g., normalization)

To make NN models understand more complex relationships depending on security problems (e.g. memory graphs)

To simply change various raw data formats into a vector format for NN models (e.g. one-hot encoding and word2vec embedding)

In the following, we further discuss the question "What if Phase II is skipped?" rather than the question "Is Phase III always necessary?". This is because most of the selected papers do not consider Phase III methods (76%) or adopt them with no concrete reasoning (19%). Specifically, we demonstrate in depth how Phase II has been adopted according to the eight security problems, different types of data, various NN models and various outputs of NN models. Our key findings are summarized as follows:

How to fit security domain knowledge into raw data has not been well-studied yet.

While raw text data are commonly parsed after embedding, raw binary data are converted using various Phase II methods.

Raw data are commonly converted into a vector format to fit well to a specific NN model using various Phase II methods.

Various Phase II methods are used according to the relationship between output of security problem and output of NN models.

What if phase II is skipped?

From the analysis results of our selected papers for review, we roughly classify Phase II methods into the following four categories.

Embedding: The data conversion methods that intend to convert high-dimensional discrete variables into low-dimensional continuous vectors ( Google Developers 2016 ).

Parsing combined with embedding: The data conversion methods that decompose input data into syntactic components, which are checked for conformance and then embedded.

One-hot encoding: A simple embedding where each datum belonging to a specific category is mapped to a vector of 0s with a single 1. Here, the result is not further reduced to a low-dimensional vector.

Domain-specific data structures: A set of data conversion methods which generate data structures capturing domain-specific knowledge for different security problems, e.g., memory graphs ( Song et al. 2018 ).

Findings on eight security problems

We observe that over 93% of the papers use one of the above-classified Phase II methods. 7% of the papers do not use any of the above-classified methods, and these papers are mostly solving a software fuzzing problem. Specifically, we observe that 35% of the papers use a Category 1 (i.e. embedding) method; 30% of the papers use a Category 2 (i.e. parsing combined with embedding) method; 15% of the papers use a Category 3 (i.e. one-hot encoding) method; and 13% of the papers use a Category 4 (i.e. domain-specific data structures) method. Regarding why one-hot encoding is not widely used, we found that most security data include categorical input values, which are not directly analyzed by Deep Learning models.

From Fig. 3, we also observe that different Phase II methods are used for different security problems. First, PA, ROP and CFI convert raw data into a vector format using embedding because they commonly collect instruction sequences from binary data. Second, NA and SEAD use parsing combined with embedding because raw data such as network traffic and system logs consist of complex attributes with different formats, such as categorical and numerical input values. Third, MF uses various data structures because memory dumps of the memory layout are unstructured. Fourth, fuzzing generally uses no data conversion since Deep Learning models are used to generate new input data with the same format as the original raw data. Finally, MC commonly uses one-hot encoding and embedding because malware binaries and well-structured security log files generally include categorical, numerical and unstructured data. These observations indicate that the type of data strongly influences the choice of Phase II methods. We also observe that, among the eight security problems, only MF commonly transforms raw data into well-structured data embedding specialized security domain knowledge. This indicates that methods for converting raw data into well-structured data that embed security domain knowledge have not yet been studied in depth.

Figure 3: Statistics of Phase II methods for eight security problems

Findings on different data types

Note that, depending on the type of data, one NN model may work better than others. For example, CNN works well with images but not with text. From Fig. 4, for raw binary data, we observe that 51.9%, 22.3% and 11.2% of security papers use embedding, one-hot encoding and Others, respectively. Only 14.9% of security papers, especially those related to fuzzing, do not use any Phase II method. This observation indicates that binary input data, which come in various binary formats, should be converted into an input data type that works well with a specific NN model. From Fig. 4, for raw text data, we also observe that 92.4% of papers use parsing combined with embedding as the Phase II method. Note that, compared with raw binary data whose formats are unstructured, raw text data generally have a well-structured format. Raw text data collected from network traffic may also have various types of attribute values. Thus, raw text data are commonly parsed after embedding to reduce the redundancy and dimensionality of the data.

Figure 4: Statistics of Phase II methods on type of data

Findings on various models of NN

Depending on the type of the converted data, a specific NN model works better than others; for example, CNN works well with images but not with raw text. From Fig. 6b, we observe that the use of embedding for DNN (42.9%), RNN (28.6%) and LSTM (14.3%) models amounts to approximately 85%. This observation indicates that embedding methods are commonly used to generate sequential input data for DNN, RNN and LSTM models. Also, we observe that one-hot encoded data are commonly used as input data for DNN (33.4%), CNN (33.4%) and LSTM (16.7%) models. This indicates that one-hot encoding is one of the common Phase II methods for generating numerical values for image and sequential input data, because many raw input data for security problems have categorical features. We observe that the CNN model (66.7%) uses input data converted with the Others methods to express specific domain knowledge in the input data structure, because general vector formats, including graphs, matrices and so on, can also be used as input to the CNN model.

From Fig. 5b, we observe that DNN, RNN and LSTM models commonly use embedding, one-hot encoding and parsing combined with embedding; for example, 54.6%, 18.2% and 18.2% of these models use embedding, one-hot encoding and parsing combined with embedding, respectively. We also observe that the CNN model is used with various Phase II methods because any vector format, such as an image, can generally be used as input data for the CNN model.

Figure 5: Statistics of Phase II methods for various types of NNs

Figure 6: Statistics of Phase II methods for various output of NN

Findings on output of NN models

According to the relationship between the output of the security problem and the output of the NN, a specific Phase II method may be used. For example, if the output of the security problem is a class (e.g., normal or abnormal), the output of the NN should also be a classification.

From Fig. 6a, we observe that embedding is commonly used to support security problems whose output is classification (100%). Parsing combined with embedding is used to support security problems whose output is object detection (41.7%) or classification (58.3%). One-hot encoding is used only for classification (100%). These observations indicate that classification of given input data is the most common output obtained using Deep Learning under the various Phase II methods.

From Fig. 6b, we observe that security problems whose output is classification commonly use embedding (43.8%) and parsing combined with embedding (21.9%) as the Phase II method. We also observe that security problems whose output is object detection commonly use parsing combined with embedding (71.5%). However, security problems whose output is data generation commonly do not use Phase II methods. These observations indicate that a specific Phase II method tends to be used according to the relationship between the output of the security problem and the use of NN models.

Further areas of investigation

Since Deep Learning models are stochastic, each time the same model is fit, even on the same data, it might give different outcomes. This is because deep neural networks use random values such as random initial weights. If we had all possible data for every security problem, the predictions would not be random; since in practice we have limited sample data, we need to obtain best-effort prediction results using the Deep Learning model that fits the given security problem.

How can we get best-effort prediction results from Deep Learning models for different security problems? Let us begin by discussing the stability of the evaluation results of our selected papers. Next, we will elaborate on the influence of security domain knowledge on the prediction results of Deep Learning models. Finally, we will discuss some common issues in these fields.

How stable are evaluation results?

When evaluating neural network models, three methods are commonly used: train-test split, train-validation-test split, and k-fold cross validation. A train-test split method splits the data into two parts, i.e., training and test data. Even though a train-test split yields stable estimates with a large amount of data, the estimates vary with a small amount of data. A train-validation-test split method splits the data into three parts, i.e., training, validation and test data; validation data are used to estimate predictions on unknown data. k-fold cross validation produces k different sets of predictions from k different evaluation folds. Since k-fold cross validation averages the expected performance of the NN model over the k validation folds, the evaluation result is closer to the actual performance of the model.
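The scikit-learn sketch below contrasts a single train/test split with 5-fold cross-validation, which reports the mean and spread over folds; the synthetic data and the MLP classifier are placeholders.

```python
# Hedged sketch: a single train/test split vs. 5-fold cross-validation.
# Data and classifier are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
single_split_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

scores = cross_val_score(clf, X, y, cv=5)            # 5-fold cross-validation
print(f"single split: {single_split_acc:.3f}, "
      f"5-fold: {scores.mean():.3f} +/- {scores.std():.3f}")
```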

From the analysis of our selected papers, we observe that 40.0% and 32.5% of them are evaluated using a train-test split and a train-validation-test split, respectively. Only 17.5% of the selected papers are evaluated using k-fold cross validation. This observation implies that even though the selected papers often report more than 99% accuracy or an F1 score of 0.99, most solutions using Deep Learning might not show the same performance on noisy data with randomness.

To get stable prediction results of Deep Learning models for different security problems, we might reduce the influence of the randomness of data on Deep Learning models. At least, it is recommended to consider the following methods:

Do experiments using the same data many times: To get a stable prediction with a small amount of sample data, we might control the randomness of the data by repeating experiments on the same data many times.

Use cross validation methods, e.g., k-fold cross validation: The expected average and variance from k-fold cross validation estimate how stable the proposed model is.

How does security domain knowledge influence the performance of security solutions using deep learning?

When selecting an NN model to analyze an application dataset, e.g., the MNIST dataset (LeCun and Cortes 2010), we should understand that the problem is to classify a handwritten digit given as a 28×28 grayscale image. Also, to solve the problem with high classification accuracy, it is important to know which part of each handwritten digit mainly influences the outcome of the problem, i.e., domain knowledge.

While solving a security problem, knowing and using security domain knowledge for each security problem is also important, for the following reasons (we label the observations and indications related to domain knowledge with '∗'):

Firstly, dataset generation, preprocessing and feature selection highly depend on domain knowledge. Different from image classification and natural language processing, raw data in the security domain cannot be fed into the NN model directly. Researchers need to adopt strong domain knowledge to generate, extract, or clean the training set. Also, in some works, domain knowledge is adopted in data labeling because labels for data samples are not straightforward.

Secondly, domain knowledge helps with the selection of DL models and its hierarchical structure. For example, the neural network architecture (hierarchical and bi-directional LSTM) designed in DEEPVSA ( Guo et al. 2019 ) is based on the domain knowledge in the instruction analysis.

Thirdly, domain knowledge helps to speed up the training process. For instance, adopting strong domain knowledge to clean the training set can speed up training while keeping the same performance. However, due to the influence of the randomness of data on Deep Learning models, domain knowledge should be adopted carefully to avoid a potential decrease in accuracy.

Finally, domain knowledge helps with the interpretability of models' predictions. Recently, researchers have tried to explore the interpretability of deep learning models in security areas. For instance, LEMNA (Guo et al. 2018) and EKLAVYA (Chua et al. 2017) explain, from different perspectives, how predictions are made by their models. By enhancing the trained models' interpretability, they can improve their approaches' accuracy and security. The explanation of the relation between input, hidden state and final output is based on domain knowledge.

Common challenges

In this section, we discuss the common challenges of applying DL to solving security problems. These challenges are shared by at least the majority of the works, if not all of them. Generally, we observed the following common challenges in our survey:

The raw data collected from the software or system usually contains lots of noise.

The collected raw data are untidy, for instance, variable-length sequences such as instruction traces.

Hierarchical data syntax/structure. As discussed in Section 3, the information may not simply be encoded in a single layer; rather, it is encoded hierarchically, and the syntax is complex.

Dataset generation is challenging in some scenarios. Therefore, the generated training data might be less representative or unbalanced.

Different from applications of DL in image classification and natural language processing, where the data are visible or understandable, the relation between a data sample and its label is not intuitive and is hard to explain.

Availability of trained model and quality of dataset.

Finally, we investigate the availability of the trained models and the quality of the datasets. Generally, the availability of a trained model affects its adoption in practice, and the quality of the training and testing sets affects the credibility of the testing results and of comparisons between different works. Therefore, we collect relevant information to answer the following four questions and show the statistics in Table 4:

Is the paper’s source code publicly available?

Is the raw data used to generate the dataset publicly available?

Is the dataset itself publicly available?

What is the quality of the dataset?

We observe that the percentages of open-source code and public datasets in our surveyed fields are both low, which makes it challenging to reproduce the proposed schemes, compare different works, and adopt them in practice. Specifically, the statistics show that (1) the percentage of open-source code in our surveyed fields is low: only 6 out of 16 papers published their model’s source code; (2) the percentage of public datasets is low: even though the raw data in half of the works is publicly available, only 4 out of 16 fully or partially published their dataset; and (3) the quality of the datasets is not guaranteed; for instance, most of the datasets are unbalanced.

The performance of security solutions, even those using Deep Learning, might vary across datasets. Traditionally, when evaluating different NN models in image classification, standard datasets such as MNIST for recognizing 10 handwritten digits and CIFAR10 (Krizhevsky et al. 2010) for recognizing 10 object classes are used to compare the performance of different NN models. However, there are no known standard datasets for evaluating NN models on different security problems. Due to this limitation, we observe that most security papers using Deep Learning do not compare the performance of different security solutions even when they address the same security problem. Thus, it is recommended to generate and use a standard dataset for each specific security problem for comparison. In conclusion, we think that there are three aspects that need to be improved in future research:

Developing standard datasets.

Publishing source code and datasets.

Improving the interpretability of models.

This paper seeks to provide a dedicated review of very recent research on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending ROP attacks, achieving CFI, defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security. Our observations of the reviewed works indicate that the literature on using Deep Learning techniques to solve computer security challenges is still at an early stage of development.

Availability of data and materials

Not applicable.

We refer readers to ( Wang and Liu 2019 ) which systemizes the knowledge of protections by CFI schemes.

Abadi, M, Budiu M, Erlingsson Ú, Ligatti J (2009) Control-Flow Integrity Principles, Implementations, and Applications. ACM Trans Inf Syst Secur (TISSEC) 13(1):4.


Bao, T, Burket J, Woo M, Turner R, Brumley D (2014) BYTEWEIGHT: Learning to Recognize Functions in Binary Code In: 23rd USENIX Security Symposium (USENIX Security 14), 845–860.. USENIX Association, San Diego.


Bekrar, S, Bekrar C, Groz R, Mounier L (2012) A Taint Based Approach for Smart Fuzzing In: 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation.. IEEE. https://doi.org/10.1109/icst.2012.182 .

Bengio, Y, Courville A, Vincent P (2013) Representation Learning: A Review and New Perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798–1828.

Bertero, C, Roy M, Sauvanaud C, Tredan G (2017) Experience Report: Log Mining Using Natural Language Processing and Application to Anomaly Detection In: 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE).. IEEE. https://doi.org/10.1109/issre.2017.43 .

Brown, A, Tuor A, Hutchinson B, Nichols N (2018) Recurrent Neural Network Attention Mechanisms for Interpretable System Log Anomaly Detection In: Proceedings of the First Workshop on Machine Learning for Computing Systems, MLCS’18, 1:1–1:8.. ACM, New York.

Böttinger, K, Godefroid P, Singh R (2018) Deep Reinforcement Fuzzing In: 2018 IEEE Security and Privacy Workshops (SPW), pages 116–122.. IEEE. https://doi.org/10.1109/spw.2018.00026 .

Chen, L, Sultana S, Sahita R (2018) Henet: A Deep Learning Approach on Intel Ⓡ Processor Trace for Effective Exploit Detection In: 2018 IEEE Security and Privacy Workshops (SPW).. IEEE. https://doi.org/10.1109/spw.2018.00025 .

Chua, ZL, Shen S, Saxena P, Liang Z (2017) Neural Nets Can Learn Function Type Signatures from Binaries In: 26th USENIX Security Symposium (USENIX Security 17), 99–116.. USENIX Association. https://dl.acm.org/doi/10.5555/3241189.3241199 .

Cui, Z, Xue F, Cai X, Cao Y, Wang GG, Chen J (2018) Detection of Malicious Code Variants Based on Deep Learning. IEEE Trans Ind Inform 14(7):3187–3196.

Dahl, GE, Stokes JW, Deng L, Yu D (2013) Large-scale Malware Classification using Random Projections and Neural Networks In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).. IEEE. https://doi.org/10.1109/icassp.2013.6638293 .

Dai, Y, Li H, Qian Y, Lu X (2018) A Malware Classification Method Based on Memory Dump Grayscale Image. Digit Investig 27:30–37.

Das, A, Mueller F, Siegel C, Vishnu A (2018) Desh: Deep Learning for System Health Prediction of Lead Times to Failure in HPC In: Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing, HPDC ’18, 40–51.. ACM, New York.


David, OE, Netanyahu NS (2015) DeepSign: Deep Learning for Automatic Malware Signature Generation and Classification In: 2015 International Joint Conference on Neural Networks (IJCNN).. IEEE. https://doi.org/10.1109/ijcnn.2015.7280815 .

De La Rosa, L, Kilgallon S, Vanderbruggen T, Cavazos J (2018) Efficient Characterization and Classification of Malware Using Deep Learning In: 2018 Resilience Week (RWS).. IEEE. https://doi.org/10.1109/rweek.2018.8473556 .

Du, M, Li F (2016) Spell: Streaming Parsing of System Event Logs In: 2016 IEEE 16th International Conference on Data Mining (ICDM).. IEEE. https://doi.org/10.1109/icdm.2016.0103 .

Du, M, Li F, Zheng G, Srikumar V (2017) DeepLog: Anomaly Detection and Diagnosis from System Logs Through Deep Learning In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, 1285–1298.. ACM, New York.

Faker, O, Dogdu E (2019) Intrusion Detection Using Big Data and Deep Learning Techniques In: Proceedings of the 2019 ACM Southeast Conference (ACM SE ’19), 86–93.. ACM. https://doi.org/10.1145/3299815.3314439 .

Ghosh, AK, Wanken J, Charron F (1998) Detecting Anomalous and Unknown Intrusions against Programs In: Proceedings 14th annual computer security applications conference (Cat. No. 98Ex217), 259–267.. IEEE, Washington, DC.

Godefroid, P, Peleg H, Singh R (2017) Learn&Fuzz: Machine Learning for Input Fuzzing In: 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE).. IEEE. https://doi.org/10.1109/ase.2017.8115618 .

Google Developers (2016) Embeddings . https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture .

Guo, W, Mu D, Xu J, Su P, Wang G, Xing X (2018) Lemna: Explaining deep learning based security applications In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 364–379. https://doi.org/10.1145/3243734.3243792 .

Guo, W, Mu D, Xing X, Du M, Song D (2019) DEEPVSA: Facilitating Value-set Analysis with Deep Learning for Postmortem Program Analysis In: 28th USENIX Security Symposium (USENIX Security 19), 1787–1804.. USENIX Association, Santa Clara, CA. https://www.usenix.org/conference/usenixsecurity19/presentation/guo .

Heller, KA, Svore KM, Keromytis AD, Stolfo SJ (2003) One Class Support Vector Machines for Detecting Anomalous Windows Registry Accesses In: Proceedings of the Workshop on Data Mining for Computer Security.. IEEE, Dallas, TX.

Horwitz, S (1997) Precise Flow-insensitive May-alias Analysis is NP-hard. ACM Trans Program Lang Syst 19(1):1–6.

Hu, W, Liao Y, Vemuri VR (2003) Robust Anomaly Detection using Support Vector Machines In: Proceedings of the international conference on machine learning, 282–289.. Citeseer, Washington, DC.

IDS 2017 Datasets (2019). https://www.unb.ca/cic/datasets/ids-2017.html .

Kalash, M, Rochan M, Mohammed N, Bruce NDB, Wang Y, Iqbal F (2018) Malware Classification with Deep Convolutional Neural Networks In: 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS), 1–5. https://doi.org/10.1109/NTMS.2018.8328749 .

Kiriansky, V, Bruening D, Amarasinghe SP, et al. (2002) Secure Execution via Program Shepherding In: USENIX Security Symposium, volume 92, page 84.. USENIX Association, Monterey, CA.

Kolosnjaji, B, Eraisha G, Webster G, Zarras A, Eckert C (2017) Empowering Convolutional Networks for Malware Classification and Analysis. Proc Int Jt Conf Neural Netw 2017-May:3838–3845.

Krizhevsky, A, Nair V, Hinton G (2010) CIFAR-10 (Canadian Institute for Advanced Research). https://www.cs.toronto.edu/~kriz/cifar.html .

LeCun, Y, Cortes C (2010) MNIST Handwritten Digit Database. http://yann.lecun.com/exdb/mnist/ .

Li, J, Zhao B, Zhang C (2018) Fuzzing: A Survey. Cybersecurity 1(1):6.

Li, X, Hu Z, Fu Y, Chen P, Zhu M, Liu P (2018) ROPNN: Detection of ROP Payloads Using Deep Neural Networks. arXiv preprint arXiv:1807.11110.

McLaughlin, N, Martinez Del Rincon J, Kang BJ, Yerima S, Miller P, Sezer S, Safaei Y, Trickel E, Zhao Z, Doupe A, Ahn GJ (2017) Deep Android Malware Detection In: Proceedings of the 7th ACM Conference on Data and Application Security and Privacy, 301–308. https://doi.org/10.1145/3029806.3029823 .

Meng, W, Liu Y, Zhu Y, Zhang S, Pei D, Liu Y, Chen Y, Zhang R, Tao S, Sun P, Zhou R (2019) Loganomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence.. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2019/658 .

Michalas, A, Murray R (2017) MemTri: A Memory Forensics Triage Tool Using Bayesian Network and Volatility In: Proceedings of the 2017 International Workshop on Managing Insider Security Threats, MIST ’17, pages 57–66.. ACM, New York.

Millar, K, Cheng A, Chew HG, Lim C-C (2018) Deep Learning for Classifying Malicious Network Traffic In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, 156–161.. Springer. https://doi.org/10.1007/978-3-030-04503-6_15 .

Moustafa, N, Slay J (2015) UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems (UNSW-NB15 Network Data Set) In: 2015 Military Communications and Information Systems Conference (MilCIS).. IEEE. https://doi.org/10.1109/milcis.2015.7348942 .

Nguyen, MH, Nguyen DL, Nguyen XM, Quan TT (2018) Auto-Detection of Sophisticated Malware using Lazy-Binding Control Flow Graph and Deep Learning. Comput Secur 76:128–155.

Nix, R, Zhang J (2017) Classification of Android Apps and Malware using Deep Neural Networks. Proc Int Jt Conf Neural Netw 2017-May:1871–1878.

NSCAI Intern Report for Congress (2019). https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view .

Petrik, R, Arik B, Smith JM (2018) Towards Architecture and OS-Independent Malware Detection via Memory Forensics In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, pages 2267–2269.. ACM, New York.

Phan, AV, Nguyen ML, Bui LT (2017) Convolutional Neural Networks over Control Flow Graphs for Software defect prediction In: 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), 45–52.. IEEE. https://doi.org/10.1109/ictai.2017.00019 .

Rajpal, M, Blum W, Singh R (2017) Not All Bytes are Equal: Neural Byte Sieve for Fuzzing. arXiv preprint arXiv:1711.04596.

Rosenberg, I, Shabtai A, Rokach L, Elovici Y (2018) Generic Black-box End-to-End Attack against State of the Art API Call based Malware Classifiers In: Research in Attacks, Intrusions, and Defenses, 490–510.. Springer. https://doi.org/10.1007/978-3-030-00470-5_23 .

Salwant, J (2015) ROPGadget. https://github.com/JonathanSalwan/ROPgadget .

Saxe, J, Berlin K (2015) Deep Neural Network based Malware Detection using Two Dimensional Binary Program Features In: 2015 10th International Conference on Malicious and Unwanted Software (MALWARE).. IEEE. https://doi.org/10.1109/malware.2015.7413680 .

Shacham, H, et al. (2007) The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86) In: ACM conference on Computer and communications security, pages 552–561. https://doi.org/10.1145/1315245.1315313 .

Shi, D, Pei K (2019) NEUZZ: Efficient Fuzzing with Neural Program Smoothing. IEEE Secur Priv.

Shin, ECR, Song D, Moazzezi R (2015) Recognizing Functions in Binaries with Neural Networks In: 24th USENIX Security Symposium (USENIX Security 15).. USENIX Association. https://dl.acm.org/doi/10.5555/2831143.2831182 .

Sommer, R, Paxson V (2010) Outside the Closed World: On Using Machine Learning For Network Intrusion Detection In: 2010 IEEE Symposium on Security and Privacy (S&P).. IEEE. https://doi.org/10.1109/sp.2010.25 .

Song, W, Yin H, Liu C, Song D (2018) DeepMem: Learning Graph Neural Network Models for Fast and Robust Memory Forensic Analysis In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, 606–618.. ACM, New York.

Stephens, N, Grosen J, Salls C, Dutcher A, Wang R, Corbetta J, Shoshitaishvili Y, Kruegel C, Vigna G (2016) Driller: Augmenting Fuzzing Through Selective Symbolic Execution In: Proceedings 2016 Network and Distributed System Security Symposium.. Internet Society. https://doi.org/10.14722/ndss.2016.23368 .

Tan, G, Jaeger T (2017) CFG Construction Soundness in Control-Flow Integrity In: Proceedings of the 2017 Workshop on Programming Languages and Analysis for Security - PLAS ’17.. ACM. https://doi.org/10.1145/3139337.3139339 .

Tobiyama, S, Yamaguchi Y, Shimada H, Ikuse T, Yagi T (2016) Malware Detection with Deep Neural Network Using Process Behavior. Proc Int Comput Softw Appl Conf 2:577–582.

Unicorn-The ultimate CPU emulator (2015). https://www.unicorn-engine.org/ .

Ustebay, S, Turgut Z, Aydin MA (2019) Cyber Attack Detection by Using Neural Network Approaches: Shallow Neural Network, Deep Neural Network and AutoEncoder In: Computer Networks, 144–155.. Springer. https://doi.org/10.1007/978-3-030-21952-9_11 .

Varenne, R, Delorme JM, Plebani E, Pau D, Tomaselli V (2019) Intelligent Recognition of TCP Intrusions for Embedded Micro-controllers In: International Conference on Image Analysis and Processing, 361–373.. Springer. https://doi.org/10.1007/978-3-030-30754-7_36 .

Wang, Z, Liu P (2019) GPT Conjecture: Understanding the Trade-offs between Granularity, Performance and Timeliness in Control-Flow Integrity. arXiv preprint arXiv:1911.07828.

Wang, Y, Wu Z, Wei Q, Wang Q (2019) NeuFuzz: Efficient Fuzzing with Deep Neural Network. IEEE Access 7:36340–36352.

Xu, W, Huang L, Fox A, Patterson D, Jordan MI (2009) Detecting Large-Scale System Problems by Mining Console Logs In: Proceedings of the ACM SIGOPS 22Nd Symposium on Operating Systems Principles SOSP ’09, 117–132.. ACM, New York.

Xu, X, Liu C, Feng Q, Yin H, Song L, Song D (2017) Neural Network-Based Graph Embedding for Cross-Platform Binary Code Similarity Detection In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 363–376.. ACM. https://doi.org/10.1145/3133956.3134018 .

Xu, L, Zhang D, Jayasena N, Cavazos J (2018) HADM: Hybrid Analysis for Detection of Malware 16:702–724.

Xu, X, Ghaffarinia M, Wang W, Hamlen KW, Lin Z (2019) CONFIRM: Evaluating Compatibility and Relevance of Control-flow Integrity Protections for Modern Software In: 28th USENIX Security Symposium (USENIX Security 19), pages 1805–1821.. USENIX Association, Santa Clara.

Yagemann, C, Sultana S, Chen L, Lee W (2019) Barnum: Detecting Document Malware via Control Flow Anomalies in Hardware Traces In: Lecture Notes in Computer Science, 341–359.. Springer. https://doi.org/10.1007/978-3-030-30215-3_17 .

Yin, C, Zhu Y, Fei J, He X (2017) A Deep Learning Approach for Intrusion Detection using Recurrent Neural Networks. IEEE Access 5:21954–21961.

Yuan, X, Li C, Li X (2017) DeepDefense: Identifying DDoS Attack via Deep Learning In: 2017 IEEE International Conference on Smart Computing (SMARTCOMP).. IEEE. https://doi.org/10.1109/smartcomp.2017.7946998 .

Yun, I, Lee S, Xu M, Jang Y, Kim T (2018) QSYM : A Practical Concolic Execution Engine Tailored for Hybrid Fuzzing In: 27th USENIX Security Symposium (USENIX Security 18), pages 745–761.. USENIX Association, Baltimore.

Zhang, S, Meng W, Bu J, Yang S, Liu Y, Pei D, Xu J, Chen Y, Dong H, Qu X, Song L (2017) Syslog Processing for Switch Failure Diagnosis and Prediction in Datacenter Networks In: 2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS).. IEEE. https://doi.org/10.1109/iwqos.2017.7969130 .

Zhang, J, Chen W, Niu Y (2019) DeepCheck: A Non-intrusive Control-flow Integrity Checking based on Deep Learning. arXiv preprint arXiv:1905.01858.

Zhang, X, Xu Y, Lin Q, Qiao B, Zhang H, Dang Y, Xie C, Yang X, Cheng Q, Li Z, Chen J, He X, Yao R, Lou J-G, Chintalapati M, Shen F, Zhang D (2019) Robust Log-based Anomaly Detection on Unstable Log Data In: Proceedings of the 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2019, pages 807–817.. ACM, New York.

Zhang, Y, Chen X, Guo D, Song M, Teng Y, Wang X (2019) PCCN: Parallel Cross Convolutional Neural Network for Abnormal Network Traffic Flows Detection in Multi-Class Imbalanced Network Traffic Flows. IEEE Access 7:119904–119916.


Acknowledgments

We are grateful to the anonymous reviewers for their useful comments and suggestions.

This work was supported by ARO W911NF-13-1-0421 (MURI), NSF CNS-1814679, and ARO W911NF-15-1-0576.

Author information

Authors and affiliations.

The Pennsylvania State University, Pennsylvania, USA

Yoon-Ho Choi, Peng Liu, Zitong Shang, Haizhou Wang, Zhilong Wang, Lan Zhang & Qingtian Zou

Pusan National University, Busan, Republic of Korea

Yoon-Ho Choi

Wuhan University of Technology, Wuhan, China

Junwei Zhou


Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Peng Liu .

Ethics declarations

Competing interests.

PL is currently serving on the editorial board for Journal of Cybersecurity.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Choi, YH., Liu, P., Shang, Z. et al. Using deep learning to solve computer security challenges: a survey. Cybersecur 3 , 15 (2020). https://doi.org/10.1186/s42400-020-00055-5

Download citation

Received : 11 March 2020

Accepted : 17 June 2020

Published : 10 August 2020

DOI : https://doi.org/10.1186/s42400-020-00055-5


  • Deep learning
  • Security-oriented program analysis
  • Return-oriented programming attacks
  • Control-flow integrity
  • Network attacks
  • Malware classification
  • System-event-based anomaly detection
  • Memory forensics
  • Fuzzing for software security


The research of computer network security and protection strategy


Jian He; The research of computer network security and protection strategy. AIP Conf. Proc. 8 May 2017; 1839 (1): 020173. https://doi.org/10.1063/1.4982538


With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network security are complex, and maintaining security is a systematic, highly challenging task. Addressing the safety and reliability problems of computer network systems, this paper draws on practical work experience to discuss network security threats, security technologies, and system design principles, and offers suggestions and measures intended to help ordinary users raise their security awareness and master basic network security techniques.


Computer Network Security: Risks and Protective Measures


With the ushering in of the information age, a wide range of technologies in the field of computer science have emerged, and many network techniques are broadly utilized across different sectors of society. In their wake they have not only produced great economic and social benefits but also greatly promoted rapid development in culture and systems. While the openness and adaptability of computer networks have brought us many benefits in both our lives and our work, they have also caused a series of information security problems that put us at risk. With the fast advancement of computer network technology, ensuring the security of the computer becomes an essential factor that cannot be disregarded. Three major threats confronting computer network security are threats from hackers, computer viruses, and denial-of-service attacks, while measures contributing to the safety of the network include legal, technical, and management measures. This paper seeks to investigate the primary risks confronting computer network security and to address system security innovations and advances that help tackle the hidden risks of current basic system security. KEYWORDS: Information age, Computer network, Information security



Research on Network Security Application Based on Data Mining Algorithm

  • Conference paper
  • First Online: 26 June 2024


  • JunJun Zhang &
  • Qian Li

Part of the book series: Lecture Notes in Electrical Engineering ((LNEE,volume 1216))

Included in the following conference series:

  • International Conference on Innovative Computing

Security applications are critical in network security research; however, they suffer from inaccurate performance positioning. The typical bee colony algorithm cannot address this inaccurate positioning issue, and its results are insufficient. Therefore, a data-mining-algorithm-based study of network security applications is presented and assessed. First, statistical set theory is used to discover the influencing factors, and the indicators are split according to the requirements of the security applications in order to reduce interference factors. Statistical set theory is then used to build a data-mining-based security application scheme, and the outcomes of the security applications are thoroughly examined. MATLAB simulation results show that, under the evaluation conditions, the data mining algorithm outperforms the standard bee colony algorithm in terms of the accuracy and time of identifying influencing variables.



Acknowledgements

Research Grant Program for Educational Teaching Reform in Hainan Higher Education Institutions (Project No. Hnjq2022-8).

Author information

Authors and affiliations.

School of International Communication, Hainan University, Haikou, 570100, China

JunJun Zhang

School of Cyberspace Security, Hainan University, Haikou, 570100, China

JunJun Zhang & Qian Li


Corresponding author

Correspondence to JunJun Zhang .

Editor information

Editors and affiliations.

Computer Science and Engineering, University of Aizu, Aizuwakamatsu Shi, Japan

Department of Computer Science and Information Engineering, National Taichung University of Science, North District, Taiwan

Hao Shang Ma

Department of Computer Science and Information Management, Providence University, Taichung, Taiwan

Yu-Wei Chan

Humanitas College, Kyung Hee University, Dongdaemun-gu, Korea (Republic of)

Hwa-Young Jeong

Rights and permissions

Reprints and permissions

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Zhang, J., Li, Q. (2024). Research on Network Security Application Based on Data Mining Algorithm. In: Pei, Y., Ma, H.S., Chan, YW., Jeong, HY. (eds) Proceedings of Innovative Computing 2024, Vol. 3. IC 2024. Lecture Notes in Electrical Engineering, vol 1216. Springer, Singapore. https://doi.org/10.1007/978-981-97-4121-2_25

Download citation

DOI : https://doi.org/10.1007/978-981-97-4121-2_25

Published : 26 June 2024

Publisher Name : Springer, Singapore

Print ISBN : 978-981-97-4120-5

Online ISBN : 978-981-97-4121-2

eBook Packages : Computer Science Computer Science (R0)



COMMENTS

  1. (PDF) ADVANCES IN NETWORK SECURITY: A COMPREHENSIVE ...

    The report proposes new research directions to advance research. This paper discusses network security for secure data communication. ... Research in Computer ... is a review of papers with ...

  2. Research paper A comprehensive review study of cyber-attacks and cyber

    Network Security: Network security protects the computer network from disruptors, which can be malware or hacking. Network security is a set of solutions that enable organizations to keep computer networks out of the reach of hackers, organized attackers, and malware (Zhang, 2021). Download : Download high-res image (282KB)

  3. Present and Future of Network Security Monitoring

    Abstract: Network Security Monitoring (NSM) is a popular term to refer to the detection of security incidents by monitoring the network events. An NSM system is central for the security of current networks, given the escalation in sophistication of cyberwarfare. In this paper, we review the state-of-the-art in NSM, and derive a new taxonomy of the functionalities and modules in an NSM system.

  4. Full article: Network security

    The goal of this paper is to communicate an updated perspective of network security for organizations, and researchers in the field and present some recommendations to tackle the current situation of security threats. Keywords: cyber attacks. data breaches. intrusions. network security. security intelligence.

  5. Cyber security: State of the art, challenges and future directions

    Wasyihun Sema Admass(MSc):-Received a B.Sc. Degree in Information Technology from Assosa University in 2017, and the M.Sc.His-research and teaching interests include the theory and application of Machine Learning, the internet of Things, Deep Learning, Computer Vision, Wireless and Network security, and Natural Language processing. I am ...

  6. Exploring the landscape of network security: a comparative ...

    The field of computer networking is experiencing rapid growth, accompanied by the swift advancement of internet tools. As a result, people are becoming more aware of the importance of network security. One of the primary concerns in ensuring security is the authority over domains, and network owners are striving to establish a common language to exchange security information and respond ...

  7. Network Security: A Brief Overview of Evolving ...

    Abstract: Network Security strategies evolve parallel with the advancement and development of computer systems and services. The. ubiquity of ICT devices and services o ffers undeniab le ...

  8. Free Full-Text

    More research is being dedicated to systematising security assurance methodologies and developing open tools for increasing the testing capacity of networking stakeholders, including mobile networks (e.g., EvORAN, Evaluation of Open RAN network equipment including underlying, which is a project funded by the European Commission through Europe ...

  9. (PDF) Research on Computer Network Security Problems and Protective

    This paper summarizes the network security in the era of big data, analyzes the factors affecting computer network security under the background of big data era, and puts forward some suggestions ...

  10. (PDF) Network Security and Cryptography Challenges and ...

    secret. The most important goals of modern cryptography are. the preservation of users' privacy, the maintenance of data. integrity, and the verification of information validity. [4]. Finding a ...

  11. A review on graph-based approaches for network security ...

    This survey paper provides a comprehensive overview of recent research and development in network security that uses graphs and graph-based data representation and analytics. The paper focuses on the graph-based representation of network traffic records and the application of graph-based analytics in intrusion detection and botnet detection. The paper aims to answer several questions related ...

  12. Computer Network Security and Technology Research

    The rapid development of computer network system brings both a great convenience and new security threats for users. Network security problem generally includes network system security and data security. Specifically, it refers to the reliability of network system, confidentiality, integrity and availability of data information in the system. Network security problem exists through all the ...

  13. COSE

    The International Source of Innovation for the Information Security and IT Audit Professional. Computers & Security is one of the most respected journals in IT security, being recognized worldwide as THE primary source of reference for IT security research and applications expertise.. Computers & Security provides the IT security community with a unique blend of leading edge research and sound ...

  14. Analysis and Protection of Computer Network Security Issues

    Abstract: With the rapid development of information technology, the influence of network on human way of life is greatly enhanced, and network technology is rapidly popularized in all fields of society, which makes the security of computer network become an urgent problem to be solved. How to ensure the rapid and healthy development of the network has become the focus of attention of scholars ...

  15. Research on Computer Network Information Security Protection Strategy

    From the computer's own defects and the related prediction of computer security in the future, it can be known that security is the direction that needs to be focused on at present, and the protective measures and evaluation algorithms for computer network information security need further research. In this paper, from the perspective of ...

  16. Cybersecurity: Past, Present and Future

    It is rather a limited security model and acts as a starting point for many elaborative security systems engineering frameworks, processes and policies [Ross et al., 2016]. During the 1980s, the ARPANET (the first wide-area network) became the Internet. As computers started to become more connected computer viruses became more advanced.

  17. Using deep learning to solve computer security challenges: a survey

    Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interests in the computer security community. This paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges.

  18. The research of computer network security and protection strategy

    For safety and reliability problems of computer network system, this paper combined with practical work experience, from the threat of network security, security technology, network some Suggestions and measures for the system design principle, in order to make the masses of users in computer networks to enhance safety awareness and master ...

  19. Computer Network Security: Risks and Protective Measures

    American Journal of Engineering Research (AJER) 2019 American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN : 2320-0936 Volume-8, Issue-5, pp-52-58 www.ajer.org Research Paper Open Access Computer Network Security: Risks and Protective Measures Benjamin Sarpong1 and Dr. Zhihua Li2 1 School of Internet of Things and Engineering Jiangnan University-China, School of Internet of ...

  20. 349293 PDFs

    Abdul Rahman Kadafi. Terradata Computindo is a company that has used a network security system, namely Firewall with the Sophos brand for communication security and network security on each ...

  21. Research on Computer Network Security Protection System Based on Level

    Abstract: With the development of cloud computing technology, cloud services have been used by more and more traditional applications and products because of their unique advantages such as virtualization, high scalability and universality. In the cloud computing environment, computer networks often encounter security problems such as external attacks, hidden dangers in the network and hidden ...

  22. Computer Network Security and Preventive Measures in the Age of Big

    Preventive Measures of Computer Network Security (1). Virus Defense Technology Virus defense technology is an important precautionary measure for computer network security at present. The power of virus has also been mentioned above. The damage caused by the virus to humans is simply incalculable.

  23. Research on Network Security Application Based on Data ...

    The security applications are critical in network security application research; however, it has an issue with erroneous performance positioning. The typical Bee colony algorithm is unable to address the inaccurate application positioning issue in network security application research, and the result is insufficient.

  24. Discuss the Security Countermeasures Network Problems and Countermeasures

    At present, the network security protection countermeasures are as follows: 3.1. Create a secure environment of network It is very significant to create a secure network environment, including monitoring users, setting user permissions, using access control, identification, monitoring routers and so on. 3.2.

  25. (PDF) Network Security: Cyber-attacks & Strategies to ...

    Mitigate Risks and Threads. Priyanka Dedakia. Department of Computing and information. Bournemouth University. Bournemouth, U.K. [email protected]. Abstract — Network security is a set ...