

Sensors (Basel)

The Impact of Artificial Intelligence on Data System Security: A Literature Review

Ricardo Raimundo

1 ISEC Lisboa, Instituto Superior de Educação e Ciências, 1750-142 Lisbon, Portugal; [email protected]

Albérico Rosário

2 Research Unit on Governance, Competitiveness and Public Policies (GOVCOPP), University of Aveiro, 3810-193 Aveiro, Portugal

Associated Data

Not applicable.

Abstract

Diverse forms of artificial intelligence (AI) are at the forefront of digital security innovation, driven by the threats arising in this post-COVID world. On the one hand, companies are experiencing difficulty in dealing with security challenges across a variety of issues, ranging from system openness and decision making to quality control and web domains, to mention a few. On the other hand, in the last decade, research has focused on security capabilities based on tools such as platform complacency, intelligent trees, modeling methods, and outage management systems in an effort to understand the interplay between AI and those issues. The dependence on AI in running industries and shaping the education, transport, and health sectors is now well documented in the literature, and AI is increasingly employed in managing data security across economic sectors. A literature review of AI and system security within the current digital society is therefore opportune. This paper aims to identify research trends in the field through a systematic bibliometric literature review (LRSB) of research on AI and system security. The review covers 77 articles published in the Scopus ® database, presenting up-to-date knowledge on the topic. The LRSB results were synthesized across current research subthemes and the findings are presented. The originality of the paper lies in its LRSB method, together with a review of articles that have not been categorized so far. Implications for future research are suggested.

1. Introduction

The assumption that the human brain may be deemed quite comparable to computers in some ways offers the spontaneous basis for artificial intelligence (AI), which is supported by psychology through the idea of humans and animals operating like machines that process information by devices of associative memory [ 1 ]. Nowadays, researchers are working on the possibilities of AI to cope with varying issues of systems security across diverse sectors. Hence, AI is commonly considered an interdisciplinary research area that attracts considerable attention both in economics and social domains as it offers a myriad of technological breakthroughs with regard to systems security [ 2 ]. There is a universal trend of investing in AI technology to face security challenges of our daily lives, such as statistical data, medicine, and transportation [ 3 ].

Some claim that specific data from key sectors have supported the development of AI, namely the availability of data from e-commerce [ 4 ], businesses [ 5 ], and government [ 6 ], which provided substantial input to improve diverse machine-learning solutions and algorithms, in particular with respect to systems security [ 7 ]. Additionally, China and Russia have acknowledged the importance of AI for systems security and competitiveness in general [ 8 , 9 ]. Similarly, China has recognized the importance of AI in terms of housing security, aiming at becoming an authority in the field [ 10 ]. Those efforts are already being carried out in some leading countries in order to profit the most from AI's substantial benefits [ 9 ]. In spite of the huge development of AI in the last few years, the discussion around the topic of systems security is sparse [ 11 ]. Therefore, it is opportune to survey the latest developments on the theme in order to map the advancements in the field and their ensuing outcomes [ 12 ]. In view of this, we intend to find out the principal trends of issues discussed on the topic these days in order to answer the main research question: What is the impact of AI on data system security?

The article is organized as follows. In Section 2 , we put forward diverse theoretical concepts related to AI in systems security. In Section 3 , we present the methodological approach. In Section 4 , we discuss the main fields of use of AI with regard to systems security, which came out from the literature. Finally, we conclude this paper by suggesting implications and future research avenues.

2. Literature Trends: AI and Systems Security

The concept of AI was introduced following the creation of the notion of the digital computing machine, in an attempt to ascertain whether a machine is able to “think” [ 1 ] or can carry out humans’ tasks [ 13 ]. AI is a vast domain of information and computer technologies (ICT) that aims at designing systems which operate autonomously, analogously to individuals’ decision-making processes [ 14 ]. In terms of AI, a machine may learn from experience by processing an immeasurable quantity of data and distinguishing patterns in it, as in the case of Siri [ 15 ] and image recognition [ 16 ], technologies based on machine learning, a subtheme of AI defined as intelligent systems with the capacity to think and learn [ 1 ].

Furthermore, AI entails a myriad of related technologies, such as neural networks [ 17 ] and machine learning [ 18 ], just to mention a few, and we can identify some research areas of AI:

  • (I) Machine learning covers a range of technologies that allow computers to run algorithms based on gathered data and distinct orders, giving the machine the capability to learn without human instruction and to adjust its own algorithm to the situation while learning and recoding itself, as Google and Siri do when performing distinct tasks ordered by voice [ 19 ], and as video surveillance does when tracking unusual behavior [ 20 ];
  • (II) Deep learning constitutes the ensuing progress of machine learning, in which the machine carries out tasks directly from pictures, text, and sound, through a wide data architecture that entails numerous layers, in order to learn and characterize data at several levels of abstraction, thus imitating how the natural brain processes information [ 21 ]. This is illustrated, for example, in forming a certificate database structure of university performance key indicators in order to fix issues such as identity authentication [ 21 ];
  • (III) Neural networks comprise a pattern recognition system that machine/deep learning operates in order to learn from observational data, figuring out its own solutions, such as an auto-steering gear system with a fuzzy regulator, which makes it possible to select optimal neural network models of the vessel's paths and thereby obtain control activity [ 22 ];
  • (IV) Natural language processing machines analyze language and speech as it is spoken, resorting to machine learning and natural language processing, for example in developing a swarm-intelligent active system while building user-friendly human-computer interface software to be implemented in educational and e-learning organizations [ 23 ];
  • (V) Expert systems are software arrangements that assist in obtaining answers to distinct inquiries posed either by a customer or by another software set, in which expert knowledge is stored for a particular application area and a reasoning component accesses answers based on environmental information for subsequent decision making [ 24 ].
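
The machine-learning idea in point (I), a model that adjusts its own parameters from labelled examples instead of following hand-written rules, can be sketched minimally. This is an illustrative toy (a single perceptron learning the logical AND function), not a technique taken from the reviewed articles; the data, learning rate, and epoch count are all assumptions:

```python
# Minimal sketch of the learn-from-data loop: a perceptron adjusts its own
# weights from labelled examples instead of following hand-written rules.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Return (weights, bias) fitted to the labelled samples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred               # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND: only (1, 1) is a positive example.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

After training, the perceptron classifies all four input pairs correctly; the voice assistants and surveillance models cited above follow the same learn-from-data loop at vastly larger scale.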

Those subthemes of AI are applied to many sectors, such as health institutions, education, and management, through varying applications related to systems security. These abovementioned processes have been widely deployed to solve important security issues such as the following application trends ( Figure 1 ):

  • (a) Cyber security, in terms of computer crime, behavior research, access control, and surveillance, as for example in computer vision, in which an algorithm analyzes images, and in CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) techniques [ 6 , 7 , 12 , 19 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 ];
  • (b) Information management, namely in supporting decision making, business strategy, and expert systems, for example, by improving the quality of the relevant strategic decisions by analyzing big data, as well as in the management of the quality of complex objects [ 2 , 4 , 5 , 11 , 14 , 24 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 ];
  • (c) Societies and institutions, regarding computer networks, privacy, digitalization, and legal and clinical assistance, for example in terms of the legal support of cyber security, digital modernization, systems to support police investigations, and the efficiency of technological processes in transport [ 8 , 9 , 10 , 15 , 17 , 18 , 20 , 21 , 23 , 28 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 ];
  • (d) Neural networks, for example, in terms of designing a model of human personality for use in robotic systems [ 1 , 13 , 16 , 22 , 74 , 75 ].

Figure 1. Subthemes/network of all keywords of AI. Source: own elaboration.

Through these streams of research, we will explain how the huge potential of AI can be deployed to enhance the systems security in use both in states and organizations, to mitigate risks and increase returns while identifying and averting cyber attacks and determining the best course of action [ 19 ]. AI may even prove more effective than humans in averting potential threats, through various security solutions such as redundant video surveillance systems, VOIP voice network technology security strategies [ 36 , 76 , 77 ], and dependence upon diverse platforms for protection (platform complacency) [ 30 ].

The design of the abovementioned conceptual and technological framework was not made randomly, as we did a preliminary search on Scopus with the keywords “Artificial Intelligence” and “Security”.

3. Materials and Methods

We carried out a systematic bibliometric literature review (LRSB) of the “Impact of AI on Data System Security”. The LRSB is a study design based on a detailed, thorough analysis of the recognition and synthesis of information. It is an alternative to traditional literature reviews, improving: (i) the validity of the review, by providing a set of steps that can be followed if the study is replicated; (ii) accuracy, by providing and demonstrating arguments strictly related to the research questions; and (iii) the generalization of the results, by allowing the synthesis and analysis of accumulated knowledge [ 78 , 79 , 80 ]. Thus, the LRSB is a “guiding instrument” that steers the review according to its objectives.

The study is performed following Raimundo and Rosário's suggestions, as follows: (i) definition of the research question; (ii) location of the studies; (iii) selection and evaluation of studies; (iv) analysis and synthesis; (v) presentation of results; and finally (vi) discussion and conclusion of results. This methodology ensures a comprehensive, auditable, replicable review that answers the research questions.

The review was carried out in June 2021, with a bibliographic search in the Scopus database for scientific articles published until June 2021. The search was carried out in three phases: (i) using the keyword “Artificial Intelligence”, 382,586 documents were obtained; (ii) adding the keyword “Security”, we obtained a set of 15,916 documents, and limiting the subject area to Business, Management, and Accounting, 401 documents were obtained; and finally, (iii) filtering by the exact keywords “Data security” and “Systems security”, a total of 77 documents were obtained ( Table 1 ).

Table 1. Screening methodology.

Source: own elaboration.
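
The three screening phases can be read as successive filters over a document set. As a sketch, assuming a simplified record format (the toy records and field names below are invented, not the actual Scopus export):

```python
# Sketch of the three-phase screening as successive filters, mirroring
# phases (i)-(iii) above: keyword match, subject-area limit, exact keywords.

def screen(documents, *, keywords=(), subject_area=None, exact_keywords=()):
    """Keep documents that satisfy every filter that was supplied."""
    result = []
    for doc in documents:
        text = (doc["title"] + " " + " ".join(doc["keywords"])).lower()
        if not all(k.lower() in text for k in keywords):
            continue                      # phase (i): broad keyword match
        if subject_area and subject_area not in doc["areas"]:
            continue                      # phase (ii): subject-area limit
        if exact_keywords and not any(k in doc["keywords"] for k in exact_keywords):
            continue                      # phase (iii): exact keyword filter
        result.append(doc)
    return result

docs = [
    {"title": "AI for intrusion detection",
     "keywords": ["Artificial Intelligence", "Data security"],
     "areas": ["Business, Management and Accounting"]},
    {"title": "AI in marketing",
     "keywords": ["Artificial Intelligence"],
     "areas": ["Business, Management and Accounting"]},
    {"title": "Cryptography survey",
     "keywords": ["Security"],
     "areas": ["Computer Science"]},
]

phase1 = screen(docs, keywords=["Artificial Intelligence"])
phase3 = screen(docs, keywords=["Artificial Intelligence"],
                subject_area="Business, Management and Accounting",
                exact_keywords=["Data security", "Systems security"])
```

On the toy set, phase 1 keeps two documents and phase 3 narrows them to one, the same funnel shape that took the real search from 382,586 documents down to 77.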

The search strategy resulted in 77 academic documents. This set of eligible documents was assessed for academic and scientific relevance and quality, comprising Conference Papers (43), Articles (29), Reviews (3), a Letter (1), and a retracted document (1).

Peer-reviewed academic documents on the impact of artificial intelligence on data system security were selected up to 2021. In the period under review, 2020 was the year with the highest number of peer-reviewed academic documents on the subject, with 18 publications, and 7 publications were already confirmed for 2021. Figure 2 shows the peer-reviewed publications published up to 2021.

Figure 2. Number of documents by year. Source: own elaboration.

The publications were sorted out as follows: 2011 2nd International Conference on Artificial Intelligence Management Science and Electronic Commerce Aimsec 2011 Proceedings (14); Proceedings of the 2020 IEEE International Conference Quality Management Transport and Information Security Information Technologies IT and Qm and Is 2020 (6); Proceedings of the 2019 IEEE International Conference Quality Management Transport and Information Security Information Technologies IT and Qm and Is 2019 (5); Computer Law and Security Review (4); Journal of Network and Systems Management (4); Decision Support Systems (3); Proceedings 2021 21st Acis International Semi Virtual Winter Conference on Software Engineering Artificial Intelligence Networking and Parallel Distributed Computing Snpd Winter 2021 (3); IEEE Transactions on Engineering Management (2); Ictc 2019 10th International Conference on ICT Convergence ICT Convergence Leading the Autonomous Future (2); Information and Computer Security (2); Knowledge Based Systems (2); with 1 publication each (2013 3rd International Conference on Innovative Computing Technology Intech 2013; 2020 IEEE Technology and Engineering Management Conference Temscon 2020; 2020 International Conference on Technology and Entrepreneurship Virtual Icte V 2020; 2nd International Conference on Current Trends In Engineering and Technology Icctet 2014; ACM Transactions on Management Information Systems; AFE Facilities Engineering Journal; Electronic Design; Facct 2021 Proceedings of the 2021 ACM Conference on Fairness Accountability and Transparency; HAC; ICE B 2010 Proceedings of the International Conference on E Business; IEEE Engineering Management Review; Icaps 2008 Proceedings of the 18th International Conference on Automated Planning and Scheduling; Icaps 2009 Proceedings of the 19th International Conference on Automated Planning and Scheduling; Industrial Management and Data Systems; Information and Management; Information Management and Computer Security; Information Management Computer Security; Information Systems Research; International Journal of Networking and Virtual Organisations; International Journal of Production Economics; International Journal of Production Research; Journal of the Operational Research Society; Proceedings 2020 2nd International Conference on Machine Learning Big Data and Business Intelligence Mlbdbi 2020; Proceedings Annual Meeting of the Decision Sciences Institute; Proceedings of the 2014 Conference on IT In Business Industry and Government An International Conference By Csi on Big Data Csibig 2014; Proceedings of the European Conference on Innovation and Entrepreneurship Ecie; TQM Journal; Technology In Society; Towards the Digital World and Industry X 0 Proceedings of the 29th International Conference of the International Association for Management of Technology Iamot 2020; Wit Transactions on Information and Communication Technologies).

We can say that in recent years there has been some interest in research on the impact of artificial intelligence on data system security.

In Table 2 , we analyze the Scimago Journal & Country Rank (SJR), the best quartile, and the H index by publication.

Table 2. Scimago Journal & Country Rank impact factor.

Note: * data not available. Source: own elaboration.

Information Systems Research is the most cited publication, with an SJR of 3.510, Q1, and an H index of 159.

There are a total of 11 journals in Q1, 3 journals in Q2, 2 journals in Q3, and 2 journals in Q4. Journals in the best quartile, Q1, represent 27% of the 41 journal titles; Q2 represents 7%, and Q3 and Q4 represent 5% each. Finally, for 23 of the publications, representing 56%, the data are not available.

As evident from Table 2 , the significant majority of articles on artificial intelligence and data system security rank in the Q1 best quartile.

The subject areas covered by the 77 scientific documents were: Business, Management and Accounting (77); Computer Science (57); Decision Sciences (36); Engineering (21); Economics, Econometrics, and Finance (15); Social Sciences (13); Arts and Humanities (3); Psychology (3); Mathematics (2); and Energy (1).

The most cited article was “CANN: An intrusion detection system based on combining cluster centers and nearest neighbors” by Lin, Ke, and Tsai, with 290 citations, published in Knowledge-Based Systems, which has an SJR of 1.590, the best quartile (Q1), and an H index of 121. The article proposes a new feature representation approach combining a cluster center and the nearest neighbor.
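
As a rough sketch of that article's core idea: each sample is re-represented by a single number, its distance to the nearest cluster center plus its distance to the nearest other sample. The toy data below are invented, and the real algorithm's clustering and classification steps are simplified away:

```python
# Sketch of the cluster-center-and-nearest-neighbor (CANN) representation:
# each sample collapses to one feature, distance to the nearest cluster
# center plus distance to the nearest other sample.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cann_feature(x, centers, others):
    """Distance to nearest center plus distance to nearest other sample."""
    return min(dist(x, c) for c in centers) + min(dist(x, o) for o in others)

# Toy traffic samples: a tight "normal" cluster near (0, 0) and a diffuse
# "attack" cluster around (10, 10).
centers = [(0.0, 0.0), (10.0, 10.0)]
samples = [(0.1, 0.0), (0.0, 0.1), (8.0, 8.0), (12.0, 12.0)]
features = [cann_feature(s, centers, [o for o in samples if o != s])
            for s in samples]
```

On this toy set the compact "normal" cluster yields small feature values and the diffuse "attack" cluster large ones, so even a one-dimensional nearest-neighbor rule could separate them; consult the original paper for the full method.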

In Figure 3 , we can analyze the evolution of citations of documents published between 2010 and 2021, with a growing number of citations and an R² of 0.45.

Figure 3. Evolution and number of citations between 2010 and 2021. Source: own elaboration.

The h index was used to verify the productivity and impact of the documents, based on the largest number of documents included that had at least the same number of citations. Of the documents considered for the h index, 11 have been cited at least 11 times.
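
The h index described above has a direct computational reading, sketched here: the largest h such that at least h documents have at least h citations each.

```python
# The h index: largest h such that at least h documents have >= h citations.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this document still supports a larger h
        else:
            break         # sorted descending, so no later document can
    return h

# Illustrative citation counts (not the review's actual data): eleven
# documents with at least 11 citations each give h = 11, as reported above.
example = [290, 45, 30, 25, 20, 15, 13, 12, 11, 11, 11, 5, 2, 0]
```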

In Appendix A , Table A1 , citations of all scientific articles until 2021 are analyzed; 35 documents were not cited until 2021.

Appendix A , Table A2 , examines the self-citation of documents up to 2021; a total of 16 self-citations were identified.

In Figure 4 , a bibliometric analysis was performed to analyze and identify indicators of the dynamics and evolution of scientific information using the main keywords. The analysis of the bibliometric research results, using the scientific software VOSviewer, aims to identify the main research keywords in “Artificial Intelligence” and “Security”.

Figure 4. Network of linked keywords. Source: own elaboration.

The linked keywords can be analyzed in Figure 4 , making it possible to clarify the network of keywords that appear together/linked in each scientific article, allowing us to know the topics analyzed by the research and to identify future research trends.
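
The network in Figure 4 is built from keyword co-occurrence: VOSviewer counts how often two keywords appear together in the same article and draws a link weighted by that count. A minimal sketch of that counting step, using invented article keyword lists rather than the reviewed corpus:

```python
# Minimal keyword co-occurrence count, the step underlying a VOSviewer-style
# map: every pair of keywords appearing in the same article adds one link.
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count, over all articles, how often each keyword pair co-occurs."""
    links = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            links[(a, b)] += 1
    return links

# Invented placeholder articles, not the 77 reviewed documents.
articles = [
    ["artificial intelligence", "data security", "machine learning"],
    ["artificial intelligence", "data security"],
    ["machine learning", "neural networks"],
]
links = cooccurrence(articles)
```

Heavily weighted pairs (here, "artificial intelligence" with "data security") form the dense cores that VOSviewer renders as clusters.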

4. Discussion

By examining the selected pieces of literature, we have identified four principal areas that have been underscored and deserve further investigation with regard to cyber security in general: business decision making, electronic commerce business, AI social applications, and neural networks ( Figure 4 ). There is a myriad of areas where AI cyber security can be applied throughout the social, private, and public domains of our daily lives, from Internet banking to digital signatures.

First, the possible reduction of unnecessary leakage of accounting information has been discussed [ 27 ], mainly through addressing the security drawbacks of VOIP technology in IP network systems and the subsequent safety measures [ 77 ], which comprise a secure dynamic password used in Internet banking [ 29 ].

Second, research has examined computer users' cyber security behaviors, which include both a naïve lack of concern about the likelihood of facing security threats and a dependence upon specific platforms for protection, as well as dependence on guidance from trusted social others [ 30 ]. These have been partly addressed through mobile agent (MA) management systems in distributed networks, operating a model of an open management framework that provides a broad range of processes to enforce security policies [ 31 ].

Third, AI cyber systems security aims at achieving stability in programming and analysis procedures, clarifying in detail the relationship of code fault-tolerance programming with code security in order to strengthen it [ 33 ] and offering an overview of existing cyber security tasks and roadmaps [ 32 ].

Fourth, in this vein, numerous AI tools have been developed to achieve a multi-stage security task approach for a full security life cycle [ 38 ]. New digital signature technology has been built upon elliptic curve cryptography of increasing reliability [ 28 ]; a new experimental CAPTCHA has been developed, with more interference characters and a colorful background [ 8 ], to provide better protection against spambots, allowing people with little knowledge of sign languages to recognize gestures on video relatively fast [ 70 ]; novel detection approaches beyond traditional firewall systems have been developed (e.g., cluster center and nearest neighbor, CANN) with higher efficiency in detecting attacks [ 71 ]; AI security solutions for IoT (e.g., blockchain) have been proposed to address the security flaws of centralized architectures [ 34 ]; and an integrated AI algorithm has been devised to identify malicious web domains for the security protection of Internet users [ 19 ].

In sum, AI has progressed lately through advances in machine learning, with multilevel solutions to the security problems faced in both operating systems and networks, comprising algorithms, methods, and tools lengthily used by security experts to improve those systems [ 6 ]. In this way, we present a detailed overview of the impacts of AI on each of those fields.

4.1. Business Decision Making

AI has an increasing impact on systems security aimed at supporting decision making at the management level. Increasingly, expert systems are discussed that, along with the evolution of computers, are able to integrate systems into corporate culture [ 24 ]. Such systems are expected to maximize benefits against costs in situations where a decision-making agent has to decide between a limited set of strategies with sparse information [ 14 ], while a strategic decision of quality is required in a relatively short period of time, for example through the intelligent analysis of big data [ 39 ].

Secondly, distributed decision models coordinated toward an overall solution have been adopted, reliant on a decision support platform [ 40 ], either with more mathematical/modeling support for a situational approach to complex objects [ 41 ] or with more of a web-based multi-perspective decision support system (DSS) [ 42 ].

Thirdly, the problem of software for the support of management decisions was resolved by combining a systematic approach with heuristic methods and game-theoretic modeling [ 43 ] that, in the case of industrial security, reduces the subsequent number of incidents [ 44 ].

Fourthly, in terms of industrial management and ISO information security control, a semantic decision support system increases the automation level and supports the decision-maker in identifying the most appropriate strategy against a modeled environment [ 45 ], while providing understandable technology that is based on the decisions and interacts with the machine [ 46 ].

Finally, with respect to teamwork, AI validates a theoretical model of behavioral decision theory to assist organizational leaders in deciding on strategic initiatives [ 11 ] while allowing understanding who may have information that is valuable for solving a collaborative scheduling problem [ 47 ].

4.2. Electronic Commerce Business

This research stream focuses on e-commerce solutions to improve systems security, principally on security measures for electronic commerce (e-commerce) intended to avoid cyber attacks, innovate, obtain information, and ultimately win clients [ 5 ].

First, intelligent models have been built around the factors that induce Internet users to make an online purchase, in order to devise effective strategies [ 48 ], while cyber security issues are addressed through diverse AI models for controlling unauthorized intrusion [ 49 ], in particular in countries such as China, to solve drawbacks in firewall technology, data encryption [ 4 ], and qualification [ 2 ].

Second, AI helps businesses adapt to today's increasingly demanding pandemic environment, in terms of finding new revenue sources [ 3 ] and restructuring digital business processes to promote new products and services with sufficient privacy and a workforce qualified accordingly and able to deal with AI [ 50 ].

Third, AI has been developed to intelligently protect business, either through a distinct model of decision trees amidst the Internet of Things (IoT) [ 51 ] or by improving network management through active networks technology, a multi-agent architecture able to imitate the reactive behavior and logical inference of a human expert [ 52 ].

Fourth, AI helps reconceptualize the role of proximity's spatial and non-spatial dimensions within a new digital industry framework, aiming to connect the physical and digital production spaces in both traditional and new technology-based approaches (e.g., industry 4.0), thus promoting innovation partnerships and efficient technology and knowledge transfer [ 53 ]. In this vein, there is an attempt to move management systems from a centralized to a distributed paradigm along the network, based on criteria such as the delegation degree [ 54 ], which even allows the transition from industry 4.0 to industry 5.0, through AI in the form of the Internet of everything, multi-agent systems, emergent intelligence, and enterprise architecture [ 58 ].

Fifth, in terms of manufacturing environments, following that networking paradigm, there is also an attempt to manage agent communities in distributed and varied manufacturing environments through an AI multi-agent virtual manufacturing system (e.g., MetaMorph) that optimizes real-time planning and security [ 55 ]. In addition, in manufacturing, smart factories have been built to mitigate security vulnerabilities of intelligent manufacturing processes automation by AI security measures and devices [ 56 ] as, for example, in the design of a mine security monitoring configuration software platform of a real-time framework (e.g., the device management class diagram) [ 26 ]. Smart buildings in manufacturing and nonmanufacturing environments have been adopted, aiming at reducing costs, the height of the building, and minimizing the space required for users [ 57 ].

Finally, aiming at augmenting the cyber security of e-commerce and business in general, other projects have been put in place, such as computer-assisted audit tools (CAATs), able to carry out continuous auditing, allowing auditors to increase their productivity amidst real-time accounting and electronic data interchange [ 59 ], and a surge in the demand for high-tech/AI jobs [ 60 ].

4.3. AI Social Applications

As seen, AI systems security can be widely deployed across almost all domains of society, be it regulation, Internet security, computer networks, digitalization, health, or numerous other fields (see Figure 4 ).

First, there have been attempts to regulate cyber security, namely in terms of the legal support of cyber security with regard to the application of artificial intelligence technology [ 61 ], in an innovative and economically/politically friendly way [ 9 ], and in fields such as infrastructure, by improving the efficiency of technological processes in transport, for example by reducing inter-train stops [ 63 ], and education, by improving the cyber security of university e-government, for example in forming a certificate database structure of university performance key indicators [ 21 ], of e-learning organizations through swarm intelligence [ 23 ], and by assessing the risks a digital campus will face according to ISO series standards and risk-level criteria [ 25 ], while suggesting relevant solutions to key issues in its network information safety [ 12 ].

Second, some moral and legal issues have arisen, in particular in relation to privacy, sex, and childhood. This is the case of the ethical/legal legitimacy of publishing open-source dual-purpose machine-learning algorithms [ 18 ], the need for a legislated framework comprising regulatory agencies and representatives of all stakeholder groups gathered around AI [ 68 ], the gendering of VPAs as female (e.g., Siri), which replicates normative assumptions about the potential role of women as secondary to men [ 15 ], the need for the inclusion of communities to uphold their own codes [ 35 ], and the need to improve the legal position of people, and children in particular, who are exposed to AI-mediated risk profiling practices [ 7 , 69 ].

Third, traditional industry also benefits from AI, given that it can improve, for example, coal mine safety, by analyzing the coal mine safety scheme storage structure and building a data warehouse and analysis [ 64 ]; the security of smart cities and their intelligent devices and networks, through AI frameworks (e.g., the Unified Theory of Acceptance and Use of Technology, UTAUT) [ 65 ]; housing [ 10 ] and building [ 66 ] security systems in terms of energy balance (e.g., Direct Digital Control System), applying fuzzy logic as a non-precise programming tool that allows the systems to function well [ 66 ]; and the detection and mitigation, by AI means, of data integrity attacks on outage management systems (OMSs) [ 67 ].

Fourth, citizens in general have reaped benefits from AI in areas such as police investigation, through expert systems that support the profiling and tracking of criminals based on machine-learning and neural network techniques [ 17 ]; video surveillance systems of real-time accuracy [ 76 ], resorting to models that detect moving objects while keeping up with environmental changes [ 36 ] and to dynamic sensor selection for processing the image streams of all cameras simultaneously [ 37 ]; and ambient intelligence (AmI) spaces, in which devices, sensors, and wireless networks combine data from diverse sources and monitor user preferences, with the ensuing effects on users' privacy addressed under a regulatory privacy framework [ 62 ].

Finally, AI has granted society noteworthy progress in clinical assistance, for example by integrating an electronic health record system into existing risk management software to monitor sepsis in the intensive care unit (ICU) through a peer-to-peer VPN connection with a fast and intuitive user interface [ 72 ]. It has likewise enabled an innovative organizational housing model that combines remote surveillance, diagnostics, and the use of sensors and video to detect anomalies in the behavior and health of the elderly [ 20 ], together with a case-based decision support system for the automatic real-time surveillance and diagnosis of healthcare-associated infections using diverse machine-learning techniques [ 73 ].

4.4. Neural Networks

Neural networks, the process through which machines learn from observational data and come up with their own solutions, have lately been discussed across several streams of issues.
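The learning-from-data idea can be made concrete with a minimal sketch: a single logistic neuron that learns the OR function purely from labeled examples via gradient descent. This is an illustrative toy, not any system cited in this section; the learning rate, epoch count, and seed are arbitrary choices.

```python
import math
import random

def train_neuron(samples, epochs=2000, lr=0.5, seed=0):
    """Train a single logistic neuron sigmoid(w.x + b) by stochastic
    gradient descent on log-loss. samples: list of ((x1, x2), label)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # gradient of log-loss w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The network is never told the OR rule; it infers it from examples:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

Stacking many such neurons into layers, with the gradients propagated backwards, is what the full networks discussed below do at scale.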

First, it has been argued that it is opportune to develop a software library for creating artificial neural networks for machine learning to solve non-standard tasks [ 74 ], along with a decentralized, integrated AI environment that can accommodate video data storage and event-driven processing of video gathered from varying sources, such as video surveillance systems [ 16 ], whose images can be improved through AI [ 75 ].

Second, neural network architectures have progressed to huge numbers of neurons, with associative memory devices designed with neuron counts comparable to the human brain and hosted on supercomputers [ 1 ]. Such networks can subsequently be modeled on a switch-based architecture that interconnects neurons and stores training results in memory, and on genetic algorithms so that those results can be exported to other robotic systems, for example a model of human personality for use in robotic systems in medicine and biology [ 13 ].

Finally, the neural network is quite representative of AI's ambition that a system, once trained through human learning and self-learning, could operate without human guidance, as in the case of current vessel seaway positioning systems, which involve a fuzzy logic regulator and a neural network classifier that selects optimal neural network models of the vessel's path to obtain the control activity [ 22 ].

4.5. Data Security and Access Control Mechanisms

Access control can be deemed a classic security model that is pivotal to any security and privacy protection process, both to support data access from different environments and to prevent unauthorized access under a given security policy [ 81 ]. In this vein, data security and access control mechanisms have been widely debated, particularly with regard to their distinct contextual conditions, for example the spatial and temporal environments that differ across diverse, decentralized networks. Those networks constitute a major challenge because they are dynamically located in "cloud" or "fog" environments rather than fixed desktop structures, thus demanding innovative approaches to access security such as fog-based context-aware access control (FB-CAAC) [ 81 ]. Context-awareness is therefore an important characteristic of changing environments, where users access resources anywhere and anytime. As a result, it is paramount to link the information, now based on fuzzy sets, with its situational context and to implement context-sensitive access control policies through diverse criteria, for example subject- and action-specific attributes. In this way, different contextual conditions, such as user profile information and social relationship information, need to be added to the traditional spatial and temporal approaches to sustain these dynamic environments [ 81 ]. In the end, the corresponding policies should define the security and privacy requirements through a fog-based context-aware access control model that is respected across distributed cloud and fog networks.
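As an illustration of context-sensitive policy evaluation (a generic sketch, not the FB-CAAC model itself), a minimal deny-by-default check might combine role, location, and time-of-day conditions before granting an action on a resource. All rule and field names here are hypothetical.

```python
from datetime import time

# One illustrative context-aware rule: grant an action on a resource
# only when every contextual condition of the request is satisfied.
POLICY = [
    {
        "resource": "patient_records",
        "action": "read",
        "roles": {"clinician", "auditor"},
        "locations": {"hospital_lan", "fog_node_ward3"},
        "hours": (time(7, 0), time(20, 0)),
    },
]

def is_granted(request, policy=POLICY):
    """Deny by default; grant only if some rule matches the request and
    all of its contextual conditions (role, location, time) hold."""
    for rule in policy:
        if (request["resource"] == rule["resource"]
                and request["action"] == rule["action"]
                and request["role"] in rule["roles"]
                and request["location"] in rule["locations"]
                and rule["hours"][0] <= request["time"] <= rule["hours"][1]):
            return True
    return False

req = {"resource": "patient_records", "action": "read", "role": "clinician",
       "location": "fog_node_ward3", "time": time(9, 30)}
print(is_granted(req))                                 # True
print(is_granted({**req, "location": "public_wifi"}))  # False
```

A full context-aware model would also weigh fuzzy attributes (e.g., degrees of trust in a location) and social-relationship information rather than exact set membership, but the deny-by-default rule-matching skeleton is the same.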

5. Conclusion and Future Research Directions

This literature review has illustrated the impacts of AI on systems security, which influence our daily digital life, business decision making, e-commerce, diverse social and legal issues, and neural networks.

First, AI will potentially impact our digital and Internet lives in the future, as the major trend is the emergence of ever-new malicious threats from the Internet environment; accordingly, greater attention should be paid to cyber security. Likewise, the progressively greater complexity of the business environment will demand more and more AI-based decision support systems that enable management to adapt faster and more accurately, while requiring a unique digital e-manpower.

Second, with regard to e-commerce and manufacturing, particularly amidst the COVID-19 world pandemic, activity tends to grow exponentially, as already observed, which demands corresponding progress in cyber security measures and strategies. The same holds for the social applications of AI, which, following the increase in distance services, will also tend to adopt this model, applied to improved e-health, e-learning, and e-elderly monitoring systems.

Third, divisive issues are being brought to the academic arena, demanding progress on a legal framework able to comprehend all the abovementioned issues in order to assist political decisions and match the expectations of citizens.

Lastly, further progress in neural network platforms is inevitable, as they represent the cutting edge of AI in terms of imitating human thinking, the main goal of AI applications.

To summarize, we have presented useful insights into the impact of AI on systems security, illustrating its influence both on service delivery to people, particularly in the security domains of daily life, health, and education, and on the business sector, through systems capable of supporting decision making. In addition, we have surveyed the state of the art in AI innovations applied to varying fields.

Future Research Issues

Given the aforementioned scenario, we also suggest further research avenues to reinforce existing theories and develop new ones, in particular on the deployment of AI technologies in small and medium-sized enterprises (SMEs), which have sparse resources, come from traditional sectors, and constitute the core of intermediate economies and of less developed and peripheral regions. In addition, the building of CAAC solutions constitutes a promising field for controlling data resources in the cloud under changing contextual conditions.


We would like to express our gratitude to the Editor and the Referees, who offered extremely valuable suggestions for improvement. The authors were supported by the GOVCOPP Research Unit of Universidade de Aveiro and ISEC Lisboa, Higher Institute of Education and Sciences.

Overview of document citations period ≤ 2010 to 2021.

Overview of document self-citation period ≤ 2010 to 2020.

Author Contributions

Conceptualization, R.R. and A.R.; data curation, R.R. and A.R.; formal analysis, R.R. and A.R.; funding acquisition, R.R. and A.R.; investigation, R.R. and A.R.; methodology, R.R. and A.R.; project administration, R.R. and A.R.; software, R.R. and A.R.; validation, R.R. and A.R.; resources, R.R. and A.R.; writing—original draft preparation, R.R. and A.R.; writing—review and editing, R.R. and A.R.; visualization, R.R. and A.R.; supervision, R.R. and A.R. All authors have read and agreed to the published version of the manuscript.

This research received no external funding.

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Artificial intelligence in cyber security: research advances, challenges, and opportunities

  • Published: 13 March 2021
  • Volume 55, pages 1029–1053 (2022)


  • Zhimin Zhang 1,
  • Huansheng Ning 1, 2,
  • Feifei Shi 1,
  • Fadi Farha 1,
  • Yang Xu 1,
  • Jiabo Xu 3,
  • Fan Zhang 1 &
  • Kim-Kwang Raymond Choo 4


In recent times, there have been attempts to leverage artificial intelligence (AI) techniques in a broad range of cyber security applications. Therefore, this paper surveys the existing literature (comprising 54 papers mainly published between 2016 and 2020) on the applications of AI in user access authentication, network situation awareness, dangerous behavior monitoring, and abnormal traffic identification. This paper also identifies a number of limitations and challenges, and based on the findings, a conceptual human-in-the-loop intelligence cyber security model is presented.




This work was funded by the National Natural Science Foundation of China (Grant No. 61872038). This work of K.-K. R. Choo was supported only by the Cloud Technology Endowed Professorship.

Author information

Authors and Affiliations

School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China

Zhimin Zhang, Huansheng Ning, Feifei Shi, Fadi Farha, Yang Xu & Fan Zhang

Beijing Engineering Research Center for Cyberspace Data Analysis and Applications, Beijing, 100083, China

Huansheng Ning

School of Information Engineering, Xinjiang Institute of Engineering, Xinjiang, China

Department of Information Systems and Cyber Security, University of Texas at San Antonio, San Antonio, TX, 78249-0631, USA

Kim-Kwang Raymond Choo


Corresponding author

Correspondence to Huansheng Ning .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Zhang, Z., Ning, H., Shi, F. et al. Artificial intelligence in cyber security: research advances, challenges, and opportunities. Artif Intell Rev 55 , 1029–1053 (2022).


Issue Date : February 2022



  • Cyber Security
  • Artificial Intelligence
  • Security Methods
  • Human-in-the-Loop

Using AI to develop enhanced cybersecurity measures

New research helps identify an unprecedented number of malware families.

February 15, 2024


A research team at Los Alamos National Laboratory is using artificial intelligence to address several critical shortcomings in large-scale malware analysis, making significant advancements in the classification of Microsoft Windows malware and paving the way for enhanced cybersecurity measures. Using their approach, the team set a new world record in classifying malware families.

“Artificial intelligence methods developed for cyber-defense systems, including systems for large-scale malware analysis, need to consider real-world challenges,” said Maksim Eren, a scientist in Advanced Research in Cyber Systems at Los Alamos. “Our method addresses several of them.”

The team’s paper was recently published in the Association for Computing Machinery’s journal, Transactions on Privacy and Security.

This research introduces an innovative method using AI that is a significant breakthrough in the field of Windows malware classification. The approach achieves realistic malware family classification by leveraging semi-supervised tensor decomposition methods and selective classification, specifically, the reject option.

“The reject option is the model’s ability to say, ‘I do not know,’ instead of making a wrong decision, giving the model the knowledge discovery capability,” Eren said.

Cyber defense teams need to quickly identify infected machines and malicious programs. These malicious programs can be uniquely crafted for their victims, which makes gathering large numbers of samples for traditional machine learning methods difficult.

This new method can accurately work with classes that have both larger and smaller numbers of samples at the same time, a situation called class imbalance, allowing it to detect both rare and prominent malware families. It can also reject predictions when it is not confident in its answer. This could give security analysts the confidence to apply these techniques to practical high-stakes situations like cyber defense for detecting novel threats. Distinguishing between novel threats and known types of malware specimens is an essential capability for developing mitigation strategies. Additionally, the method maintains its performance even when limited data is used in its training.
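The reject option described here can be sketched generically: a classifier that abstains whenever its top normalized score falls below a confidence threshold, surfacing the sample for analyst review instead of guessing. This toy is not the team's tensor-decomposition method; the family names, scores, and threshold are invented.

```python
def classify_with_reject(scores, threshold=0.8):
    """Return the top-scoring class, or 'REJECT' ('I do not know')
    when the best normalized score falls below the threshold.
    scores: dict mapping family name -> non-negative score."""
    total = sum(scores.values())
    if total == 0:
        return "REJECT"
    family, best = max(scores.items(), key=lambda kv: kv[1])
    return family if best / total >= threshold else "REJECT"

# A confident prediction is returned; an ambiguous one is rejected,
# which is how novel or rare families can surface for human review.
print(classify_with_reject({"family_a": 9.0, "family_b": 0.5, "family_c": 0.5}))  # family_a
print(classify_with_reject({"family_a": 0.4, "family_b": 0.35, "family_c": 0.25}))  # REJECT
```

Tuning the threshold trades coverage (fraction of samples classified) against accuracy on the samples the model does commit to.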

Altogether, the combination of the reject option with tensor decomposition methods that extract multi-faceted hidden patterns in data provides a superior capability for characterizing malware. This achievement underscores the groundbreaking nature of the team's approach.

“To the best of our knowledge, our paper sets a new world record by simultaneously classifying an unprecedented number of malware families, surpassing prior work by a factor of 29, in addition to operating under extremely difficult real-world conditions of limited data, extreme class-imbalance and with the presence of novel malware families,” Eren said.

The team’s tensor decomposition methods, with high performance computing and graphics processing unit capabilities, are now available as a user-friendly Python library in GitHub.

Paper: “Semi-supervised Classification of Malware Families Under Extreme Class Imbalance via Hierarchical Non-Negative Matrix Factorization with Automatic Model Determination.” Journal Transactions on Privacy and Security. LANL contributors: Eren (A-4), Manish Bhattarai (T-1), Boian Alexandrov (T-1) For all authors, see the full paper: DOI:10.1145/3624567


Nick Njegomir (505) 695-8111 [email protected]


AI In Cybersecurity: Revolutionizing Safety

Forbes Technology Council


Kunle Fadeyi is the CTO at TAPP Engine.

As our digital world grows at an incredible pace, artificial intelligence (AI) is right at the heart of this change. Even though it's still in the early days for AI, it's already a big part of our everyday lives. From Siri and Alexa helping us out at home to businesses using AI for quick, smart decisions and handling loads of data—it's all around us, making things easier and more connected.

While AI is often associated with convenience and streamlined processes, it's important to recognize that these advancements also involve the consumption of substantial amounts of confidential data.

Meanwhile, generative AI content brings a new plot twist. AI now has the capability to create outputs so lifelike that distinguishing between what's real and what's artificial becomes challenging. With AI emerging as both a protector and a possible disruptor in our era, it's crucial to reevaluate our strategies for the future of cybersecurity and data privacy.

While tools and companies enhance digital experiences, the efforts to manage security lag behind. McKinsey’s report points out the growing challenge in IT organizations: As they push for digitization, they encounter major cybersecurity hurdles. There's a notable conflict between the drive to digitize and the cybersecurity team's duty to protect. Traditional cybersecurity, often reactive and overburdened, struggles to stay ahead of rapidly advancing digital threats.


Traditional cybersecurity approaches—including firewalls, antivirus software, and intrusion detection systems—are frequently outpaced by sophisticated threats like polymorphic malware and zero-day exploits. These methods typically react after the fact, posing a risk of system contamination before any response is initiated. While these systems have their benefits, particularly in restricting digital access for risk evaluation, their often sluggish response times pose a significant challenge in the rapidly evolving realm of digital security.

Switching to a smarter, proactive cybersecurity approach is the way forward. This involves using adaptive solutions that can learn and react to threats in real time.

AI is changing the game in cybersecurity. It's quick to spot and stop threats, predicts issues before they happen and understands online behavior, making our digital world much safer.
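As a toy sketch of what spotting threats in real time can look like, an unsupervised anomaly detector can be fit on baseline traffic and used to flag outlying sessions. This is an illustration under invented assumptions (the feature layout and contamination rate are made up), not a description of any product mentioned here:

```python
# Illustrative sketch: flagging anomalous sessions with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Invented features per session: [bytes sent, bytes received, duration (s),
# failed logins]; 500 baseline sessions drawn around "normal" behavior.
normal = rng.normal(loc=[500, 800, 30, 0],
                    scale=[100, 150, 10, 0.2], size=(500, 4))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

def score_session(features):
    """Return True if the session looks anomalous and should be escalated."""
    return detector.predict([features])[0] == -1

# An exfiltration-like burst (huge upload, many failed logins) stands out.
alert = score_session([50000, 10, 2, 9])
```

A real deployment would stream features continuously and retrain as baselines drift; the point is that the model reacts to deviations from learned behavior rather than to known signatures.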

Cybercrime is evolving with AI technology such as automation and machine learning. Gartner’s CARTA approach reveals AI's role in cybersecurity, akin to a digital Sherlock Holmes, constantly adapting. It's about integrating AI into a comprehensive cybersecurity strategy, not just as a plug-in solution.

Balancing AI-Driven Protection And Privacy

The idea of a dynamic cybersecurity system that quickly finds and responds to online anomalies in real time seems like the perfect solution for digital safety. Experts are optimistic about AI's long-term potential in ensuring robust built-in security measures. This approach, known as “security by design,” aims to close any gaps that cybercriminals might take advantage of.

AI's ability to learn and enhance cybersecurity is fueled by vast amounts of data. This continuous learning means AI-driven cybersecurity is always evolving. However, as Cybersecurity Advisor Jana Subramanian points out, there's often not enough clarity about how these algorithms are built and the quality of data they use. This lack of transparency raises important concerns about data privacy.

AI addresses these privacy concerns with "transfer learning"—a method that lets AI improve without compromising data privacy. This smart approach allows AI models to enhance their capabilities by applying knowledge from one task to another, ensuring better security that doesn’t hold sensitive data hostage.
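Transfer learning takes many forms; one minimal sketch of the privacy-friendly idea is to learn a feature space from non-sensitive data and reuse it to train a classifier on a small private dataset, so the large corpus never needs to contain the sensitive records. Everything below (data, dimensions, model choice) is invented for illustration:

```python
# Minimal transfer-learning sketch: reuse a representation learned on
# non-sensitive "source" data for a small, sensitive "target" task.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Source task: large generic corpus used only to learn a feature space.
source = rng.normal(size=(1000, 20))
extractor = PCA(n_components=5).fit(source)

# Target task: small sensitive dataset; only its transformed features
# feed the downstream classifier.
X_private = rng.normal(size=(40, 20))
y_private = (X_private[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(extractor.transform(X_private), y_private)

preds = clf.predict(extractor.transform(X_private))
```

The design choice worth noting is the split: the heavy representation learning happens once on shareable data, while the sensitive data is only ever used locally for the final, lightweight model.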

But is it truly enough? Privacy is the missing piece of the AI puzzle. It’s not just about “security by design,” but it should also be about embracing “privacy by design.”

Despite privacy concerns, AI remains the Batman of the Gotham digital world, fighting off all sorts of cyber villains. DNS channels, often used by cybercriminals to exfiltrate sensitive information, can be countered by AI's ability to swiftly analyze and filter vast volumes of DNS queries, thwarting harmful traffic.

In the realm of malware, a notorious and adaptive cyber threat leading to costly ransomware attacks, AI is continually evolving, becoming more adept at tackling new malware forms.

Unlike traditional authentication methods that secure only the login phase, AI offers extended protection by consistently monitoring your entire online session for enhanced security.

AI doesn’t just defend; as explained above, it predicts. It can tell where the bad actors are likely to strike so organizations can beef up their defenses and keep their customers safe. It’s like having a crystal ball that helps you fortify your digital castle.

And in a digital world where organizations handle tons of confidential customer data, such as online banking, e-commerce transactions and more, AI is a non-negotiable cybersecurity strategy. It acts as the No. 1 line of defense against payment fraud, identity theft and even phishing attacks.

The Global AI Landscape

We already see global efforts to shape responsible AI and cybersecurity. Partnerships are emerging to set the stage for sound AI-driven security practices, and several key players are already making a big impact on standardized AI, including the Global Partnership on Artificial Intelligence (GPAI), the Quadrilateral Security Dialogue (Quad), UNESCO's AI Ethics Agreement, and the G20 and OECD AI principles.

The European Union has stepped up with the AI Act, aiming to unify AI rules within the EU. This also introduces the concept of regulatory sandboxes to ensure data privacy and security in AI development, which echoes a cool privacy principle: “privacy budgets.” It's like a safety vault for your data, making sure privacy is baked into everything organizations do.

The rise in global AI and cybersecurity events—including conferences, workshops and strategic forums—signifies a collective intellectual effort. It brings together leading minds to collaboratively shape the future of digital security.

Moving forward, the path is not only shaped by technological progress but also by a commitment to privacy, fostering worldwide collaboration and promoting ethical AI practices. While AI presents risks such as deepfakes or sophisticated malware, it raises an essential question: Is the risk rooted in the technology itself, or does it depend on how it's used?



Staying ahead of threat actors in the age of AI

  • By Microsoft Threat Intelligence

Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI’s blog on the research here. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI. However, Microsoft and our partners continue to study this landscape closely.

The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security , to elevate defenders everywhere.

A principled approach to detecting and blocking threat actors

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House’s Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order’s request for comprehensive AI safety and security standards.

In line with Microsoft’s leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft’s policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track.

These principles include:   

  • Identification and action against malicious threat actors’ use: Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.           
  • Notification to other AI service providers: When we detect a threat actor’s use of another service provider’s AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
  • Collaboration with other stakeholders: Microsoft will collaborate with other stakeholders to regularly exchange information about detected threat actors’ use of AI. This collaboration aims to promote collective, consistent, and effective responses to ecosystem-wide risks.
  • Transparency: As part of our ongoing efforts to advance responsible use of AI, Microsoft will inform the public and stakeholders about actions taken under these threat actor principles, including the nature and extent of threat actors’ use of AI detected within our systems and the measures taken against them, as appropriate.

Microsoft remains committed to responsible AI innovation, prioritizing the safety and integrity of our technologies with respect for human rights and ethical standards. These principles announced today build on Microsoft’s Responsible AI practices , our voluntary commitments to advance responsible AI innovation and the Azure OpenAI Code of Conduct . We are following these principles as part of our broader commitments to strengthening international law and norms and to advance the goals of the Bletchley Declaration endorsed by 29 countries.

Microsoft and OpenAI’s complementary defenses protect AI platforms

Because Microsoft and OpenAI’s partnership extends to security, the companies can take action when known and emerging threat actors surface. Microsoft Threat Intelligence tracks more than 300 unique threat actors, including 160 nation-state actors, 50 ransomware groups, and many others. These adversaries employ various digital identities and attack infrastructures. Microsoft’s experts and automated systems continually analyze and correlate these attributes, uncovering attackers’ efforts to evade detection or expand their capabilities by leveraging new technologies. Consistent with preventing threat actors’ actions across our technologies and working closely with partners, Microsoft continues to study threat actors’ use of AI and LLMs, partner with OpenAI to monitor attack activity, and apply what we learn to continually improve defenses. This blog provides an overview of observed activities collected from known threat actor infrastructure as identified by Microsoft Threat Intelligence, then shared with OpenAI to identify potential malicious use or abuse of their platform and protect our mutual customers from future threats or harm.

Recognizing the rapid growth of AI and emergent use of LLMs in cyber operations, we continue to work with MITRE to integrate these LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK® framework or MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledgebase. This strategic expansion reflects a commitment to not only track and neutralize threats, but also to pioneer the development of countermeasures in the evolving landscape of AI-powered cyber operations. A full list of the LLM-themed TTPs, which include those we identified during our investigations, is summarized in the appendix.

Summary of Microsoft and OpenAI’s findings and threat intelligence

The threat ecosystem over the last several years has revealed a consistent theme of threat actors following trends in technology in parallel with their defender counterparts. Threat actors, like defenders, are looking at AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could advance their objectives and attack techniques. Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent. On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.

While different threat actors’ motives and complexity vary, they have common tasks to perform in the course of targeting and attacks. These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships; help with coding, including improving things like software scripts and malware development; and assistance with learning and using native languages. Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.

Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community.

While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context. As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts.

The threat actors profiled below are a sample of observed activity we believe best represents the TTPs the industry will need to better track using MITRE ATT&CK® framework or MITRE ATLAS™ knowledgebase updates.

Forest Blizzard 

Forest Blizzard (STRONTIUM) is a Russian military intelligence actor linked to GRU Unit 26165, who has targeted victims of both tactical and strategic interest to the Russian government. Their activities span across a variety of sectors including defense, transportation/logistics, government, energy, non-governmental organizations (NGO), and information technology. Forest Blizzard has been extremely active in targeting organizations in and related to Russia’s war in Ukraine throughout the duration of the conflict, and Microsoft assesses that Forest Blizzard operations play a significant supporting role to Russia’s foreign policy and military objectives both in Ukraine and in the broader international community. Forest Blizzard overlaps with the threat actor tracked by other researchers as APT28 and Fancy Bear.

Forest Blizzard’s use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations. Based on these observations, we map and classify these TTPs using the following descriptions:

  • LLM-informed reconnaissance: Interacting with LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters. These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities.
  • LLM-enhanced scripting techniques: Seeking assistance in basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations.

Similar to Salmon Typhoon’s LLM interactions, Microsoft observed engagement from Forest Blizzard that was representative of an adversary exploring the use cases of a new technology. As with other adversaries, all accounts and assets associated with Forest Blizzard have been disabled.

Emerald Sleet

Emerald Sleet (THALLIUM) is a North Korean threat actor that has remained highly active throughout 2023. Their recent operations relied on spear-phishing emails to compromise and gather intelligence from prominent individuals with expertise on North Korea. Microsoft observed Emerald Sleet impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet overlaps with threat actors tracked by other researchers as Kimsuky and Velvet Chollima.

Emerald Sleet’s use of LLMs has been in support of this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies. Based on these observations, we map and classify these TTPs using the following descriptions:

  • LLM-assisted vulnerability research: Interacting with LLMs to better understand publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability (known as “Follina”).
  • LLM-enhanced scripting techniques : Using LLMs for basic scripting tasks such as programmatically identifying certain user events on a system and seeking assistance with troubleshooting and understanding various web technologies.
  • LLM-supported social engineering: Using LLMs for assistance with the drafting and generation of content that would likely be for use in spear-phishing campaigns against individuals with regional expertise.
  • LLM-informed reconnaissance: Interacting with LLMs to identify think tanks, government organizations, or experts on North Korea that have a focus on defense issues or North Korea’s nuclear weapons program.

All accounts and assets associated with Emerald Sleet have been disabled.

Crimson Sandstorm

Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC). Active since at least 2017, Crimson Sandstorm has targeted multiple sectors, including defense, maritime shipping, transportation, healthcare, and technology. These operations have frequently relied on watering hole attacks and social engineering to deliver custom .NET malware. Prior research also identified custom Crimson Sandstorm malware using email-based command-and-control (C2) channels. Crimson Sandstorm overlaps with the threat actor tracked by other researchers as Tortoiseshell, Imperial Kitten, and Yellow Liderc.

The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine. Based on these observations, we map and classify these TTPs using the following descriptions:

  • LLM-supported social engineering: Interacting with LLMs to generate various phishing emails, including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism. 
  • LLM-enhanced scripting techniques : Using LLMs to generate code snippets that appear intended to support app and web development, interactions with remote servers, web scraping, executing tasks when users sign in, and sending information from a system via email.
  • LLM-enhanced anomaly detection evasion: Attempting to use LLMs for assistance in developing code to evade detection, to learn how to disable antivirus via registry or Windows policies, and to delete files in a directory after an application has been closed.

All accounts and assets associated with Crimson Sandstorm have been disabled.

Charcoal Typhoon

Charcoal Typhoon (CHROMIUM) is a Chinese state-affiliated threat actor with a broad operational scope. They are known for targeting sectors that include government, higher education, communications infrastructure, oil & gas, and information technology. Their activities have predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, with observed interests extending to institutions and individuals globally who oppose China’s policies. Charcoal Typhoon overlaps with the threat actor tracked by other researchers as Aquatic Panda, ControlX, RedHotel, and BRONZE UNIVERSITY.

In recent operations, Charcoal Typhoon has been observed interacting with LLMs in ways that suggest a limited exploration of how LLMs can augment their technical operations. This has consisted of using LLMs to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets. Based on these observations, we map and classify these TTPs using the following descriptions:

  • LLM-informed reconnaissance : Engaging LLMs to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.
  • LLM-enhanced scripting techniques : Utilizing LLMs to generate and refine scripts, potentially to streamline and automate complex cyber tasks and operations.
  • LLM-supported social engineering : Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-refined operational command techniques : Utilizing LLMs for advanced commands, deeper system access, and control representative of post-compromise behavior.

All associated accounts and assets of Charcoal Typhoon have been disabled, reaffirming our commitment to safeguarding against the misuse of AI technologies.

Salmon Typhoon

Salmon Typhoon (SODIUM) is a sophisticated Chinese state-affiliated threat actor with a history of targeting US defense contractors, government agencies, and entities within the cryptographic technology sector. This threat actor has demonstrated its capabilities through the deployment of malware, such as Win32/Wkysol, to maintain remote access to compromised systems. With over a decade of operations marked by intermittent periods of dormancy and resurgence, Salmon Typhoon has recently shown renewed activity. Salmon Typhoon overlaps with the threat actor tracked by other researchers as APT4 and Maverick Panda.

Notably, Salmon Typhoon’s interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.

Based on these observations, we map and classify these TTPs using the following descriptions:

  • LLM-informed reconnaissance: Engaging LLMs for queries on a diverse array of subjects, such as global intelligence agencies, domestic concerns, notable individuals, cybersecurity matters, topics of strategic interest, and various threat actors. These interactions mirror the use of a search engine for public domain research.
  • LLM-enhanced scripting techniques: Using LLMs to identify and resolve coding errors. Requests for support in developing code with potential malicious intent were observed by Microsoft, and it was noted that the model adhered to established ethical guidelines, declining to provide such assistance.
  • LLM-refined operational command techniques: Demonstrating an interest in specific file types and concealment tactics within operating systems, indicative of an effort to refine operational command execution.
  • LLM-aided technical translation and explanation: Leveraging LLMs for the translation of computing terms and technical papers.

Salmon Typhoon’s engagement with LLMs aligns with patterns observed by Microsoft, reflecting traditional behaviors in a new technological arena. In response, all accounts and assets associated with Salmon Typhoon have been disabled.

In closing, AI technologies will continue to evolve and be studied by various threat actors. Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.

Appendix: LLM-themed TTPs

Using insights from our analysis above, as well as other potential misuse of AI, we’re sharing the below list of LLM-themed TTPs that we map and classify to the MITRE ATT&CK® framework or MITRE ATLAS™ knowledgebase to equip the community with a common taxonomy to collectively track malicious use of LLMs and create countermeasures against:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development : Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-assisted vulnerability research : Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting : Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion : Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass : Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development : Using LLMs in tool development, tool modifications, and strategic operational planning.



WETM Elmira

Balancing concerns, opportunities with Artificial Intelligence

(ALBANY, N.Y.) (NEWS10) - In Governor Hochul's proposed budget, she wants to invest $250 million in Artificial Intelligence research.

The governor has proposed Empire AI which would be a partnership between SUNY, CUNY, and private institutions to research the technology.

“SUNY Buffalo is where the physical data center will be located, but all of these universities: CUNY, SUNY, RPI, Cornell. Columbia is in there as well. There's a broader set. They will have access to the data center and be able to run experiments there,” explained Assemblymember Alex Bores.

Bores said this will help reduce risks that come from AI.

“It’s taking AI research from solely being in the for-profit domain and allowing academic researchers to look at it from any angle to explore more,” said Bores.

This isn’t the first time New York has invested in technology.

“It was those early investments that led to the semiconductor industry, which led to Global Foundries, which led to Micron, which will ultimately have 50,000 jobs,” said Assemblymember Pat Fahy. “It’s exactly what we want to do here, and part of these investments, part of this AI research, is to help us solve some really big issues.”

Fahy said it can help address a wide range of issues such as climate change, fraud, and cyber security.

“There’s a lot of fear out there about AI, which is another reason why we need these investments to get out in front of the technology and make sure we are using it for all of the positive purposes,” she explained.

The governor has proposed legislation to expand the penal law for crimes committed with the unauthorized use of AI. She also wants political ads distributed within 60 days of an election to disclose whether they were made with the technology.


  1. (PDF) Artificial Intelligence in Cyber Security

    This paper provides a concise overview of AI implementations of various cybersecurity using artificial technologies and evaluates the prospects for expanding the cybersecurity capabilities by...

  2. Artificial intelligence for cybersecurity: Literature review and future

    Artificial intelligence (AI) is a powerful technology that helps cybersecurity teams automate repetitive tasks, accelerate threat detection and response, and improve the accuracy of their actions to strengthen the security posture against various security issues and cyberattacks.

  3. Artificial intelligence in cyber security: research advances ...

    Zhimin Zhang, Huansheng Ning, Feifei Shi, Fadi Farha, Yang Xu, Jiabo Xu, Fan Zhang, and Kim-Kwang Raymond Choo, “Artificial intelligence in cyber security: research advances, challenges, and opportunities,” Artificial Intelligence Review, vol. 55, pp. 1029-1053, 2022. Published online 13 March 2021.

  4. AI-Driven Cybersecurity: An Overview, Security Intelligence Modeling

    Artificial intelligence (AI) is one of the key technologies of the Fourth Industrial Revolution (or Industry 4.0), which can be used for the protection of Internet-connected systems from cyber threats, attacks, damage, or unauthorized access.

  5. Artificial intelligence in cyber security: research advances

    Zhimin Zhang, Huansheng Ning, Feifei Shi, Fadi Farha, Yang Xu, Jiabo Xu, Fan Zhang, and Kim-Kwang Raymond Choo, “Artificial intelligence in cyber security: research advances, challenges, and opportunities,” Artificial Intelligence Review, vol. 55, no. 2, Feb. 2022, pp. 1029-1053.

  6. Investigating the applications of artificial intelligence in cyber security

    Artificial Intelligence (AI) provides instant insights to pierce through the noise of thousands of daily security alerts. The recent literature focuses on AI's application to cyber security but lacks visual analysis of AI applications.

  7. The Emerging Threat of Ai-driven Cyber Attacks: A Review

    Hence, this study investigates the emerging threat of AI-driven attacks and reviews the negative impacts of this sophisticated cyber weaponry in cyberspace. The paper is divided into five parts: the review process is described in the next section, and Section 3 contains the results.

  8. Artificial intelligence in cyber security: research advances

    A conceptual human-in-the-loop intelligence cyber security model is presented based on the existing literature on the applications of AI in user access authentication, network situation awareness, dangerous behavior monitoring, and abnormal traffic identification. In recent times, there have been attempts to leverage artificial intelligence (AI) techniques in a broad range of cyber security ...

  9. Explainable Artificial Intelligence Applications in Cyber Security

    Although there are papers reviewing Artificial Intelligence applications in cyber security areas and the vast literature on applying XAI in many fields including healthcare, financial services, and criminal justice, the surprising fact is that there are currently no survey research articles that concentrate on XAI applications in cyber security.

  10. Artificial Intelligence in Cyber Security by Md Fazley Rafy

    This paper provides a comprehensive overview of AI utilization in cybersecurity, exploring its benefits, challenges, and potential negative impacts. In addition to that, it explores AI-based models that enhance or compromise security across various infrastructures and cyber networks. The paper critically examines the role of AI in developing ...

  11. PDF Balancing Innovation, Execution and Risk ARTIFICIAL INTELLIGENCE

    Through comprehensive desk research, literature reviews and expert interviews, the report explores the opportunities and challenges of artificial intelligence (AI) as it relates to cybersecurity. Specifically, this report explores the issue from two angles: how AI can help strengthen security by, for example, detecting ...

  12. Artificial Intelligence Enabled Cyber Security

    In the digital era, cyber security has become a serious problem. Data breaches, identity theft, CAPTCHA cracking, and other comparable incidents proliferate, affecting large numbers of individuals as well as organizations. The obstacles to developing appropriate controls and procedures, and to putting them in place with utmost precision, have been endless ...

  13. [PDF] Artificial Intelligence in Cyber Security

    2023. This research paper focuses on the intersection between cyber security threats and their prevention using Artificial Intelligence (AI) technologies, and estimates the prospects for expanding cybersecurity through preservation of the defense mechanisms.

  14. Artificial Intelligence Cyber Security Strategy

    AI security becomes essential. If a security attack corrupts the learned pattern, the model no longer produces true predictions, which could result in thousands of lives lost. The potential consequences of such inaccurate forecasts would be even worse.

  15. Countering cyberterrorism: the confluence of artificial intelligence

    Countering cyberterrorism: the confluence of artificial intelligence, cyber forensics and digital policing in US and UK national cybersecurity by Reza Montasari, United Kingdom, Springer, 2023, xv+164 pp., £139.99 (hardback), ISBN 9783031219191.

  16. The Impact of Artificial Intelligence on Data System Security: A

    This paper aims at identifying research trends in the field through a systematic bibliometric literature review (LRSB) of research on AI and system security. The review entails 77 articles published in the Scopus® database, presenting up-to-date knowledge on the topic. The LRSB results were synthesized across current research subthemes.

  17. PDF Artificial intelligence in cyber security: research advances ...

    The theme of the applications was consistent, namely using AI to ensure cyber security. In addition, this paper also summarizes some of the innovative methods mentioned in Table 5. These summaries include the datasets, features and their extraction methods, classification models, and maximum accuracy of methods.

  18. Study of Artificial Intelligence in Cyber Security and The ...

    Syed Minhaj Ul Hassan, “Study of Artificial Intelligence in Cyber Security and the Emerging Threat of AI-Driven Cyber Attacks and Challenge” (July 17, 2023). Available at SSRN.

  19. Using AI to develop enhanced cybersecurity measures

    A research team at Los Alamos National Laboratory is using artificial intelligence to address several critical shortcomings in large-scale malware analysis, making significant advancements in the classification of Microsoft Windows malware and paving the way for enhanced cybersecurity measures. ... a scientist in Advanced Research in Cyber ...

  20. New Google Initiative to Foster AI in Cybersecurity

    Google has announced a new initiative aimed at fostering the use of artificial intelligence (AI) in cybersecurity. The internet giant believes that AI is pivotal for digital security, having the potential to provide defenders with a definitive advantage over attackers and to upend the Defender's Dilemma. "AI allows security professionals and defenders to scale their work in threat ...

  21. AI In Cybersecurity: Revolutionizing Safety

    Traditional cybersecurity approaches—including firewalls, antivirus software, and intrusion detection systems—are frequently outpaced by sophisticated threats like polymorphic malware and zero ...

  22. Staying ahead of threat actors in the age of AI

    Microsoft, in collaboration with OpenAI, is publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors Forest Blizzard, Emerald Sleet, Crimson Sandstorm, and others. The observed activity includes prompt injection, attempted misuse of large language models (LLMs), and fraud.

  23. Using AI to develop enhanced cybersecurity measures

    "Artificial intelligence methods developed for cyber-defense systems, including systems for large-scale malware analysis, need to consider real-world challenges," said Maksim Eren, a scientist in ...

  24. How foreign hackers are using artificial intelligence to create

    OpenAI and its partner Microsoft said Wednesday that hackers from China, Russia and other nations have been using artificial intelligence systems to help create their cyberattacks. Washington Post ...

  25. Google Announces Free AI Cyber Tools For Gmail, Google Drive To ...

    In a bid to enhance online security, Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google has unveiled plans to provide free artificial intelligence tools for countering the growing use of AI in ...

  26. Balancing concerns, opportunities with Artificial Intelligence

    (ALBANY, N.Y.) NEWS10 - In Governor Hochul's proposed budget, she wants to invest $250 million in Artificial Intelligence research. ... fraud, and cyber security.