Cybersecurity risks of generative AI technology

The new wave of generative artificial intelligence (AI) technology has unleashed disruptions that are transforming how information is gathered. In many ways, this could also prove useful to adversaries.

Furthermore, generative AI could enable more sophisticated automation of malicious activity that will require more innovative approaches to detection and defense, according to Scott Crawford, research director, security, and Daniel Kennedy, principal research analyst, Voice of the Enterprise: Information Security, at 451 Research, part of S&P Global Market Intelligence.

Human-like tendencies

Generative AI is smart, interactive and, at times, creative. AI software tools can produce human-readable output that is useful for gathering information. Unfortunately, this could also help adversaries become more convincing in crimes such as identity exploits, money laundering and fraud.

Output from GPT-4, the updated model behind OpenAI’s ChatGPT, suggests that the tool could mimic interactive behavior well enough to subvert interactive security controls aimed at both humans and machines.

Servicing an adversary

The most obvious misuse of generative AI is plagiarism, which is addressed by GPTZero, a tool developed by a Princeton University senior to detect AI-generated text.

Other uses of generative AI are more nefarious, with darker implications: they could make it far easier to manipulate victims. Combined with a corpus of attack techniques and sensitive or personally identifiable data, generative AI could craft convincing phishing or social engineering attacks tailored to specific targets.

Behavioral analytics

Applying behavioral analytics to human access can play a role in addressing the cybersecurity risks posed by generative AI. Behavioral analytics can help establish the legitimacy of a person, device or entity seeking access to IT resources by analyzing factors such as location, device and software complement, integrity and configuration.
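
To make that concrete, below is a minimal sketch of signal-based risk scoring for a human access attempt. The signals, weights and threshold are hypothetical, chosen purely for illustration; real behavioral analytics products build far richer models from these same kinds of factors.

```python
# Minimal sketch of signal-based risk scoring for an access attempt.
# All names, weights and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AccessAttempt:
    location: str        # country or network zone of the request
    device_id: str       # identifier of the requesting device
    os_patched: bool     # integrity: OS at the expected patch level
    config_drift: bool   # configuration differs from the known baseline

KNOWN_LOCATIONS = {"US", "corporate-vpn"}     # assumed baseline for this user
KNOWN_DEVICES = {"laptop-4411", "phone-88"}

def risk_score(attempt: AccessAttempt) -> int:
    """Accumulate risk points; high scores trigger step-up authentication."""
    score = 0
    if attempt.location not in KNOWN_LOCATIONS:
        score += 40   # unfamiliar location is a strong anomaly signal
    if attempt.device_id not in KNOWN_DEVICES:
        score += 30   # unrecognized device
    if not attempt.os_patched:
        score += 20   # integrity check failed
    if attempt.config_drift:
        score += 10   # configuration no longer matches baseline
    return score

attempt = AccessAttempt("RU", "laptop-9999", os_patched=False, config_drift=True)
if risk_score(attempt) >= 50:
    print("High risk: require step-up authentication or deny access")
```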

Interactive machine-to-machine communication is another area susceptible to bad actors exploiting generative AI. To address these risks, techniques such as public key infrastructure (PKI)-based implementations can be used to demonstrate the legitimacy of machine-to-machine access attempts.
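
As one illustration of the PKI approach, the sketch below uses Python’s standard-library ssl module to require that every connecting machine present a certificate signed by a trusted internal certificate authority. The file paths, port and CA are placeholder assumptions, not a definitive implementation.

```python
# Sketch of PKI-based machine-to-machine authentication via mutual TLS,
# using Python's standard-library ssl module. Certificate and key paths
# are placeholders; a real deployment would use certificates issued by
# the organization's own CA.
import socket
import ssl

# Server side: require clients to present a certificate signed by our CA.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED            # reject unauthenticated peers
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="internal-ca.pem")

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()           # handshake verifies the client cert
        print("Authenticated peer:", conn.getpeercert().get("subject"))
        conn.close()
```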

Additional protections such as “vaults” can be implemented to safeguard sensitive material such as cached passwords and shared secrets.
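
A common pattern is to fetch such secrets on demand from a vault rather than caching them with the application. The sketch below uses the hvac client for HashiCorp Vault purely as one example; the server address, token and secret path are placeholders.

```python
# Sketch of fetching a shared secret from a vault at run time instead of
# caching it locally. Uses the hvac client for HashiCorp Vault as one
# concrete example; URL, token and path are placeholder values.
import hvac

client = hvac.Client(url="https://vault.example.internal:8200", token="s.xxxx")
assert client.is_authenticated()

# Read the secret on demand from the KV v2 engine; nothing is written to disk.
response = client.secrets.kv.v2.read_secret_version(path="apps/payments/db")
db_password = response["data"]["data"]["password"]
```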

Code and malware risks

While ChatGPT’s ability to write code is still in its early stages, research has demonstrated that the tool can identify and describe how to exploit a simple buffer overflow, such as an unchecked copy into a fixed-size stack buffer.

Additionally, researchers have shown that generative AI can bypass content filters and create polymorphic malware that avoids signature-based detection. In other instances, generative AI was able to construct malware variants that were more resistant to anti-malware engines.
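
The weakness these variants exploit is straightforward to demonstrate: a signature keyed to a file hash stops matching the moment a single byte changes. The sketch below uses harmless stand-in bytes to show why hash-based signatures miss even trivially mutated payloads.

```python
# Why signature-based detection struggles with polymorphic variants:
# a single-byte change in a payload yields a completely different hash,
# so a signature keyed to the original hash no longer matches.
# The byte strings here are harmless stand-ins, not real malware.
import hashlib

original = b"example payload bytes"
variant  = b"example payload bytez"   # one byte mutated

sig_original = hashlib.sha256(original).hexdigest()
sig_variant  = hashlib.sha256(variant).hexdigest()

print(sig_original == sig_variant)    # False: the old signature misses the variant
```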

AI vs. AI

The immense capability of generative AI suggests the potential for an AI-versus-AI arms race. Cybersecurity technology innovation will need to continually raise the bar to defend against “intelligent” adversaries.
