By now ChatGPT, the Artificial Intelligence chatbot created by OpenAI, has gone viral and everyone is talking about it. But in addition to the great opportunities that come with it, as always, there are also great risks. Here are some of them.
For about a month now, hardly anyone has been talking about anything other than Artificial Intelligence and ChatGPT. The reason is that OpenAI, the organization founded by Elon Musk and Sam Altman, launched software that interacts directly with human beings, responding in writing to concrete requests. Since then, the phenomenon has exploded.
A quick look at Google Trends is enough to see how much interest in ChatGPT has grown.
Not to mention the most recent news that Microsoft is ready to invest $10 billion in OpenAI, so as not to miss the opportunity to bring artificial intelligence into Bing, its search engine.
Yet the great physicist and mathematician Stephen Hawking argued that highly evolved artificial intelligence could endanger human existence itself: “we have to make AI do what we want it to do.”
And it was on the basis of this admonition that Elon Musk and Sam Altman started OpenAI, in an attempt to create a more controllable and manageable Artificial Intelligence.
A laudable intent, no doubt about it, but like anything, this technology carries risks and dangers alongside its great advantages.
We dwell here on some of the dangers that are beginning to be detected, because at the moment ChatGPT can be used by anyone. Consider that in December, when it launched, more than 1 million users registered in just 5 days, an extraordinary achievement. All the more extraordinary when you consider that it took Facebook 10 months to reach 1 million users, and Netflix three years. These may seem like distant comparisons, but they give a sense of the scale of the media hype around ChatGPT.
This kind of AI-driven automated writing is now reaching just about everyone, including hackers and cybercriminals in general.
Cybersecurity agencies are beginning to monitor the trend, because malicious uses are already emerging that could mislead users even more than is already the case.
Examples are already out there: during an investigation into trolling on social media, the security company WithSecure created a made-up company account, complete with an account for its CEO, Kenneth White, and instructed the artificial intelligence to write social media posts attacking the CEO on a personal level, threats included.
But this is only a small example of what digital criminals could create using software like ChatGPT. Just think of cryptocurrency scams set up to look credible, with fake accounts that, thanks to AI, publish persuasive and convincing content.
Check Point Research (CPR), the Threat Intelligence division of Check Point Software, is also observing early cases of cybercriminals and users using ChatGPT to develop malicious tools. And Check Point Software researchers report that hackers may be using ChatGPT and OpenAI’s Codex to carry out targeted attacks. To demonstrate this, CPR used ChatGPT and Codex to produce malicious e-mails and code and an infection chain capable of targeting users’ computers.
Using this approach, CPR was able to create an e-mail with an attached Excel document containing malicious code that downloads a reverse shell. Reverse shell attacks connect back out from a compromised computer and redirect the target system’s shell input and output to the attacker, who can then control the machine remotely.
These are the steps performed by CPR researchers:
Asking ChatGPT to impersonate a hosting company
Asking ChatGPT to repeat the process, creating a phishing email with a malicious Excel attachment
Asking ChatGPT to create malicious VBA code in an Excel document
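On the defensive side, a quick first check against this kind of attachment is to see whether an Office Open XML file (such as .xlsm) contains an embedded VBA project at all, which lives in a `vbaProject.bin` part inside the archive. A minimal sketch using only Python’s standard library follows; the function name is illustrative, and this is not CPR’s tooling:

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if an Office Open XML file embeds a VBA project.

    Modern Office files (.xlsx, .xlsm, .docm, ...) are ZIP archives;
    macro-enabled ones carry a vbaProject.bin entry.
    """
    try:
        with zipfile.ZipFile(path) as zf:
            return any(name.endswith("vbaProject.bin") for name in zf.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.xls, .doc) are not ZIPs and need other tools
        return False
```

A real mail gateway would go further (e.g. inspecting the macro code itself), but even this simple presence check lets a filter flag macro-enabled attachments for closer scrutiny.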
OpenAI Codex
CPR was also able to generate malicious code using Codex, posing requests such as:
Run a reverse shell script on a Windows machine and connect to a specific IP address.
Check whether the URL is vulnerable to SQL injection by accessing it as an administrator.
Write a Python script that performs a full port scan on the target machine.
In each case, Codex duly generated the corresponding malicious code.
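To give a sense of what the third request amounts to, here is a minimal TCP connect port scan written by hand for illustration using only Python’s standard library; the article does not publish the code Codex actually produced, and the function name here is an assumption:

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

The point of CPR’s demonstration is precisely that code at this level, and well beyond it, can now be produced from a one-line natural-language prompt by someone who could not write it themselves.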
“ChatGPT can significantly alter the cyber threat landscape,” says Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software. “Now anyone with minimal resources and zero knowledge of code can easily exploit it, limited only by their imagination. To warn the public, we have demonstrated how easy it is to use the combination of ChatGPT and Codex to create malicious emails and code. I believe these AI technologies represent another step in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”
And again, CPR researchers discovered that on December 29, 2022, a thread titled “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The author of the thread revealed that he was experimenting with ChatGPT to recreate malware and techniques described in research publications and articles on common malware.
In fact, although this individual might be a very tech-savvy attacker, these posts seemed to demonstrate to less technically capable cybercriminals how to use ChatGPT for malicious purposes, with real-world examples they can use immediately.
On December 21, 2022, an attacker nicknamed USDoD posted a Python script, which he emphasized was the “first script he ever created.” When another cybercriminal commented that the style of the code resembled that of OpenAI, USDoD confirmed that OpenAI gave him a “nice hand to finish the script with a nice scope.”
This could mean that potential cybercriminals who have little or no development skills could exploit ChatGPT to develop malicious tools and become new full-fledged cybercriminals with technical capabilities.
This, then, is the scenario, and it is a dangerous one that should not be underestimated. It is worth embracing these innovations, but with the awareness that they may hide little-known dangers or even amplify existing ones, making everything much harder to manage.
Of course, this is but a part of it and we will certainly return to it later.
Cover image: ChatGPT on pc screen, photo by @rokas91 – Depositphotos