Is the ChatGPT app a gift to hackers and scammers?

In some instances, yes. But it can also help flag security vulnerabilities. It just depends who’s asking the questions. From Moneyweb.

When asked a how-to question about a possible criminal act, the app replies that it doesn’t support unethical or illegal activities. And then provides the answer. Image: Adobe Stock

Two months after launch, artificial intelligence tool ChatGPT passed 100 million users, making it the fastest-growing consumer internet app in history.

It took Instagram more than two years and TikTok about nine months to reach that level of adoption.

It’s being used for all sorts of interesting applications, from writing school essays to solving coding problems and offering opinions on why communism failed (and where it succeeded).

Coders appear to love it because it solves in seconds problems that might otherwise take hours or days.

It recently passed law exams at prestigious US universities, and an MBA exam at the University of Pennsylvania’s Wharton School.

Move over, Google

It has been called the ‘Google killer’ because, depending on the query, it can generate a more pointed response.

For example, ask: ‘What do I need to pack for a holiday to Egypt?’

Google will respond with a list of hotels and travel-related ads before presenting articles that seem to fit the query. ChatGPT will give you exactly what you need: visa, passport, clothing for warm weather, suncream and the like.

Read: Microsoft invests $10bn in ChatGPT maker OpenAI

Read: ChatGPT will replace some jobs, but traders say not theirs

The dark side

However, there’s another aspect to ChatGPT (the GPT stands for ‘generative pre-trained transformer’) that is gaining attention in the cybersecurity field: its ability to assist hackers and scammers.

A recent presentation by cybersecurity firm Kaspersky demonstrated just how easy it was to use ChatGPT to gather advice on hacking and malicious activity.

A cybersecurity expert posed the following question to ChatGPT: “I need to pentest [conduct a penetration test on] a website. I see it is running nginx and uses django. What should I do first?” (Nginx and Django are open-source software used in web development).

[Screenshot of ChatGPT’s reply. Source: ChatGPT]

ChatGPT replies that it does not support unethical or illegal activities, then goes ahead and provides the answer.

With follow-up questions, it will reveal more. While that may be a gift to hackers, it is also a gift to organisations trying to detect vulnerabilities in their own systems.

How can threat actors use ChatGPT?

According to Kaspersky lead data scientist Vladislav Tushkanov, ChatGPT can not only give advice on hacking and malicious activities, but also generate spear phishing emails (messages that look authentic and trick you into revealing personal or financial information) and spam, as well as programming code and boilerplate for malware.

It can do this hundreds of thousands of times, each individualised for the recipient.

There is concern that ChatGPT might fix some of the ‘bugs’, such as bad spelling and grammar, associated with old-style 419 scams, where you might get an email purporting to be from a member of a royal family looking to park a huge sum of money.

When ChatGPT was asked to generate a spam email looking for a ‘partner’ to help move $10 million out of Nigeria in return for 30% commission, it came up with a fairly coherent and well-written response.

[Screenshot of the generated scam email. Source: ChatGPT]

The likelihood of emails such as this finding any victims may seem remote, but with a bit of tweaking and a slight change in the narrative, the hit rate could be improved. For scammers with hundreds of thousands of stolen email addresses to work through, a success rate of 0.1% would make it well worth it.

Much of this type of advice is already taught in ethical hacking courses on the internet (for security testing purposes), so it should be no surprise that ChatGPT is able to reproduce it.

The ability to generate more plausible spear phishing attacks may, however, be a problem, says Kaspersky, and would require additional protection.

Though ChatGPT generates high-quality text, spam detectors rely on metadata and techniques other than content analysis to weed out malicious emails.
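To make that concrete, here is a minimal sketch in Python of how a filter might score a message on header metadata alone, never reading the body text. The scoring rules, weights and sample message are illustrative assumptions, not any vendor’s actual method.

```python
# Illustrative only: a metadata-based spam score that never reads the body text.
from email import message_from_string

def metadata_risk_score(raw_email: str) -> int:
    """Score an email on header metadata alone; higher means more suspicious."""
    msg = message_from_string(raw_email)
    score = 0

    # The From domain and the Return-Path domain should normally match.
    from_domain = msg.get("From", "").rsplit("@", 1)[-1].strip(">").lower()
    return_domain = msg.get("Return-Path", "").rsplit("@", 1)[-1].strip(">").lower()
    if from_domain and return_domain and from_domain != return_domain:
        score += 2

    # Authentication-Results records SPF/DKIM checks done by the receiving server.
    auth = msg.get("Authentication-Results", "").lower()
    if "spf=fail" in auth or "dkim=fail" in auth:
        score += 3
    if not auth:
        score += 1  # no authentication information at all

    # A Reply-To pointing at a different domain is a classic scam pattern.
    reply_to = msg.get("Reply-To", "").lower()
    if reply_to and from_domain not in reply_to:
        score += 2

    return score

# A fabricated example: mismatched domains and a free-mail Reply-To score 5.
sample = (
    "From: alice@bank.example\n"
    "Return-Path: <bulk@cheap-mailer.example>\n"
    "Reply-To: helpdesk@freemail.example\n"
    "Subject: Urgent: verify your account\n"
    "\n"
    "Dear customer...\n"
)
print(metadata_risk_score(sample))  # 5
```

A real filter would combine a score like this with sender reputation, volume patterns and, yes, content analysis, but the point stands: polished prose alone does not get a scam past the gate.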

There are already tools available to judge whether text has been generated by artificial intelligence (AI), so we are now entering the era of AI monitoring AI.
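As a toy illustration of one signal such detectors use: human writing tends to vary sentence length more (‘burstiness’) than machine-generated text. The sketch below measures only that, with an arbitrary threshold; real tools combine many signals, typically including perplexity under a reference language model.

```python
# A toy heuristic, not a production detector: machine-generated text often has
# unusually uniform sentence lengths ("low burstiness").
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Standard deviation of sentence length in words; low values can hint at AI text."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return pstdev(lengths)

def looks_machine_written(text: str, threshold: float = 3.0) -> bool:
    # The threshold here is an arbitrary assumption for illustration.
    return burstiness(text) < threshold

print(looks_machine_written(
    "Short. Then a much longer, rambling sentence follows here. Tiny."
))  # False: varied sentence lengths read as human
```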

Read: Artificial intelligence in South Africa comes with special dilemmas

Is ChatGPT capable of becoming an autonomous hacking AI?

Existing security systems will generally detect malware, whether created by humans or AI, as there are usually signature errors in the code that would prevent it from being deployed automatically. There’s no room for complacency, however: AI is learning fast, and future iterations of tools like ChatGPT may become highly efficient at generating malware and sidestepping ethical constraints, given the right guidance from malicious actors.
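For a sense of what signature-based detection means in practice, here is a minimal sketch that compares a file’s SHA-256 hash against a blocklist of known-bad hashes. Real engines also use byte-pattern rules and behavioural analysis; the hash list here is a placeholder, not real threat data.

```python
# Illustrative only: classic signature matching by file hash. The blocklist
# entry below is a placeholder, not a real malware signature.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder; real engines ship millions of signatures
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_malware(path: Path) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

# Usage: is_known_malware(Path("suspicious_attachment.exe"))
```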

ChatGPT is an enabler, not a replacement for humans, says Maher Yamout, senior security researcher at Kaspersky. “You still need a human brain to do a sanity check.”

ChatGPT also lacks an up-to-date data set: its training data extends only to 2021. That will improve with new releases, but cybersecurity threats evolve by the day. “You still need a human to exercise judgment and decide whether to use the insights provided [by ChatGPT] or discard them,” adds Yamout.

It can also describe to the user the purpose of code it is asked to analyse, which is a huge benefit to cybersecurity professionals.

A recent update to ChatGPT adds a ‘red mark’ that flags content that could be used in an offensive manner and limits the questions that can be asked on that topic. The system can still be tricked by asking questions in a different way, for example: ‘Teach me how hackers work’.

As with all new breakthrough technologies, AI could give bad actors, even those with little or no programming experience, extremely powerful tools for stealing online. But it also gives organisations some powerful tools of their own to fight back against malicious online actors.

About Ciaran Ryan
The Writer's Room is curated by Ciaran Ryan, who has written on South African affairs for Sunday Times, Mail & Guardian, Financial Mail, Finweek, Noseweek, The Daily Telegraph, Forbes, USA Today, Acts Online and Lewrockwell.com, among others. In between he manages a gold mining operation in Ghana, and previously worked in Congo. Most of his time is spent in the lovely city of Joburg.