ChatGPT, the artificial intelligence of the moment, allows anyone to launch a cyberattack | Technology

It can write baroque poetry, compose a letter to the Three Wise Men in a child's voice, or produce college papers in disguise. It can also solve riddles and even write code. The ChatGPT chatbot has become the favorite toy among technologists, and it has also reached the general public thanks to its ease of use: you simply ask it questions in writing, as in any conversation. But the artificial intelligence (AI) system developed by OpenAI, a company co-founded by Elon Musk, brings new risks with it. Among them: helping anyone launch a cyberattack.

A team of analysts from the cybersecurity firm Check Point used ChatGPT and Codex, a similar tool focused on programming, to develop a complete phishing attack without writing a single line of code. Phishing is one of cybercriminals' favorite techniques today: it consists of tricking the user into voluntarily clicking a link that downloads malicious software, or malware, which then steals information or money. The typical example is an email supposedly sent by the bank, asking the user to enter their credentials so the attackers can capture them.
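The deception described above hinges on a mismatch: the lure text looks legitimate, but the embedded link leads somewhere else. A minimal, purely illustrative heuristic (the phrases, domains, and function name here are invented for the example, not from Check Point's research) can make that mismatch concrete:

```python
import re

# Hypothetical lure phrases often seen in credential-stealing emails.
SUSPICIOUS_HINTS = ("verify your account", "enter your credentials", "urgent")

def looks_like_phishing(email_text: str, trusted_domain: str) -> bool:
    """Rough sketch: flag an email that combines a credential lure with
    links pointing outside the domain it claims to come from."""
    hosts = re.findall(r"https?://([^/\s]+)", email_text)
    lure = any(hint in email_text.lower() for hint in SUSPICIOUS_HINTS)
    off_domain = any(not host.endswith(trusted_domain) for host in hosts)
    return lure and off_domain

email = ("Dear customer, please verify your account at "
         "http://secure-bank-login.example.net/update")
print(looks_like_phishing(email, "mybank.com"))  # → True
```

Real filters are far more sophisticated, but the principle is the same: the text can be made arbitrarily convincing (which is exactly what a language model helps with), while the link's destination gives the scam away.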

Check Point's researchers asked these automated tools to generate a scam email and to write malicious code that could be copied and pasted into an Excel file and sent as an email attachment, so that the trap executes as soon as the victim opens the file. In this way they were able to build an entire infection chain capable of remotely gaining access to other computers, all through questions asked in chat conversations. Their goal was to test ChatGPT's malicious potential. And they succeeded.

Check Point analysts asked ChatGPT to write a phishing email. Pictured is one of the prompts they used to obtain it.

“Experience shows that relatively complex attacks can be developed with very few resources. By asking the right questions, and the right follow-up questions, without mentioning keywords that violate its content policy, it can be achieved,” explains Eusebio Nieva, technical director of Check Point for Spain and Portugal. “A professional attacker doesn't need ChatGPT or Codex at all right now. But they can be useful for those who don't have enough knowledge or who are still learning,” he asserts. In his opinion, anyone moderately clever, even without knowing how to code, can launch a simple attack using this tool.

Does this mean that OpenAI's hit application should be restricted or monitored? That will be difficult to do. “If you tell ChatGPT to create a program that encrypts the contents of a hard drive, it will do it. It's another matter whether you use it for something good, like protecting your data, or something bad, like hijacking someone else's computer,” says Mark Rivero, a security researcher at Kaspersky. Depending on what the tool is asked to do (for example, to write a phishing email, using that word), it replies that it cannot, because that would be illegal. But this barrier can often be circumvented by phrasing the request in other terms, as the Check Point team did. Given ChatGPT's enormous impact, OpenAI's developers are constantly reviewing its restrictions. At the time of this writing, the exercise Check Point carried out could still be replicated.

ChatGPT is a variant of GPT-3, the most advanced language model in the world. It uses deep learning, an artificial intelligence technique, to produce texts that mimic human writing. It weighs some 175 billion parameters to determine which word statistically best fits the words that precede it, whether in a question or a statement. That is why its texts seem to have been written by a person, even though the system does not know whether what it says is good or bad, right or wrong, true or false.
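The statistical idea can be shown with a toy sketch: count which word follows which in a tiny corpus, then always emit the most frequent successor. This is nothing like GPT-3's 175 billion learned parameters and long context, but the underlying principle of "pick the statistically most likely next word" is the same (the corpus below is made up for illustration):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram frequency table over a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish the cat slept".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1  # count each observed word pair

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (follows "the" 3 times, vs. 1 for others)
```

A model like GPT-3 replaces this crude frequency table with a neural network that conditions on the entire preceding text, which is why its continuations read as human prose rather than a looping word chain, while still having no notion of whether the output is true.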

This screenshot shows how the Check Point team asked ChatGPT to write specific code for their mock cyberattack, code that could be copied and pasted into an Excel sheet.

“The launch of ChatGPT brings together two of my biggest concerns in cybersecurity: artificial intelligence and the potential for disinformation,” Steve Grobman, McAfee vice president and chief technology officer, wrote on his blog as soon as the OpenAI chatbot went public. “These AI tools will be used by a wide range of malicious actors, from cybercriminals to those who want to poison public opinion, to make their work achieve more realistic results.”

All sources consulted for this report agree that ChatGPT will not revolutionize the cybersecurity sector. The threats that companies in the field deal with are far more complex than anything today's most popular chatbot can generate, and hacking the systems of a company or organization is not so easy. But that doesn't mean it won't be useful to cybercriminals.

“These tools can be used to create exploits [a piece of software] that take advantage of vulnerabilities known up to the date on which OpenAI collected the data for its model,” says Josep Albors, director of research and awareness at ESET Spain. In practice, it facilitates part of the infection chain used by cybercriminals, “lowering the barrier to entry for some attacks compared to what it was, which may bring more malicious actors into play.”

Nieva expects the return of the so-called script kiddies: youngsters without much programming knowledge who take ready-made code and launch attacks for fun or to test themselves. The growing complexity of cybersecurity had gradually pushed them off the radar. “They had been missing for a few years, but it's likely we'll soon see them carrying out attacks that are not very sophisticated, but effective,” says the Check Point manager.

Expert cybercriminals, for their part, can turn to these chatbots for small tasks within the long process of designing a complex cyberattack: for example, writing persuasive emails, or other routine steps, such as automating the obfuscation of code to evade defense systems before launching the real attack.

ChatGPT can also be useful for those on the side of the good guys. Cybersecurity companies have been working with AI tools for decades, but OpenAI's system, which draws on a huge amount of data gleaned from the internet, offers a new perspective. “It can serve as a very good training tool for understanding the exploitation techniques used by many criminals and for designing defensive measures against them,” says Albors.
