Over the past few months, ChatGPT has generated significant attention in the tech industry. However, not all of the attention has been positive.
Recently, one researcher claimed to have created potent data-mining malware in a matter of hours using ChatGPT prompts, Fox News reports.
Aaron Mulgrew, a security researcher at Forcepoint, revealed that he was able to build the data-mining malware with OpenAI's generative chatbot.
Although ChatGPT has safeguards designed to stop users from asking it to generate malicious code, Mulgrew found a workaround.
Mulgrew's technique was to ask ChatGPT to generate the code one function at a time, submitting a separate request for each function, as reported by Fox News.
After assembling the individual functions, he found he had a highly sophisticated data-stealing executable that was virtually undetectable and on par with nation-state malware.
What makes the situation especially concerning is that Mulgrew produced this dangerous malware without a team of hackers and without writing any of the code himself.
Written by staff