Hackers Misusing ChatGPT Easier to Detect: OpenAI Report

OpenAI has highlighted that cybercriminals exploiting its AI tool, ChatGPT, have made it simpler for authorities to uncover and disrupt their covert activities. A recent report outlines how these bad actors’ attempts to use ChatGPT often backfire, revealing valuable insights into their methods and targets.

Through analyzing ChatGPT prompts, OpenAI identified the platforms and tools that malicious actors are focusing on. For instance, misuse of the AI helped OpenAI trace a covert influence campaign across social media platforms like X (formerly Twitter) and Instagram, shedding light on how cybercriminals attempt to spread disinformation online.

The report noted that while AI tools like ChatGPT have helped criminals save time and reduce costs on tasks such as generating spam posts, they haven't gained capabilities beyond what publicly available resources could already provide. Instead, ChatGPT has primarily been used to scale existing tactics, such as refining spear-phishing lures and debugging code.

Among the notable cases is a China-linked group, “SweetSpecter,” which used ChatGPT to research and launch spear-phishing campaigns but was thwarted by OpenAI’s filters. Additionally, CyberAv3ngers, a group associated with the Iranian armed forces, used ChatGPT for research, which provided insights into potential targets for future cyber-attacks.

OpenAI stressed the need for industry-wide collaboration to counter these cyber threats, emphasizing that while AI tools offer new challenges, they must be tackled collectively. The company remains committed to transparency and ongoing efforts to address misuse of AI in cyber activities.
