State-Affiliated Hackers from China, Russia, Iran and North Korea Used ChatGPT to Boost Cyberattacks

Malicious state-backed hackers are experimenting with AI systems, including chatbots like ChatGPT, to aid their cyber campaigns, according to new joint research from Microsoft and OpenAI.

The report found that five state-sponsored groups - from China, Iran, North Korea and Russia - used AI platforms for reconnaissance, social engineering attacks, coding automation and other purposes.

The findings highlight the need to prepare defenses against AI-enhanced threats. Upon detecting activity from malicious actors, including advanced persistent threats (APTs), advanced persistent manipulators (APMs) and cybercriminal syndicates, Microsoft and OpenAI disabled the hackers' accounts and terminated their access.

The identified groups included two attributed to China, one to Iran, one to North Korea and one to Russia. The report indicates that these threat actors' use of AI largely revolved around improving their operational efficiency and effectiveness. According to OpenAI's investigation, the hackers used its services for activities such as:

  • Researching companies, tools and vulnerabilities
  • Generating phishing content
  • Scripting malware functions
  • Translating documents
  • Troubleshooting code

While the activity is concerning, Microsoft said its researchers "have not identified significant attacks employing the AI models we monitor." Rather, the company views the cyber spies' experimentation with AI as part of a natural technology adoption curve among hacking groups.

"Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely," Microsoft wrote. "At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community."

In response, Microsoft and OpenAI plan to collaborate across the industry to stay ahead of AI-related threats. This includes implementing AI safeguards, notifying other affected providers, sharing threat intelligence, maintaining transparency with stakeholders, and disrupting detected state-backed activity.

Overall, the report highlights a critical duality inherent in the advancement of AI: while AI can significantly bolster cybersecurity defenses, offering new tools and capabilities to protect digital assets, it can also be weaponized by adversaries. This duality underscores the importance of a proactive and collaborative approach to AI safety and security, involving continuous monitoring, threat intelligence sharing, and the development of robust countermeasures.

The report is also a reminder of the constantly shifting landscape of cyber threats and the ongoing arms race between defenders and attackers. As AI technologies continue to evolve, both sides will likely increase their reliance on these tools, necessitating ongoing vigilance and innovation in cybersecurity practices.

OpenAI insists that the vast majority of people use its systems to "help improve their daily lives," though, "as is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention."

While threats will continue evolving alongside AI capabilities, the companies hope to foster "greater awareness and preparedness" against high-tech adversaries through openness.
