News from the AI & ML world

DeeperML - #aithreats

iHLS News@iHLS //
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.

OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator.

Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.

Share: bluesky twitterx--v2 facebook--v1 threads


References :
  • Schneier on Security: Report on the Malicious Uses of AI
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • www.zdnet.com: The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • cyberpress.org: CyberPress article on OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
  • securityaffairs.com: SecurityAffairs article on OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
Classification:
@www.pwc.com //
The UK's National Cyber Security Centre (NCSC) has issued warnings regarding the growing cyber threats intensified by artificial intelligence and the dangers of unpatched, end-of-life routers. The NCSC's report, "Impact of AI on cyber threat from now to 2027," indicates that threat actors are increasingly using AI to enhance existing tactics. These tactics include vulnerability research, reconnaissance, malware development, and social engineering, leading to a potential increase in both the volume and impact of cyber intrusions. The NCSC cautioned that a digital divide is emerging, with organizations unable to keep pace with AI-enabled threats facing increased risk.

The use of AI by malicious actors is projected to rise, and this poses significant challenges for businesses, especially those that are not prepared to defend against it. The NCSC noted that while advanced state actors may develop their own AI models, most threat actors will likely leverage readily available, off-the-shelf AI tools. Moreover, the implementation of AI systems by organizations can inadvertently increase their attack surface, creating new vulnerabilities that threat actors could exploit. Direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are techniques that could be used to gain access to wider systems.

Alongside the AI threat, the FBI has issued alerts concerning the rise in cyberattacks targeting aging internet routers, particularly those that have reached their "End of Life." The FBI warned of TheMoon malware exploiting these outdated devices. Both the NCSC and FBI warnings highlight the importance of proactively replacing outdated hardware and implementing robust security measures to mitigate these risks.

Share: bluesky twitterx--v2 facebook--v1 threads


References :
  • thecyberexpress.com: The Federal Bureau of Investigation (FBI) has issued a warning about the TheMoon malware. The warning also stresses the dramatic uptick in cyberattacks targeting aging internet routers, especially those deemed “End of Life†(EOL).
  • www.exponential-e.com: NCSC warns of IT helpdesk impersonation trick being used by ransomware gangs after UK retailers attacked
  • Latest from ITPro in News: AI-enabled cyber attacks exacerbated by digital divide in UK
  • NCSC News Feed: UK critical systems at increased risk from 'digital divide' created by AI threats
  • industrialcyber.co: NCSC warns UK critical systems face rising threats from AI-driven vulnerabilities
  • www.tenable.com: Cybersecurity Snapshot: U.K. NCSC’s Best Cyber Advice on AI Security, the Quantum Threat, API Risks, Mobile Malware and More
Classification:
  • HashTags: #CyberSecurity #AIThreats #RouterSecurity
  • Company: NCSC
  • Target: UK Retailers, Critical Systems
  • Feature: cyber threats
  • Malware: TheMoon
  • Type: News
  • Severity: Medium