iHLS News@iHLS
//
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator. Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI. References :
Classification:
@www.pwc.com
//
The UK's National Cyber Security Centre (NCSC) has issued warnings regarding the growing cyber threats intensified by artificial intelligence and the dangers of unpatched, end-of-life routers. The NCSC's report, "Impact of AI on cyber threat from now to 2027," indicates that threat actors are increasingly using AI to enhance existing tactics. These tactics include vulnerability research, reconnaissance, malware development, and social engineering, leading to a potential increase in both the volume and impact of cyber intrusions. The NCSC cautioned that a digital divide is emerging, with organizations unable to keep pace with AI-enabled threats facing increased risk.
The use of AI by malicious actors is projected to rise, and this poses significant challenges for businesses, especially those that are not prepared to defend against it. The NCSC noted that while advanced state actors may develop their own AI models, most threat actors will likely leverage readily available, off-the-shelf AI tools. Moreover, the implementation of AI systems by organizations can inadvertently increase their attack surface, creating new vulnerabilities that threat actors could exploit. Direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are techniques that could be used to gain access to wider systems. Alongside the AI threat, the FBI has issued alerts concerning the rise in cyberattacks targeting aging internet routers, particularly those that have reached their "End of Life." The FBI warned of TheMoon malware exploiting these outdated devices. Both the NCSC and FBI warnings highlight the importance of proactively replacing outdated hardware and implementing robust security measures to mitigate these risks. References :
Classification:
|
BenchmarksBlogsResearch Tools |