News from the AI & ML world

DeeperML - #cyberespionage

Eric Geller@cybersecuritydive.com //
SentinelOne, a cybersecurity firm, has revealed that it was the target of a year-long reconnaissance campaign by China-linked espionage groups, identified as APT15 and UNC5174. This campaign, dubbed "PurpleHaze," involved network reconnaissance and intrusion attempts, ultimately aiming to gather strategic intelligence and potentially establish access for future conflicts. SentinelOne discovered the campaign when the suspected Chinese spies tried to break into the security vendor's own servers in October 2024. The attempted intrusion on SentinelOne's systems failed, but it prompted a deeper investigation into the broader campaign and the malware being used.

The investigation revealed that over 70 organizations across multiple sectors globally were targeted, including a South Asian government entity and a European media organization. The attacks spanned from July 2024 to March 2025 and involved the use of ShadowPad malware and post-exploitation espionage activity. These targeted sectors include manufacturing, government, finance, telecommunications, and research. The coordinated attacks are believed to be connected to Chinese government spying programs.

SentinelOne has expressed high confidence that the PurpleHaze and ShadowPad activity clusters can be attributed to China-nexus threat actors. This incident underscores the persistent threat that Chinese cyber espionage actors pose to global industries and public sector organizations. The attack on SentinelOne also highlights that cybersecurity vendors themselves are prime targets for these groups, given their deep visibility into client environments and their ability to disrupt adversary operations. SentinelOne recommends taking more proactive steps to prevent future attacks.

Recommended read:
References :
  • The Register - Security: Chinese spy crew appears to be preparing for conflict by backdooring 75+ critical orgs
  • hackread.com: Chinese-Linked Hackers Targeted 70+ Global Organizations, SentinelLABS
  • www.scworld.com: Failed attack on SentinelOne reveals campaign by China-linked groups
  • The Hacker News: Over 70 Organizations Across Multiple Sectors Targeted by China-Linked Cyber Espionage Group
  • www.cybersecuritydive.com: SentinelOne rebuffs China-linked attack — and discovers global intrusions
  • SecureWorld News: Chinese Hackers Target SentinelOne in Broader Espionage Campaign
  • securityaffairs.com: China-linked threat actor targeted +70 orgs worldwide, SentinelOne warns
  • Cyber Security News: New Report Reveals Chinese Hackers Targeted to Breach SentinelOne Servers
  • www.sentinelone.com: The security firm said the operatives who tried to breach it turned out to be responsible for cyberattacks on dozens of critical infrastructure organizations worldwide.
  • BleepingComputer: SentinelOne shares new details on China-linked breach attempt
  • cyberpress.org: A newly published technical analysis by SentinelLABS has exposed a sophisticated, multi-phase reconnaissance and intrusion campaign orchestrated by Chinese-nexus threat actors, aimed explicitly at SentinelOne’s digital infrastructure between mid-2024 and early 2025.
  • gbhackers.com: New Report Reveals Chinese Hackers Attempted to Breach SentinelOne Servers
  • industrialcyber.co: SentinelOne links ShadowPad and PurpleHaze attacks to China-aligned threat actors

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs were being preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.

Recommended read:
References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from the New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes. OpenAI is likely only catching a fraction of nation-state use.
  • Latest news: How global threat actors are weaponizing AI now, according to OpenAI
  • The Hacker News: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes. OpenAI is likely only catching a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Policy | Ars Technica: OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

iHLS News@iHLS //
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.

OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator.

Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.

Recommended read:
References :
  • Schneier on Security: Report on the Malicious Uses of AI
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Latest news: The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist.
  • cyberpress.org: CyberPress article on OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
  • securityaffairs.com: SecurityAffairs article on OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • The Hacker News: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
  • www.itpro.com: OpenAI is clamping down on ChatGPT accounts used to spread malware.

@www.microsoft.com //
Microsoft is aggressively integrating artificial intelligence across its products and services, striving to revolutionize the user experience. The company is focused on developing agentic systems that can work independently, proactively identify problems, suggest solutions, and maintain context across interactions. Microsoft envisions a future where AI agents will augment and amplify organizational capabilities, leading to significant transformations in various fields. To facilitate secure and flexible interactions, Microsoft is employing Model Context Protocol (MCP) to enable AI models to interact with external services.
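For context on the protocol mentioned above: MCP is built on JSON-RPC 2.0, with a host application exchanging messages such as `tools/list` and `tools/call` with tool servers. The sketch below shows only the message shape; the tool name and arguments are hypothetical, and a real integration would use an MCP SDK and a transport rather than hand-built strings.

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 message of the kind an MCP host
    sends to a tool server (sketch only, not a full client)."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask the server which tools it exposes.
list_tools = mcp_request(1, "tools/list")

# Invoke a tool by name with arguments (both hypothetical here).
call_tool = mcp_request(2, "tools/call", {
    "name": "search_tickets",              # hypothetical tool name
    "arguments": {"query": "open incidents"},
})
```

The point of the shared message format is that any model host can call any conforming tool server without bespoke per-service glue, which is what makes the "secure and flexible interactions" above tractable to govern.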

As AI agents become more sophisticated and integrated into business processes, Microsoft recognizes the importance of evolving identity standards. The company is actively working on robust mechanisms to ensure agents can securely access data and act across connected systems, including APIs, code repositories, and enterprise systems. Microsoft emphasizes that industry collaboration on identity standards is crucial for the safe and effective deployment of AI agents.

To aid organizations in safely adopting AI, Microsoft Deputy CISO Yonatan Zunger shares guidance for efficient implementation and defense against evolving identity attack techniques. Microsoft CVP Charles Lamanna offers an AI adoption playbook, emphasizing the importance of "customer obsession" and "extreme ownership" for both startups and large enterprises navigating the age of AI. Lamanna suggests focusing on a few high-impact AI projects instead of spreading resources thinly across numerous pilots.

Recommended read:
References :
  • www.microsoft.com: How to deploy AI safely
  • PPC Land: Microsoft debuts free AI video generation tool powered by OpenAI's Sora, rolling out globally on mobile devices today.
  • www.microsoft.com: Microsoft and CrowdStrike are teaming up to create alignment across our individual threat actor taxonomies to help security professionals connect insights faster.