info@thehackernews.com (The Hacker News)
//
References: The Hacker News, therecord.media
The Rare Werewolf APT group, also known as Librarian Ghouls and Rezet, has been actively targeting Russian enterprises and engineering schools since at least 2019, with activity continuing through May 2025. This advanced persistent threat group distinguishes itself by primarily utilizing legitimate third-party software instead of developing its own malicious tools. The attacks are characterized by the use of command files and PowerShell scripts to establish remote access to compromised systems, steal credentials, and deploy the XMRig cryptocurrency miner. The campaign has impacted hundreds of Russian users, with additional infections reported in Belarus and Kazakhstan.
The group's initial infection vector typically involves targeted phishing emails containing password-protected archives with executable files disguised as official documents or payment orders. Once the victim opens the attachment, the attackers deploy a legitimate tool called 4t Tray Minimizer to obscure their presence on the compromised system. They also use Defender Control to disable antivirus software and Blat, a legitimate SMTP utility, to send stolen data to the attackers. The group actively refines its tactics: after a slight decline in December 2024, a new wave of attacks emerged almost immediately.
A key aspect of the Rare Werewolf APT's strategy is a Windows batch script that launches a PowerShell script, scheduling the victim system to wake at 1 AM local time and giving the attackers a four-hour window for remote access via AnyDesk. A scheduled task then shuts the machine down at 5 AM, minimizing the chance of detection. The attackers also collect information about available CPU cores and GPUs to configure the crypto miner optimally. Beyond cryptomining, the group is known to steal sensitive documents and passwords and to compromise Telegram accounts.
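The wake-and-shutdown scheduling pattern described above can be sketched in miniature. This is a hypothetical reconstruction, not the group's actual script: it only builds Windows `schtasks /Create` argument lists (task names, times, and commands are invented for illustration), and the wake-to-run behavior itself is a task setting that plain `schtasks` does not expose on the command line (it requires task XML or PowerShell's `New-ScheduledTaskSettingsSet -WakeToRun`).

```python
def build_schtasks_cmd(task_name: str, time_hhmm: str, command: str) -> list[str]:
    """Build a `schtasks /Create` argument list for a daily task.

    Note: waking the machine from sleep additionally requires the
    task's WakeToRun setting, which must be set via task XML or
    PowerShell; `schtasks` alone cannot enable it.
    """
    return [
        "schtasks", "/Create",
        "/TN", task_name,   # task name (illustrative)
        "/SC", "DAILY",     # run every day
        "/ST", time_hhmm,   # start time, 24h HH:MM
        "/TR", command,     # command the task runs
        "/F",               # overwrite an existing task of the same name
    ]

# The reported pattern: activity begins at 1 AM, machine powers off at 5 AM.
wake_cmd = build_schtasks_cmd("NightlyStage", "01:00", "powershell -File stage.ps1")
shutdown_cmd = build_schtasks_cmd("NightlyShutdown", "05:00", "shutdown /s /t 0")
```

The four-hour gap between the two start times is what gives the operators their AnyDesk access window while the owner is unlikely to be at the machine.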
References: sjvn01@Practical Technology
//
Cisco is making significant strides in integrating artificial intelligence into its networking and data center solutions. They are releasing a range of new products and updates that leverage AI to enhance security and automate network tasks, with a focus on supporting AI adoption for enterprise IT. These new "AgenticOps" tools will enable the orchestration of AI agents with a high degree of autonomy within enterprise environments, aiming to streamline complex system management. Cisco's strategy includes a focus on secure network architectures and AI-driven policies to combat emerging threats, including rogue AI agents.
The networking giant is also strengthening its data center strategy through an expanded partnership with NVIDIA, designed to establish a new standard for secure, scalable, high-performance enterprise AI. The Cisco AI Defense and Hypershield security solutions use NVIDIA AI to deliver enhanced visibility, validation, and runtime protection across AI workflows. This partnership builds on the Cisco Secure AI Factory with NVIDIA, aiming to provide continuous monitoring and protection throughout the AI lifecycle, from data ingestion to model deployment.
Cisco is also enhancing AI networking performance to meet the demands of data-intensive AI workloads. This includes Cisco Intelligent Packet Flow, which dynamically steers traffic using real-time telemetry, and NVIDIA Spectrum-X, an AI-optimized Ethernet platform that delivers high-throughput, low-latency connectivity. By offering end-to-end visibility and unified monitoring across networks and GPUs, Cisco and NVIDIA are enabling enterprises to maintain zero-trust security across distributed AI environments, regardless of where data and workloads reside.
References: Eric Geller@cybersecuritydive.com
//
SentinelOne, a cybersecurity firm, has revealed that it was the target of a year-long reconnaissance campaign by China-linked espionage groups, identified as APT15 and UNC5174. This campaign, dubbed "PurpleHaze," involved network reconnaissance and intrusion attempts, ultimately aiming to gather strategic intelligence and potentially establish access for future conflicts. SentinelOne discovered the campaign when the suspected Chinese spies tried to break into the security vendor's own servers in October 2024. The attempted intrusion on SentinelOne's systems failed, but it prompted a deeper investigation into the broader campaign and the malware being used.
The investigation revealed that over 70 organizations across multiple sectors worldwide were targeted, including a South Asian government entity and a European media organization. The attacks spanned July 2024 to March 2025 and involved ShadowPad malware and post-exploitation espionage activity. Targeted sectors include manufacturing, government, finance, telecommunications, and research, and the coordinated attacks are believed to be connected to Chinese government spying programs. SentinelOne has expressed high confidence that the PurpleHaze and ShadowPad activity clusters can be attributed to China-nexus threat actors.
This incident underscores the persistent threat that Chinese cyber-espionage actors pose to global industries and public-sector organizations. The attack on SentinelOne also shows that cybersecurity vendors themselves are prime targets for these groups, given their deep visibility into client environments and their ability to disrupt adversary operations. SentinelOne recommends taking more proactive steps to prevent future attacks.
References: Pierluigi Paganini@securityaffairs.com
//
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.
OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information.
The report also details specific operations such as ScopeCreep, in which a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware, debug code in multiple languages, and set up command-and-control infrastructure. The malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.
References: Pierluigi Paganini@securityaffairs.com
//
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.
Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs had been preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle, and fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, chilling free expression and innovation.
In addition to the privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes, including the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These actors used ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from exploitation.
References: iHLS News@iHLS
//
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations in which its AI models were misused for cyberattacks, political influence, and even employment scams. Chinese operations were identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme an OpenAI investigator discovered by accident.
OpenAI also shut down a Russian influence campaign that used ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.
References: Alex Simons@Microsoft Security Blog
//
References: Microsoft Security Blog, Davey Winder
Microsoft is grappling with ongoing issues in its Windows Updates, releasing another out-of-band patch to address problems caused by a previous update. The May Patch Tuesday update had failed to install correctly on some Windows 11 virtual machines, leaving them in recovery mode with an "ACPI.sys" error. KB5062170 aims to resolve this boot error, which affected Windows 11 23H2 and 22H2 systems. It does not, however, fix a separate issue causing blurry CJK fonts in Chromium browsers at 100 percent scaling; as a workaround, users must increase scaling to 125 or 150 percent. The increasing frequency of these out-of-band fixes highlights ongoing challenges with Microsoft's quality control, affecting both consumer and enterprise users.
Alongside addressing update failures, Microsoft is actively developing AI capabilities and integrating them into its services. While specific details are limited, Microsoft is working toward building a "robust and sophisticated set of agents" across various fields and is looking at evolving identity standards. This vision involves AI agents that can proactively identify problems, suggest solutions, and maintain context across conversations, going beyond simple request-response interactions. The company recently launched a public preview of its Conditional Access Optimizer Agent and is investing in agents for developer and operations workflows.
In the realm of cybersecurity, Microsoft Threat Intelligence has identified a new Russia-affiliated threat actor named Void Blizzard, active since at least April 2024. Void Blizzard is engaged in worldwide cloud abuse and cyberespionage, targeting organizations of interest to Russia in critical sectors such as government, defense, transportation, media, NGOs, and healthcare, primarily in Europe and North America. The discovery underscores the ongoing need for vigilance and proactive threat detection in the face of evolving cyber threats.
References: @medium.com
//
The rise of artificial intelligence has sparked intense debate about the best approach to regulation. Many believe that AI's rapid development requires careful management to mitigate potential risks. Some experts are suggesting a shift from rigid regulatory "guardrails" to more adaptable "leashes," enabling innovation while ensuring responsible use. The aim is to foster economic growth and technological progress while safeguarding public safety and ethical considerations.
The concept of "leashes" in AI regulation proposes a flexible, management-based approach that allows AI tools to explore new domains without restrictive barriers. Unlike fixed "guardrails," experts say, leashes provide a tethered structure that can keep AI from "running away." This approach acknowledges the heterogeneous and dynamic nature of AI, recognizing that prescriptive regulations may not suit such a rapidly evolving field.
Turning to cybersecurity, experts suggest building security from first principles using foundation models. This means reimagining cybersecurity strategies from the ground up, much as Netflix transformed entertainment and Visa tackled fraud detection. Instead of layering more tools and rules onto existing systems, the emphasis is on developing sophisticated models that can learn, adapt, and improve automatically, enabling proactive identification and mitigation of threats.
References: Waqas@hackread.com
//
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, which amounted to approximately 47.42 gigabytes of data, was found on a misconfigured cloud server and lacked both password protection and encryption. Fowler, from Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, leading to the database being removed from public access.
The exposed credentials included usernames and passwords for a vast array of online services, including major tech platforms like Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals from numerous countries, posing a significant risk to individuals and organizations. Fowler confirmed the data's authenticity by contacting several individuals whose email addresses appeared in the database; they verified that the passwords were valid.
The origin and purpose of the database remain unclear, with no identifying information about its owner or collector. The sheer scope and diversity of the login details suggest the data may have been compiled by cybercriminals using infostealer malware. Fowler described the find as "one of the most dangerous discoveries" he has made in a very long time. The database's IP address pointed to two domain names, one of them unregistered, further obscuring the identity of the data's owner and its intended use.
References: @ketteringhealth.org
//
Kettering Health, a healthcare network operating 14 medical centers and over 120 outpatient facilities in western Ohio, has been hit by a ransomware attack causing a system-wide technology outage. The cyberattack, which occurred on Tuesday, May 20, 2025, has forced the cancellation of elective inpatient and outpatient procedures and has disrupted access to critical patient care systems, including phone lines, the call center, and the MyChart patient portal. Emergency services remain operational, but emergency crews are being diverted to other facilities due to the disruption. Kettering Health has confirmed they are responding to the cybersecurity incident involving unauthorized access to its network and has taken steps to contain and mitigate the breach, while actively investigating the situation.
The ransomware attack is suspected to be the work of the Interlock ransomware gang, which emerged last fall and has targeted various sectors, including tech and manufacturing firms and government organizations. A ransom note, viewed by CNN, claimed the attackers had secured Kettering Health's most vital files and threatened to leak stolen data unless the health network began negotiating an extortion fee. In response, Kettering Health has canceled elective procedures and is rescheduling them for a later date. The organization is also cautioning patients about scam calls from individuals posing as Kettering Health team members and requesting credit card payments, and it has halted normal billing calls as a precaution.
The incident highlights the growing cybersecurity challenges facing healthcare systems. According to cybersecurity experts, healthcare networks often run outdated technology and lack comprehensive cybersecurity training for staff, making them vulnerable to attack. Experts are calling for investment in healthcare cybersecurity, recommending that the government and its partners shore up understaffed healthcare cyber programs by adjusting federal healthcare funding to cover critical cybersecurity expenditures, expanding healthcare cybersecurity workforces, and incentivizing cyber maturity.