News from the AI & ML world

DeeperML - #cybersecurity

info@thehackernews.com (The Hacker News) //
The Rare Werewolf APT group, also known as Librarian Ghouls and Rezet, has been actively targeting Russian enterprises and engineering schools since at least 2019, with activity continuing through May 2025. This advanced persistent threat group distinguishes itself by primarily utilizing legitimate third-party software instead of developing its own malicious tools. The attacks are characterized by the use of command files and PowerShell scripts to establish remote access to compromised systems, steal credentials, and deploy the XMRig cryptocurrency miner. The campaign has impacted hundreds of Russian users, with additional infections reported in Belarus and Kazakhstan.

The group's initial infection vector typically involves targeted phishing emails containing password-protected archives with executable files disguised as official documents or payment orders. Once the victim opens the attachment, the attackers deploy a legitimate tool called 4t Tray Minimizer to obscure their presence on the compromised system. They also use tools like Defender Control to disable antivirus software and Blat, a legitimate utility, to send stolen data via SMTP. The attackers actively refine their tactics: a brief lull in December 2024 was immediately followed by a new wave of attacks.
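
As an illustration of the lure format described above, the sketch below flags archive members whose filenames imitate documents but end in an executable extension (for example, a hypothetical payment_order.pdf.exe). It is a minimal detection idea prompted by the reporting, not a tool from the researchers; the extension lists and the archive filename are assumptions.

```python
import zipfile

# Extensions attackers pair with document-looking names, per the reported
# lure style (e.g. a hypothetical "payment_order.pdf.exe" inside an archive).
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs"}
DOCUMENT_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".rtf"}

def suspicious_members(archive_path: str) -> list[str]:
    """List archive members that imitate documents but end in an executable extension."""
    flagged = []
    with zipfile.ZipFile(archive_path) as zf:
        # Member names are readable even in password-protected zips,
        # since standard zip encryption does not hide filenames.
        for name in zf.namelist():
            parts = name.lower().rsplit("/", 1)[-1].split(".")
            if len(parts) >= 3:  # base name plus at least two extensions
                inner, outer = "." + parts[-2], "." + parts[-1]
                if inner in DOCUMENT_EXTS and outer in EXECUTABLE_EXTS:
                    flagged.append(name)
    return flagged

if __name__ == "__main__":
    # "suspect_attachment.zip" is a placeholder; a mail gateway would
    # scan attachments in flight rather than a file on disk.
    for member in suspicious_members("suspect_attachment.zip"):
        print(f"[!] double-extension lure: {member}")
```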

A key aspect of the Rare Werewolf APT's strategy involves the use of a Windows batch script that launches a PowerShell script, scheduling the victim system to wake up at 1 AM local time and providing a four-hour window for remote access via AnyDesk. The machine is then shut down at 5 AM through a scheduled task, minimizing the chance of detection. The attackers also collect information about available CPU cores and GPUs to optimally configure the crypto miner. Besides cryptomining, the group has also been known to steal sensitive documents, passwords, and compromise Telegram accounts.
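
The wake-at-1-AM, shut-down-at-5-AM pattern is concrete enough to hunt for. The sketch below is a rough defensive example rather than a known detection rule: it queries Windows scheduled tasks and flags ones that start in that overnight window or invoke a shutdown. Note that schtasks column headers are locale-dependent, so the English names here are an assumption.

```python
import csv
import io
import subprocess

def hunt_overnight_tasks() -> None:
    """Flag scheduled tasks that start around 1 AM or 5 AM, or power the host off."""
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(out)):
        if row.get("TaskName") == "TaskName":
            continue  # schtasks repeats the CSV header between task folders
        start = row.get("Start Time") or ""
        action = (row.get("Task To Run") or "").lower()
        # Time format varies by locale; this handles "01:00:00" and "1:00:00 AM".
        hour = start.split(":", 1)[0].strip()
        overnight = hour in {"1", "01", "5", "05"} and "PM" not in start.upper()
        if overnight or "shutdown" in action:
            print(f"[?] {row.get('TaskName')}: start={start!r} action={action!r}")

if __name__ == "__main__":
    hunt_overnight_tasks()  # Windows only; run elevated to see all tasks
```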

Recommended read:
References :
  • The Hacker News: Research focusing on the group's methods, including its use of legitimate software.
  • therecord.media: Report of the malicious campaign targeting Russian enterprises.

sjvn01@Practical Technology //
Cisco is making significant strides in integrating artificial intelligence into its networking and data center solutions. They are releasing a range of new products and updates that leverage AI to enhance security and automate network tasks, with a focus on supporting AI adoption for enterprise IT. These new "AgenticOps" tools will enable the orchestration of AI agents with a high degree of autonomy within enterprise environments, aiming to streamline complex system management. Cisco's strategy includes a focus on secure network architectures and AI-driven policies to combat emerging threats, including rogue AI agents.

The networking giant is also strengthening its data center strategy through an expanded partnership with NVIDIA. This collaboration is designed to establish a new standard for secure, scalable, and high-performance enterprise AI. The Cisco AI Defense and Hypershield security solutions utilize NVIDIA AI to deliver enhanced visibility, validation, and runtime protection across AI workflows. This partnership builds upon the Cisco Secure AI Factory with NVIDIA, aiming to provide continuous monitoring and protection throughout the AI lifecycle, from data ingestion to model deployment.

Furthermore, Cisco is enhancing AI networking performance to meet the demands of data-intensive AI workloads. This includes Cisco Intelligent Packet Flow, which dynamically steers traffic using real-time telemetry, and NVIDIA Spectrum-X, an AI-optimized Ethernet platform that delivers high-throughput and low-latency connectivity. By offering end-to-end visibility and unified monitoring across networks and GPUs, Cisco and NVIDIA are enabling enterprises to maintain zero-trust security across distributed AI environments, regardless of where data and workloads are located.

Recommended read:
References :
  • Practical Technology: Serious about running your own AI infrastructure? Consider Cisco’s latest offerings.
  • WhatIs: The networking giant released a slew of products leveraging the capabilities of last year's Splunk acquisition and touting a focus on AI adoption support.
  • www.zdnet.com: AgenticOps tools are a way to 'orchestrate' agents that will have a high degree of autonomy in the enterprise campus.
  • blogs.nvidia.com: Cisco and NVIDIA are helping set a new standard for secure, scalable and high-performance enterprise AI.

Eric Geller@cybersecuritydive.com //
SentinelOne, a cybersecurity firm, has revealed that it was the target of a year-long reconnaissance campaign by China-linked espionage groups, identified as APT15 and UNC5174. This campaign, dubbed "PurpleHaze," involved network reconnaissance and intrusion attempts, ultimately aiming to gather strategic intelligence and potentially establish access for future conflicts. SentinelOne discovered the campaign when the suspected Chinese spies tried to break into the security vendor's own servers in October 2024. The attempted intrusion on SentinelOne's systems failed, but it prompted a deeper investigation into the broader campaign and the malware being used.

The investigation revealed that over 70 organizations across multiple sectors globally were targeted, including a South Asian government entity and a European media organization. The attacks spanned from July 2024 to March 2025 and involved the use of ShadowPad malware and post-exploitation espionage activity. These targeted sectors include manufacturing, government, finance, telecommunications, and research. The coordinated attacks are believed to be connected to Chinese government spying programs.

SentinelOne has expressed high confidence that the PurpleHaze and ShadowPad activity clusters can be attributed to China-nexus threat actors. This incident underscores the persistent threat that Chinese cyber espionage actors pose to global industries and public sector organizations. The attack on SentinelOne also highlights that cybersecurity vendors themselves are prime targets for these groups, given their deep visibility into client environments and their ability to disrupt adversary operations. SentinelOne recommends that more proactive steps be taken to prevent future attacks.

Recommended read:
References :
  • The Register - Security: Chinese spy crew appears to be preparing for conflict by backdooring 75+ critical orgs
  • hackread.com: Chinese-Linked Hackers Targeted 70+ Global Organizations, SentinelLABS
  • www.scworld.com: Failed attack on SentinelOne reveals campaign by China-linked groups
  • The Hacker News: Over 70 Organizations Across Multiple Sectors Targeted by China-Linked Cyber Espionage Group
  • www.cybersecuritydive.com: SentinelOne rebuffs China-linked attack — and discovers global intrusions
  • SecureWorld News: Chinese Hackers Target SentinelOne in Broader Espionage Campaign
  • securityaffairs.com: China-linked threat actor targeted +70 orgs worldwide, SentinelOne warns
  • Cyber Security News: New Report Reveals Chinese Hackers Attempted to Breach SentinelOne Servers
  • www.sentinelone.com: The security firm said the operatives who tried to breach it turned out to be responsible for cyberattacks on dozens of critical infrastructure organizations worldwide.
  • BleepingComputer: SentinelOne shares new details on China-linked breach attempt
  • cyberpress.org: A newly published technical analysis by SentinelLABS has exposed a sophisticated, multi-phase reconnaissance and intrusion campaign orchestrated by Chinese-nexus threat actors, aimed explicitly at SentinelOne’s digital infrastructure between mid-2024 and early 2025.
  • gbhackers.com: New Report Reveals Chinese Hackers Attempted to Breach SentinelOne Servers
  • industrialcyber.co: SentinelOne links ShadowPad and PurpleHaze attacks to China-aligned threat actors

Pierluigi Paganini@securityaffairs.com //
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.

OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information.

The report also details specific operations like ScopeCreep, in which a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware, debug code in multiple languages, and set up command-and-control infrastructure. This malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.

Recommended read:
References :
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • The Register - Security: OpenAI boots accounts linked to 10 malicious campaigns
  • hackread.com: OpenAI, a leading artificial intelligence company, has revealed it is actively fighting widespread misuse of its AI tools…
  • Metacurity: OpenAI banned ChatGPT accounts tied to Russian and Chinese hackers using the tool for malware, social media abuse, and U.S.

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs were being preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.

Recommended read:
References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from The New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes. This is likely only catching a fraction of nation-state use.
  • www.zdnet.com: How global threat actors are weaponizing AI now, according to OpenAI
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes. This is likely only catching a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Policy - Ars Technica: OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

iHLS News@iHLS //
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released OpenAI report highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies such as ChatGPT to manipulate content and spread disinformation, and its examples point to a steady evolution in how AI is being integrated into covert digital strategies.

OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator.

Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.

Recommended read:
References :
  • Schneier on Security: Report on the Malicious Uses of AI
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • www.zdnet.com: The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • cyberpress.org: CyberPress article on OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
  • securityaffairs.com: SecurityAffairs article on OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

Alex Simons@Microsoft Security Blog //
Microsoft is grappling with ongoing issues related to its Windows updates, releasing another out-of-band patch to address problems caused by a previous update. The May Patch Tuesday update had failed to install correctly on some Windows 11 virtual machines, leaving them in recovery mode with an "ACPI.sys" error. KB5062170 aims to resolve this boot error, which affected Windows 11 23H2 and 22H2 systems. It does not, however, fix a separate issue causing blurry CJK fonts in Chromium browsers at 100 percent scaling; as a workaround, users must increase scaling to 125 or 150 percent. The increasing frequency of these out-of-band fixes highlights ongoing challenges with Microsoft's quality control, affecting both consumer and enterprise users.

Alongside addressing update failures, Microsoft is actively developing AI capabilities and integrating them into its services. While specific details are limited, Microsoft is working towards building a "robust and sophisticated set of agents" across various fields and is looking at evolving identity standards. This future vision involves AI agents that can proactively identify problems, suggest solutions, and maintain context across conversations, going beyond simple request-response interactions. The company recently launched a public preview of its Conditional Access Optimizer Agent and is investing in agents for developer and operations workflows.

In the realm of cybersecurity, Microsoft Threat Intelligence has identified a new Russia-affiliated threat actor named Void Blizzard, active since at least April 2024. Void Blizzard is engaging in worldwide cloud abuse activity and cyberespionage, targeting organizations of interest to Russia in critical sectors such as government, defense, transportation, media, NGOs, and healthcare, primarily in Europe and North America. This discovery underscores the ongoing need for vigilance and proactive threat detection in the face of evolving cyber threats.

Recommended read:
References :
  • Microsoft Security Blog: Our industry needs to continue working together on identity standards for agent access across systems. Read about how Microsoft is building a robust and sophisticated set of agents.
  • Davey Winder: Microsoft has confirmed that Windows Update is changing — here's what you need to know.
  • www.microsoft.com: Microsoft Threat Intelligence has discovered a cluster of worldwide cloud abuse activity conducted by a threat actor we track as Void Blizzard, who we assess with high confidence is Russia-affiliated and has been active since at least April 2024.

@medium.com //
The rise of artificial intelligence has sparked intense debate about the best approach to regulation. Many believe that AI's rapid development requires careful management to mitigate potential risks. Some experts are suggesting a shift from rigid regulatory "guardrails" to more adaptable "leashes," enabling innovation while ensuring responsible use. The aim is to foster economic growth and technological progress while safeguarding public safety and ethical considerations.

The concept of "leashes" in AI regulation proposes a flexible, management-based approach, allowing AI tools to explore new domains without restrictive barriers. Unlike fixed "guardrails," experts say, leashes provide a tethered structure that keeps AI from "running away." This approach acknowledges the heterogeneous and dynamic nature of AI, recognizing that prescriptive regulations may not be suitable for such a rapidly evolving field.

Focusing on cybersecurity, experts suggest building security from first principles using foundation models. This entails reimagining cybersecurity strategies from the ground up, similar to how Netflix transformed entertainment and Visa tackled fraud detection. Instead of layering more tools and rules onto existing systems, the emphasis is on developing sophisticated models that can learn, adapt, and improve automatically, enabling proactive identification and mitigation of threats.

Recommended read:
References :
  • medium.com: The Cybersecurity Transformation: Building Security from First Principles with Foundation Models

Waqas@hackread.com //
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, which amounted to approximately 47.42 gigabytes of data, was found on a misconfigured cloud server and lacked both password protection and encryption. Fowler, from Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, leading to the database being removed from public access.

The exposed credentials included usernames and passwords for a vast array of online services, including major tech platforms like Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals from numerous countries, posing a significant risk to individuals and organizations. The authenticity of the data was confirmed by Fowler, who contacted several individuals whose email addresses were listed in the database, and they verified that the passwords were valid.

The origin and purpose of the database remain unclear, with no identifying information about its owner or collector. The sheer scope and diversity of the login details suggest that the data may have been compiled by cybercriminals using infostealer malware. Jeremiah Fowler described the find as "one of the most dangerous discoveries" he has found in a very long time. The database's IP address pointed to two domain names, one of which was unregistered, further obscuring the identity of the data's owner and intended use.
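
For readers wondering how such an exposure is even possible: an Elasticsearch node left open to the internet answers unauthenticated REST calls, so anyone who finds the address can list and read its indices. The sketch below is a generic, hypothetical check with a placeholder hostname, intended only for infrastructure you are authorized to test; it is not connected to the server Fowler found.

```python
import json
import urllib.request

def check_open_elastic(host: str, port: int = 9200, timeout: float = 5.0) -> None:
    """Report whether an Elasticsearch node answers REST calls without credentials."""
    for path in ("/", "/_cat/indices?format=json"):
        url = f"http://{host}:{port}{path}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = json.loads(resp.read().decode())
        except Exception as exc:
            print(f"{url}: no unauthenticated access ({exc})")
            continue
        # An answer here means cluster metadata and index listings are public.
        print(f"[!] {url} answered without credentials:")
        print(json.dumps(body, indent=2)[:500])

if __name__ == "__main__":
    # Placeholder host: only probe systems you own or have permission to test.
    check_open_elastic("db.example.internal")
```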

Recommended read:
References :
  • hackread.com: Database Leak Reveals 184 Million Infostealer-Harvested Emails and Passwords
  • PCMag UK security: Security Nightmare: Researcher Finds Trove of 184M Exposed Logins for Google, Apple, More
  • WIRED: Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials
  • www.zdnet.com: Massive data breach exposes 184 million passwords for Google, Microsoft, Facebook, and more
  • Davey Winder: 184,162,718 Passwords And Logins Leaked — Apple, Facebook, Snapchat
  • DataBreaches.Net: Mysterious database of 184 million records exposes vast array of login credentials
  • 9to5Mac: Apple logins with plain text passwords found in massive database of 184M records
  • www.engadget.com: Someone Found Over 180 Million User Records in an Unprotected Online Database
  • borncity.com: Suspected InfoStealer data leak exposes 184 million login data
  • databreaches.net: The possibility that data could be inadvertently exposed in a misconfigured or otherwise unsecured database is a longtime privacy nightmare that has been difficult to fully address.
  • borncity.com: Security researcher Jeremiah Fowler came across a freely accessible and unprotected database on the Internet. The find was quite something, as a look at the data sets suggests that it was probably data collected by InfoStealer malware.
  • securityonline.info: 184 Million Leaked Credentials Found in Open Database
  • Know Your Adversary: 184 Million Records Database Leak: Microsoft, Apple, Google, Facebook, PayPal Logins Found

@ketteringhealth.org //
Kettering Health, a healthcare network operating 14 medical centers and over 120 outpatient facilities in western Ohio, has been hit by a ransomware attack causing a system-wide technology outage. The cyberattack, which occurred on Tuesday, May 20, 2025, has forced the cancellation of elective inpatient and outpatient procedures and has disrupted access to critical patient care systems, including phone lines, the call center, and the MyChart patient portal. Emergency services remain operational, but emergency crews are being diverted to other facilities due to the disruption. Kettering Health has confirmed they are responding to the cybersecurity incident involving unauthorized access to its network and has taken steps to contain and mitigate the breach, while actively investigating the situation.

The ransomware attack is suspected to involve the Interlock ransomware gang, which emerged last fall and has targeted various sectors, including tech, manufacturing firms, and government organizations. A ransom note, viewed by CNN, claimed the attackers had secured Kettering Health's most vital files and threatened to leak stolen data unless the health network began negotiating an extortion fee. In response to the disruption, Kettering Health has canceled elective procedures and is rescheduling them for a later date. Additionally, the organization is cautioning patients about scam calls from individuals posing as Kettering Health team members requesting credit card payments and has halted normal billing calls as a precaution.

The incident highlights the increasing cybersecurity challenges facing healthcare systems. According to cybersecurity experts, healthcare networks often operate with outdated technology and lack comprehensive cybersecurity training for staff, making them vulnerable to attacks. Experts are calling for investment in healthcare cybersecurity, recommending that the government and its partners address understaffed healthcare cyber programs by adjusting federal healthcare funding to cover critical cybersecurity expenditures, expanding healthcare cybersecurity workforces, and incentivizing cyber maturity.

Recommended read:
References :
  • industrialcyber.co: Ransomware suspected in Kettering Health cyberattack disrupting patient services, canceling elective procedures
  • BleepingComputer: Kettering Health, a healthcare network that operates 14 medical centers in Ohio, was forced to cancel inpatient and outpatient procedures following a cyberattack that caused a system-wide technology outage.
  • DataBreaches.Net: Elective inpatient and outpatient procedures were canceled.
  • thecyberexpress.com: Kettering Health Hit by Cyberattack: Network Outage and Scam Calls Reported
  • The DefendOps Diaries: Strengthening Cybersecurity in Healthcare: Lessons from the Kettering Health Ransomware Attack
  • BleepingComputer: Kettering Health hit by system-wide outage after ransomware attack
  • The Dysruption Hub: Ransomware Attack Cripples Kettering Health Systems Across Ohio
  • Kettering Health faces a ransomware attack and confirms a scam targeting its patients
  • www.scworld.com: Apparent ransomware attack leads to systemwide outage for Kettering Health
  • www.itpro.com: The incident at Kettering Health disrupted procedures for patients
  • www.cybersecuritydive.com: Ohio’s Kettering Health hit by cyberattack

Nicole Kobie@itpro.com //
The FBI has issued a warning regarding a major fraud campaign where cybercriminals are using AI-generated audio deepfakes and text messages to impersonate senior U.S. government officials. This scheme, which has been active since April 2025, targets current and former federal and state officials, along with their contacts, aiming to gain access to their personal accounts. The attackers are employing tactics known as smishing (SMS phishing) and vishing (voice phishing) to establish rapport before attempting to compromise accounts, potentially leading to the theft of sensitive information or funds.

The FBI advises that if individuals receive a message claiming to be from a senior U.S. official, they should not assume it is authentic. The agency suggests verifying the communication through official channels, such as calling back using the official number of the relevant department, rather than the number provided in the suspicious message. Additionally, recipients should be wary of unusual verbal tics or word choices that could indicate a deepfake in operation.

This warning comes amidst a surge in social engineering attacks leveraging AI-based voice cloning. A recent report indicated a 442% increase in the use of AI voice cloning between the first and second halves of 2024. Experts caution that the stolen credentials or information obtained through these schemes could be used to further impersonate officials, spread disinformation, or commit financial fraud, highlighting the increasing sophistication and potential damage of AI-enhanced fraud.

Recommended read:
References :
  • Threats | CyberScoop: FBI warns of fake texts, deepfake calls impersonating senior U.S. officials
  • Talkback Resources: Deepfake voices of senior US officials used in scams: FBI
  • thecyberexpress.com: The Federal Bureau of Investigation (FBI) has released a public service announcement warning about a growing threat involving text and voice messaging scams. Since April 2025, malicious actors have been impersonating senior U.S. government officials to target individuals, especially current or former senior federal and state officials and their contacts. The FBI describes a coordinated campaign of smishing and vishing, two techniques used to deceive people into revealing sensitive information or granting unauthorized access to their personal accounts.
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • The Register - Software: The FBI has warned that fraudsters are impersonating "senior US officials" using deepfakes as part of a major fraud campaign.
  • www.cybersecuritydive.com: Hackers are increasingly using vishing and smishing for state-backed espionage campaigns and major ransomware attacks.
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • hackread.com: FBI warns of AI Voice Scams Impersonating US Govt Officials
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.

@www.webroot.com //
Cybercriminals are increasingly using sophisticated tactics to deceive individuals and steal sensitive information. One common method involves sending fraudulent text messages, known as smishing, that impersonate legitimate businesses like delivery services or banks. These scams often entice victims to click on malicious links, leading to identity theft, financial loss, or the installation of malware. Webroot emphasizes mobile security, particularly protecting phones from text scams that can lead to identity theft and planted malware. The Federal Trade Commission reported that consumers lost $470 million to scams initiated through text messages in 2024.

Google is intensifying its efforts to combat these online threats by integrating artificial intelligence across its various platforms. The company is leveraging AI in Search, Chrome, and Android to identify and block scam attempts more effectively. Google's AI-powered defenses are capable of detecting 20 times more scam pages than before, significantly improving the quality of search results. Furthermore, AI is used to identify fraudulent websites, app notifications, calls, and direct messages, helping to safeguard users from various scam tactics.

A key component of Google's enhanced protection is the integration of Gemini Nano, a lightweight, on-device AI model, into Chrome. This allows for instant identification of scams, even those that haven't been previously encountered. When a user navigates to a potentially dangerous page, Chrome evaluates the page using Gemini Nano, which extracts security signals to determine the intent of the page. This information is then sent to Safe Browsing for a final verdict, adding an extra layer of protection against evolving online threats.
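
Google has not published the internals of this pipeline, so the sketch below is only a loose, hypothetical analogue of the two-stage flow described: a cheap on-device pass extracts signals from page text, and only suspicious pages are escalated for a server-side verdict. All phrase lists, thresholds, and function names are invented for illustration.

```python
from dataclasses import dataclass

# Invented signals for illustration; Google's real features are not public.
SCAM_PHRASES = ("your computer is infected", "call support now", "do not close this window")

@dataclass
class PageSignals:
    url: str
    scam_phrase_hits: int
    demands_phone_call: bool

def extract_signals(url: str, page_text: str) -> PageSignals:
    """Stage 1: cheap on-device pass over the rendered page text."""
    text = page_text.lower()
    return PageSignals(
        url=url,
        scam_phrase_hits=sum(p in text for p in SCAM_PHRASES),
        demands_phone_call="call" in text and any(ch.isdigit() for ch in text),
    )

def needs_server_verdict(signals: PageSignals) -> bool:
    """Escalate only suspicious pages, keeping most browsing data on-device."""
    return signals.scam_phrase_hits >= 1 or signals.demands_phone_call

def final_verdict(signals: PageSignals) -> str:
    """Stage 2: stand-in for the server-side reputation check (e.g. Safe Browsing)."""
    return "block" if signals.scam_phrase_hits >= 2 else "warn"

if __name__ == "__main__":
    page = "WARNING: your computer is infected. Call support now at 1-800-000-0000."
    s = extract_signals("http://scam.example", page)
    print(final_verdict(s) if needs_server_verdict(s) else "allow")
```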

Recommended read:
References :
  • www.eweek.com: Google is intensifying efforts to combat online scams by integrating artificial intelligence across Search, Chrome, and Android, aiming to make fraud more difficult for cybercriminals.
  • www.webroot.com: It all starts so innocently. You get a text saying “Your package couldn’t be delivered. Click here to reschedule.” Little do you know, clicking that link could open the door for scammers to steal your identity, empty your bank account, or even plant malicious software (malware) on your device. Unless you know what to look out for…

@owaspai.org //
The Open Worldwide Application Security Project (OWASP) is actively shaping the future of AI regulation through its AI Exchange project. This initiative fosters collaboration between the global security community and formal standardization bodies, driving the creation of AI security standards designed to protect individuals and businesses while encouraging innovation. By establishing a formal liaison with international standardization organizations like CEN/CENELEC, OWASP is enabling its vast network of security professionals to directly contribute to the development of these crucial standards, ensuring they are practical, fair, and effective.

OWASP's influence is already evident in the development of key AI security standards, notably impacting the AI Act, a European Commission initiative. Through the contributions of experts like Rob van der Veer, who founded the OWASP AI Exchange, the project has provided significant input to ISO/IEC 27090, the global standard on AI security guidance. The OWASP AI Exchange serves as an open-source platform where experts collaborate to shape these global standards, ensuring a balance between strong security measures and the flexibility needed to support ongoing innovation.

The OWASP AI Exchange provides over 200 pages of practical advice and references on protecting AI and data-centric systems from threats. This resource serves as a bookmark for professionals and actively contributes to international standards, demonstrating the consensus on AI security and privacy through collaboration with key institutes and Standards Development Organizations (SDOs). The foundation of OWASP's approach lies in risk-based thinking, tailoring security measures to specific contexts rather than relying on a one-size-fits-all checklist, addressing the critical need for clear guidance and effective regulation in the rapidly evolving landscape of AI security.

Recommended read:
References :
  • OWASP: OWASP Enables AI Regulation That Works with OWASP AI Exchange
  • Bernard Marr: Take These Steps Today To Protect Yourself Against AI Cybercrime

Siôn Geschwindt@The Next Web //
Quantum computing is rapidly advancing, presenting both opportunities and challenges. Researchers at Toshiba Europe have achieved a significant milestone by transmitting quantum-encrypted messages over a record distance of 254km using standard fiber optic cables. This breakthrough, facilitated by quantum key distribution (QKD) cryptography, marks the first instance of coherent quantum communication via existing telecom infrastructure. QKD leverages the principles of quantum mechanics to securely share encryption keys, making eavesdropping virtually impossible, as any attempt to intercept the message would immediately alert both parties involved.
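
The eavesdropping-alerts-both-parties property can be seen in a toy simulation of BB84, the classic QKD protocol (Toshiba's actual system is far more sophisticated; this is purely illustrative). Measuring a qubit in the wrong basis randomizes it, so an interceptor injects roughly 25 percent errors into the sifted key, which the two parties detect by comparing a sample of bits.

```python
import secrets

def random_bits(n: int) -> list[int]:
    return [secrets.randbelow(2) for _ in range(n)]

def bb84(n: int = 2000, eavesdrop: bool = False) -> float:
    """Simulate BB84 and return the error rate Alice and Bob observe."""
    alice_bits, alice_bases = random_bits(n), random_bits(n)
    qubits = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # Eve measures in random bases and re-sends what she saw
        eve_bases = random_bits(n)
        qubits = [
            (bit if basis == eb else secrets.randbelow(2), eb)
            for (bit, basis), eb in zip(qubits, eve_bases)
        ]

    # Bob measures each qubit in a random basis; a basis mismatch yields noise.
    bob_bases = random_bits(n)
    bob_bits = [
        bit if basis == bb else secrets.randbelow(2)
        for (bit, basis), bb in zip(qubits, bob_bases)
    ]

    # Sifting: keep only positions where Alice's and Bob's bases matched,
    # then estimate the error rate on the sifted key.
    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

if __name__ == "__main__":
    print(f"error rate, quiet channel: {bb84(eavesdrop=False):.2%}")  # ~0%
    print(f"error rate, intercepted:   {bb84(eavesdrop=True):.2%}")   # ~25%
```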

This advance addresses growing concerns among European IT professionals, 67% of whom fear that quantum computing could compromise current encryption standards. Unlike classical computers, which would take an impractical amount of time to break modern encryption, quantum computers can exploit phenomena like superposition and entanglement to potentially crack even the most secure classical encryption schemes within minutes. This has prompted governments and organizations worldwide to accelerate the development of robust cryptographic algorithms capable of withstanding quantum attacks.

Efforts are underway to build quantum-secure communication infrastructure. Heriot-Watt University recently inaugurated a £2.5 million Optical Ground Station (HOGS) to promote satellite-based quantum-secure communication. In July 2024, Toshiba Europe, GÉANT, PSNC, and Anglia Ruskin University demonstrated cryogenics-free QKD over a 254 km fiber link, using standard telecom racks and room temperature detectors. Initiatives such as Europe’s EuroQCI and ESA’s Eagle-1 satellite further underscore the commitment to developing and deploying quantum-resistant technologies, mitigating the silent threat that quantum computing poses to cybersecurity.

Recommended read:
References :
  • The Next Web: Researchers at Toshiba Europe have used quantum key distribution (QKD) cryptography to send messages a record 254km using a traditional fiber optic cable network.
  • medium.com: Rethinking Cybersecurity in the Face of Emerging Threats
  • medium.com: Quantum Security: The Silent Threat Coming for Your Business

@industrialcyber.co //
The UK's National Cyber Security Centre (NCSC) has issued a warning that critical systems in the United Kingdom face increasing risks due to AI-driven vulnerabilities. The agency highlighted a growing 'digital divide' between organizations capable of defending against AI-enabled threats and those that are not, exposing the latter to greater cyber risk. According to a new report, developments in AI are expected to accelerate the exploitation of software vulnerabilities by malicious actors, intensifying cyber threats by 2027.

The report, presented at the NCSC's CYBERUK conference, predicts that AI will significantly enhance the efficiency and effectiveness of cyber intrusions. Paul Chichester, NCSC director of operations, stated that AI is transforming the cyber threat landscape by expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities. He emphasized the need for organizations to implement robust cybersecurity practices across their AI systems and dependencies, ensuring up-to-date defenses.

The NCSC assessment emphasizes that by 2027, AI-enabled tools will almost certainly improve threat actors' ability to exploit known vulnerabilities, leading to a surge in attacks against systems lacking security updates. With the time between vulnerability disclosure and exploitation already shrinking, AI is expected to further reduce this timeframe. The agency urges organizations to adopt its guidance on securely implementing AI tools while maintaining strong cybersecurity measures across all systems.

Recommended read:
References :
  • Industrial Cyber: GCHQ’s National Cyber Security Centre (NCSC) has warned that U.K. critical systems are facing growing risks due to...
  • NCSC News Feed: NCSC warns that organisations unable to defend AI-enabled threats are exposed to greater cyber risk and AI-driven vulnerabilities and threats
  • Tech Monitor: Report predicts that AI will enhance cyber intrusion efficiency and effectiveness by 2027

info@thehackernews.com (The Hacker News) //
Google is ramping up its AI integration across various platforms to enhance user security and accessibility. The tech giant is deploying AI models in Chrome to detect and block online scams, protecting users from fraudulent websites and suspicious notifications. These AI-powered systems are already proving effective in Google Search, blocking hundreds of millions of scam results daily and significantly reducing fake airline support pages by over 80 percent. Google is also using AI in a new iOS feature called Simplify, which leverages Gemini's large language models to translate dense technical jargon into plain, readable language, making complex information more accessible.

Google's Gemini is also seeing updates in other areas, including new features for simplification and potentially expanded access for younger users. The Simplify feature, accessible via the Google App on iOS, aims to break down technical jargon found in legal contracts or medical reports. Google conducted a study showing improved comprehension among users who read Simplify-processed text; however, the study's limitations highlight the challenges in accurately gauging the full impact of AI-driven simplification. Google's plan to make Gemini available to users under 13 has also sparked concerns among parents and child safety experts, prompting Google to implement parental controls through Family Link and to assure that children's activity won't be used to train its AI models.

However, the integration of AI has also presented unforeseen challenges. A recent update to Gemini inadvertently broke content filters, affecting apps that rely on lowered guardrails, particularly those providing support for trauma survivors. The update has blocked incident reports related to sensitive topics, raising concerns about the limitations and potential biases of AI-driven content moderation, and it has left some developers of apps assisting trauma survivors with tools rendered useless by the changes.

Recommended read:
References :
  • techstrong.ai: Google’s plan to soon give under-13 youngsters access to its flagship artificial intelligence (AI) chatbot Gemini is raising hackles among parents and child safety experts, but offers the latest proof point of the risks tech companies are willing to take to reach more potential AI users.
  • www.eweek.com: Google is intensifying efforts to combat online scams by integrating artificial intelligence across Search, Chrome, and Android, aiming to make fraud more difficult for cybercriminals.
  • www.techradar.com: Tired of scams? Google is enlisting AI to protect you in Chrome, Google Search, and on Android.
  • www.tomsguide.com: Google is going to start using AI to keep you safe — here's how
  • The Official Google Blog: Image showing a shield in front of a computer, phone, search bar and several warning notifications
  • cyberinsider.com: Google plans to introduce a new security feature in Chrome 137 that uses on-device AI to detect tech support scams in real time.
  • PCMag UK security: A new version of Chrome coming this month will use Gemini Nano AI to help the browser stop scams that usually appear as annoying pop-ups.
  • Davey Winder: Google Confirms Android Attack Warnings — Powered By AI
  • www.zdnet.com: How Google's AI combats new scam tactics - and how you can stay one step ahead
  • THE DECODER: Google is now using AI models to protect Chrome users from online scams.
  • eWEEK: Google has rolled out a new iOS feature called Simplify that uses Gemini’s large language models to turn dense technical jargon such as what you would find in legal contracts or medical reports into plain, readable language without sacrificing key details.
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • Malwarebytes: Google Chrome will use AI to block tech support scam websites
  • security.googleblog.com: Posted by Jasika Bawa, Andy Lim, and Xinghui Lu, Google Chrome Security. Tech support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data.
  • iHLS: Google is rolling out new anti-scam capabilities in its Chrome browser, introducing a lightweight on-device AI model designed to spot fraudulent websites and alert users in real time.
  • bsky.app: Google will use on-device LLMs to detect potential tech support scams and alert Chrome users to possible dangers