Nicole Kobie@itpro.com
//
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI is advising the public that if you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.
The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish between real and fake messages. They also leverage publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks. The FBI has found that one method attackers use to gain access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. The use of AI-generated audio has increased sharply, as large language models have proliferated and improved their abilities to create lifelike audio. Once an account is compromised, it can be used in future attacks to target other government officials, their associates, and contacts by using trusted contact information they obtain. Stolen contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds. The FBI advises that the scammers are using software to generate phone numbers that are not attributed to specific phones, making them more difficult to trace. Individuals should be vigilant and follow standard security advice, such as not trusting unsolicited messages and verifying requests through official channels. Recommended read:
References :
@www.webroot.com
//
References:
www.eweek.com
, www.webroot.com
Cybercriminals are increasingly using sophisticated tactics to deceive individuals and steal sensitive information. One common method involves sending fraudulent text messages, known as smishing, that impersonate legitimate businesses like delivery services or banks. These scams often entice victims to click on malicious links, leading to identity theft, financial loss, or the installation of malware. Webroot emphasizes mobile security, particularly protecting phones from text scams with potential identity theft and malware planting. The Federal Trade Commission reported that consumers lost $470 million to scams initiated through text messages in 2024.
Google is intensifying its efforts to combat these online threats by integrating artificial intelligence across its various platforms. The company is leveraging AI in Search, Chrome, and Android to identify and block scam attempts more effectively. Google's AI-powered defenses are capable of detecting 20 times more scam pages than before, significantly improving the quality of search results. Furthermore, AI is used to identify fraudulent websites, app notifications, calls, and direct messages, helping to safeguard users from various scam tactics. A key component of Google's enhanced protection is the integration of Gemini Nano, a lightweight, on-device AI model, into Chrome. This allows for instant identification of scams, even those that haven't been previously encountered. When a user navigates to a potentially dangerous page, Chrome evaluates the page using Gemini Nano, which extracts security signals to determine the intent of the page. This information is then sent to Safe Browsing for a final verdict, adding an extra layer of protection against evolving online threats. Recommended read:
References :
@owaspai.org
//
References:
OWASP
, Bernard Marr
The Open Worldwide Application Security Project (OWASP) is actively shaping the future of AI regulation through its AI Exchange project. This initiative fosters collaboration between the global security community and formal standardization bodies, driving the creation of AI security standards designed to protect individuals and businesses while encouraging innovation. By establishing a formal liaison with international standardization organizations like CEN/CENELEC, OWASP is enabling its vast network of security professionals to directly contribute to the development of these crucial standards, ensuring they are practical, fair, and effective.
OWASP's influence is already evident in the development of key AI security standards, notably impacting the AI Act, a European Commission initiative. Through the contributions of experts like Rob van der Veer, who founded the OWASP AI Exchange, the project has provided significant input to ISO/IEC 27090, the global standard on AI security guidance. The OWASP AI Exchange serves as an open-source platform where experts collaborate to shape these global standards, ensuring a balance between strong security measures and the flexibility needed to support ongoing innovation. The OWASP AI Exchange provides over 200 pages of practical advice and references on protecting AI and data-centric systems from threats. This resource serves as a bookmark for professionals and actively contributes to international standards, demonstrating the consensus on AI security and privacy through collaboration with key institutes and Standards Development Organizations (SDOs). The foundation of OWASP's approach lies in risk-based thinking, tailoring security measures to specific contexts rather than relying on a one-size-fits-all checklist, addressing the critical need for clear guidance and effective regulation in the rapidly evolving landscape of AI security. Recommended read:
References :
Siôn Geschwindt@The Next Web
//
References:
The Next Web
, medium.com
,
Quantum computing is rapidly advancing, presenting both opportunities and challenges. Researchers at Toshiba Europe have achieved a significant milestone by transmitting quantum-encrypted messages over a record distance of 254km using standard fiber optic cables. This breakthrough, facilitated by quantum key distribution (QKD) cryptography, marks the first instance of coherent quantum communication via existing telecom infrastructure. QKD leverages the principles of quantum mechanics to securely share encryption keys, making eavesdropping virtually impossible, as any attempt to intercept the message would immediately alert both parties involved.
This advance addresses growing concerns among European IT professionals, with 67% fearing that quantum computing could compromise current encryption standards. Unlike classical computers, which would take an impractical amount of time to break modern encryption, quantum computers can exploit phenomena like superposition and entanglement to potentially crack even the most secure classical encryptions within minutes. This has prompted global governments and organizations to accelerate the development of robust cryptographic algorithms capable of withstanding quantum attacks. Efforts are underway to build quantum-secure communication infrastructure. Heriot-Watt University recently inaugurated a £2.5 million Optical Ground Station (HOGS) to promote satellite-based quantum-secure communication. In July 2024, Toshiba Europe, GÉANT, PSNC, and Anglia Ruskin University demonstrated cryogenics-free QKD over a 254 km fiber link, using standard telecom racks and room temperature detectors. Initiatives such as Europe’s EuroQCI and ESA’s Eagle-1 satellite further underscore the commitment to developing and deploying quantum-resistant technologies, mitigating the silent threat that quantum computing poses to cybersecurity. Recommended read:
References :
@industrialcyber.co
//
References:
Industrial Cyber
, NCSC News Feed
,
The UK's National Cyber Security Centre (NCSC) has issued a warning that critical systems in the United Kingdom face increasing risks due to AI-driven vulnerabilities. The agency highlighted a growing 'digital divide' between organizations capable of defending against AI-enabled threats and those that are not, exposing the latter to greater cyber risk. According to a new report, developments in AI are expected to accelerate the exploitation of software vulnerabilities by malicious actors, intensifying cyber threats by 2027.
The report, presented at the NCSC's CYBERUK conference, predicts that AI will significantly enhance the efficiency and effectiveness of cyber intrusions. Paul Chichester, NCSC director of operations, stated that AI is transforming the cyber threat landscape by expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities. He emphasized the need for organizations to implement robust cybersecurity practices across their AI systems and dependencies, ensuring up-to-date defenses. The NCSC assessment emphasizes that by 2027, AI-enabled tools will almost certainly improve threat actors' ability to exploit known vulnerabilities, leading to a surge in attacks against systems lacking security updates. With the time between vulnerability disclosure and exploitation already shrinking, AI is expected to further reduce this timeframe. The agency urges organizations to adopt its guidance on securely implementing AI tools while maintaining strong cybersecurity measures across all systems. Recommended read:
References :
@www.pwc.com
//
The UK's National Cyber Security Centre (NCSC) has issued warnings regarding the growing cyber threats intensified by artificial intelligence and the dangers of unpatched, end-of-life routers. The NCSC's report, "Impact of AI on cyber threat from now to 2027," indicates that threat actors are increasingly using AI to enhance existing tactics. These tactics include vulnerability research, reconnaissance, malware development, and social engineering, leading to a potential increase in both the volume and impact of cyber intrusions. The NCSC cautioned that a digital divide is emerging, with organizations unable to keep pace with AI-enabled threats facing increased risk.
The use of AI by malicious actors is projected to rise, and this poses significant challenges for businesses, especially those that are not prepared to defend against it. The NCSC noted that while advanced state actors may develop their own AI models, most threat actors will likely leverage readily available, off-the-shelf AI tools. Moreover, the implementation of AI systems by organizations can inadvertently increase their attack surface, creating new vulnerabilities that threat actors could exploit. Direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are techniques that could be used to gain access to wider systems. Alongside the AI threat, the FBI has issued alerts concerning the rise in cyberattacks targeting aging internet routers, particularly those that have reached their "End of Life." The FBI warned of TheMoon malware exploiting these outdated devices. Both the NCSC and FBI warnings highlight the importance of proactively replacing outdated hardware and implementing robust security measures to mitigate these risks. Recommended read:
References :
info@thehackernews.com (The@The Hacker News
//
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome will display a warning to the user, providing options to unsubscribe from notifications or view the blocked content while also allowing users to override the warning if they believe it's unnecessary. This system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats. The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend this feature to Chrome on Android devices later this year, further expanding protection to mobile users. This initiative follows criticism regarding Gmail phishing scams that mimic law enforcement, highlighting Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities. Recommended read:
References :
Dissent@DataBreaches.Net
//
The LockBit ransomware group, a major player in the Ransomware-as-a-Service (RaaS) sector, has suffered a significant data breach. On May 7, 2025, the group's dark web affiliate panels were defaced, revealing a link to a MySQL database dump containing sensitive operational information. This exposed data includes Bitcoin addresses, private communications with victim organizations, user credentials, and other details related to LockBit's illicit activities. The defacement message, "Don't do crime CRIME IS BAD xoxo from Prague," accompanied the data leak, suggesting a possible motive of disrupting or discrediting the ransomware operation.
The exposed data from LockBit's affiliate panel is extensive, including nearly 60,000 unique Bitcoin wallet addresses and over 4,400 victim negotiation messages spanning from December 2024 through April 2025. Security researchers have confirmed the authenticity of the leaked data, highlighting the severity of the breach. The LockBit operator, known as "LockBitSupp," acknowledged the breach but claimed that no private keys were compromised. Despite previous setbacks, such as the "Operation Cronos" law enforcement action in February 2024, LockBit had managed to rebuild its operations, making this recent breach a significant blow to their infrastructure. Analysis of the leaked information has uncovered a list of 20 critical Common Vulnerabilities and Exposures (CVEs) frequently exploited by LockBit in their attacks. These vulnerabilities span multiple vendors and technologies, including Citrix, PaperCut, Microsoft, VMware, Apache, F5 Networks, SonicWall, Fortinet, Ivanti, Fortra, and Potix. Additionally, the leaked negotiations revealed LockBit’s preference for Monero (XMR) cryptocurrency, offering discounts to victims who paid ransoms using this privacy-focused digital currency. Ransom demands typically ranged from $4,000 to $150,000, depending on the scale of the attack. Recommended read:
References :
@www.techmeme.com
//
References:
Ken Yeung
, venturebeat.com
According to a new Amazon Web Services (AWS) report, generative AI has become the top IT priority for global organizations in 2025, surpassing traditional IT investments like security tools. The AWS Generative AI Adoption Index, which surveyed 3,739 senior IT decision makers across nine countries, reveals that 45% of organizations plan to prioritize generative AI spending. This shift signifies a major change in corporate technology strategies as businesses aim to capitalize on AI's transformative potential. While security remains a priority, the broad range of use cases for AI is driving the accelerated adoption and increased budget allocation.
The AWS study highlights several key challenges to GenAI adoption, including a lack of skilled workforce, the cost of development, biases and hallucinations, lack of compelling use cases, and lack of data. Specifically, 55% of respondents cited a lack of skilled workers as a significant barrier. Despite these challenges, organizations are moving quickly to implement GenAI, with 44% having moved beyond the proof-of-concept phase into production deployment. The average organization has approximately 45 GenAI projects or experiments in various stages, with about 20 of them transitioning into production. In response to the growing importance of AI, 60% of companies have already appointed a dedicated AI executive, such as a Chief AI Officer (CAIO), to manage the complexity of AI initiatives. This executive-level commitment demonstrates the increasing recognition of AI’s strategic importance within organizations. Furthermore, many organizations are creating training plans to upskill their workforce for GenAI, indicating a proactive approach to address the talent gap. The focus on generative AI reflects the belief that it can drive automation, enhance creativity, and improve decision-making across various industries. Recommended read:
References :
@www.techmeme.com
//
A recent report from Amazon Web Services (AWS) indicates a significant shift in IT spending priorities for 2025. Generative AI has overtaken cybersecurity as the primary focus for global IT leaders, with 45% now prioritizing AI investments. This change underscores the increasing emphasis on implementing AI strategies and acquiring the necessary talent, even amidst ongoing skills shortages. The AWS Generative AI Adoption Index surveyed 3,739 senior IT decision makers across nine countries, including the United States, Brazil, Canada, France, Germany, India, Japan, South Korea, and the United Kingdom.
This move to prioritize generative AI doesn't suggest a neglect of security, according to Rahul Pathak, Vice President of Generative AI and AI/ML Go-to-Market at AWS. Pathak stated that customers' security remains a massive priority, and the surge in AI investment reflects the widespread recognition of AI's diverse applications and the pressing need to accelerate its adoption. The survey revealed that 90% of organizations are already deploying generative AI in some capacity, with 44% moving beyond experimental phases to production deployment, indicating a critical inflection point in AI adoption. The survey also highlights the emergence of new leadership roles within organizations to manage AI initiatives. Sixty percent of companies have already appointed a Chief AI Officer (CAIO) or equivalent, and an additional 26% plan to do so by 2026. This executive-level commitment reflects the growing strategic importance of AI, although the study cautions that nearly a quarter of organizations may still lack formal AI transformation strategies by 2026. These companies are planning ways to bridge the gen AI talent gap this year by creating training plans to upskill their workforce for GenAI. Recommended read:
References :
@blogs.nvidia.com
//
Oracle Cloud Infrastructure (OCI) is now deploying thousands of NVIDIA Blackwell GPUs to power agentic AI and reasoning models. OCI has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers, enabling customers to develop and run next-generation AI agents. The NVIDIA GB200 NVL72 platform is a rack-scale system combining 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs, delivering performance and energy efficiency for agentic AI powered by advanced AI reasoning models. Oracle aims to build one of the world's largest Blackwell clusters, with OCI Superclusters scaling beyond 100,000 NVIDIA Blackwell GPUs to meet the growing demand for accelerated computing.
This deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking for scalable, low-latency performance, along with software and database integrations from NVIDIA and OCI. OCI is among the first to deploy NVIDIA GB200 NVL72 systems, and this deployment marks a transformation of cloud data centers into AI factories. These AI factories are designed to manufacture intelligence at scale, leveraging the NVIDIA GB200 NVL72 platform. OCI offers flexible deployment options to bring Blackwell to customers across public, government, and sovereign clouds, as well as customer-owned data centers. These new racks are the first systems available from NVIDIA DGX Cloud, an optimized platform with software, services, and technical support for developing and deploying AI workloads on clouds. NVIDIA will utilize these racks for various projects, including training reasoning models, autonomous vehicle development, accelerating chip design and manufacturing, and developing AI tools. In related cybersecurity news, Cisco Foundation AI has released its first open-source security model, Llama-3.1-FoundationAI-SecurityLLM-base-8B, designed to improve response time, expand capacity, and proactively reduce risk in security operations. Recommended read:
References :
@blog.google
//
Google is enhancing its security operations by integrating agentic AI into Google Unified Security, aiming to empower security teams and business leaders in the AI era. This initiative incorporates AI-driven agents designed to collaborate with human analysts, automating routine tasks and enhancing decision-making processes. The vision is to evolve towards an autonomous Security Operations Center (SOC) where AI agents handle routine tasks, freeing up analysts to concentrate on more complex and critical threats. These advancements seek to proactively combat evolving threats by giving defenders an advantage over threat actors.
Google's enhancements include incorporating threat intelligence from Mandiant’s M-Trends 2025 report to improve threat detection and simplify security workflows. This report provides data, analysis, and learnings drawn from Mandiant's threat intelligence findings and over 450,000 hours of incident investigations. Key findings from M-Trends 2025 reveal that attackers are exploiting various opportunities, from using infostealer malware to targeting unsecured data repositories and exploiting cloud migration risks, with financial sector being the top target. The most common initial infection vector was exploit (33%), followed by stolen credentials (16%), and email phishing (14%). Gemini AI is also being integrated to enhance threat detection with real-time insights, powering malware analysis and triage AI agents. This integration also includes curated detections and threat intelligence rule packs for M-Trends 2025 findings, shifting organizations from reactive to preemptive security measures. Throughout 2024, Google Cloud Security customers have already benefited from threat intelligence and insights now publicly released in the M-Trends 2025 report through expert-crafted threat intelligence, enhanced detections, and Mandiant security assessments. Recommended read:
References :
|
BenchmarksBlogsResearch Tools |