News from the AI & ML world

DeeperML - #security

info@thehackernews.com (The Hacker News) //
Google is ramping up its AI integration across various platforms to enhance user security and accessibility. The tech giant is deploying AI models in Chrome to detect and block online scams, protecting users from fraudulent websites and suspicious notifications. These AI-powered systems are already proving effective in Google Search, blocking hundreds of millions of scam results daily and cutting fake airline support pages by over 80 percent. Google is also using AI in a new iOS feature called Simplify, which leverages Gemini's large language models to translate dense technical jargon into plain, readable language, making complex information more accessible.

Google's Gemini is also seeing updates in other areas, including new features for simplification and potentially expanded access for younger users. The Simplify feature, accessible via the Google App on iOS, aims to break down technical jargon found in legal contracts or medical reports. Google conducted a study showing improved comprehension among users who read Simplify-processed text; however, the study's limitations highlight the challenges in accurately gauging the full impact of AI-driven simplification. Google's plan to make Gemini available to users under 13 has also sparked concerns among parents and child safety experts, prompting Google to implement parental controls through Family Link and assure that children's activity won't be used to train its AI models.

However, the integration of AI has also presented unforeseen challenges. A recent update to Gemini inadvertently broke content filters, affecting apps that rely on lowered guardrails, particularly those supporting trauma survivors. The update blocked incident reports related to sensitive topics, raising concerns about the limitations and potential biases of AI-driven content moderation. As a result, some developers of apps that assist trauma survivors have seen those apps rendered useless by the change.

Recommended read:
References :
  • techstrong.ai: Google’s plan to soon give under-13 youngsters access to its flagship artificial intelligence (AI) chatbot Gemini is raising hackles among parents and child safety experts, but offers the latest proof point of the risks tech companies are willing to take to reach more potential AI users.
  • www.eweek.com: Google is intensifying efforts to combat online scams by integrating artificial intelligence across Search, Chrome, and Android, aiming to make fraud more difficult for cybercriminals.
  • www.techradar.com: Tired of scams? Google is enlisting AI to protect you in Chrome, Google Search, and on Android.
  • www.tomsguide.com: Google is going to start using AI to keep you safe — here's how
  • cyberinsider.com: Google plans to introduce a new security feature in Chrome 137 that uses on-device AI to detect tech support scams in real time.
  • PCMag UK security: A new version of Chrome coming this month will use Gemini Nano AI to help the browser stop scams that usually appear as annoying pop-ups.
  • Davey Winder: Google Confirms Android Attack Warnings — Powered By AI
  • www.zdnet.com: How Google's AI combats new scam tactics - and how you can stay one step ahead
  • THE DECODER: Google deploys AI in Chrome to detect and block online scams
  • eWEEK: Google has rolled out a new iOS feature called Simplify that uses Gemini’s large language models to turn dense technical jargon such as what you would find in legal contracts or medical reports into plain, readable language without sacrificing key details.
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • Malwarebytes: Google Chrome will use AI to block tech support scam websites
  • security.googleblog.com: Posted by Jasika Bawa, Andy Lim, and Xinghui Lu, Google Chrome Security. Tech support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data.
  • iHLS: Google is rolling out new anti-scam capabilities in its Chrome browser, introducing a lightweight on-device AI model designed to spot fraudulent websites and alert users in real time.
  • bsky.app: Google will use on-device LLMs to detect potential tech support scams and alert Chrome users to possible dangers

info@thehackernews.com (The Hacker News) //
Google Chrome is set to integrate on-device AI, leveraging the 'Gemini Nano' large-language model (LLM), to proactively detect and block tech support scams while users browse the web. This new security feature aims to combat malicious websites that deceive users into believing their computers are infected with viruses or have other technical issues. These scams often manifest as full-screen browser windows or persistent pop-ups, designed to make them difficult to close, with the ultimate goal of tricking victims into calling a bogus support number.

Google is addressing the evolving tactics of scammers, who are known to adapt quickly to exploit unsuspecting users. These deceptive practices include expanding pop-ups to full-screen, disabling mouse input to create a sense of urgency, and even playing alarming audio messages to convince users that their computers are locked down. The 'Gemini Nano' model, previously used on Pixel phones, will analyze web pages for suspicious activity, such as the misuse of keyboard lock APIs, to identify potential tech support scams in real-time. This on-device processing is crucial as many malicious sites have a very short lifespan.

When Chrome loads a potentially harmful website, the Gemini Nano model will activate and scrutinize the page's intent. The collected data is then sent to Google’s Safe Browsing service for a final assessment, determining whether to display a warning to the user. To alleviate privacy and performance concerns, Google has implemented measures to ensure the LLM is used sparingly, runs locally, and manages resource consumption effectively. Users who have opted in to the Enhanced Protection setting will have these security signals sent to Google's Safe Browsing service.
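The two-stage flow described above — cheap on-device signal extraction, with only suspicious pages escalated to a server-side service for the final verdict — can be sketched as follows. This is a toy illustration, not Chrome's actual code: the signal names, weights, and thresholds are invented assumptions, and the server stub merely stands in for Safe Browsing.

```python
# Hypothetical sketch of a two-stage scam-detection pipeline: an on-device
# model scores security signals from a page, and only suspicious pages are
# escalated to a server-side stub for the final verdict.
from dataclasses import dataclass

@dataclass
class PageSignals:
    uses_keyboard_lock_api: bool   # misuse of keyboard lock APIs
    fullscreen_popup: bool         # pop-up expanded to full screen
    plays_alarm_audio: bool        # alarming audio to create urgency
    shows_phone_number: bool       # urges the victim to call a number

def on_device_score(signals: PageSignals) -> float:
    """Cheap local scoring; runs entirely on the device."""
    weights = {
        "uses_keyboard_lock_api": 0.4,
        "fullscreen_popup": 0.25,
        "plays_alarm_audio": 0.2,
        "shows_phone_number": 0.15,
    }
    return sum(w for name, w in weights.items() if getattr(signals, name))

def server_verdict(score: float) -> str:
    """Stub standing in for the server-side final assessment."""
    return "warn" if score >= 0.5 else "allow"

def evaluate_page(signals: PageSignals, escalation_threshold: float = 0.3) -> str:
    score = on_device_score(signals)
    # Only escalate suspicious pages, keeping most traffic fully on-device.
    if score < escalation_threshold:
        return "allow"
    return server_verdict(score)
```

The escalation threshold is what keeps the design privacy-friendly: pages scoring below it never leave the device at all.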

Recommended read:
References :
  • bsky.app: Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web.
  • PCMag UK security: Google's Chrome Browser Taps On-Device AI to Catch Tech Support Scams
  • BleepingComputer: Google Chrome to use on-device AI to detect tech support scams
  • thecyberexpress.com: Google is betting on AI
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • Davey Winder: Mobile malicious, misleading, spammy or scammy — Google fights back against Android attacks with new AI-powered notification protection.
  • Malwarebytes: Google announced it will equip Chrome with an AI driven method to detect and block Tech Support Scam websites
  • cyberinsider.com: Google plans to introduce a new security feature in Chrome 137 that uses on-device AI to detect tech support scams in real time.
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • gbhackers.com: Google Chrome Uses Advanced AI to Combat Sophisticated Online Scams
  • security.googleblog.com: Using AI to stop tech support scams in Chrome
  • cyberpress.org: Chrome 137 Adds Gemini Nano AI to Combat Tech Support Scams
  • thecyberexpress.com: Google Expands On-Device AI to Counter Evolving Online Scams
  • www.eweek.com: Google is intensifying efforts to combat online scams by integrating artificial intelligence across Search, Chrome, and Android, aiming to make fraud more difficult for cybercriminals.
  • CyberInsider: Details on Google Chrome for Android deploying on-device AI to tackle tech support scams.
  • iHLS: discusses Chrome adding on-device AI to detect scams in real time.
  • www.ghacks.net: Google integrates local Gemini AI into Chrome browser for scam protection.
  • gHacks Technology News: Scam Protection: Google integrates local Gemini AI into Chrome browser
  • www.scworld.com: Google to deploy AI-powered scam detection in Chrome

info@thehackernews.com (The Hacker News) //
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.

When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome will display a warning to the user, providing options to unsubscribe from notifications or view the blocked content while also allowing users to override the warning if they believe it's unnecessary. This system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.

The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend this feature to Chrome on Android devices later this year, further expanding protection to mobile users. This initiative follows criticism regarding Gmail phishing scams that mimic law enforcement, highlighting Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities.

Recommended read:
References :
  • Search Engine Journal: How Google Protects Searchers From Scams: Updates Announced
  • www.zdnet.com: How Google's AI combats new scam tactics - and how you can stay one step ahead
  • cyberinsider.com: Google plans to introduce a new security feature in Chrome 137 that uses on-device AI to detect tech support scams in real time.
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • Davey Winder: Google Confirms Android Attack Warnings — Powered By AI
  • securityonline.info: Chrome 137 Uses On-Device Gemini Nano AI to Combat Tech Support Scams
  • BleepingComputer: Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web. [...]
  • The Official Google Blog: How we’re using AI to combat the latest scams
  • The Tech Portal: Google to deploy Gemini Nano AI for real-time scam protection in Chrome
  • www.tomsguide.com: Google is keeping you safe from scams across search and your smartphone
  • www.eweek.com: Google’s Scam-Fighting Efforts Just Got Accelerated, Thanks to AI
  • the-decoder.com: Google deploys AI in Chrome to detect and block online scams.
  • www.techradar.com: Tired of scams? Google is enlisting AI to protect you in Chrome, Google Search, and on Android.
  • Daily CyberSecurity: Chrome 137 Uses On-Device Gemini Nano AI to Combat Tech Support Scams
  • PCMag UK security: Google's Chrome Browser Taps On-Device AI to Catch Tech Support Scams
  • Analytics India Magazine: Google Chrome to Use AI to Stop Tech Support Scams
  • THE DECODER: Google is now using AI models to protect Chrome users from online scams.
  • bsky.app: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • thecyberexpress.com: Google has released new details on how artificial intelligence (AI) is being used across its platforms to combat a growing wave of online scams. In its latest Fighting Scams in Search report, the company outlines AI-powered systems that are already blocking hundreds of millions of harmful results daily and previews further enhancements being rolled out across Google Search, Chrome, and Android.
  • gHacks Technology News: Scam Protection: Google integrates local Gemini AI into Chrome browser
  • Malwarebytes: Google Chrome will use AI to block tech support scam websites
  • security.googleblog.com: Using AI to stop tech support scams in Chrome
  • iHLS: Chrome Adds On-Device AI to Detect Scams in Real Time
  • bsky.app: Google will use on-device LLMs to detect potential tech support scams and alert Chrome users to possible dangers
  • bsky.app: Google's #AI tools that protect against scammers: https://techcrunch.com/2025/05/08/google-rolls-out-ai-tools-to-protect-chrome-users-against-scams/ #ArtificialIntelligence

@zdnet.com //
Microsoft is rolling out a wave of new AI-powered features for Windows 11 and Copilot+ PCs, aiming to enhance user experience and streamline various tasks. A key addition is an AI agent designed to assist users in navigating and adjusting Windows 11 settings. This agent will understand user intent through natural language, allowing them to simply describe the setting they wish to change, such as adjusting mouse pointer size or enabling voice control. With user permission, the AI agent can then automate and execute the necessary adjustments. This feature, initially available to Windows Insiders on Snapdragon X Copilot+ PCs, seeks to eliminate the frustration of searching for and changing settings manually.

Microsoft is also enhancing Copilot with new AI skills, including the ability to act on screen content. One such action, "Ask Copilot," will enable users to draft content in Microsoft Word based on on-screen information, or create bulleted lists from selected text. These capabilities aim to boost productivity by leveraging generative AI to quickly process and manipulate information. Furthermore, the Windows 11 Start menu is undergoing a revamp, offering easier access to apps and a phone companion panel for quick access to information from synced iPhones or Android devices. The updated Start menu, along with the new AI features, will first be available to Windows Insiders running Snapdragon X Copilot Plus PCs.
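The settings agent's flow — understand the user's intent, propose a concrete change, and execute only with permission — can be sketched as below. This is purely illustrative: the real agent uses an on-device language model for intent understanding, whereas this toy version uses keyword matching, and every setting name here is a hypothetical placeholder, not a Windows API.

```python
# Illustrative sketch of a natural-language settings agent:
# intent -> proposed action -> user permission -> execution.
from typing import Optional

# Hypothetical mapping from a recognized intent to a settings action.
SETTING_ACTIONS = {
    "mouse pointer size": lambda: "set accessibility.pointer_size = large",
    "voice control": lambda: "set accessibility.voice_control = on",
    "dark mode": lambda: "set personalization.theme = dark",
}

def resolve_intent(request: str) -> Optional[str]:
    """Find which known setting the user's request refers to."""
    text = request.lower()
    for phrase in SETTING_ACTIONS:
        if phrase in text:
            return phrase
    return None

def handle_request(request: str, user_approves: bool) -> str:
    intent = resolve_intent(request)
    if intent is None:
        return "no matching setting found"
    if not user_approves:  # the agent acts only with the user's permission
        return f"proposed: {intent} (awaiting approval)"
    return SETTING_ACTIONS[intent]()
```

For example, "please make my mouse pointer size larger" resolves to the pointer-size intent, but nothing changes until the user approves the proposed action.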

In a shift toward passwordless security, Microsoft is removing the password autofill feature from its Authenticator app, encouraging users to transition to Microsoft Edge for password management. Starting in June 2025, users will no longer be able to save new passwords in the Authenticator app, with autofill functionality being removed in July 2025. By August 2025, saved passwords will no longer be accessible in the app. Microsoft argues that this change streamlines the process, as passwords will be synced with the Microsoft account and accessible through Edge. However, users who do not use Edge may find this transition less seamless, as they will need to install Edge and make it the default autofill provider to maintain access to their saved passwords.

Recommended read:
References :
  • cyberinsider.com: Microsoft to Retire Password Autofill in Authenticator by August 2025
  • www.bleepingcomputer.com: Microsoft ends Authenticator password autofill, moves users to Edge
  • Davey Winder: You Have Until June 1 To Save Your Passwords, Microsoft Warns App Users
  • The DefendOps Diaries: Microsoft's Strategic Shift: Transitioning Password Management to Edge
  • www.ghacks.net: Microsoft removes Authenticator App feature to promote Microsoft Edge
  • www.ghacks.net: Microsoft Removes Authenticator App feature to promote Microsoft Edge
  • Tech Monitor: Microsoft to phase out Authenticator autofill by August 2025
  • Davey Winder: You won't be able to save new passwords after June 1, Microsoft warns all authenticator app users. Here's what you need to do.
  • PCWorld: If you use Microsoft’s Authenticator app on your mobile phone as a password manager, here’s some bad news: Microsoft is discontinuing the “autofill” password management functionality in Authenticator.
  • securityaffairs.com: Microsoft announced that all new accounts will be “passwordless by default” to increase their level of security.
  • heise Security: Microsoft Authenticator: from password manager back to authenticator. In addition to serving as a second factor for advanced authentication, Microsoft's Authenticator app can also manage passwords. That capability is now ending.
  • PCMag Middle East ai: Microsoft Tests Using Copilot AI to Adjust Windows 11 Settings for You
  • PCMag UK security: Microsoft Is Dropping A Useful Feature From Its Authenticator App
  • www.zdnet.com: Microsoft's new AI skills are coming to Copilot+ PCs - including some for all Windows 11 users
  • Dataconomy: Microsoft is revamping the Windows 11 Start menu and introducing several new AI features this month, initially available to Windows Insiders running Snapdragon X Copilot Plus PCs, including the newly announced Surface devices.
  • www.windowscentral.com: Microsoft just announced major Windows 11 and Copilot+ PC updates, adding a bunch of exclusive features and AI capabilities.
  • Microsoft Copilot Blog: Welcome to Microsoft’s Copilot Release Notes. Here we’ll provide regular updates on what’s happening with Copilot, from new features to firmware updates and more.
  • shellypalmer.com: Microsoft is officially going passwordless by default. On the surface, it’s a welcome step toward a safer, simpler future.
  • www.techradar.com: Microsoft has a big new AI settings upgrade for Windows 11 on Copilot+ PCs – plus 3 other nifty tricks
  • www.engadget.com: Microsoft introduces agent for AI-powered settings controls in Copilot+ PCs
  • www.ghacks.net: Finally! Microsoft is making AI useful in Windows by introducing AI agents
  • www.cybersecurity-insiders.com: Cybersecurity Insiders reports Microsoft is saying NO to passwords and to shut down Authenticator App
  • FIDO Alliance: PC Mag: RIP Passwords: Microsoft Moves to Passkeys as the Default on New Accounts
  • www.cybersecurity-insiders.com: Microsoft to say NO to passwords and to shut down Authenticator App

@Salesforce //
Salesforce is enhancing its security operations by integrating AI agents into its security teams. These AI agents are becoming vital force multipliers, automating tasks that previously required manual effort. This automation is leading to faster response times and freeing up security personnel to focus on higher-value analysis and strategic initiatives, ultimately boosting the overall productivity of the security team.

The deployment of agentic AI in security presents unique challenges, particularly in ensuring data privacy and security. As businesses increasingly adopt AI to remain competitive, concerns arise regarding data leaks and accountability. Dr. Eoghan Casey, Field CTO at Salesforce, emphasizes the shared responsibility in building trust into AI systems, with providers maintaining a trusted technology platform and customers ensuring the confidentiality and reliability of their information. Implementing safety guardrails is crucial to ensure that AI agents operate within technical, legal, and ethical boundaries, safeguarding against undesirable outcomes.

At RSA Conference 2025, SecAI, an AI-enriched threat intelligence company, debuted its AI-native Investigator platform designed to solve the challenges of efficient threat investigation. The platform combines curated threat intelligence with advanced AI techniques for deep information integration, contextual security reasoning, and suggested remediation options. Chase Lee, Managing Director at SecAI, stated that the company is reshaping what's possible in cyber defense by giving security teams superhuman capabilities to meet the scale and speed of modern threats. This AI-driven approach streamlines the investigation process, enabling analysts to rapidly evaluate threats and make confident decisions.

Recommended read:
References :
  • Salesforce: Meet the AI Agents Augmenting Salesforce Security Teams
  • venturebeat.com: Salesforce unveils groundbreaking AI research tackling "jagged intelligence," introducing new benchmarks, models, and guardrails to make enterprise AI agents more intelligent, trusted, and consistently reliable for business use.
  • Salesforce: Salesforce AI Research Delivers New Benchmarks, Guardrails, and Models to Make Future Agents More Intelligent, Trusted, and Versatile
  • www.marktechpost.com: Salesforce AI Research Introduces New Benchmarks, Guardrails, and Model Architectures to Advance Trustworthy and Capable AI Agents
  • www.salesforce.com: Salesforce AI Research Delivers New Benchmarks, Guardrails, and Models to Make Future Agents More Intelligent, Trusted, and Versatile
  • MarkTechPost: Salesforce AI Research Introduces New Benchmarks, Guardrails, and Model Architectures to Advance Trustworthy and Capable AI Agents

@www.helpnetsecurity.com //
StrikeReady has launched its next-generation Security Command Center v2, an AI-powered platform designed to help security teams move beyond basic alert processing and automate threat response. For years, security teams have struggled with siloed tools, fragmented intelligence, and a constant stream of alerts, forcing them to operate in a reactive mode. Traditional Security Operations platforms, meant to unify data and streamline response, often added complexity through customization and manual oversight. The new platform aims to address these challenges by bringing automated response to assets, identities, vulnerabilities, and alerts.

The Security Command Center v2 offers several key business outcomes and metrics. These include proactive risk visibility with a consolidated risk view across identities, assets, and vulnerabilities, validated in a single command center interface. This is intended to enable informed, strategic planning instead of constant firefighting. The platform also offers radical time reduction, with risk validation using threat intelligence dropping from hours to minutes and alert processing reduced from an hour to just one minute, freeing analysts for threat hunting. All alerts, regardless of severity, are processed at machine speed and accuracy.

According to Alex Lanstein, CTO at StrikeReady, the goal is to help security teams "escape the cycle of perpetual reactivity." With this platform, organizations can control and reduce risk in real-time, closing security gaps before they're exploited. Furthermore, the new platform offers better, faster, and more cost-effective deployments, with automated workflows and capabilities going live in as little as 60 minutes. Lower operational expenses are also expected, with examples such as phishing alert backlogs cleared in minutes, reducing manual efforts and potentially saving over $180,000 annually. The platform includes native case management, collaboration, and real-time validation, streamlining security operations and minimizing reliance on external ticketing systems.
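The machine-speed triage described above — every alert enriched with threat intelligence and dispositioned automatically, so a phishing backlog clears in minutes rather than hours — can be sketched like this. The intel feed, indicator values, and action names are all invented for illustration and do not reflect StrikeReady's implementation.

```python
# Minimal sketch of automated alert triage against a threat-intel feed.
# Hypothetical feed: indicator -> reputation.
THREAT_INTEL = {
    "evil.example.com": "malicious",
    "cdn.partner.example": "benign",
}

def triage_alert(alert: dict) -> dict:
    """Validate one alert against intel and decide the next step."""
    reputation = THREAT_INTEL.get(alert["indicator"], "unknown")
    if reputation == "malicious":
        action = "open_case"         # escalate to an analyst with context
    elif reputation == "benign":
        action = "auto_close"        # cleared with no manual effort
    else:
        action = "queue_for_review"  # low-confidence alerts still get eyes
    return {**alert, "reputation": reputation, "action": action}

def process_backlog(alerts: list) -> dict:
    """Process every alert, regardless of severity, and summarize by action."""
    summary: dict = {}
    for alert in alerts:
        action = triage_alert(alert)["action"]
        summary[action] = summary.get(action, 0) + 1
    return summary
```

The point of the summary step is the metric the vendor quotes: the backlog is dispositioned in one pass, and analysts only ever see the `open_case` and `queue_for_review` buckets.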

Recommended read:
References :
  • hackread.com: Industry First: StrikeReady AI Platform Moves Security Teams Beyond Basic, One-Dimensional AI-Driven Triage Solutions
  • Help Net Security: StrikeReady Security Command Center v2 accelerates threat response
  • NextBigFuture.com: Industry First: StrikeReady AI Platform Moves Security Teams Beyond Basic, One-Dimensional AI-Driven Triage Solutions

Help Net@Help Net Security //
Microsoft is enhancing Windows 11 roadmap transparency with new initiatives to better inform IT professionals and users about upcoming features. The company has launched a new Windows roadmap website designed to simplify the tracking of new Windows 11 features. This move addresses a key criticism regarding the lack of clarity around the testing and rollout phases of new functionalities. Microsoft aims to provide IT administrators with more insights, enabling them to effectively manage changes across their Windows estates.

The new roadmap consolidates information from various sources, including the Windows Insider Program and Microsoft's support site, offering a unified view of in-development features. Users can filter features based on platform, Windows versions, and rollout status, gaining insights into descriptions, release dates, and compatibility details. While the roadmap currently focuses on the client version of Windows 11, Microsoft plans to expand it to include other Windows versions in the future and is accepting feedback to further improve the tool's utility.

Recommended read:
References :
  • Help Net Security: Week in review: Chrome sandbox escape 0-day fixed, Microsoft adds new AI agents to Security Copilot
  • TechSpot: Microsoft finally offers a unified roadmap for tracking upcoming Windows features
  • Techzine Global: Microsoft gives IT administrators more insights in Windows 11 roadmap
  • www.windowscentral.com: Microsoft publishes Windows roadmap as it promises transparency around feature availability

Michael Nuñez@AI News | VentureBeat //
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.

The company’s flagship product, AI Operator, monitors existing security systems, detects threats in real-time, and executes response protocols with minimal human intervention. Hakimo's AI Operator utilizes computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can save approximately $125,000 per year compared to using traditional security guards.

Recommended read:
References :
  • AiThority: Hakimo Secures $10.5Million to Transform Physical Security With Human-Like Autonomous Security Agent
  • AI News | VentureBeat: The watchful AI that never sleeps: Hakimo’s $10.5M bet on autonomous security
  • Unite.AI: Hakimo Raises $10.5M to Revolutionize Physical Security with Autonomous AI Agent

Vasu Jakkal@Microsoft Security Blog //
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.

The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.

Recommended read:
References :
  • The Register - Software: AI agents swarm Microsoft Security Copilot
  • Microsoft Security Blog: Microsoft unveils Microsoft Security Copilot agents and new protections for AI
  • .NET Blog: Learn how the Xbox services team leveraged .NET Aspire to boost their team's productivity.
  • Ken Yeung: Microsoft’s First CTO Says AI Is ‘Three to Five Miracles’ Away From Human-Level Intelligence
  • SecureWorld News: Microsoft Expands Security Copilot with AI Agents
  • www.zdnet.com: Microsoft's new AI agents aim to help security pros combat the latest threats
  • www.itpro.com: Microsoft launches new security AI agents to help overworked cyber professionals
  • www.techrepublic.com: After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot
  • eSecurity Planet: esecurityplanet.com covers Fortifying Cybersecurity: Agentic Solutions by Microsoft and Partners
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • www.csoonline.com: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats.
  • SiliconANGLE: Microsoft introduces AI agents for Security Copilot
  • SiliconANGLE: Microsoft Corp. is enhancing the capabilities of its popular artificial intelligence-powered Copilot tool with the launch late today of its first “deep reasoning” agents, which can solve complex problems in the way a highly skilled professional might do.
  • Ken Yeung: Microsoft is introducing a new way for developers to create smarter Copilots.
  • www.computerworld.com: Microsoft’s Newest AI Agents Can Detail How They Reason

Megan Crouse@eWEEK //
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods which can trigger attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.

The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping.
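The honeypot mechanic above can be sketched in a few lines, under simplified assumptions: pages embed links under a path that robots.txt disallows and that no human would follow, so any client requesting one identifies itself as a bot and is served generated maze content from then on. The path and return values here are invented, not Cloudflare's.

```python
# Sketch of a hidden-link honeypot for bot detection.
DISALLOWED_PREFIX = "/labyrinth/"   # listed as Disallow: in robots.txt

flagged_bots = set()  # client ids identified as bots

def serve(client_id: str, path: str) -> str:
    """Return which kind of content a request receives."""
    if path.startswith(DISALLOWED_PREFIX):
        # A well-behaved crawler honors robots.txt and never requests this
        # path, so this request flags the client as a bot with high confidence.
        flagged_bots.add(client_id)
    if client_id in flagged_bots:
        return "ai-generated maze page"  # waste the bot's time and budget
    return "real page"
```

Note the flag is sticky: once a crawler touches the labyrinth, even its requests for ordinary paths are answered with generated decoy content, which is what turns the trap into both a time sink and a source of new bot signatures.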

Recommended read:
References :
  • The Register - Software: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content
  • eWEEK: Cloudflare’s Free AI Labyrinth Distracts Crawlers That Could Steal Website Content to Feed AI
  • The Verge: Cloudflare, one of the biggest network internet infrastructure companies in the world, has announced AI Labyrinth, a new tool to fight web-crawling bots that scrape sites for AI training data without permission. The company says in a blog post that when it detects “inappropriate bot behavior,” the free, opt-in tool lures crawlers down a path
  • OODAloop: Trapping misbehaving bots in an AI Labyrinth
  • THE DECODER: Instead of simply blocking unwanted AI crawlers, Cloudflare has introduced a new defense method that lures them into a maze of AI-generated content, designed to waste their time and resources.
  • Digital Information World: Cloudflare’s Latest AI Labyrinth Feature Combats Unauthorized AI Data Scraping By Giving Bots Fake AI Content
  • Ars OpenForum: Cloudflare turns AI against itself with endless maze of irrelevant facts
  • Cyber Security News: Cloudflare Introduces AI Labyrinth to Thwart AI Crawlers and Malicious Bots
  • poliverso.org: Cloudflare’s AI Labyrinth Wants Bad Bots To Get Endlessly Lost
  • aboutdfir.com: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content Cloudflare has created a bot-busting AI to make life hell for AI crawlers.

@PCWorld //
Google Chrome has introduced a new layer of security, integrating AI into its existing "Enhanced protection" feature. This update provides real-time defense against dangerous websites, downloads, and browser extensions, marking a significant upgrade to Chrome's security capabilities. The AI integration allows for immediate analysis of patterns, enabling the identification of suspicious webpages that may not yet be classified as malicious.

This AI-powered security feature enhances Chrome's Safe Browsing. The technology reportedly enables real-time analysis of patterns to identify suspicious or dangerous webpages, and the improved protection also extends to deep scanning of downloads to detect suspicious files.

Recommended read:
References :
  • BleepingComputer: Google Chrome has updated the existing "Enhanced protection" feature with AI to offer "real-time" protection against dangerous websites, downloads and extensions.
  • PCWorld: Google Chrome adds real-time AI protection against dangerous content