info@thehackernews.com (The Hacker News)
Google is ramping up its AI integration across various platforms to enhance user security and accessibility. The tech giant is deploying AI models in Chrome to detect and block online scams, protecting users from fraudulent websites and suspicious notifications. These AI-powered systems are already proving effective in Google Search, blocking hundreds of millions of scam results daily and significantly reducing fake airline support pages by over 80 percent. Google is also using AI in a new iOS feature called Simplify, which leverages Gemini's large language models to translate dense technical jargon into plain, readable language, making complex information more accessible.
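Conceptually, a Simplify-style rewrite is a single constrained LLM call: take dense text and ask the model to restate it plainly without adding or dropping facts. The following is a minimal sketch of that idea; the prompt wording and the `generate` placeholder are illustrative assumptions, not Google's implementation.

```python
# Minimal sketch of a "Simplify"-style rewrite step. The prompt wording and
# the generate() placeholder are illustrative assumptions, not Google's code.

SIMPLIFY_PROMPT = (
    "Rewrite the following text in plain language for a general reader. "
    "Keep every factual detail; do not add or remove information.\n\n{text}"
)

def generate(prompt: str) -> str:
    """Placeholder for a call to a large language model (e.g. a Gemini endpoint)."""
    raise NotImplementedError("wire this to an LLM client")

def simplify(text: str) -> str:
    """Return a plain-language rewrite of dense or technical text."""
    return generate(SIMPLIFY_PROMPT.format(text=text))

# Usage (hypothetical): simplify(clause_from_a_lease_agreement)
```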
Google's Gemini is also being updated in other areas, including new simplification features and potentially expanded access for younger users. The Simplify feature, accessible via the Google app on iOS, aims to break down technical jargon found in legal contracts or medical reports. A Google study showed improved comprehension among users who read Simplify-processed text, although the study's limitations highlight how hard it is to gauge the full impact of AI-driven simplification.

Google's plan to make Gemini available to users under 13 has sparked concern among parents and child-safety experts, prompting Google to implement parental controls through Family Link and to assure that children's activity won't be used to train its AI models.

The integration of AI has also presented unforeseen challenges. A recent update to Gemini inadvertently broke content filters, affecting apps that rely on lowered guardrails, particularly those supporting trauma survivors. The update blocked incident reports related to sensitive topics, raising concerns about the limitations and potential biases of AI-driven content moderation, and left some developers with apps that no longer function as intended.
info@thehackernews.com (The Hacker News)
Google Chrome is set to integrate on-device AI, leveraging the 'Gemini Nano' large-language model (LLM), to proactively detect and block tech support scams while users browse the web. This new security feature aims to combat malicious websites that deceive users into believing their computers are infected with viruses or have other technical issues. These scams often manifest as full-screen browser windows or persistent pop-ups, designed to make them difficult to close, with the ultimate goal of tricking victims into calling a bogus support number.
Google is addressing the evolving tactics of scammers, who adapt quickly to exploit unsuspecting users. These deceptive practices include expanding pop-ups to full screen, disabling mouse input to create a sense of urgency, and even playing alarming audio messages to convince users that their computers are locked down. The Gemini Nano model, previously used on Pixel phones, will analyze web pages for suspicious activity, such as misuse of the keyboard lock API, to identify potential tech support scams in real time. On-device processing is crucial because many malicious sites have a very short lifespan.

When Chrome navigates to a potentially harmful website, the Gemini Nano model activates and scrutinizes the page's intent. The collected data is then sent to Google's Safe Browsing service for a final assessment that determines whether to display a warning to the user. To address privacy and performance concerns, Google has implemented measures to ensure the LLM is used sparingly, runs locally, and manages resource consumption effectively. Security signals are sent to Safe Browsing only for users who have opted in to the Enhanced Protection setting.
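The flow described above is essentially two stages: a local model scores behavioral signals (full-screen takeovers, keyboard lock misuse, alarming audio), and only suspicious pages are referred to Safe Browsing for the final verdict. A minimal sketch of that shape, with the signal names, weights, and threshold as illustrative assumptions rather than Chrome internals:

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    requests_fullscreen: bool   # page forced itself into full screen
    uses_keyboard_lock: bool    # misuse of the keyboard lock API
    plays_alarm_audio: bool     # alarming audio telling the user the PC is locked
    shows_support_number: bool  # "call support" phone number displayed

def on_device_scam_score(s: PageSignals) -> float:
    """Stand-in for the local model: combine behavioral signals into one score."""
    weights = [
        (s.requests_fullscreen, 0.25),
        (s.uses_keyboard_lock, 0.35),
        (s.plays_alarm_audio, 0.25),
        (s.shows_support_number, 0.15),
    ]
    return sum(w for flag, w in weights if flag)

def should_warn(signals: PageSignals, safe_browsing_verdict) -> bool:
    """Warn only if the local score is high AND the server-side check agrees."""
    score = on_device_scam_score(signals)
    if score < 0.8:  # below threshold: nothing is sent off the device
        return False
    return safe_browsing_verdict(signals, score)  # final Safe Browsing decision
```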
info@thehackernews.com (The Hacker News)
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome displays a warning with options to unsubscribe from notifications or view the blocked content, and users can override the warning if they believe it is unnecessary. The system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.

The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend the feature to Chrome on Android later this year, expanding protection to mobile users. The initiative follows criticism over Gmail phishing scams that mimic law enforcement, underscoring Google's push to improve online security across its platforms and safeguard users from fraudulent activity.
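The user-facing side of this flow reduces to a small set of choices on the warning page. A toy sketch, with the option names and return values as assumptions rather than Chrome's actual interstitial behavior:

```python
from enum import Enum, auto

class UserChoice(Enum):
    GO_BACK = auto()                    # leave the flagged page
    UNSUBSCRIBE_NOTIFICATIONS = auto()  # revoke the site's notification permission
    VIEW_CONTENT_ANYWAY = auto()        # explicit user override of the warning

def handle_flagged_page(url: str, choice: UserChoice) -> str:
    if choice is UserChoice.GO_BACK:
        return "navigate:previous-page"
    if choice is UserChoice.UNSUBSCRIBE_NOTIFICATIONS:
        return f"revoke-notifications:{url}"
    # Override: load the page, but keep it flagged for Safe Browsing.
    return f"load-with-warning-dismissed:{url}"
```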
zdnet.com
Microsoft is rolling out a wave of new AI-powered features for Windows 11 and Copilot+ PCs, aiming to enhance user experience and streamline various tasks. A key addition is an AI agent designed to assist users in navigating and adjusting Windows 11 settings. This agent will understand user intent through natural language, allowing them to simply describe the setting they wish to change, such as adjusting mouse pointer size or enabling voice control. With user permission, the AI agent can then automate and execute the necessary adjustments. This feature, initially available to Windows Insiders on Snapdragon X Copilot+ PCs, seeks to eliminate the frustration of searching for and changing settings manually.
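In outline, such an agent maps a natural-language request to a concrete setting change and applies it only after the user approves. A minimal sketch under that assumption; the intent catalogue and the `apply_setting` helper are hypothetical, not Microsoft's implementation:

```python
# Hypothetical intent catalogue mapping phrasings to (setting key, value) pairs.
INTENT_CATALOGUE = {
    "make the mouse pointer bigger": ("accessibility.pointer_size", 3),
    "turn on voice control":         ("accessibility.voice_access", True),
    "enable dark mode":              ("personalization.dark_mode", True),
}

def resolve_intent(utterance: str):
    """Stand-in for the language model that interprets the user's request."""
    return INTENT_CATALOGUE.get(utterance.strip().lower())

def apply_setting(key: str, value) -> None:
    print(f"[stub] would set {key} = {value}")  # placeholder for the real settings write

def handle_request(utterance: str, user_approves) -> str:
    match = resolve_intent(utterance)
    if match is None:
        return "Sorry, I couldn't map that to a setting."
    key, value = match
    if not user_approves(key, value):  # the agent acts only with permission
        return "No changes made."
    apply_setting(key, value)
    return f"Done: {key} set to {value!r}."

# Example: handle_request("Make the mouse pointer bigger", lambda k, v: True)
```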
Microsoft is also enhancing Copilot with new AI skills, including the ability to act on screen content. One such action, "Ask Copilot," will let users draft content in Microsoft Word based on on-screen information or create bulleted lists from selected text. These capabilities aim to boost productivity by using generative AI to quickly process and manipulate information. The Windows 11 Start menu is also being revamped, offering easier access to apps and a phone companion panel for quick access to information from synced iPhones or Android devices. The updated Start menu, along with the new AI features, will first be available to Windows Insiders running Snapdragon X Copilot+ PCs.

In a shift toward passwordless security, Microsoft is removing the password autofill feature from its Authenticator app and encouraging users to move to Microsoft Edge for password management. Starting in June 2025, users will no longer be able to save new passwords in the Authenticator app; autofill functionality will be removed in July 2025, and by August 2025 saved passwords will no longer be accessible in the app. Microsoft argues that this streamlines the process, since passwords are synced with the Microsoft account and accessible through Edge. Users who do not use Edge, however, may find the transition less seamless, as they will need to install Edge and make it the default autofill provider to keep access to their saved passwords.
Salesforce
Salesforce is enhancing its security operations by integrating AI agents into its security teams. These AI agents are becoming vital force multipliers, automating tasks that previously required manual effort. This automation is leading to faster response times and freeing up security personnel to focus on higher-value analysis and strategic initiatives, ultimately boosting the overall productivity of the security team.
The deployment of agentic AI in security presents unique challenges, particularly around data privacy and security. As businesses increasingly adopt AI to remain competitive, concerns arise about data leaks and accountability. Dr. Eoghan Casey, Field CTO at Salesforce, emphasizes the shared responsibility in building trust into AI systems: providers must maintain a trusted technology platform, while customers must ensure the confidentiality and reliability of their information. Implementing safety guardrails is crucial to ensure that AI agents operate within technical, legal, and ethical boundaries, safeguarding against undesirable outcomes.

At RSA Conference 2025, SecAI, an AI-enriched threat intelligence company, debuted its AI-native Investigator platform, designed to make threat investigation more efficient. The platform combines curated threat intelligence with advanced AI techniques for deep information integration, contextual security reasoning, and suggested remediation options. Chase Lee, Managing Director at SecAI, said the company is reshaping what's possible in cyber defense by giving security teams superhuman capabilities to meet the scale and speed of modern threats. This AI-driven approach streamlines the investigation process, enabling analysts to rapidly evaluate threats and make confident decisions.
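A guardrail of the kind described above can be as simple as a pre-execution policy check on every proposed agent action. The sketch below illustrates the idea; the blocked-action list, field names, and action format are assumptions, not Salesforce's implementation:

```python
# Assumed policy: some actions always need a human, and payloads must not
# carry restricted personal data.
BLOCKED_ACTIONS = {"export_customer_data", "delete_records"}
RESTRICTED_FIELDS = {"ssn", "credit_card", "medical_history"}

def guardrail_check(action: str, payload: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action in BLOCKED_ACTIONS:
        return False, f"action '{action}' requires human approval"
    leaked = RESTRICTED_FIELDS & set(payload)
    if leaked:
        return False, f"payload contains restricted fields: {sorted(leaked)}"
    return True, "ok"

# Example:
# guardrail_check("send_email", {"to": "analyst@example.com", "ssn": "123-45-6789"})
# -> (False, "payload contains restricted fields: ['ssn']")
```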
www.helpnetsecurity.com
References: hackread.com, Help Net Security
StrikeReady has launched its next-generation Security Command Center v2, an AI-powered platform designed to help security teams move beyond basic alert processing and automate threat response. For years, security teams have struggled with siloed tools, fragmented intelligence, and a constant stream of alerts, forcing them to operate in a reactive mode. Traditional Security Operations platforms, meant to unify data and streamline response, often added complexity through customization and manual oversight. The new platform aims to address these challenges by bringing automated response to assets, identities, vulnerabilities, and alerts.
The Security Command Center v2 targets several key business outcomes and metrics. It provides proactive risk visibility, with a consolidated risk view across identities, assets, and vulnerabilities validated in a single command-center interface, intended to enable informed, strategic planning instead of constant firefighting. The platform also promises radical time reduction: risk validation using threat intelligence drops from hours to minutes, and alert processing falls from about an hour to roughly one minute, freeing analysts for threat hunting. All alerts, regardless of severity, are processed at machine speed and accuracy. According to Alex Lanstein, CTO at StrikeReady, the goal is to help security teams "escape the cycle of perpetual reactivity," allowing organizations to control and reduce risk in real time and close security gaps before they are exploited.

The new platform also offers faster and more cost-effective deployments, with automated workflows and capabilities going live in as little as 60 minutes. Lower operational expenses are expected as well; for example, phishing alert backlogs can be cleared in minutes, reducing manual effort and potentially saving over $180,000 annually. The platform includes native case management, collaboration, and real-time validation, streamlining security operations and minimizing reliance on external ticketing systems.
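The savings figure is easy to sanity-check with back-of-the-envelope arithmetic. Only the hour-to-one-minute triage claim comes from the text; the alert volume and analyst cost below are assumptions chosen for illustration:

```python
ALERTS_PER_DAY = 10          # assumed phishing-alert volume
MINUTES_BEFORE = 60          # manual processing time per alert (from the text)
MINUTES_AFTER = 1            # automated processing time per alert (from the text)
ANALYST_COST_PER_HOUR = 50   # assumed fully loaded analyst cost, USD

hours_saved_per_year = ALERTS_PER_DAY * (MINUTES_BEFORE - MINUTES_AFTER) / 60 * 365
annual_savings = hours_saved_per_year * ANALYST_COST_PER_HOUR
print(f"~{hours_saved_per_year:,.0f} analyst-hours and ~${annual_savings:,.0f} saved per year")
# With these assumptions the result (~$179,000) lands close to the
# $180,000 annual figure cited above.
```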
Help Net Security
Microsoft is enhancing Windows 11 roadmap transparency with new initiatives to better inform IT professionals and users about upcoming features. The company has launched a new Windows roadmap website designed to simplify the tracking of new Windows 11 features. This move addresses a key criticism regarding the lack of clarity around the testing and rollout phases of new functionalities. Microsoft aims to provide IT administrators with more insights, enabling them to effectively manage changes across their Windows estates.
The new roadmap consolidates information from various sources, including the Windows Insider Program and Microsoft's support site, offering a unified view of in-development features. Users can filter features based on platform, Windows versions, and rollout status, gaining insights into descriptions, release dates, and compatibility details. While the roadmap currently focuses on the client version of Windows 11, Microsoft plans to expand it to include other Windows versions in the future and is accepting feedback to further improve the tool's utility.
Michael Nuñez (AI News | VentureBeat)
References: AiThority, AI News | VentureBeat
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.
The company's flagship product, AI Operator, monitors existing security systems, detects threats in real time, and executes response protocols with minimal human intervention. AI Operator uses computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can reportedly save approximately $125,000 per year compared with traditional security guards.
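The "any threat that can be described in words" idea boils down to comparing a caption of each camera frame against natural-language watch rules. A toy sketch of that matching step, using a crude word-overlap stand-in for the model and hypothetical rules, not Hakimo's implementation:

```python
# Hypothetical natural-language watch rules an operator might configure.
WATCH_RULES = [
    "person climbing the perimeter fence",
    "vehicle parked at the loading dock after hours",
    "person lingering near the server room door",
]

STOPWORDS = {"the", "a", "at", "in", "of", "near", "after"}

def matches_rule(description: str, rule: str) -> bool:
    """Crude word-overlap stand-in for an LLM or embedding similarity check."""
    rule_terms = set(rule.lower().split()) - STOPWORDS
    return len(rule_terms & set(description.lower().split())) >= len(rule_terms) - 1

def triage(frame_description: str) -> list[str]:
    """frame_description would come from a vision-language caption of a camera frame."""
    return [rule for rule in WATCH_RULES if matches_rule(frame_description, rule)]

# Example: triage("a person is climbing over the perimeter fence at night")
# -> ["person climbing the perimeter fence"]
```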
Vasu Jakkal (Microsoft Security Blog)
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.
The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, the Conditional Access Optimization Agent in Microsoft Entra, the Vulnerability Remediation Agent in Microsoft Intune, and the Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft's Zero Trust framework, ensuring that security teams retain full control over their actions and responses.
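"Learning from feedback" can be sketched as a triage step that lets accumulated analyst verdicts override the model for senders it has repeatedly misjudged. The labels, threshold, and data model below are illustrative assumptions, not Microsoft's design:

```python
from collections import Counter

# (sender_domain, analyst_verdict) -> count; assumed feedback store.
feedback: Counter = Counter()

def record_analyst_feedback(sender_domain: str, verdict: str) -> None:
    feedback[(sender_domain, verdict)] += 1

def triage_email(sender_domain: str, model_verdict: str) -> str:
    """Let accumulated analyst verdicts override the model for a familiar sender."""
    fp = feedback[(sender_domain, "false_positive")]
    tp = feedback[(sender_domain, "true_positive")]
    if fp + tp >= 5 and fp > tp:  # analysts have consistently overruled the model
        return "benign"
    return model_verdict          # otherwise keep the model's classification
```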
Megan Crouse (eWEEK)
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods which can trigger attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.
The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing it to identify new bot patterns and improve its overall bot detection capabilities, all while increasing the cost of unauthorized web scraping.
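The mechanism lends itself to a compact sketch: pages under a path disallowed by robots.txt are linked invisibly from real pages, and any client that requests them is flagged and served generated filler. The paths, markup, and helpers below are assumptions, not Cloudflare's implementation:

```python
ROBOTS_TXT = "User-agent: *\nDisallow: /maze/\n"
HIDDEN_LINK = '<a href="/maze/start" style="display:none" rel="nofollow">archive</a>'

flagged_clients: set[str] = set()

def generated_filler_page(path: str) -> str:
    """Stand-in for AI-generated, plausible-looking but irrelevant content."""
    return f"<html><body><p>Generated page for {path}</p></body></html>"

def real_page(path: str) -> str:
    return f"<html><body><p>Real content for {path}</p></body></html>"

def handle_request(client_id: str, path: str) -> str:
    if path.startswith("/maze/"):
        # Only a crawler that ignores robots.txt and follows invisible links gets here.
        flagged_clients.add(client_id)
        return generated_filler_page(path)
    return real_page(path) + HIDDEN_LINK  # embed the trap link in every normal page
```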
PCWorld
References: BleepingComputer, Anonymous
Google Chrome has introduced a new layer of security, integrating AI into its existing "Enhanced protection" feature. This update provides real-time defense against dangerous websites, downloads, and browser extensions, marking a significant upgrade to Chrome's security capabilities. The AI integration allows for immediate analysis of patterns, enabling the identification of suspicious webpages that may not yet be classified as malicious.
This AI-powered security feature is an enhancement of Chrome's Safe Browsing. The technology apparently enables real-time analysis of patterns to identify suspicious or dangerous webpages. The improved protection also extends to deep scanning of downloads to detect suspicious files.
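One way to picture the download side is a simple decision: known-good files pass, unknown files with a poor reputation get a deeper scan before release. The verdict names and threshold below are assumptions, not Chrome's actual logic:

```python
KNOWN_GOOD_HASHES: set[str] = set()  # would be populated from a reputation feed

def check_download(sha256: str, reputation_score: float) -> str:
    """Decide how to handle a finished download; verdict names are assumptions."""
    if sha256 in KNOWN_GOOD_HASHES:
        return "allow"
    if reputation_score < 0.2:   # rarely seen file with a poor reputation
        return "deep_scan"       # hold the file for a deeper scan before release
    return "allow_with_warning"
```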