Google is strengthening Chrome's security by integrating the on-device Gemini Nano large language model (LLM) to combat tech support scams. The feature, launched with Chrome 137, adds an extra layer of protection by using the LLM to generate signals that Safe Browsing can combine into more accurate verdicts on potentially dangerous sites. Because the model runs on the device, Chrome can detect and block attacks in real time, including those from malicious sites that exist for less than 10 minutes, and can account for how a site presents itself to the individual user when assessing whether it was built for illegitimate purposes.
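As a rough illustration of that flow, the sketch below models the two stages described here: an on-device pass that extracts scam-related signals from page content, followed by a server-side check that issues the final verdict. Every name, field, and threshold in it is hypothetical; Chrome's actual on-device model and Safe Browsing protocol are not public.

```python
# Toy sketch only: none of these names reflect Chrome's internal code.
from dataclasses import dataclass


@dataclass
class PageSignals:
    """Hypothetical features an on-device model might extract from a page."""
    claims_device_infected: bool
    urges_phone_call: bool
    mimics_system_dialog: bool


def extract_signals_on_device(page_text: str) -> PageSignals:
    """Stand-in for the on-device LLM pass; here it is a trivial keyword check."""
    text = page_text.lower()
    return PageSignals(
        claims_device_infected="virus detected" in text or "your pc is infected" in text,
        urges_phone_call="call now" in text or "toll-free" in text,
        mimics_system_dialog="windows defender" in text and "alert" in text,
    )


def safe_browsing_verdict(signals: PageSignals) -> str:
    """Stand-in for the server-side Safe Browsing decision on the forwarded signals."""
    score = sum([signals.claims_device_infected,
                 signals.urges_phone_call,
                 signals.mimics_system_dialog])
    return "likely_scam" if score >= 2 else "no_verdict"


if __name__ == "__main__":
    page = "ALERT: Your PC is infected! Virus detected. Call now on our toll-free line."
    print(safe_browsing_verdict(extract_signals_on_device(page)))
```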
Google Cloud's AI Hypercomputer is receiving several enhancements to accelerate AI inference workloads. These include Ironwood, Google's newest Tensor Processing Unit (TPU) designed specifically for inference, along with software improvements such as simple, performant inference using vLLM on TPU and the latest GKE inference capabilities. With an optimized software stack and published benchmarks, AI Hypercomputer aims to maximize performance and reduce inference costs, building on JetStream and adding vLLM support for TPU. JetStream, Google's open-source inference engine, has demonstrated significantly improved throughput for models such as Llama 2 70B and Mixtral 8x7B.

Google is also investing in advanced nuclear power to fuel its AI and data center growth, underscoring its commitment to sustainability while addressing the increasing energy demands of AI. Partnering with Elementl Power, Google plans to build three nuclear power plants, each generating at least 600 megawatts of clean electricity. The plants will use small modular reactors (SMRs), which are smaller, cheaper, and faster to build than traditional nuclear reactors, supporting Google's goal of being pollution-free by 2030 and providing a constant, carbon-emission-free energy source for its energy-intensive operations.
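To make the vLLM piece above concrete, here is a minimal offline-inference sketch using vLLM's public Python API. The model identifier and sampling parameters are illustrative assumptions, and whether the engine runs on GPUs or TPUs depends on the build and environment; this is generic vLLM usage, not Google's JetStream or GKE serving path.

```python
from vllm import LLM, SamplingParams

# Model choice is an assumption for illustration (the article mentions Mixtral 8x7B);
# vLLM uses whichever accelerator backend the installed build targets.
llm = LLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1")

sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Explain why continuous batching improves inference throughput."],
                       sampling)

for out in outputs:
    print(out.outputs[0].text)
```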
Google is ramping up its AI integration across various platforms to enhance user security and accessibility. The company is deploying AI models in Chrome to detect and block online scams, protecting users from fraudulent websites and suspicious notifications. These AI-powered systems are already proving effective in Google Search, where they block hundreds of millions of scam results daily and have cut fake airline support pages by more than 80 percent. Google is also using AI in a new iOS feature called Simplify, which leverages Gemini's large language models to translate dense technical jargon into plain, readable language, making complex information more accessible.
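The sketch below shows how a comparable simplification step could be prototyped with the public Gemini API through the google-generativeai client library. It is not the code path behind the Simplify feature, and the model name and prompt wording are assumptions.

```python
import google.generativeai as genai

# Illustrative only: a generic Gemini API call, not Google's Simplify implementation.
genai.configure(api_key="YOUR_API_KEY")  # replace with a real API key
model = genai.GenerativeModel("gemini-1.5-flash")

jargon = ("The lessee shall indemnify and hold harmless the lessor against "
          "any and all claims arising out of the lessee's occupancy.")

response = model.generate_content(
    "Rewrite the following in plain language a non-lawyer can understand:\n" + jargon
)
print(response.text)
```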
Gemini is also being updated in other areas, including new simplification features and potentially expanded access for younger users. The Simplify feature, accessible through the Google App on iOS, aims to break down technical jargon found in legal contracts or medical reports. Google conducted a study showing improved comprehension among users who read Simplify-processed text; however, the study's limitations highlight how difficult it is to gauge the full impact of AI-driven simplification. Google's plan to make Gemini available to users under 13 has also raised concerns among parents and child safety experts, prompting Google to offer parental controls through Family Link and to assure that children's activity will not be used to train its AI models.

The integration of AI has also brought unforeseen problems. A recent Gemini update inadvertently broke content filters, affecting apps that rely on lowered guardrails, particularly those supporting trauma survivors. The update blocked incident reports related to sensitive topics, raising concerns about the limitations and potential biases of AI-driven content moderation and leaving some developers with apps that no longer work as intended.
Google is enhancing its defenses against online scams by integrating AI-powered systems across Chrome, Search, and Android platforms. The company announced it will leverage Gemini Nano, its on-device large language model (LLM), to bolster Safe Browsing capabilities within Chrome 137 on desktop computers. This on-device approach offers real-time analysis of potentially dangerous websites, enabling Google to safeguard users from emerging scams that may not yet be included in traditional blocklists or threat databases. Google emphasizes that this proactive measure is crucial, especially considering the fleeting lifespan of many malicious sites, often lasting less than 10 minutes.
The integration of Gemini Nano in Chrome allows for the detection of tech support scams, which commonly appear as misleading pop-ups designed to trick users into believing their computers are infected with a virus. These scams often display a phone number that directs users to fraudulent tech support services. The Gemini Nano model analyzes the behavior of web pages, including suspicious browser processes, to identify potential scams in real time. The resulting security signals are sent to Google's Safe Browsing online service for a final assessment, which determines whether to warn the user about the possible threat.

Google is also expanding its AI-driven scam detection to other fraudulent schemes, such as those involving package tracking and unpaid tolls; these features are slated to arrive on Chrome for Android later this year. Additionally, Google revealed that its AI-powered scam detection systems have become significantly more effective, catching 20 times more deceptive pages and blocking them from search results. This contributed to a substantial reduction in 2024 in scams impersonating airline customer service providers (over 80%) and those mimicking official resources such as visa and government services (over 70%).
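As an illustration of the kind of page-level signals described above for tech support scams, the toy heuristic below looks for a support phone number plus scare language. It is a deliberately crude stand-in; Chrome's actual on-device model and signal format are not public.

```python
import re

# Rough illustrative heuristic, not Chrome's detection logic.
PHONE_RE = re.compile(r"\+?1?[\s\-.]?\(?\d{3}\)?[\s\-.]\d{3}[\s\-.]\d{4}")
SCARE_PHRASES = ("your computer is infected", "do not close this window",
                 "call microsoft support", "security alert")


def tech_support_scam_signal(page_text: str) -> dict:
    """Return a small signal dict a client could forward for a final verdict."""
    text = page_text.lower()
    return {
        "has_support_phone_number": bool(PHONE_RE.search(page_text)),
        "scare_phrase_hits": sum(phrase in text for phrase in SCARE_PHRASES),
    }


print(tech_support_scam_signal(
    "SECURITY ALERT: your computer is infected. Call Microsoft Support at (888) 555-0142."
))
```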
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection. The enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short time. Gemini Nano plugs into Chrome's Enhanced Protection mode, which has been available since 2020, and analyzes website content to detect subtle signs of scams, such as misleading pop-ups or other deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and infer the intent of the site, and this information is sent to Safe Browsing for a final assessment. If the page is judged likely to be a scam, Chrome displays a warning with options to unsubscribe from the site's notifications or to view the blocked content, and users can override the warning if they believe it is unwarranted. The system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.

The AI-powered scam detection system has already proven effective, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to bring the feature to Chrome on Android later this year, extending the protection to mobile users. The initiative follows criticism over Gmail phishing scams that mimic law enforcement and underscores Google's broader push to improve security across its platforms and safeguard users from fraudulent activity.
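A minimal sketch of the client-side decision described here might look like the following; the verdict values and option names are assumptions rather than Chrome's real interstitial logic.

```python
# Hypothetical sketch of mapping a Safe Browsing verdict to the user-facing action.
def handle_verdict(verdict: str, user_overrides: bool = False) -> str:
    """Return the action for a page verdict, honoring a user override."""
    if verdict != "likely_scam":
        return "load_page"
    if user_overrides:
        return "load_page_despite_warning"
    # Warning interstitial with the choices the article describes.
    return "show_warning: [unsubscribe_from_notifications | view_blocked_content]"


print(handle_verdict("likely_scam"))
print(handle_verdict("likely_scam", user_overrides=True))
```

The override path reflects the behavior described above: the warning is advisory, and the user keeps the final say over whether to proceed.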