Alexey Shabanov@TestingCatalog
//
Perplexity AI is rapidly expanding its presence in the AI market through strategic integrations and innovative features. The company has launched Perplexity Labs, a new tool for Pro subscribers designed to automate tasks such as creating reports, spreadsheets, and mini web apps. This feature leverages AI research, code execution, and content generation, positioning Perplexity as a versatile platform for both information retrieval and content creation. Labs can generate and execute code for data structuring, create interactive web apps, and produce various file types, making it well-suited for diverse projects from marketing campaigns to business analysis.
The startup is also making strides in device integration. Samsung is reportedly nearing a wide-ranging deal with Perplexity that includes investment and deep integration into devices, the Bixby assistant, and the web browser. This partnership could see Perplexity pre-installed on upcoming Galaxy S26 series phones, potentially replacing Google Gemini as the default AI assistant. The integration might also extend to Samsung Internet, offering users more advanced and personalized AI experiences directly within their web browsing. Furthermore, Perplexity is enhancing its AI-driven search capabilities within the Comet Browser. Users can now observe Perplexity AI controlling pages in the Comet Browser, with visual indicators showing actions like clicking and filling forms. This makes AI-driven automation more interactive and transparent, benefiting users who automate repetitive workflows such as data entry and testing, and positions Perplexity as a pioneer in bringing this kind of automation to the browser. Recommended read:
References :
Eric Hal@techradar.com
//
Google I/O 2025 saw the unveiling of 'AI Mode' for Google Search, signaling a significant shift in how the company approaches information retrieval and user experience. The new AI Mode, powered by the Gemini 2.5 model, is designed to offer more detailed results, personal context, and intelligent assistance. This upgrade aims to compete directly with the capabilities of AI chatbots like ChatGPT, providing users with a more conversational and comprehensive search experience. The rollout has commenced in the U.S. for both the browser version of Search and the Google app, although availability in other countries remains unconfirmed.
AI Mode brings several key features to the forefront, including Deep Search, Search Live, and AI-powered agents. Deep Search allows users to delve into topics with unprecedented depth, running hundreds of searches simultaneously to generate expert-level, fully-cited reports in minutes. With Search Live, users can leverage their phone's camera to interact with Search in real time, receiving context-aware responses from Gemini. Google is also bringing agentic capabilities to Search, allowing users to perform tasks like booking tickets and making reservations directly through the AI interface. Google's revamp of its AI search service appears to be a response to the growing popularity of AI-driven search experiences offered by companies like OpenAI and Perplexity. According to Gartner analyst Chirag Dekate, the evidence suggests a greater reliance on search and on AI-infused search experiences. As AI Mode rolls out, Google is encouraging website owners to optimize their content for AI-powered search by creating unique, non-commodity content and ensuring that their sites meet technical requirements and provide a good user experience; a basic crawler-access check is sketched below. Recommended read:
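As an illustration of the most basic of those technical requirements (letting crawlers reach your pages at all), the following sketch uses only Python's standard library to test a site's robots.txt against a couple of crawler user agents. The site URL, path, and agent names are placeholders for illustration, not an authoritative list of the crawlers Google uses for AI-powered search.

```python
from urllib.robotparser import RobotFileParser

def crawler_can_fetch(site: str, path: str, user_agent: str) -> bool:
    """Return True if the site's robots.txt allows user_agent to fetch path."""
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # downloads and parses the live robots.txt
    return parser.can_fetch(user_agent, f"{site}{path}")

if __name__ == "__main__":
    # Placeholder site and crawler names; swap in your own domain and the
    # user agents you care about.
    for agent in ("Googlebot", "Google-Extended"):
        allowed = crawler_can_fetch("https://example.com", "/blog/post", agent)
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```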
References :
@www.theapplepost.com
//
Google is expanding its use of Gemini AI to revolutionize advertising on YouTube with a new product called "Peak Points," announced at the YouTube Brandcast event in New York. This AI-powered feature analyzes videos to pinpoint moments of maximum viewer engagement, strategically inserting ads at these "peak points." The goal is to improve ad performance by targeting viewers when they are most emotionally invested or attentive, potentially leading to better ad recall and effectiveness for marketers.
This new approach to ad placement signifies a shift from traditional contextual targeting, where ads are placed based on general video metadata or viewer history. Gemini AI provides a more granular analysis, identifying specific timestamps within a video where engagement spikes. This allows YouTube to not only understand what viewers are watching but also how they are watching it, gathering real-time attention data. This data has far-reaching implications, potentially influencing algorithmic recommendations, content development, talent discovery, and platform control. For content creators, Peak Points fundamentally changes monetization strategies. Traditional mid-roll ad insertion at default intervals will give way to placements driven by Gemini's assessment of where engagement peaks. Creators will now be incentivized to create content that not only retains viewers but also generates attention spikes at specific moments. Marketers, on the other hand, are shifting from buying against content to buying against engagement, necessitating a reevaluation of brand safety, storytelling, and overall campaign outcomes in this new attention-based economy. Recommended read:
References :
Jibin Joseph@PCMag Middle East ai
//
Google is experimenting with replacing its iconic "I'm Feeling Lucky" button with a new "AI Mode" tool. This represents a significant shift in how the search engine operates, moving away from providing a direct link to the first search result and towards offering AI-powered answers directly within the Google search interface. The "I'm Feeling Lucky" button, which has been a staple of Google's homepage since 1998, bypasses the search results page entirely, taking users directly to what Google deems the most relevant website. However, Google believes that most users now prefer browsing a range of links rather than immediately jumping to a single site.
The new AI Mode aims to provide a more interactive and informative search experience. When users ask questions, the AI Mode tool leverages Google's AI chatbot to generate detailed responses instead of simply presenting a list of links. For example, if a user asks "Where can I purchase a camping chair under $100?", AI Mode may display images, prices, and store links directly within the search results. Users can then ask follow-up questions, such as "Is it waterproof?", and receive further details and recommendations. The AI also uses real-time information to display store hours, product availability, and other relevant data. Testing of the AI Mode button is currently limited to a small percentage of users in the U.S. who are part of Google's Search Labs program. Google is exploring different placements for the button, including inside the search bar next to the camera icon, or replacing the "I'm Feeling Lucky" button entirely. Some users have also reported seeing a rainbow-colored glow around the button when they hover over it. While this move aims to align with modern search habits, some users have expressed concern over the potential loss of the nostalgic "I'm Feeling Lucky" feature, as well as over the accuracy of Google's AI Mode. Recommended read:
References :
@www.artificialintelligence-news.com
//
Apple is doubling down on its custom silicon efforts, developing a new generation of chips destined for future smart glasses, AI-capable servers, and the next iterations of its Mac computers. The company's hardware strategy continues to focus on in-house chip design, aiming to optimize performance and efficiency across its product lines. This initiative includes a custom chip for smart glasses, designed for voice commands, photo capture, and audio playback, drawing inspiration from the low-power components of the Apple Watch but with modifications to reduce energy consumption and support multiple cameras. Production of the smart glasses chip is anticipated to begin in late 2026 or early 2027, potentially bringing the device to market within two years; Taiwan Semiconductor Manufacturing Co. is expected to handle manufacturing, as it does for most Apple chips.
Apple is also exploring integrating cameras into devices like AirPods and Apple Watches, utilizing chips currently under development, codenamed "Nevis" for the Apple Watch and "Glennie" for AirPods, both slated for a potential release around 2027. In addition to hardware advancements, Apple is considering incorporating AI-powered search results in its Safari browser, potentially shifting away from reliance on Google Search. Eddy Cue, Apple's SVP of services, confirmed the company has engaged in discussions with AI companies like Anthropic, OpenAI, and Perplexity to ensure it has alternative options available, demonstrating a commitment to staying nimble in the face of technological shifts. Apple is also planning AR and non-AR glasses under the codename N401, a market segment in which CEO Tim Cook hopes Apple will take the lead. Cue remarked that in ten years you may not need an iPhone, acknowledging that AI is a new technology shift that is creating opportunities for new entrants, and that Apple needs to stay open to future possibilities. Recommended read:
References :
@docs.anthropic.com
//
Anthropic, the generative AI startup, has officially entered the internet search arena with the launch of its new web search API for Claude. This positions Claude as a direct challenger to traditional search engines like Google, offering users real-time access to information through its large language models. This API enables developers to integrate Claude’s search capabilities directly into their own applications, expanding the reach of AI-powered information retrieval.
The Claude web search API provides access to current web information, allowing the AI assistant to conduct multiple, iterative searches to deliver more complete and accurate answers. Claude uses its "reasoning" capabilities to determine if a user's query would benefit from a real-time search, generating search queries and analyzing the results to inform its responses. The responses it delivers come with citations that link to the source articles it uses, offering users transparency and enabling them to verify the information for themselves. This move comes amid signs of a potential shift in the search landscape, with growing user engagement with AI-driven alternatives. Apple is reportedly exploring AI search engines like ChatGPT, Perplexity, and Anthropic's Claude as options in Safari, signaling a potential move away from Google's $20 billion deal to be the default search engine. The decline in traditional search volume is attributed to the conversational and context-aware nature of AI platforms, a trend that may reshape how people access and use the internet. A sketch of how a developer might call the new API appears below. Recommended read:
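For developers, using the API amounts to attaching Anthropic's server-side web search tool to an ordinary Messages API call. The sketch below (Python, using the official anthropic SDK) shows the general shape; the tool type string, model alias, and max_uses cap reflect Anthropic's documentation at the time of writing and should be verified against docs.anthropic.com before use.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",        # model alias; check current docs
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize this week's major AI search announcements."}
    ],
    tools=[
        {
            "type": "web_search_20250305",   # server-side web search tool type
            "name": "web_search",
            "max_uses": 5,                   # cap on searches per request
        }
    ],
)

# The reply interleaves search-result blocks with text blocks; the text blocks
# carry citations pointing back to the pages Claude drew on. Print the text.
for block in response.content:
    if getattr(block, "type", None) == "text":
        print(block.text)
```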
References :
@the-decoder.com
//
Google is integrating its Gemini AI model deeper into its search engine with the introduction of 'AI Mode'. This new feature, currently in a limited testing phase in the US, aims to transform the search experience into a conversational one. Instead of the traditional list of links, AI Mode delivers answers generated directly from Google’s index, functioning much like a Gemini-powered chatbot. The search giant is also dropping the Labs waitlist, allowing any U.S. user who opts in to try the new search function.
The AI Mode includes visual place and product cards, enhanced multimedia features, and a left-side panel for managing past searches. This provides more organized results for destinations, products, and services. Users can ask contextual follow-up questions, and the AI Mode will populate a sidebar with cards referring to the sources it's using to formulate its answers. It can also access Google's Shopping Graph and localized data from Maps. This move is seen as Google's direct response to AI-native upstarts that are recasting the search bar as a natural-language front end to the internet. Google CEO Sundar Pichai hopes to reach an agreement with Apple to offer Gemini as an option within Apple Intelligence by the middle of this year. The rise of AI in search raises concerns for marketers: organic SEO tactics built on blue links will erode, and content will need to be prepared for zero-click, AI-generated summaries. Recommended read:
References :
Matt G.@Search Engine Journal
//
OpenAI is rolling out a series of updates to ChatGPT, aiming to enhance its search capabilities and introduce a new shopping experience. These features are now available to all users, including those with free accounts, across all regions where ChatGPT is offered. The updates build upon real-time search features that were introduced in October and aim to challenge established search engines such as Google. ChatGPT's search function has seen a rapid increase in usage, processing over one billion web searches in the past week.
The most significant addition is the introduction of shopping functionality, allowing users to search for products, compare options, and view visual details like pricing and reviews directly within the chatbot. OpenAI emphasizes that product results are chosen independently and are not advertisements, with recommendations personalized based on current conversations, past chats, and user preferences. The initial focus will be on categories like fashion, beauty, home goods, and electronics. OpenAI also plans to integrate its memory feature with shopping for Pro and Plus users, meaning ChatGPT will reference a user's previous chats to make highly personalized product recommendations. In addition to the new shopping features, OpenAI has added other improvements to ChatGPT's search capabilities. Users can now access ChatGPT search via WhatsApp. Other improvements include trending searches and autocomplete, which offer suggestions as you type to speed up your searches. Furthermore, ChatGPT will provide multiple sources for information and highlight the specific portions of text that correspond to each source, making it easier for users to verify facts across multiple websites. While these new features aim to enhance the user experience, OpenAI is also addressing concerns about ChatGPT's 'yes-man' personality through system prompt updates. Recommended read:
References :
Ken Yeung@Ken Yeung
//
Google has launched a new feature called "Discover Sources" for NotebookLM, its AI-powered tool designed to organize and analyze information. Rolling out to all users starting April 2, 2025, the new feature automatically curates relevant websites on a specified topic, recommending up to ten sources accompanied by AI-generated summaries. This enhancement streamlines research by allowing users to quickly surface relevant content from the internet.
NotebookLM, initially launched in 2023 as an AI-powered alternative to Evernote and Microsoft OneNote, previously relied on manual uploads of documents, articles, and notes. "Discover Sources" automates the process of pulling in information from the internet with a single click. The curated sources remain accessible within NotebookLM notebooks, allowing users to leverage them within Briefing Docs, FAQs, and Audio Overviews without repeatedly scouring the internet. This enhancement highlights the growing trend of AI-driven research tools shaping how we work and learn. Recommended read:
References :
Ryan Daws@AI News
//
Anthropic has announced that its AI assistant Claude can now search the web. This enhancement allows Claude to provide users with more up-to-date and relevant responses by expanding its knowledge base beyond its initial training data. It may seem like a minor feature update, but it's not. It is available to paid Claude 3.7 Sonnet users by toggling on "web search" in their profile settings.
This integration emphasizes transparency, as Claude provides direct citations when incorporating information from the web, enabling users to easily fact-check sources. Claude aims to streamline the information-gathering process by processing and delivering relevant sources in a conversational format. Anthropic believes this update will unlock new use cases for Claude across various industries, including sales, finance, research, and shopping. Recommended read:
References :
Chris McKay@Maginative
//
Google is currently navigating the "innovator’s dilemma" by experimenting with AI-driven search solutions to disrupt its core search business before competitors do. The company is testing and developing AI versions of Google Search, including a new experimental "AI Mode" powered by Gemini 2.0. This new mode transforms the search engine into a chatbot-like interface, providing more nuanced and multi-step answers to user queries. It allows users to interact with the AI, ask follow-up questions, and even compare products directly within the search page.
AI Mode delivers a full-page AI-generated response, runs on a custom Gemini 2.0 version, and is currently available to Google One AI Premium subscribers. This move comes as Google faces increasing competition from other AI chatbots like OpenAI's ChatGPT and Perplexity AI, which are rethinking the search experience. The goal is to provide immediate, conversational answers and a more comprehensive search experience, though some experts caution that traditional link-based search may eventually disappear as a result. Recommended read:
References :