@www.eweek.com
//
Apple is exploring groundbreaking technology to enable users to control iPhones, iPads, and Vision Pro headsets with their thoughts, marking a significant leap towards hands-free device interaction. The company is partnering with Synchron, a brain-computer interface (BCI) startup, to develop a universal standard for translating neural activity into digital commands. This collaboration aims to empower individuals with disabilities, such as ALS and severe spinal cord injuries, allowing them to navigate and operate their devices without physical gestures.
Apple's initiative involves Synchron's Stentrode, a stent-like implant placed in a vein near the brain's motor cortex. This device picks up neural activity and translates it into commands, enabling users to select icons on a screen or navigate virtual environments. The brain signals work in conjunction with Apple's Switch Control feature, a part of its operating system designed to support alternative input devices. While early users have noted the interface is slower than traditional input methods, Apple plans to introduce a dedicated software standard later this year to simplify the development of BCI tools and improve performance.
In addition to BCI technology, Apple is also focusing on enhancing battery life in future iPhones through artificial intelligence. The upcoming iOS 19 is expected to feature an AI-powered battery optimization mode that learns user habits and manages app energy usage accordingly. This feature is particularly relevant for the iPhone 17 Air, where it will help offset the impact of a smaller battery. Furthermore, Apple is reportedly exploring the use of advanced memory technology and innovative screen designs for its 20th-anniversary iPhone in 2027, aiming for faster AI processing and extended battery life.
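The switch-style interaction described above, in which a decoded brain signal stands in for a physical switch feeding a scanning interface like Switch Control, can be illustrated with a rough sketch. This is purely hypothetical code: neither Apple's nor Synchron's software is public, and every class and function name below is invented for illustration.

```python
# Illustrative sketch only: how a single decoded "select" intent could drive a
# switch-scanning interface of the kind Switch Control provides. None of these
# names correspond to Apple or Synchron APIs.
import time

class SwitchScanner:
    """Cycles a highlight through on-screen items; one 'select' signal picks
    the currently highlighted item."""

    def __init__(self, items, dwell_seconds=1.5):
        self.items = items
        self.dwell_seconds = dwell_seconds
        self.index = 0

    def step(self):
        # Advance the highlight to the next item after each dwell period.
        self.index = (self.index + 1) % len(self.items)
        return self.items[self.index]

    def select(self):
        return self.items[self.index]

def decode_intent(neural_sample, threshold=0.8):
    # Placeholder decoder: a real BCI would classify motor-cortex activity;
    # here we simply threshold a numeric score.
    return neural_sample >= threshold

def run(scanner, neural_samples):
    for sample in neural_samples:
        if decode_intent(sample):
            return scanner.select()   # the user "clicked" with a thought
        scanner.step()                # otherwise keep scanning
        time.sleep(scanner.dwell_seconds)

if __name__ == "__main__":
    scanner = SwitchScanner(["Messages", "Safari", "Music", "Settings"], dwell_seconds=0.0)
    print(run(scanner, [0.1, 0.2, 0.95]))  # selects the item highlighted on the third sample
```

Even a single, reliably decoded "select" intent is enough to operate an entire interface this way, though stepping through items one at a time is inherently slower than pointing, which is consistent with early users describing the experience as slower than traditional input.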
References :
@the-decoder.com
//
Google is enhancing its AI capabilities across several platforms. NotebookLM, the AI-powered research tool, is expanding its "Audio Overviews" feature to approximately 75 languages, including less common ones such as Icelandic, Basque, and Latin. This enhancement will enable users worldwide to listen to AI-generated summaries of documents, web pages, and YouTube transcripts, making research more accessible. The audio for each language is generated by AI agents using metaprompting, with the Gemini 2.5 Pro language model as the underlying system, and Google is moving toward audio production built entirely on Gemini's multimodality.
These Audio Overviews are designed to distill a mix of documents into a scripted conversation between two synthetic hosts. Users can direct the tone and depth through prompts, and then download an MP3 or keep playback within the notebook. This expansion rebuilds the speech stack and language detection while maintaining a one-click flow. Early testers have reported that multilingual voices make long reading lists easier to digest and provide an alternative channel for blind or low-vision audiences.
In addition to the NotebookLM enhancements, Google Gemini is receiving AI-assisted image editing capabilities. Users will be able to modify backgrounds, swap objects, and make other adjustments to both AI-generated and personal photos directly within the chat interface. These editing tools are being introduced gradually for users on web and mobile devices, supporting over 45 languages in most countries. To access the new features on mobile, users will need the latest version of the Gemini app.
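The Audio Overviews pipeline described above can be sketched minimally, assuming the general shape that is reported: a language model is prompted to write a two-host script in the target language, and a text-to-speech stage renders it as audio. Google has not published the actual implementation, so the stubs below are stand-ins rather than real APIs.

```python
# Minimal sketch of a document-to-dialogue-to-audio pipeline. generate_text()
# and synthesize_speech() are placeholder stubs, not Google APIs.

def generate_text(prompt: str) -> str:
    # Stand-in for a call to a large language model (e.g. Gemini 2.5 Pro).
    return "Host A: Welcome!\nHost B: Today we summarize the sources..."

def synthesize_speech(script: str, language: str) -> bytes:
    # Stand-in for a multi-voice TTS engine rendering the script as audio.
    return b""  # would be MP3 bytes in a real pipeline

def build_overview_prompt(documents: list[str], language: str, tone: str, depth: str) -> str:
    joined = "\n\n---\n\n".join(documents)
    return (
        f"Write a conversational script in {language} between two podcast hosts, "
        f"Host A and Host B, that summarizes the material below. "
        f"Tone: {tone}. Depth: {depth}. Label every line with the speaker.\n\n{joined}"
    )

def audio_overview(documents, language="Icelandic", tone="friendly", depth="concise") -> bytes:
    prompt = build_overview_prompt(documents, language, tone, depth)
    script = generate_text(prompt)
    return synthesize_speech(script, language)
```

The user-facing controls mentioned above (tone and depth) map naturally onto prompt parameters in a pipeline of this shape, which is one way a single flow can cover roughly 75 languages without per-language speech models.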
References :
@techradar.com
//
Google has officially launched its AI-powered NotebookLM app on both Android and iOS platforms, expanding the reach of this powerful research tool beyond the web. The app, which leverages AI to summarize and analyze documents, aims to enhance productivity and learning by enabling users to quickly extract key insights from large volumes of text. The release of the mobile app coincides with Google I/O 2025, where further details about the app's features and capabilities are expected to be unveiled. Users can now pre-order the app on both the Google Play Store and Apple App Store, ensuring automatic download upon its full launch on May 20th.
NotebookLM provides users with an AI-powered workspace to collate information from multiple sources, including documents, webpages, and more. The app offers smart summaries and allows users to ask questions about the data, making it a helpful alternative to Google Gemini for focused research tasks. The mobile version of NotebookLM retains most of the web app's features, including the ability to create and browse notebooks, add sources, and engage in conversations with the AI about the content. Users can also utilize the app to generate audio overviews or "podcasts" of their notes, which can be interrupted for follow-up questions.
In addition to the mobile app launch, Google has significantly expanded the language support for NotebookLM's "Audio Overviews" feature. Originally available only in English, the AI-generated summaries can now be accessed in approximately 75 languages, including Spanish, French, Hindi, Turkish, Korean, Icelandic, Basque, and Latin. This expansion allows researchers, students, and content creators worldwide to benefit from the audio summarization capabilities of NotebookLM, making it easier to digest long reading lists and providing an alternative channel for blind or low-vision users.
References :
Facebook@Meta Newsroom
//
Meta has launched its first dedicated AI application, directly challenging ChatGPT in the burgeoning AI assistant market. The Meta AI app, built on the Llama 4 large language model (LLM), aims to offer users a more personalized AI experience. The application is designed to learn user preferences, remember context from previous interactions, and provide seamless voice-based conversations, setting it apart from competitors. This move is a significant step in Meta's strategy to establish itself as a major player in the AI landscape, offering a direct path to its generative AI models.
The new Meta AI app features a 'Discover' feed, a social component allowing users to explore how others are utilizing AI and share their own AI-generated creations. The app also replaces Meta View as the companion application for Ray-Ban Meta smart glasses, enabling a fluid experience across glasses, mobile, and desktop platforms. Users will be able to initiate conversations on one device and continue them seamlessly on another. To use the application, a Meta products account is required, though users can sign in with their existing Facebook or Instagram profiles.
CEO Mark Zuckerberg emphasized that the app is designed to be a user's personal AI, highlighting the ability to engage in voice conversations. The app begins with basic information about a user's interests and evolves over time to incorporate more detailed knowledge about the user and their network. The launch of the Meta AI app comes as other companies race to ship their own AI assistants, and it gives Meta a direct way to demonstrate the power and flexibility of its in-house Llama 4 models to both consumers and third-party software developers.
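The personalization described above, an assistant that remembers preferences and prior conversations, is commonly built by keeping per-user memory and prepending it to each model prompt. The sketch below shows that generic pattern only, not Meta's implementation; call_llama() is a stand-in rather than a real API.

```python
# Generic sketch of per-user memory for a conversational assistant.
# call_llama() is a placeholder, not an actual Llama 4 inference API.

class PersonalAssistant:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.preferences: dict[str, str] = {}     # e.g. {"interests": "cycling, jazz"}
        self.history: list[tuple[str, str]] = []  # (user message, assistant reply)

    def remember(self, key: str, value: str) -> None:
        self.preferences[key] = value

    def _build_prompt(self, message: str) -> str:
        prefs = "; ".join(f"{k}: {v}" for k, v in self.preferences.items())
        recent = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.history[-5:])
        return (f"Known user preferences: {prefs or 'none yet'}\n"
                f"Recent conversation:\n{recent or '(new conversation)'}\n"
                f"User: {message}\nAssistant:")

    def chat(self, message: str) -> str:
        reply = call_llama(self._build_prompt(message))
        self.history.append((message, reply))
        return reply

def call_llama(prompt: str) -> str:
    # Stand-in for a model call; a real system would send the prompt to an LLM.
    return f"(model reply to a {len(prompt)}-character prompt)"

if __name__ == "__main__":
    assistant = PersonalAssistant("user-123")
    assistant.remember("interests", "cycling, jazz")
    print(assistant.chat("Plan my Saturday morning."))
```

Syncing a memory store like this across devices is also the kind of mechanism that would let a conversation started on smart glasses continue on mobile or desktop, as the article describes.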
References :
@www.eweek.com
//
Google is expanding access to its AI tools, making Gemini Live's screen sharing feature free for all Android users. This reverses the previous requirement of a Google One AI Premium plan. Now, anyone with the Gemini app on an Android device can utilize Gemini's ability to view and interact with their screen or camera in real time. This allows users to get assistance with anything from understanding complex app settings to translating foreign-language signs. The rollout is gradual, with the feature reaching all compatible Android devices over the coming weeks. Google appears to be prioritizing widespread adoption and gathering valuable AI training data from a larger user base.
Google is also extending the availability of its AI tools to college students in the U.S. Students can now access the Google One AI Premium plan at no cost until Spring 2026. This includes access to Gemini Advanced, NotebookLM Plus, and 2TB of Google One storage. Gemini Advanced provides assistance with exams and essays and includes Gemini 2.5 Pro, Google's most powerful AI model. This free access could assist students with research, writing, studying, and overall academic support.
In the enterprise space, Google is improving its BigQuery data warehouse service with new AI capabilities. BigQuery Unified Governance helps organizations discover, understand, and trust their data assets. According to Google, BigQuery has five times as many customers as Snowflake and Databricks. Google Cloud is emphasizing that access to the right data, meeting business service level agreements (SLAs), is essential for enterprise AI success.
References :
Fiona Jackson@eWEEK
//
Nvidia has launched Signs, a new AI-powered platform designed to teach American Sign Language (ASL). Developed in partnership with the American Society for Deaf Children and creative agency Hello Monday, Signs aims to bridge communication gaps by providing an interactive web platform for ASL learning. The platform utilizes AI to analyze users' movements through their webcams, offering real-time feedback and instruction via a 3D avatar demonstrating signs. It currently features a validated library of 100 signs, with plans to expand to 1,000 by collecting 400,000 video clips.
Signs is designed to support sign language learners of all levels and also allows contributors to add to Nvidia's growing open-source ASL video dataset. The dataset, which will be validated by fluent ASL users and interpreters and is slated for release later this year, will be a valuable resource for building accessible technologies such as AI agents, digital human applications, and video conferencing tools. While the current focus is on hand movements and finger positions, future versions aim to incorporate facial expressions, head movements, and nuances like slang and regional variations.
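Nvidia has not published how Signs scores a learner's attempt, but the webcam-feedback loop it describes can be approximated with the open-source MediaPipe hand tracker: extract hand landmarks from each frame and compare them against stored reference landmarks for the target sign. Everything below, including the reference vector and the threshold, is an illustrative assumption rather than Nvidia's pipeline.

```python
# Rough sketch of webcam-based sign practice using MediaPipe hand landmarks.
# The scoring rule and reference data are placeholders, not Nvidia's model.
import cv2
import mediapipe as mp
import numpy as np

def landmarks_to_array(hand_landmarks) -> np.ndarray:
    # Flatten MediaPipe's 21 (x, y) landmark pairs into a single vector.
    return np.array([[p.x, p.y] for p in hand_landmarks.landmark]).flatten()

def score_attempt(attempt: np.ndarray, reference: np.ndarray) -> float:
    # Lower is better: mean distance between learner and reference landmarks.
    return float(np.mean(np.abs(attempt - reference)))

def practice_sign(reference: np.ndarray, frames: int = 300, threshold: float = 0.05) -> None:
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    try:
        for _ in range(frames):
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                attempt = landmarks_to_array(results.multi_hand_landmarks[0])
                score = score_attempt(attempt, reference)
                print("match!" if score < threshold else f"keep adjusting (score {score:.3f})")
    finally:
        cap.release()
        hands.close()

# Example (requires a webcam): practice_sign(np.zeros(42))
# In practice the reference would come from validated recordings of the target sign.
```

A production system would go further, comparing landmark trajectories over time and, as the article notes Nvidia plans to do, incorporating facial expressions and head movement, but the capture-compare-feedback loop above is the core interaction.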
References :