News from the AI & ML world

DeeperML - #accessibility

@www.eweek.com //
Apple is exploring groundbreaking technology to enable users to control iPhones, iPads, and Vision Pro headsets with their thoughts, marking a significant leap towards hands-free device interaction. The company is partnering with Synchron, a brain-computer interface (BCI) startup, to develop a universal standard for translating neural activity into digital commands. This collaboration aims to empower individuals with disabilities, such as ALS and severe spinal cord injuries, allowing them to navigate and operate their devices without physical gestures.

Apple's initiative involves Synchron's Stentrode, a stent-like implant placed in a vein near the brain's motor cortex. The device picks up neural activity and translates it into commands, enabling users to select icons on a screen or navigate virtual environments. The decoded brain signals feed into Apple's Switch Control feature, the part of its operating system designed to support alternative input devices. While early users note that the interface is slower than traditional input methods, Apple plans to introduce a dedicated software standard later this year to simplify the development of BCI tools and improve performance.
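
To make the pipeline concrete, here is a minimal sketch, assuming a simplified decoder and a scanning-style UI, of how decoded intents might be turned into switch-style input events. Everything here (the `Intent` values, the threshold decoder, the `dispatch` loop) is hypothetical and illustrative; it is not Apple's or Synchron's implementation.

```python
# Hypothetical sketch: mapping decoded neural intents to switch-style
# input events, loosely modeled on a scanning interface like Switch
# Control. All names are illustrative, not Apple's or Synchron's code.
from enum import Enum, auto

class Intent(Enum):
    SELECT = auto()      # "click" the highlighted item
    NEXT = auto()        # advance the scanning cursor
    IDLE = auto()        # no actionable signal

def classify(sample: list[float], threshold: float = 0.7) -> Intent:
    """Stand-in for a neural decoder: picks an intent from signal energy."""
    energy = sum(x * x for x in sample) / len(sample)
    if energy > threshold:
        return Intent.SELECT
    if energy > threshold / 2:
        return Intent.NEXT
    return Intent.IDLE

def dispatch(intent: Intent, items: list[str], cursor: int) -> int:
    """Translate an intent into a scanning-UI action; returns the cursor."""
    if intent is Intent.NEXT:
        return (cursor + 1) % len(items)
    if intent is Intent.SELECT:
        print(f"activated: {items[cursor]}")
    return cursor

items = ["Mail", "Safari", "Music"]
cursor = 0
for sample in ([0.1, 0.2], [0.6, 0.7], [0.9, 1.0]):  # fake signal windows
    cursor = dispatch(classify(sample), items, cursor)
```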

In addition to BCI technology, Apple is also focusing on enhancing battery life in future iPhones through artificial intelligence. The upcoming iOS 19 is expected to feature an AI-powered battery optimization mode that learns user habits and manages app energy usage accordingly. This feature is particularly relevant for the iPhone 17 Air, where it will help offset the impact of a smaller battery. Furthermore, Apple is reportedly exploring the use of advanced memory technology and innovative screen designs for its 20th-anniversary iPhone in 2027, aiming for faster AI processing and extended battery life.
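
For flavor, here is a toy sketch of the habit-learning idea: count when each app has historically been opened and defer background work for apps unlikely to be used in the next few hours. The `UsageModel` class and its 20% threshold are invented for illustration; Apple has not described how its battery mode works internally.

```python
# Illustrative sketch (not Apple's implementation): one way an adaptive
# battery mode could learn per-hour app usage and defer background
# activity for apps the user is unlikely to open soon.
from collections import defaultdict

class UsageModel:
    def __init__(self):
        # counts[app][hour] = how often the app was opened at that hour
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_open(self, app: str, hour: int) -> None:
        self.counts[app][hour] += 1

    def likely_soon(self, app: str, hour: int, window: int = 2) -> bool:
        """True if the app is historically used within `window` hours."""
        hits = sum(self.counts[app][(hour + h) % 24] for h in range(window + 1))
        total = sum(self.counts[app].values()) or 1
        return hits / total > 0.2          # invented threshold

model = UsageModel()
for h in (8, 8, 9, 21):                    # fake history: mornings, one evening
    model.record_open("news", h)

if not model.likely_soon("news", hour=14):
    print("deferring background refresh for news")
```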

Recommended read:
References:
  • bsky.app: Do you want to control your iPhone with your brain? You might soon be able to. Apple has partnered with brain-computer interface startup Synchron to explore letting people with disabilities or diseases like ALS control their iPhones using decoded brain signals.
  • eWEEK: Apple is developing technology that will allow users to control iPhones, iPads, and Vision Pro headsets with their brain signals, marking a major step toward hands-free, thought-driven device interaction.
  • www.techradar.com: Apple’s move into brain-computer interfaces could be a boon for those with disabilities.

@the-decoder.com //
Google is enhancing its AI capabilities across several platforms. NotebookLM, the AI-powered research tool, is expanding its "Audio Overviews" feature to approximately 75 languages, including less common ones such as Icelandic, Basque, and Latin. The change lets users worldwide listen to AI-generated summaries of documents, web pages, and YouTube transcripts, making research more accessible. The audio for each language is generated by AI agents using metaprompting, with the Gemini 2.5 Pro language model as the underlying system, a step toward audio production built entirely on Gemini’s multimodality.

These Audio Overviews are designed to distill a mix of documents into a scripted conversation between two synthetic hosts. Users can direct the tone and depth through prompts, and then download an MP3 or keep playback within the notebook. This expansion rebuilds the speech stack and language detection while maintaining a one-click flow. Early testers have reported that multilingual voices make long reading lists easier to digest and provide an alternative channel for blind or low-vision audiences.
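
As a rough illustration of that pipeline, the sketch below metaprompts a model for a two-host script in a target language, then runs each turn through text-to-speech and concatenates the audio. The `generate` and `tts` functions are hypothetical placeholders standing in for model calls; none of this is Google's actual API.

```python
# A minimal sketch of the described pipeline: metaprompt -> two-host
# script -> per-turn TTS -> concatenated audio. `generate` and `tts`
# are placeholders, not Google's APIs.
from dataclasses import dataclass

@dataclass
class Turn:
    host: str   # "A" or "B"
    text: str

def build_metaprompt(docs: list[str], language: str, tone: str) -> str:
    """Compose the instruction asking the model for a scripted dialogue."""
    sources = "\n---\n".join(docs)
    return (
        f"Write a conversation between two hosts, in {language}, "
        f"with a {tone} tone, summarizing these sources:\n{sources}\n"
        "Prefix each line with 'A:' or 'B:'."
    )

def parse_script(raw: str) -> list[Turn]:
    """Split the model's output into host turns."""
    return [Turn(host=line[0], text=line[2:].strip())
            for line in raw.splitlines() if line[:2] in ("A:", "B:")]

def generate(prompt: str) -> str:         # placeholder for an LLM call
    return "A: Welcome back.\nB: Today we cover the expansion news."

def tts(text: str, voice: str) -> bytes:  # placeholder for a TTS call
    return text.encode()

script = parse_script(generate(build_metaprompt(["doc one"], "Icelandic", "casual")))
audio = b"".join(tts(t.text, voice=t.host) for t in script)
```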

In addition to NotebookLM enhancements, Google Gemini is receiving AI-assisted image editing capabilities. Users will be able to modify backgrounds, swap objects, and make other adjustments to both AI-generated and personal photos directly within the chat interface. These editing tools are being introduced gradually for users on web and mobile devices, supporting over 45 languages in most countries. To access the new features on your phone, users will need the latest version of the Gemini app.

Recommended read:
References:
  • www.techradar.com: Google reveals powerful NotebookLM app for Android and iOS with release date – here's what it looks like
  • TestingCatalog: Google expands NotebookLM with Audio Overviews in over 50 languages
  • THE DECODER: Google Gemini brings AI-assisted image editing to chat
  • www.tomsguide.com: Google Gemini adds new image-editing tools — here's what they can do
  • The Tech Basic: Google Brings NotebookLM AI Research Assistant to Mobile With Offline Podcasts and Enhanced Tools
  • PCMag Middle East ai: Google CEO: Gemini Could Be Integrated Into Apple Intelligence This Year
  • gHacks Technology News: Google is rolling out an update for its Gemini app that adds a quality-of-life feature. Users can now access the AI assistant directly from their home screens, bypassing the need to navigate to the app first.
  • PCMag Middle East ai: Research in Your Pocket: Google's Powerful NotebookLM AI Tool Coming to iOS, Android
  • www.tomsguide.com: Google Gemini finally has an iPad app — better late than never

@techradar.com //
Google has officially launched its AI-powered NotebookLM app on both Android and iOS platforms, expanding the reach of this powerful research tool beyond the web. The app, which leverages AI to summarize and analyze documents, aims to enhance productivity and learning by enabling users to quickly extract key insights from large volumes of text. The release of the mobile app coincides with Google I/O 2025, where further details about the app's features and capabilities are expected to be unveiled. Users can now pre-order the app on both the Google Play Store and Apple App Store, ensuring automatic download upon its full launch on May 20th.

NotebookLM provides users with an AI-powered workspace to collate information from multiple sources, including documents, webpages, and more. The app offers smart summaries and allows users to ask questions about the data, making it a helpful alternative to Google Gemini for focused research tasks. The mobile version of NotebookLM retains most of the web app's features, including the ability to create and browse notebooks, add sources, and engage in conversations with the AI about the content. Users can also utilize the app to generate audio overviews or "podcasts" of their notes, which can be interrupted for follow-up questions.

In addition to the mobile app launch, Google has significantly expanded the language support for NotebookLM's "Audio Overviews" feature. Originally available only in English, the AI-generated summaries can now be accessed in approximately 75 languages, including Spanish, French, Hindi, Turkish, Korean, Icelandic, Basque, and Latin. This expansion allows researchers, students, and content creators worldwide to benefit from the audio summarization capabilities of NotebookLM, making it easier to digest long reading lists and providing an alternative channel for blind or low-vision users.

Recommended read:
References:
  • www.techradar.com: Google is turning your favorite AI podcast hosts into polyglots
  • Security & Identity: From insight to action: M-Trends, agentic AI, and how we’re boosting defenders at RSAC 2025
  • Maginative: NotebookLM’s Audio Overviews Now Supports Over 50 Languages
  • TestingCatalog: Google expands NotebookLM with Audio Overviews in over 50 languages
  • www.marktechpost.com: Google has significantly expanded the capabilities of its experimental AI tool, NotebookLM, by introducing Audio Overviews in over 50 languages. This marks a notable leap in global content accessibility, making the platform far more inclusive and versatile for a worldwide audience.
  • the-decoder.com: Google expands "Audio Overviews" to 75 languages using Gemini-based audio production
  • The Official Google Blog: NotebookLM Audio Overviews are now available in over 50 languages
  • Search Engine Journal: The Data Behind Google’s AI Overviews: What Sundar Pichai Won’t Tell You
  • chromeunboxed.com: NotebookLM’s popular Audio Overviews are now available in over 50 languages
  • PCMag Middle East ai: Research in Your Pocket: Google's Powerful NotebookLM AI Tool Coming to iOS, Android
  • www.techradar.com: Google reveals powerful NotebookLM app for Android and iOS with release date – here's what it looks like
  • www.tomsguide.com: Google has confirmed the launch date for the NotebookLM app, giving users much more freedom and flexibility.
  • The Tech Basic: Google is launching mobile apps for NotebookLM, its AI study helper, on May 20. The apps are available for preorder now on iPhones, iPads, and Android devices. NotebookLM helps students, workers, and researchers understand complicated topics by turning notes into easy podcasts and summaries. What Can NotebookLM Do? NotebookLM is like a smart friend who ...

Facebook@Meta Newsroom //
Meta has launched its first dedicated AI application, directly challenging ChatGPT in the burgeoning AI assistant market. The Meta AI app, built on the Llama 4 large language model (LLM), aims to offer users a more personalized AI experience. The application is designed to learn user preferences, remember context from previous interactions, and provide seamless voice-based conversations, setting it apart from competitors. This move is a significant step in Meta's strategy to establish itself as a major player in the AI landscape, offering a direct path to its generative AI models.

The new Meta AI app features a 'Discover' feed, a social component allowing users to explore how others are utilizing AI and share their own AI-generated creations. The app also replaces Meta View as the companion application for Ray-Ban Meta smart glasses, enabling a fluid experience across glasses, mobile, and desktop platforms. Users can initiate conversations on one device and continue them seamlessly on another. A Meta account is required to use the application, though users can sign in with their existing Facebook or Instagram profiles.

CEO Mark Zuckerberg emphasized that the app is designed to be a user’s personal AI, highlighting the ability to engage in voice conversations. The app begins with basic information about a user's interests and evolves over time to incorporate more detailed knowledge about the user and their network. The launch comes as rivals race to ship their own assistants, and it gives Meta a chance to demonstrate the power and flexibility of its in-house Llama 4 models to both consumers and third-party software developers.
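
The "remembers context" behavior can be pictured as a loop that feeds both a persistent user profile and the running conversation into every model call. The sketch below is a toy version of that pattern; the `llm` function is a placeholder, and none of this reflects Meta's actual implementation.

```python
# Not Meta's code: a toy sketch of the "remembers context" pattern,
# where an assistant carries a running conversation plus a persistent
# user profile into each model call. `llm` is a hypothetical stand-in.
def llm(system: str, messages: list[dict]) -> str:
    return "Noted! I'll suggest a vegetarian dish."  # placeholder reply

class Assistant:
    def __init__(self):
        self.history: list[dict] = []    # the running conversation
        self.profile: list[str] = []     # long-lived facts about the user

    def remember(self, fact: str) -> None:
        self.profile.append(fact)

    def chat(self, user_text: str) -> str:
        # Inject remembered facts into the system prompt on every turn.
        system = ("You are a personal assistant. Known user facts: "
                  + "; ".join(self.profile))
        self.history.append({"role": "user", "content": user_text})
        reply = llm(system, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

bot = Assistant()
bot.remember("prefers vegetarian recipes")
print(bot.chat("What should I cook tonight?"))
```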

Recommended read:
References:
  • The Register - Software: Meta bets you want a sprinkle of social in your chatbot
  • THE DECODER: Meta launches AI assistant app and Llama API platform
  • Analytics Vidhya: Latest Features of Meta AI Web App Powered by Llama 4
  • www.techradar.com: Meta AI is here to take on ChatGPT and give your Ray-Ban Meta Smart Glasses a fresh AI upgrade
  • Meta Newsroom: Meta's launch of a new AI app is covered.
  • techxplore.com: Meta releases standalone AI app, competing with ChatGPT
  • AI News | VentureBeat: Meta’s first dedicated AI app is here with Llama 4 — but it’s more consumer than productivity or business oriented
  • Antonio Pequeño IV: Meta's new AI app is designed to rival ChatGPT.
  • venturebeat.com: Meta partners with Cerebras to launch its new Llama API, offering developers AI inference speeds up to 18 times faster than traditional GPU solutions, challenging OpenAI and Google in the fast-growing AI services market.
  • about.fb.com: We're launching the Meta AI app, our first step in building a more personal AI.
  • siliconangle.com: Meta announces standalone AI app for personalized assistance
  • www.tomsguide.com: Meta takes on ChatGPT with new standalone AI app — here's what makes it different
  • Data Phoenix: Meta launched a dedicated Meta AI app
  • techstrong.ai: Can Meta’s New AI App Top ChatGPT?
  • SiliconANGLE: Meta Platforms Inc. today announced a new standalone Meta AI app that houses an artificial intelligence assistant powered by the company’s Llama 4 large language model to provide a more personalized experience for users.
  • techstrong.ai: Meta Previews Llama API to Streamline AI Application Development
  • TestingCatalog: Meta tests new AI features including Reasoning and Voice Personalization
  • www.windowscentral.com: Mark Zuckerberg says Meta is developing AI friends to beat "the loneliness epidemic" — after Bill Gates claimed AI will replace humans for most things
  • Ken Yeung: IN THIS ISSUE: Meta hosts its first-ever event around its Llama model, launching a standalone app to take on Microsoft’s Copilot and ChatGPT. The company also plans to soon open its LLM up to developers via an API. But can Meta’s momentum match its ambition?
  • www.marktechpost.com: Meta AI Introduces First Version of Its Llama 4-Powered AI App: A Standalone AI Assistant to Rival ChatGPT

@www.eweek.com //
Google is expanding access to its AI tools, making Gemini Live's screen sharing feature free for all Android users. This reverses the previous requirement of a Google One AI Premium plan. Now, anyone with the Gemini app on an Android device can utilize Gemini's ability to view and interact with their screen or camera in real-time. This allows users to get assistance with anything from understanding complex app settings to translating foreign language signs. The rollout is gradual and will be available to all compatible Android devices within the coming weeks. Google seems to be prioritizing widespread adoption and gathering valuable AI training data from a larger user base.

Google is also extending the availability of its AI tools to college students in the U.S. Students can now access Google's AI Premium Plan at no cost until Spring 2026. This includes access to Gemini Advanced, NotebookLM Plus, and 2TB of Google One storage. Gemini Advanced provides assistance with exams and essays and includes Gemini 2.5 Pro, Google’s most powerful AI model. This free access could assist students with research, writing, studying, and overall academic support.

In the enterprise space, Google is improving its BigQuery data warehouse service with new AI capabilities. BigQuery Unified Governance helps organizations discover, understand, and trust their data assets. According to Google, BigQuery has five times as many customers as Snowflake and Databricks. Google Cloud is emphasizing that access to the right data, meeting business service level agreements (SLAs), is essential for enterprise AI success.
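
Programmatic metadata discovery of this kind is already possible with the standard `google-cloud-bigquery` client; the sketch below walks datasets and flags undocumented tables. It uses only the existing client library, not the new Unified Governance features, and the project name is a placeholder.

```python
# A sketch of data discovery with the standard BigQuery client, in the
# spirit of the governance features described above. This is not the
# Unified Governance API; "my-project" is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Walk datasets and tables, surfacing descriptions so analysts can judge
# whether an asset is documented before trusting or querying it.
for dataset in client.list_datasets():
    for table_item in client.list_tables(dataset):
        table = client.get_table(table_item.reference)
        status = "documented" if table.description else "UNDOCUMENTED"
        print(f"{table.full_table_id}: {status}")
        for field in table.schema:
            print(f"  {field.name} ({field.field_type}): "
                  f"{field.description or 'no description'}")
```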

Recommended read:
References:
  • AI News | VentureBeat: BigQuery is 5x bigger than Snowflake and Databricks: What Google is doing to make it even better
  • www.eweek.com: Free For Android Users: Gemini Live’s Popular Screen Sharing Feature
  • www.techradar.com: Gemini live video and screen sharing is now free for Android — here's how to use it
  • www.tomsguide.com: Google just made its AI tools free for college students — here's how to get them

Fiona Jackson@eWEEK //
Nvidia has launched Signs, a new AI-powered platform designed to teach American Sign Language (ASL). Developed in partnership with the American Society for Deaf Children and creative agency Hello Monday, Signs aims to bridge communication gaps by providing an interactive web platform for ASL learning. The platform utilizes AI to analyze users' movements through their webcams, offering real-time feedback and instruction via a 3D avatar demonstrating signs. It currently features a validated library of 100 signs, with plans to expand to 1,000 by collecting 400,000 video clips.

Signs is designed to support sign language learners of all levels and also allows contributors to add to Nvidia's growing ASL open-source video dataset. The dataset will be validated by fluent ASL users and interpreters. This dataset, slated for release later this year, will be a valuable resource for building accessible technologies like AI agents, digital human applications, and video conferencing tools. While the current focus is on hand movements and finger positions, future versions aim to incorporate facial expressions, head movements, and nuances like slang and regional variations.
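
Nvidia has not published Signs' internals, but the core signal, per-frame hand landmarks from a webcam, can be illustrated with the open-source MediaPipe Hands model. The sketch below grabs one frame and prints the index fingertip position; a real system would compare such landmarks against reference signs to give feedback.

```python
# Not Nvidia's implementation: a generic sketch of the underlying idea,
# using open-source MediaPipe Hands to pull hand landmarks from a
# webcam frame, the kind of signal a sign-feedback system would score.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # index fingertip (normalized coords)
            print(f"index fingertip at x={tip.x:.2f}, y={tip.y:.2f}")

cap.release()
hands.close()
```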

Recommended read:
References:
  • NVIDIA Newsroom: It’s a Sign: AI Platform for Teaching American Sign Language Aims to Bridge Communication Gaps
  • eWEEK: Nvidia’s Signs AI Teaches American Sign Language to Children “As Young as Six to Eight Months Old”
  • WebProNews: Nvidia’s Jensen Huang Rebuffs Investor Panic Over DeepSeek Sell-Off: “They Got It Wrong”
  • NVIDIA Newsroom: Calling All Creators: GeForce RTX 5070 Ti GPU Accelerates Generative AI and Content Creation Workflows in Video Editing, 3D and More
  • OODAloop: Nvidia helps launch AI platform for teaching American Sign Language
  • AI News | VentureBeat: Nvidia helps launch AI platform for teaching American Sign Language
  • SiliconANGLE: Nvidia uses AI to release Signs, a sign language teaching platform
  • Quartz: What Nvidia CEO Jensen Huang thinks about DeepSeek
  • ChinaTechNews.com: Opinion: Yes, Nvidia Stock Is Still a Buy, Even Now