News from the AI & ML world

DeeperML - #appleintelligence

@thetechbasic.com //
Apple officially announced macOS Tahoe, version 26, at its annual WWDC event. The new operating system introduces a visually striking Liquid Glass design, a refreshed look built on a cohesive design language that spans Apple's entire ecosystem, including iOS 26 and iPadOS 26. This marks the first time Apple has applied a single design philosophy across all of its platforms, aiming to bring a new level of vitality while preserving the familiarity of its software. The Liquid Glass aesthetic features translucent elements that dynamically reflect and refract their surroundings, creating a sense of depth and movement.

The Liquid Glass design extends throughout the system, with glossy, translucent menu bars, windows, and icons. Surfaces softly reflect light and take on subtle color tints, and users can customize folders with various accent colors. Widgets and buttons now have a more three-dimensional feel while remaining crisp. The Dock appears to float on a frosted glass shelf, and Control Center icons animate with a soft glow when activated. These changes give macOS Tahoe a more modern look while keeping familiar layouts and workflows intact. macOS Tahoe also includes a dedicated Phone app that mirrors the iPhone Phone app through Continuity integration. Users can see Live Activities directly on their Mac lock screen and screen unknown callers with Call Screening and Hold Assist.

In addition to the design overhaul, Apple is embedding generative AI models directly into Xcode and iOS apps, with an emphasis on privacy and user control. The company introduced the Foundation Models framework, which lets developers add Apple's AI models to their apps with just three lines of Swift code. These models run entirely on the device, require no cloud connection, and are designed to protect user privacy. The framework includes features like "Guided Generation" and "Tool Calling," making it easier to add generative AI to existing apps. Additionally, Xcode 26 allows developers to access ChatGPT directly inside the IDE, even without a personal OpenAI account.
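
The developer-facing pattern Apple showed really is a handful of Swift lines. The sketch below is a minimal illustration of that "three lines of code" idea using the Foundation Models framework as presented at WWDC; the type and method names (LanguageModelSession, respond(to:)) reflect that presentation and may differ in shipping SDKs.

```swift
import FoundationModels

// Minimal sketch: ask the on-device foundation model for a reply.
// The prompt and response stay on the device; no cloud connection is used.
func onDeviceSummary(of text: String) async throws -> String {
    let session = LanguageModelSession()                  // on-device model session
    let prompt = "Summarize the following note in one sentence: \(text)"
    let response = try await session.respond(to: prompt)  // async call, throws on failure
    return response.content                               // plain-text model output
}
```

"Guided Generation" (typed, structured outputs) and "Tool Calling" (letting the model invoke app functions) build on this same session pattern.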

Recommended read:
References :
  • thetechbasic.com: macOS Tahoe 26 Features: Liquid Glass Design, Apple Intelligence, Spotlight

Amanda Caswell@Latest from Tom's Guide //
Apple's Worldwide Developers Conference (WWDC) 2025 highlighted the continued development of Apple Intelligence, despite initial delays and underwhelming features from the previous year. While the spotlight shifted towards software revamps and new apps, Apple reaffirmed its commitment to AI by unveiling a series of enhancements and integrations across its ecosystem. Notably, the company emphasized the progression of Apple Intelligence with more capable and efficient models, teasing additional features to be revealed throughout the presentation.

Apple is expanding Apple Intelligence by opening its on-device foundation model to third-party developers, allowing them to build offline AI features that stay private and come without API fees. Users also gain deeper access through new Shortcuts actions that call Apple Intelligence models directly, with the option to use ChatGPT instead.

A key update is the introduction of Live Translation, integrated into Messages, FaceTime, and the Phone app. The feature provides real-time translation, automatically translating texts and displaying captions during conversations. Visual Intelligence will let users select an object on screen and search for similar products. These enhancements reflect Apple's focus on practical, user-friendly AI tools across its devices, aimed at streamlining communication and improving the user experience.

Recommended read:
References :
  • PCMag Middle East ai: Apple Intelligence Takes a Backseat at WWDC 2025
  • THE DECODER: Here's every Apple Intelligence update Apple announced at WWDC 25
  • MacStories: Apple Intelligence Expands: Onscreen Visual Intelligence, Shortcuts, Third-Party Apps, and More
  • www.techradar.com: Apple Intelligence was firmly in the background at WWDC 2025 as iPad finally had its chance to shine
  • www.tomsguide.com: Everyone’s talking about 'Liquid Glass' — but these 5 WWDC 2025 AI features impressed me most
  • www.techradar.com: Apple Intelligence is a year old - here are 3 genuinely useful AI tools you should use on your Apple products
  • www.techradar.com: TechRadar and Tom's Guide sat down with Apple's Craig Federighi and Greg Joswiak to talk about the company's latest plans for integrating Siri and Apple Intelligence.
  • www.eweek.com: Visual intelligence will work across more apps this fall, among other AI features announced at Apple’s Worldwide Developers Conference.
  • www.laptopmag.com: Apple isn’t just sharing its AI. It’s betting developers will finish the job.

Mark Gurman@Bloomberg Technology //
Google is significantly expanding the reach and capabilities of its Gemini AI, with potential integration into Apple Intelligence on the horizon. Google CEO Sundar Pichai expressed optimism about reaching an agreement with Apple to make Gemini an option within Apple's AI framework by the middle of the year, a move that could put Google's AI technology in front of a vast number of iPhone users. Google is also broadening access to its Gemini-powered AI Mode, previously available through a waitlist in Google Labs, to all US users over 18. The expansion includes new features like visual cards for places and products, enhanced shopping integration, and a history panel to support ongoing research projects.

In addition to these developments, Google is enhancing NotebookLM, its AI-powered research assistant. NotebookLM's "Audio Overviews" feature is now available in roughly 75 languages, including less commonly spoken ones like Icelandic and Latin, using Gemini-based audio production. Mobile apps for NotebookLM are set to launch on May 20 for both iOS and Android, making the tool accessible on smartphones and tablets. The mobile app will let users create and join audio discussions about saved sources.

The Gemini app itself is receiving significant updates, including native AI image editing tools that allow users to modify both AI-generated and uploaded images. These tools support over 45 languages and are rolling out gradually to most countries. Users can change backgrounds, replace objects, and add elements directly within the chat interface. In a move toward responsible AI usage, Gemini will add an invisible SynthID digital watermark to images created or edited using its tools, with experiments underway for visible watermarks as well. Google is also working on a version of Gemini for children under 13, complete with parental controls and safety features powered by Family Link. This Gemini version aims to assist children with homework and creative writing while ensuring a safe and monitored AI experience.

Recommended read:
References :
  • Mark Gurman: NEW: Google CEO Sundar Pichai said in court he is hopeful to have an agreement with Apple to have Gemini as an option as part of Apple Intelligence by middle of this year.
  • THE DECODER: Google expands "Audio Overviews" to 75 languages using Gemini-based audio production
  • www.techradar.com: Google reveals powerful NotebookLM app for Android and iOS with release date – here's what it looks like
  • www.tomsguide.com: Google's new AI upgrade will change the way millions search — and it’s rolling out now
  • the-decoder.com: Google Gemini brings AI-assisted image editing to chat
  • chromeunboxed.com: Image editing directly in the Gemini app is beginning to roll out right now
  • THE DECODER: Google Gemini brings AI-assisted image editing to chat
  • PCMag Middle East ai: Google CEO: Gemini Could Be Integrated Into Apple Intelligence This Year
  • The Tech Basic: Google is launching mobile apps for NotebookLM, its AI study helper, on May 20. The apps are available for preorder now on iPhones, iPads, and Android devices.

Ashutosh Singh@The Tech Portal //
Apple is enhancing its AI capabilities, known as Apple Intelligence, by employing synthetic data and differential privacy to prioritize user privacy. The company aims to improve features like Personal Context and Onscreen Awareness, set to debut in the fall, without collecting or copying personal content from iPhones or Macs. By generating synthetic text and images that mimic user behavior, Apple can gather usage data and refine its AI models while adhering to its strict privacy policies.

Apple's approach involves creating artificial data that closely matches real user input, addressing a limitation of training AI models solely on synthetic data, which may not accurately reflect actual user interactions. When users opt into Apple's Device Analytics program, the device compares these synthetic messages against a small sample of the user's content stored locally. It then identifies which synthetic message most closely matches that sample and sends information about the selected match back to Apple, with no actual user content leaving the device.
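
In outline, that on-device matching step might look like the sketch below: each synthetic message is scored against a locally stored sample of the user's messages, and only the identifier of the best-scoring synthetic candidate is reported. Apple has not published its implementation; the embedding and similarity functions here are simplified placeholders for illustration.

```swift
// Illustrative sketch only; Apple has not published its matching pipeline.
// embed(_:) stands in for whatever on-device text representation is actually used.
func embed(_ text: String) -> [Double] {
    // Placeholder: a crude character-frequency vector, just for the sketch.
    var counts = [Double](repeating: 0, count: 128)
    for scalar in text.unicodeScalars where scalar.value < 128 {
        counts[Int(scalar.value)] += 1
    }
    return counts
}

func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = a.reduce(0) { $0 + $1 * $1 }.squareRoot()
    let normB = b.reduce(0) { $0 + $1 * $1 }.squareRoot()
    return (normA > 0 && normB > 0) ? dot / (normA * normB) : 0
}

// Pick the synthetic message closest to the local sample and report only its index;
// the user's own messages never leave the device.
func bestMatchIndex(syntheticMessages: [String], localSample: [String]) -> Int? {
    let sampleEmbeddings = localSample.map(embed)
    let scores = syntheticMessages.map { candidate -> Double in
        let candidateEmbedding = embed(candidate)
        return sampleEmbeddings.map { cosineSimilarity(candidateEmbedding, $0) }.max() ?? 0
    }
    return scores.indices.max { scores[$0] < scores[$1] }
}
```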

To further protect user privacy, Apple utilizes differential privacy techniques. This involves adding randomized data to broader datasets to prevent individual identification. For example, when analyzing Genmoji prompts, Apple polls participating devices to determine the popularity of specific prompt fragments. Each device responds with a noisy signal, ensuring that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device. Apple plans to extend these methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools. This technique allows Apple to improve its models for longer-form text generation tasks without collecting real user content.
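
That "noisy signal" aggregation can be illustrated with a simple randomized-response scheme, the classic building block of this kind of local differential privacy. This is a generic sketch rather than Apple's published protocol: each device answers "do you use this prompt fragment?" honestly only with a set probability, so any single answer is deniable, while the aggregate still reveals which fragments are genuinely popular.

```swift
import Foundation

// Generic randomized-response sketch (not Apple's actual protocol).
// Each device sends one noisy yes/no bit per candidate prompt fragment.
struct RandomizedResponse {
    let truthProbability: Double   // e.g. 0.75: answer honestly 75% of the time

    // Device side: a single noisy report that is plausibly deniable.
    func noisyReport(usesFragment: Bool) -> Bool {
        if Double.random(in: 0..<1) < truthProbability {
            return usesFragment        // answer truthfully
        } else {
            return Bool.random()       // answer at random
        }
    }

    // Aggregator side: debias the noisy counts to estimate true usage.
    // E[yes rate] = p * trueRate + (1 - p) * 0.5, solved for trueRate.
    func estimatedUsageRate(noisyYesCount: Int, totalReports: Int) -> Double {
        let observed = Double(noisyYesCount) / Double(totalReports)
        return (observed - (1 - truthProbability) * 0.5) / truthProbability
    }
}
```

Only fragments reported by many devices rise clearly above the noise floor, which matches the described behavior that only widely used terms ever become visible to Apple.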

Recommended read:
References :
  • www.artificialintelligence-news.com: Apple leans on synthetic data to upgrade AI privately
  • The Tech Portal: Apple to use synthetic data that matches user data to enhance Apple Intelligence features
  • www.it-daily.net: Apple AI stresses privacy with synthetic and anonymised data
  • www.macworld.com: How will Apple improve its AI while protecting your privacy?
  • www.techradar.com: Apple has a plan for improving Apple Intelligence, but it needs your help – and your data
  • machinelearning.apple.com: Understanding Aggregate Trends for Apple Intelligence Using Differential Privacy
  • AI News: Apple AI stresses privacy with synthetic and anonymised data
  • THE DECODER: Apple will use your emails to improve AI features without ever seeing them
  • www.computerworld.com: Apple’s big plan for better AI is you
  • Maginative: Apple Unveils Clever Workaround to Improve AI Without Collecting Your Data
  • thetechbasic.com: Apple intends to improve its AI products, Siri and Genmoji, by developing better detection capabilities without accessing personal communication content. Apple released a method that functions with artificial data and privacy mechanisms.
  • www.verdict.co.uk: Apple to begin on-device data analysis to enhance AI
  • 9to5mac.com: Apple details on-device Apple Intelligence training system using user data
  • Digital Information World: Apple Silently Shifting Gears on AI by Analyzing User Data Through Recent Snippets of Real World Data
  • PCMag Middle East ai: With an upcoming OS update, Apple will compare synthetic AI training data with real customer data to improve Apple Intelligence—but only if you opt in.
  • www.zdnet.com: How Apple plans to train its AI on your data without sacrificing your privacy
  • www.eweek.com: Apple recently outlined several methods it plans to use to improve Apple Intelligence while maintaining user privacy.
  • eWEEK: Apple Reveals How It Plans to Train AI – Without Sacrificing Users’ Privacy
  • analyticsindiamag.com: New Training Methods to Save Apple Intelligence?
  • Pivot to AI: If you report a bug, Apple reserves the right to train Apple Intelligence on your private logs

@www.theapplepost.com //
Apple is significantly ramping up its efforts in the field of artificial intelligence, with a dedicated focus on enhancing Siri and the overall Apple Intelligence platform. Teams within Apple have been instructed to prioritize the development of superior AI features for Apple Intelligence, demonstrating the company's commitment to leading in this domain. This push involves improving Siri's capabilities through features like Personal Context, Onscreen Awareness, and deeper app integration, aiming to create a more intuitive and capable virtual assistant.

Apple has also made strides in machine learning research, particularly around multimodal large language models (LLMs). Its MM-Ego research focuses on enabling models to better understand egocentric video. Such capabilities could give users real-time activity suggestions, automated task management, personalized training programs, and automatic summaries of recorded experiences. Apple is also committed to shipping on-device model updates, ensuring that users benefit from the latest AI advancements directly on their devices.

According to reports, Apple is planning to release its delayed Apple Intelligence features this fall. The release will include Personal Context, Onscreen Awareness, and deeper app integration. These enhancements are designed to enable Siri to understand and reference a user's personal information, such as emails, messages, files, and photos, to assist with various tasks. Onscreen Awareness will allow Siri to respond to content displayed on the screen, while Deeper App Integration will empower Siri to perform complex actions across multiple apps without manual input.
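
As a rough illustration of the developer side of that integration, the sketch below uses Apple's existing App Intents framework, the usual way apps expose actions to Siri and Shortcuts; the specific intent, its parameters, and its behavior are hypothetical examples rather than anything Apple has announced.

```swift
import AppIntents

// Hypothetical example of an action an app might expose so Siri can
// perform it on the user's behalf without opening the app.
struct AddTripNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Trip Note"

    @Parameter(title: "Note")
    var note: String

    @Parameter(title: "Trip Name")
    var tripName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app's own storage layer would persist the note here.
        return .result(dialog: "Added your note to \(tripName).")
    }
}
```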

Recommended read:
References :
  • www.applemust.com: Apple’s Siri team to do “whatever it takes” to make Apple Intelligence the best it can be
  • The Apple Post: Delayed Apple Intelligence features slated to launch in the fall, report claims

@computerworld.com //
Apple is facing significant internal challenges in its efforts to revamp Siri and integrate Apple Intelligence features. A new report has revealed epic dysfunction within the company, highlighting clashing management styles, shifting priorities, and a sense among Siri engineers of being "second-class citizens." The issues stem, in part, from leadership differences, with some leaders favoring slow, incremental updates and others preferring a more brash and efficient approach. These conflicts have reportedly led to stalled projects and a lack of clear direction within the teams.

Despite these internal struggles, Apple intends to roll out the contextual Siri features it promised at WWDC 2024 this fall, potentially as part of iOS 19, and has shifted senior leadership to ensure this happens. A key point of contention has been the integration of AI development efforts: the software team led by Craig Federighi reportedly took on more AI responsibilities and built within existing systems, leaving the original Siri team feeling sidelined and slow to make progress. It remains unclear whether the company can resolve these internal conflicts in time to deliver a seamless, improved Siri experience.

Apple's AI teams have been instructed to "do whatever it takes" to build the best artificial intelligence features, even if that means using open-source models instead of Apple's own. According to the report, this follows years of focus on the wrong things, internal conflict, and confused decision-making within the teams. A spoken user interface for visionOS that was never completed, despite being an exciting prospect, is just one example of ideas shelved in favor of projects with little impact. Despite the chaos, "the tech bros got to work it out," says Jonny Evans in his column about Apple.

Recommended read:
References :
  • www.applemust.com: An investigation into Apple Intelligence and the internal conflicts behind it.
  • www.tomsguide.com: New Siri report reveals epic dysfunction within Apple — but there's hope.
  • THE DECODER: Apple wanted to catch up with ChatGPT and others with new AI features for Siri. Instead, there have been technical setbacks, internal power struggles - and a divided management team.
  • www.tomsguide.com: A revamped Siri is on the way — but a surprising snag is slowing things down
  • www.techradar.com: Apple's uncharacteristic Siri stumble is bad news for you, and now we may know how it happened and why there's reason for hope