References: thetechbasic.com
Apple has officially announced macOS Tahoe, version 26, at its annual WWDC event. The new operating system introduces a visually striking Liquid Glass design, a refreshed look built on a cohesive design language that spans Apple's entire ecosystem, including iOS 26 and iPadOS 26. This marks the first time Apple has applied a single design philosophy across all of its platforms, aiming to bring a new sense of vitality while preserving the familiarity of its software. The Liquid Glass aesthetic features translucent elements that dynamically reflect and refract their surroundings, creating a sense of depth and movement.
The Liquid Glass design extends throughout the system, with glossy, translucent menu bars, windows, and icons. Surfaces softly reflect light and take on subtle color tints, and users can customize folders with various accent colors. Widgets and buttons now have a more three-dimensional feel while remaining crisp. The Dock appears to float on a frosted glass shelf, and Control Center icons animate with a soft glow when activated. These changes give macOS Tahoe a more modern look while keeping familiar layouts and workflows intact.

macOS Tahoe also includes a dedicated Phone app that mirrors the iPhone's Phone app through Continuity. Users can see Live Activities directly on the Mac lock screen and screen unknown callers with Call Screening and Hold Assist.

Beyond the design overhaul, Apple is embedding generative AI models directly into Xcode and iOS apps, with an emphasis on privacy and user control. The company introduced the Foundation Models framework, which lets developers add Apple's AI models to their apps with just three lines of Swift code. The models run entirely on device, require no cloud connection, and are designed to protect user privacy. The framework includes features such as Guided Generation and Tool Calling, making it easier to add generative AI to existing apps. In addition, Xcode 26 lets developers access ChatGPT directly inside the IDE, even without a personal OpenAI account.
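As a rough illustration of the "three lines of Swift code" claim, a minimal integration might look like the sketch below; the LanguageModelSession type and respond(to:) call are assumptions based on Apple's announced API and may differ in the shipping SDK:

    import FoundationModels

    // Ask Apple's on-device foundation model for a one-sentence summary.
    // Everything runs locally: no cloud connection or account is required.
    func summarize(_ text: String) async throws -> String {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Summarize in one sentence: \(text)")
        return response.content
    }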
References: Amanda Caswell, Tom's Guide
//
Apple's Worldwide Developers Conference (WWDC) 2025 highlighted the continued development of Apple Intelligence, despite initial delays and underwhelming features from the previous year. While the spotlight shifted towards software revamps and new apps, Apple reaffirmed its commitment to AI by unveiling a series of enhancements and integrations across its ecosystem. Notably, the company emphasized the progression of Apple Intelligence with more capable and efficient models, teasing additional features to be revealed throughout the presentation.
Apple is expanding Apple Intelligence by giving third-party developers access to its on-device foundation model, allowing them to build offline AI features that stay private and carry no API fees. Users also gain deeper access through new Shortcuts actions that call Apple Intelligence models directly, with the option to use ChatGPT instead. A key update is Live Translation, integrated into Messages, FaceTime, and the Phone app; it translates texts automatically and displays live captions during conversations. Visual intelligence will let users select an object and search for similar products. Together, these enhancements show Apple's focus on practical, user-friendly AI tools that streamline communication across its devices.
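For developers picking up the on-device model, the framework's Guided Generation feature is meant to return typed Swift values instead of raw text. The sketch below is speculative: the @Generable and @Guide macros and the respond(to:generating:) call follow Apple's announced design, but the exact names and signatures are assumptions, not confirmed API:

    import FoundationModels

    // Hypothetical app type: @Generable asks the framework to constrain the
    // model's output so it decodes directly into this struct.
    @Generable
    struct TaskSuggestion {
        @Guide(description: "A short title for the task")
        var title: String

        @Guide(description: "Priority from 1 (low) to 3 (high)")
        var priority: Int
    }

    // Request a typed result from the on-device model instead of free-form text.
    func suggestTask(from note: String) async throws -> TaskSuggestion {
        let session = LanguageModelSession()
        let response = try await session.respond(
            to: "Turn this note into a single task: \(note)",
            generating: TaskSuggestion.self
        )
        return response.content
    }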
References: Mark Gurman, Bloomberg Technology
//
Google is significantly expanding the reach and capabilities of its Gemini AI, with potential integration into Apple Intelligence on the horizon. Google CEO Sundar Pichai expressed optimism about reaching an agreement with Apple to make Gemini an option within Apple's AI framework by mid-year, a move that could put Google's AI technology in front of a vast number of iPhone users. Google is also broadening access to AI Mode, its Gemini-powered search experience previously available through a waitlist in Google Labs, to all US users over 18. The expansion includes new features such as visual cards for places and products, enhanced shopping integration, and a history panel to support ongoing research projects.
In addition to these developments, Google is enhancing NotebookLM, its AI-powered research assistant. NotebookLM's Audio Overviews feature is now available in roughly 75 languages, including less commonly spoken ones such as Icelandic and Latin, using Gemini-based audio generation. Mobile apps for NotebookLM launch on May 20th for both iOS and Android, bringing the tool to smartphones and tablets and letting users create and join audio discussions about saved sources.

The Gemini app itself is receiving significant updates, including native AI image editing tools that let users modify both AI-generated and uploaded images. The tools support more than 45 languages and are rolling out gradually to most countries; users can change backgrounds, replace objects, and add elements directly within the chat interface. In a move toward responsible AI use, Gemini adds an invisible SynthID digital watermark to images created or edited with its tools, with experiments under way on visible watermarks as well. Google is also working on a version of Gemini for children under 13, complete with parental controls and safety features powered by Family Link, intended to help children with homework and creative writing while ensuring a safe, monitored AI experience.
References: Ashutosh Singh, The Tech Portal
//
Apple is enhancing its AI capabilities, known as Apple Intelligence, by employing synthetic data and differential privacy to prioritize user privacy. The company aims to improve features like Personal Context and Onscreen Awareness, set to debut in the fall, without collecting or copying personal content from iPhones or Macs. By generating synthetic text and images that mimic user behavior, Apple can gather usage data and refine its AI models while adhering to its strict privacy policies.
Apple's approach involves creating artificial data that closely matches real user input. This addresses a limitation of training AI models solely on synthetic data, which may not accurately reflect actual user interactions. When users opt in to Apple's Device Analytics program, their devices compare these synthetic messages against a small sample of the user's content stored locally. Each device identifies which synthetic message most closely matches its local sample and sends information about the selected match back to Apple; no actual user data leaves the device.

To further protect privacy, Apple applies differential privacy, adding randomized noise to the aggregated data so that no individual can be identified. For example, when analyzing Genmoji prompts, Apple polls participating devices to determine the popularity of specific prompt fragments. Each device responds with a noisy signal, so only widely used terms become visible to Apple and no individual response can be traced back to a user or device. Apple plans to extend these methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools, allowing it to improve models for longer-form text generation without collecting real user content.
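Apple has not published the exact mechanism, but the "noisy signal" idea can be illustrated with classic randomized response, a standard local differential privacy technique. The Swift sketch below is purely illustrative and not Apple's implementation: each simulated device flips its true answer with some probability before reporting, and the aggregator debiases the noisy counts, so population-level frequencies are recoverable while no single report reveals what its device actually did.

    import Foundation

    // A device reports whether it used a given prompt fragment, but flips its
    // true answer with probability p, so an individual report reveals little.
    func noisyReport(trulyUsed: Bool, flipProbability p: Double) -> Bool {
        Double.random(in: 0..<1) < p ? !trulyUsed : trulyUsed
    }

    // Aggregator-side debiasing. With flip probability p:
    //   observedRate = trueRate * (1 - p) + (1 - trueRate) * p
    // so trueRate = (observedRate - p) / (1 - 2p).
    func estimatedTrueRate(reports: [Bool], flipProbability p: Double) -> Double {
        let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
        return (observed - p) / (1 - 2 * p)
    }

    // Simulate 10,000 devices where 30% truly used the fragment.
    let p = 0.25
    let truth = (0..<10_000).map { _ in Double.random(in: 0..<1) < 0.3 }
    let reports = truth.map { noisyReport(trulyUsed: $0, flipProbability: p) }
    print(estimatedTrueRate(reports: reports, flipProbability: p)) // roughly 0.3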
References: www.theapplepost.com
//
References: www.applemust.com, The Apple Post
Apple is significantly ramping up its efforts in the field of artificial intelligence, with a dedicated focus on enhancing Siri and the overall Apple Intelligence platform. Teams within Apple have been instructed to prioritize the development of superior AI features for Apple Intelligence, demonstrating the company's commitment to leading in this domain. This push involves improving Siri's capabilities through features like Personal Context, Onscreen Awareness, and deeper app integration, aiming to create a more intuitive and capable virtual assistant.
Apple has also made strides in machine learning research, particularly in multimodal large language models (LLMs). One research project, MM-Ego, focuses on enabling models to better understand egocentric video. Such capabilities could give users real-time activity suggestions, automated task management, personalized training programs, and automatic summaries of recorded experiences. Apple is also committed to shipping on-device model updates so that users benefit from the latest AI advances directly on their devices.

According to reports, Apple plans to release its delayed Apple Intelligence features this fall, including Personal Context, Onscreen Awareness, and deeper app integration. These enhancements are designed to let Siri understand and reference a user's personal information, such as emails, messages, files, and photos, to assist with tasks. Onscreen Awareness will allow Siri to respond to content displayed on the screen, while deeper app integration will let Siri perform complex actions across multiple apps without manual input.
References: computerworld.com
//
Apple is facing significant internal challenges in its efforts to revamp Siri and integrate Apple Intelligence features. A new report has revealed epic dysfunction within the company, highlighting conflicts between managerial styles, shifting priorities, and a sense of being "second-class citizens" among Siri engineers. The issues stem, in part, from leadership differences, with some leaders favoring slow, incremental updates while others prefer a more brash and efficient approach. These conflicts have reportedly led to stalled projects and a lack of clear direction within the teams.
Despite these internal struggles, Apple intends to roll out the contextual Siri features it promised at WWDC 2024 this fall, potentially as part of iOS 19, and has shifted senior leadership to make sure that happens. A key point of contention has been how AI development is organized: the software team led by Craig Federighi has reportedly taken on more AI responsibilities and built within existing systems, leaving the original Siri team feeling sidelined and slow to make progress. It remains unclear whether the company can resolve these conflicts in time to deliver a seamless, improved Siri experience. Apple's AI teams have been instructed to "do whatever it takes" to build the best artificial intelligence features, even if that means using open-source models instead of Apple's own, a decision that, according to the report, follows years of focus on the wrong things, internal conflict, and confused decision-making. A spoken user interface for VisionOS that was never completed, despite sounding promising, is just one example of an idea shelved in favor of projects with little impact. Despite the chaos, the "tech bros got to work it out," says Jonny Evans in his column about Apple.