Oscar Gonzalez@laptopmag.com
//
Apple is reportedly exploring the acquisition of AI startup Perplexity, a move that could significantly bolster its artificial intelligence capabilities. According to recent reports, Apple executives have engaged in internal discussions about potentially bidding for the company, with Adrian Perica, Apple's VP of corporate development, and Eddy Cue, SVP of Services, reportedly weighing the idea. Perplexity is known for its AI-powered search engine and chatbot, which some view as leading alternatives to ChatGPT. This acquisition could provide Apple with both the advanced AI technology and the necessary talent to enhance its own AI initiatives.
This potential acquisition reflects Apple's growing interest in AI-driven search and its desire to compete more effectively in a rapidly evolving market. A key driver is the possible disruption of its longstanding agreement with Google, under which Google is the default search engine on Apple devices. That deal generates approximately $20 billion annually for Apple but is currently under threat from US antitrust enforcers. Acquiring Perplexity would give Apple a strategic fallback if it is forced to end the Google partnership, enabling it to build its own AI-based search engine and reduce its reliance on Google.

Apple reportedly aims to integrate Perplexity's technology into an AI-based search engine or to enhance Siri's capabilities, accelerating the development of AI-powered search across its devices. Discussions are in the early stages and no formal offer has been made. A Perplexity spokesperson stated they have no knowledge of any M&A discussions, and Apple has not commented.
References :
@www.marktechpost.com
//
Apple is enhancing its developer tools to help developers build AI-powered applications. While Siri may not yet be the smart assistant Apple envisions, the company has significantly enriched its offerings for developers. A powerful update to Xcode, including ChatGPT integration, is set to transform app development. The move signals Apple's commitment to integrating AI capabilities into its ecosystem, even as challenges persist with its own AI assistant.
However, experts have voiced concerns about Apple's downbeat AI outlook, attributing it to a potential lack of high-powered hardware. Professor Seok Joon Kwon of Sungkyunkwan University argues that Apple's research paper on the fundamental reasoning limits of modern large reasoning models (LRMs) and large language models (LLMs) is flawed because Apple lacks the hardware to adequately test high-end LRMs and LLMs. In his view, Apple's hardware resources are unsuitable for AI development compared with those available to companies like Google, Microsoft, or xAI; if Apple wants to catch up with rivals, it will either have to buy a large number of Nvidia GPUs or develop its own AI ASICs.

Meanwhile, Apple's much-anticipated Siri upgrade, powered by Apple Intelligence, is now reportedly targeting a spring 2026 launch. According to Mark Gurman at Bloomberg, Apple has set an internal release target of spring 2026 for the delayed upgrade, slated for iOS 26.4 and marking a key step in its artificial intelligence turnaround effort. The upgrade is expected to give Siri on-screen awareness and personal context capabilities.
References :
@www.marktechpost.com
//
Apple researchers are challenging the perceived reasoning capabilities of Large Reasoning Models (LRMs), sparking debate within the AI community. A recent paper from Apple, titled "The Illusion of Thinking," suggests that these models, which generate intermediate thinking steps like Chain-of-Thought reasoning, struggle with fundamental reasoning tasks. The research indicates that current evaluation methods relying on math and code benchmarks are insufficient, as they often suffer from data contamination and fail to assess the structure or quality of the reasoning process.
To address these shortcomings, Apple researchers introduced controllable puzzle environments, including the Tower of Hanoi, River Crossing, Checker Jumping, and Blocks World, allowing precise manipulation of problem complexity. These puzzles require diverse reasoning abilities, such as constraint satisfaction and sequential planning, and are free from data contamination. The paper concluded that state-of-the-art LRMs fail to develop generalizable problem-solving capabilities, with accuracy collapsing to zero beyond certain complexities across the different environments.

The research has faced criticism, however. Experts like Professor Seok Joon Kwon argue that Apple's lack of high-performance hardware, such as a large GPU-based cluster comparable to those operated by Google or Microsoft, could be a factor in its findings. Some argue that the models perform better on familiar puzzles, suggesting their success may be linked to training exposure rather than genuine problem-solving skill. Others, such as Alex Lawsen and "C. Opus," contend that the results don't support claims about fundamental reasoning limitations, but rather highlight engineering challenges related to token limits and evaluation methods.
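To make "controllable complexity" concrete, here is a minimal sketch of a Tower of Hanoi environment in the spirit the paper describes (illustrative only, not Apple's evaluation code): the disk count is the single complexity knob, the rules never change, and an optimal solution needs exactly 2^n - 1 moves, so difficulty doubles with each added disk.

```swift
// Sketch of a controllable puzzle environment (illustrative, not Apple's code).
// Tower of Hanoi: the disk count is the one complexity parameter.
struct HanoiState {
    var pegs: [[Int]]                      // pegs[i] lists disk sizes, bottom to top

    init(disks: Int) {
        precondition(disks >= 1)
        pegs = [Array((1...disks).reversed()), [], []]
    }

    // Constraint check: a disk may move only onto an empty peg or a larger disk.
    mutating func apply(from: Int, to: Int) -> Bool {
        guard let disk = pegs[from].last,
              pegs[to].last.map({ $0 > disk }) ?? true else { return false }
        pegs[to].append(pegs[from].removeLast())
        return true
    }

    var solved: Bool { pegs[0].isEmpty && pegs[1].isEmpty }
}

var env = HanoiState(disks: 3)             // 2^3 - 1 = 7 moves needed
_ = env.apply(from: 0, to: 2)              // legal: smallest disk onto empty peg
_ = env.apply(from: 0, to: 2)              // illegal: larger disk onto smaller; returns false
```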
References :
nftjedi@chatgptiseatingtheworld.com
//
Apple researchers recently published a study titled "The Illusion of Thinking," suggesting that advanced large language models (LLMs) struggle with true reasoning, relying instead on pattern matching. The study presented findings based on tasks like the Tower of Hanoi puzzle, where models purportedly failed as complexity increased, leading to the conclusion that these models possess limited problem-solving abilities. However, those conclusions are now under scrutiny, with critics arguing the experiments were not fairly designed.
Alex Lawsen of Open Philanthropy has published a counter-study challenging the foundations of Apple's claims. Lawsen argues that models like Claude, Gemini, and OpenAI's latest systems weren't failing due to cognitive limits, but because the evaluation methods didn't account for key technical constraints. One issue is that models were often cut off before providing full answers because they neared their maximum token limit, a built-in cap on output text; Apple's evaluation counted this as a reasoning failure rather than a practical limitation. Another point of contention involves the River Crossing test, where models faced unsolvable problem setups: when they correctly identified the tasks as impossible and declined to attempt them, they were still marked wrong. The evaluation also judged outputs strictly against exhaustive solutions, giving no credit for partial but correct answers, pattern recognition, or strategic shortcuts.

To illustrate, Lawsen demonstrated that when models were instead instructed to write a program that solves the Hanoi puzzle, they delivered accurate, scalable solutions even with 15 disks, contradicting Apple's assertion of fundamental limitations.
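That distinction matters because the program is tiny while the answer is enormous. Below is a sketch of the kind of solver involved (the models' actual code isn't reproduced here): a complete recursive solution fits in a few lines of Swift, while the move list it prints for 15 disks runs to 2^15 - 1 = 32,767 entries, exactly the sort of output that exhausts a token budget when a model must enumerate every move by hand.

```swift
// A complete recursive Hanoi solver (illustrative of the programs Lawsen
// reports the models wrote). The program is a handful of lines, but its
// output for 15 disks is 32,767 moves.
func hanoi(_ n: Int, _ from: String, _ to: String, _ via: String) {
    guard n > 0 else { return }
    hanoi(n - 1, from, via, to)            // move n-1 disks out of the way
    print("move disk \(n): \(from) -> \(to)")
    hanoi(n - 1, via, to, from)            // move them onto the big disk
}

hanoi(15, "A", "C", "B")                   // prints all 32,767 moves
```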
References :
Mark Gurman@Bloomberg Technology
//
Apple is facing delays in the release of its AI-powered Siri upgrade, now reportedly slated for Spring 2026 with the iOS 26.4 update. This news follows the recent WWDC 2025 event, where AI features were showcased across various Apple operating systems, but the highly anticipated Siri overhaul was notably absent. Sources indicate that the delay stems from challenges in integrating older Siri systems with newer platforms, forcing engineers to rebuild the assistant from scratch. Craig Federighi, Apple’s head of software engineering, explained that the previous V1 architecture was insufficient for achieving the desired quality, prompting a shift to a "deeper end-to-end architecture" known as V2.
This delay has reportedly caused internal tensions within Apple, with the AI and marketing teams allegedly blaming each other for overpromising and missed timelines. While no exact date has been finalized for the iOS 26.4 release, insiders suggest a spring timeframe, aligning with Apple's typical schedule for ".4" updates. The upgraded Siri is expected to offer smarter responses, improved app control, and on-screen awareness, allowing it to tap into users' personal context and act on what's displayed on their devices.

Separately, Apple researchers have revealed structural failures in large reasoning models (LRMs) through puzzle-based evaluations. A recently released Apple research paper claims that contemporary LLMs and LRMs fail to make sound judgments as problem complexity increases in controlled puzzle environments, revealing fundamental limitations and challenging the belief that these models can think like a human. The work, conducted using puzzles like the Tower of Hanoi and River Crossing, aimed to assess the true reasoning capabilities of AI models by analyzing their performance on unfamiliar tasks, free from data contamination. Professor Seok Joon Kwon of Sungkyunkwan University, however, believes Apple does not have enough high-performance hardware to test what high-end LRMs and LLMs are truly capable of.
References :
@www.sify.com
//
References :
www.artificialintelligence-news.com, www.macstories.net
Apple's Worldwide Developers Conference (WWDC) 2025, held on June 10, showcased a significant transformation in both user interface and artificial intelligence. A major highlight was the unveiling of "Liquid Glass," a new design language offering a "glass-like" experience with translucent layers, fluid animations, and spatial depth. This UI refresh, described as Apple's boldest in over a decade, impacts core system elements like the lock screen, home screen, and apps such as Safari and Music, providing floating controls and glassy visual effects. iPhones from the 15 series onward will support Liquid Glass, with public betas rolling out soon to deliver a more immersive and dynamic feel.
Apple also announced advancements in AI, positioning itself to catch up in a competitive landscape. Apple Intelligence, a system-wide, on-device AI layer, integrates with iOS 26, macOS Tahoe, and other platforms. It enables features such as summarizing emails and notifications, auto-completing messages, real-time call translation, and creating personalized emoji called Genmoji. Visual Intelligence lets users extract text or pull information from photos, documents, and app screens. Siri is slated to receive intelligence upgrades as well, though its full capabilities may arrive later.

In a significant shift, Apple has opened its foundational AI model to third-party developers, granting direct access to the on-device large language model powering Apple Intelligence. The move, announced at WWDC, marks a departure from Apple's traditionally closed ecosystem. The newly accessible three-billion-parameter model operates entirely on-device, reflecting Apple's privacy-first approach. The Foundation Models framework allows developers to integrate Apple Intelligence features with minimal code, offering privacy-focused AI inference at no cost. Xcode 26 now includes AI assistance, embedding large language models directly into the coding experience, and third-party developers can leverage Visual Intelligence capabilities within their apps.
References :
@felloai.com
//
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.
Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, in controlled puzzle environments designed to assess their problem-solving skills. The puzzles, such as Tower of Hanoi and River Crossing, evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning falls apart once tasks exceed a critical threshold. Researchers also observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding the Apple researchers called "particularly concerning."

The implications are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and said it raises serious questions about the path towards AGI. The research also arrives amid increasing scrutiny of Apple's own AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening up its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications.
References :
@www.artificialintelligence-news.com
//
Apple has announced a significant shift in its approach to AI development by opening its foundational AI model to third-party developers. This move, unveiled at the Worldwide Developers Conference (WWDC), grants developers direct access to the on-device large language model that powers Apple Intelligence. The newly accessible three-billion-parameter model operates entirely on the device, an approach that distinguishes Apple from competitors relying on cloud-based AI and underscores its emphasis on privacy and user control.
The new Foundation Models framework enables developers to integrate Apple Intelligence features into their apps with minimal code, using just three lines of Swift. The framework offers guided generation and tool-calling capabilities, making it easier to add generative AI to existing applications. Automattic's Day One journaling app is already leveraging it to provide privacy-centric intelligent features; according to Paul Mayne, head of Day One at Automattic, the framework is helping the team rethink what's possible with journaling by bringing intelligence and privacy together in ways that deeply respect users.

Apple is also enhancing developer tools within Xcode 26, which now embeds large language models directly into the coding environment. Developers can access ChatGPT without a personal OpenAI account, connect API keys from other providers, or run local models on Apple silicon Macs. Furthermore, Apple has upgraded the App Intents interface to support visual intelligence, allowing apps to present visual search results directly within the operating system. Etsy is already exploring these features to improve product discovery, with CTO Rafe Colburn noting the potential to meet shoppers right on their iPhone with visual intelligence.
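For a sense of what that three-line integration looks like in practice, here is a minimal sketch based on the Foundation Models API Apple showed at WWDC (error handling and model-availability checks omitted; treat the exact signatures as reported rather than verified):

```swift
import FoundationModels

// Create a session with the on-device model and ask it for a response.
// Inference runs locally: no network call, no API key, no per-request fee.
let session = LanguageModelSession()
let response = try await session.respond(to: "Summarize today's journal entry in one sentence.")
print(response.content)
```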
References :
@thetechbasic.com
//
References :
thetechbasic.com, www.theguardian.com
Apple has officially announced macOS Tahoe, version 26, at its annual WWDC event. The new operating system introduces a visually striking Liquid Glass design, offering a refreshed user experience with a cohesive design language spanning across Apple’s entire ecosystem, including iOS 26 and iPadOS 26. This marks the first time Apple has implemented a universal design philosophy across its platforms, aiming to bring a new level of vitality while maintaining the familiarity of Apple's software. The Liquid Glass aesthetic features translucent elements that dynamically reflect and refract their surroundings, creating a sense of depth and movement, enhancing the user experience.
The Liquid Glass design extends throughout the system, with glossy translucent menu bars, windows, and icons. Surfaces softly reflect light and display subtle color tints, and users can customize folders with various accent colors. Widgets and buttons have a more three-dimensional feel while remaining crisp, the Dock appears to float on a frosted glass shelf, and Control Center icons animate with a soft glow when activated. These changes give macOS Tahoe a more modern look while keeping familiar layouts and workflows intact.

macOS Tahoe also includes a dedicated Phone app that mirrors the iPhone's Phone app through Continuity. Users can see Live Activities directly on the Mac lock screen and screen unknown callers with Call Screening and Hold Assist.

In addition to the design overhaul, Apple is embedding generative AI models directly into Xcode and iOS apps, with an emphasis on privacy and user control. The company introduced the Foundation Models framework, which lets developers add Apple's AI models to their apps with just three lines of Swift code. These models run entirely on the device, require no cloud connection, and are designed to protect user privacy. The framework includes features like Guided Generation and Tool Calling, making it easier to add generative AI to existing apps. Additionally, Xcode 26 now allows developers to access ChatGPT directly inside the IDE, even without a personal OpenAI account.
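To illustrate Guided Generation as described: instead of parsing free-form text, a developer marks a Swift type as generable and the framework constrains the model's output to that structure. A hedged sketch, assuming the @Generable and @Guide macros behave as shown in Apple's WWDC materials (the TripPlan type and prompt are invented for illustration):

```swift
import FoundationModels

// Guided Generation sketch: the model's output is constrained to this
// typed structure instead of arriving as free-form text to parse.
@Generable
struct TripPlan {
    @Guide(description: "A short, catchy title for the trip")
    var title: String

    @Guide(description: "Three recommended stops, in visiting order")
    var stops: [String]
}

let session = LanguageModelSession()
let plan = try await session.respond(
    to: "Plan a one-day coastal drive near Big Sur.",
    generating: TripPlan.self
)
print(plan.content.title, plan.content.stops)
```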
References :
Michael Nuñez@AI News | VentureBeat
//
References :
AI News | VentureBeat, www.laptopmag.com
Apple is making significant strides in artificial intelligence, particularly in image generation. Apple's machine learning research team has developed STARFlow, an AI system that rivals the performance of popular image generators like DALL-E and Midjourney. STARFlow combines normalizing flows with autoregressive transformers to achieve what the team calls "competitive performance" with state-of-the-art diffusion models. The advancement comes at a critical time for Apple, which has faced increasing criticism over its AI progress, and showcases its broader effort to develop distinctive AI capabilities that differentiate its products from competitors.
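As background (this is the standard normalizing-flow objective, not an equation taken from the STARFlow paper): a normalizing flow learns an invertible map from an image x to a latent z and trains by exact maximum likelihood via the change-of-variables formula. Keeping the Jacobian term tractable is what makes scaling this family of models to high-resolution images hard:

```latex
% Standard change-of-variables objective for a normalizing flow f_theta : x -> z
\log p_\theta(x)
  = \log p_Z\bigl(f_\theta(x)\bigr)
  + \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|
```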
This research tackles the challenge of scaling normalizing flows to work effectively with high-resolution images, an approach that has traditionally been overshadowed by diffusion models and generative adversarial networks. STARFlow demonstrates versatility across different types of image synthesis, achieving competitive performance in both class-conditional and text-conditional generation tasks. The research team includes Apple machine learning researchers along with academic collaborators, highlighting the company's commitment to pushing the boundaries of AI image generation.

Despite these advances, Apple Intelligence took a backseat at WWDC 2025. While Apple is giving developers access to its on-device large language model (LLM) and introducing features like Live Translation in Messages, FaceTime, and the Phone app, the excitement around Apple Intelligence was more muted than in previous years. Craig Federighi, Apple's SVP of Software Engineering, indicated that Siri needs "more time to reach a high-quality bar," suggesting that significant AI upgrades to Siri are still under development.
References :
Amanda Caswell@Latest from Tom's Guide
//
Apple's Worldwide Developers Conference (WWDC) 2025 highlighted the continued development of Apple Intelligence, despite initial delays and underwhelming features from the previous year. While the spotlight shifted towards software revamps and new apps, Apple reaffirmed its commitment to AI by unveiling a series of enhancements and integrations across its ecosystem. Notably, the company emphasized the progression of Apple Intelligence with more capable and efficient models, teasing additional features to be revealed throughout the presentation.
Apple is expanding Apple Intelligence by giving third-party developers access to its on-device foundation model, allowing them to implement offline AI features that are private and carry no API fees. Users gain deeper access through new Shortcuts actions that tap Apple Intelligence models directly, with the option to use ChatGPT instead. A key update is the introduction of Live Translation, integrated into Messages, FaceTime, and the Phone app, which automatically translates texts and displays captions during conversations in real time. Visual Intelligence will allow users to select an object and search for similar products. Together, these enhancements show Apple's focus on practical, user-friendly AI tools across its devices, aiming to streamline communication and improve the user experience.
References :
@machinelearning.apple.com
//
Apple researchers have released a new study questioning the capabilities of Large Reasoning Models (LRMs), casting doubt on the industry's pursuit of Artificial General Intelligence (AGI). The research paper, titled "The Illusion of Thinking," reveals that these models, including those from OpenAI, Google DeepMind, Anthropic, and DeepSeek, experience a 'complete accuracy collapse' when faced with complex problems. Unlike existing evaluations primarily focused on mathematical and coding benchmarks, this study evaluates the reasoning traces of these models, offering insights into how LRMs "think".
Researchers tested various models, including OpenAI's o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet, using puzzles like the Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. These environments allow complexity to be manipulated while keeping the logical structure constant. The team identified three distinct performance regimes: low-complexity tasks, where standard language models surprisingly outperformed LRMs; medium-complexity tasks, where LRMs showed advantages; and high-complexity tasks, where all models collapsed. The study suggests that the so-called reasoning of LRMs may be closer to sophisticated pattern matching: fragile, and prone to failure when complexity rises significantly.

Meanwhile, Apple has begun integrating generative AI into its own apps and experiences, and the new Foundation Models framework gives app developers access to the on-device foundation language model.
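On that last point, the framework's Tool Calling capability lets the on-device model invoke app code when a prompt needs app data. A hedged sketch following the shape Apple presented at WWDC (the tool name, arguments, and output here are invented for illustration; treat the exact protocol signatures as an assumption):

```swift
import FoundationModels

// Tool Calling sketch: the on-device model may invoke this app-defined
// tool when answering a prompt requires journal data.
struct SearchJournalTool: Tool {
    let name = "searchJournal"
    let description = "Finds journal entries matching a keyword."

    @Generable
    struct Arguments {
        @Guide(description: "Keyword to search entries for")
        var keyword: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real app would query its own database here.
        ToolOutput("Found 3 entries mentioning \(arguments.keyword).")
    }
}

let session = LanguageModelSession(tools: [SearchJournalTool()])
let answer = try await session.respond(to: "What have I written about hiking?")
print(answer.content)
```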
References :
@www.theapplepost.com
//
Apple's Worldwide Developers Conference (WWDC) 2025 is set to begin next week, and anticipation is building around the potential unveiling of iOS 26, alongside updates to macOS. Bloomberg's Mark Gurman has highlighted a new "digital glass" design expected to debut on the iPhone, drawing inspiration from the visionOS operating system used in the Apple Vision Pro headset. The design promises a fresh aesthetic for Apple's software, emphasizing light and transparency throughout the operating system. Gurman notes that redesigned icons for the default apps, more fluid and dynamic toolbars and tabs with intuitive pop-out menus, home screen widgets aligned with the visionOS look, and a streamlined Camera app are all expected.
Apple admins are also preparing for changes previewed at WWDC 2025 and the next macOS release. These updates are expected to span the Apple product suite, unifying the user experience across devices and introducing Apple Intelligence features. Kandji, a company specializing in Apple device management, is offering resources to help IT teams navigate the changes and prepare for potential enterprise impacts. The conference is also expected to show how Apple plans to open its on-device AI models to developers, allowing them to incorporate AI into their applications, while the Translate app may be revamped and integrated with AirPods.

Despite the expected design changes and potential AI advancements, some inside Apple believe WWDC 2025 may be a letdown from an AI perspective. Wall Street appears to hold similar views, citing shortcomings in Apple Intelligence and a stalled Siri revamp as reasons for concern. Reports suggest that while several projects are underway at Apple, they may not be ready for this year's WWDC. Even so, the event remains a key opportunity for Apple to showcase its latest software innovations and offer a glimpse into the future of its product ecosystem.
References :
@www.theapplepost.com
//
References :
mastodon.macstories.net, MacStories
Sky, an innovative macOS application from the original creators of Shortcuts, is poised to transform the user experience by bringing AI-powered automation to Macs. The app, set to launch later this year, lets users extend AI integration and automation capabilities system-wide. Early testers, including Federico Viticci, have expressed strong enthusiasm, drawing parallels to the transformative impact of Editorial and Workflow, apps previously associated with members of the Sky team.
The developers behind Sky, Ari Weinstein, Conrad Kramer, and Kim Beverett of Software Applications Incorporated, have been working quietly on the project since 2023. Sky aims to be more intuitive than its predecessors, harnessing large language models (LLMs) to streamline tasks and workflows. Viticci, who has had exclusive access to Sky for the past two weeks, believes it will fundamentally change how users approach macOS automation, recalling the same excitement he felt when first trying Editorial, Workflow, and Shortcuts.

Simultaneously, Apple is expected to unveil a major software rebrand at WWDC 2025, headlined by the introduction of iOS 26, iPadOS 26, macOS 26, visionOS 26, tvOS 26, and watchOS 26. The strategic shift moves away from traditional version numbers to a year-based system, mirroring a practice once used for Windows. The change aims to reflect significant operating system redesigns and simplify branding for users and developers alike; Bloomberg's Mark Gurman suggests it would create consistency across Apple's platforms.
References :
@gradientflow.com
//
References :
eWEEK, Gradient Flow
Apple is ramping up its efforts in the artificial intelligence space, focusing on efficiency, privacy, and seamless integration across its hardware and software. The tech giant is reportedly accelerating the development of its first AI-powered smart glasses, with a target release date of late 2026. These glasses, described as similar to Meta's Ray-Ban smart glasses but "better made," will feature built-in cameras, microphones, and speakers, enabling them to analyze the external world and respond to requests via Siri. This move positions Apple to compete directly with Meta, Google, and the emerging OpenAI/Jony Ive partnership in the burgeoning AI device market.
Apple also plans to open its on-device AI models to developers at WWDC 2025. The initiative aims to let developers create innovative AI-driven applications that leverage Apple's hardware while prioritizing user privacy, fostering an ecosystem of AI-enhanced experiences across the product line. The strategy reflects a desire to integrate sophisticated intelligence deeply into Apple's products without compromising user privacy and trust, distinguishing the company from competitors that have rushed high-profile AI models to market.

While Apple pushes forward with smart glasses, it has reportedly shelved plans for an Apple Watch with a built-in camera. The decision suggests a strategic shift toward AI-powered wearables that align with Apple's vision of seamless integration and user privacy; it may also reflect privacy concerns or the technical challenges of fitting such features into a smaller wearable. Ultimately, Apple's success in the AI arena will depend on delivering genuinely useful, seamlessly embedded AI experiences.
References :
Aminu Abdullahi@eWEEK
//
Apple is accelerating its entry into the AI-powered wearable market with plans to launch its first smart glasses by late 2026. The glasses, codenamed "N401," will feature built-in cameras, microphones, and speakers, letting users interact with Siri to make phone calls, play music, run live translations, and get GPS directions. Apple aims to compete with Meta's Ray-Ban smart glasses, which have seen significant success, but is initially prioritizing simplicity by forgoing full augmented reality (AR) capabilities in the first iteration. Apple hopes that by defining its product vision and investing strategically, it can overcome its late start in the AI race and deliver a superior experience.
The move comes as Apple recognizes the growing importance of AI in wearable technology and seeks to catch up with competitors like Meta, Google, and OpenAI. While Meta works on higher-end smart glasses with built-in displays, Apple is prioritizing essential functionality and a sleek design, similar to Meta's Ray-Bans. Google has partnered with brands such as Samsung, Warby Parker, and Gentle Monster to launch smart glasses on its Android XR system. The AI-device market is set to become more crowded next year: OpenAI, the company behind ChatGPT, announced earlier this week that it is collaborating with former Apple designer Jony Ive on AI gadgets to be released beginning next year.

Amid these developments, Apple has reportedly scrapped plans for a camera-equipped Apple Watch, signaling a shift in its wearables strategy. Sources indicate the company had been working toward releasing a camera-equipped Apple Watch and Apple Watch Ultra by 2027, but the project was recently shut down. Instead, Apple is concentrating its resources on smart glasses, with prototypes expected to be ordered in bulk for testing, as it faces competition from OpenAI, Meta, Google, and Amazon. Bloomberg reported that Apple has been running focus groups to learn what its employees like about competitors' smart glasses.
References :
Josh Render@tomsguide.com
//
Apple is reportedly undertaking a significant overhaul of Siri, rebuilding it from the ground up with a new AI-centric architecture. This move comes after earlier versions of Siri, which relied on AI, did not perform as desired, struggling to provide helpful and effective responses. Attempts to integrate AI capabilities into the older version only resulted in further complications for Apple, with employees noting that fixing one issue often led to additional problems. Recognizing their delayed start in the AI race compared to other tech companies, Apple is now aiming to create a smarter and more conversational Siri, potentially leveraging a large language model developed by its Zurich AI team.
In a notable shift, Apple is also considering opening its operating systems to let iPhone users in the European Union choose third-party AI assistants such as ChatGPT or Gemini as their default, effectively replacing Siri. The potential change is reportedly driven by EU regulatory pressure for more flexibility in Apple's ecosystem. If implemented, it would align Apple more closely with competitors like Samsung and Google, which already offer more diverse AI options on their devices, and would give Apple users access to advanced AI features while the company continues to refine and improve its own Siri.

Apple's AI strategy is also facing scrutiny on other fronts. The Trump administration previously raised national security concerns over Apple's potential AI deal with Alibaba, specifically the integration of Alibaba's AI technology into iPhones sold in China. Those concerns center on national security, data privacy, and the broader geopolitical landscape, given the Chinese government's regulations on data sharing and content control. While Apple aims to comply with local regulations and compete more effectively in the Chinese market through the partnership, the US government worries it could inadvertently aid China's AI development and expose user data to risk.
References :