News from the AI & ML world

DeeperML - #apple

Oscar Gonzalez@laptopmag.com //
Apple is reportedly exploring the acquisition of AI startup Perplexity, a move that could significantly bolster its artificial intelligence capabilities. According to recent reports, Apple executives have engaged in internal discussions about potentially bidding for the company, with Adrian Perica, Apple's VP of corporate development, and Eddy Cue, SVP of Services, reportedly weighing the idea. Perplexity is known for its AI-powered search engine and chatbot, which some view as leading alternatives to ChatGPT. This acquisition could provide Apple with both the advanced AI technology and the necessary talent to enhance its own AI initiatives.

This potential acquisition reflects Apple's growing interest in AI-driven search and its desire to compete more effectively in this rapidly evolving market. One of the key drivers behind Apple's interest in Perplexity is the possible disruption of its longstanding agreement with Google, which involves Google being the default search engine on Apple devices. This deal generates approximately $20 billion annually for Apple, but is currently under threat from US antitrust enforcers. Acquiring Perplexity could provide Apple with a strategic alternative, enabling it to develop its own AI-based search engine and reduce its reliance on Google.

While discussions are in the early stages and no formal offer has been made, acquiring Perplexity would give Apple a strategic fallback if it is forced to end its partnership with Google. Apple aims to integrate Perplexity's technology into an AI-based search engine or use it to enhance Siri's capabilities. With Perplexity, Apple could accelerate the development of its own AI-powered search engine across its devices. A Perplexity spokesperson said the company has no knowledge of any M&A discussions, and Apple has not commented publicly.

Recommended read:
References :
  • Spyglass: A partnership is probably more likely, but they have to at least think about buying here...
  • the-decoder.com: Apple executives have held internal discussions about potentially bidding for AI startup Perplexity
  • www.laptopmag.com: A new report says Apple had talks about making a big AI acquisition.
  • www.tomsguide.com: Apple is reportedly in talks to acquire AI startup Perplexity, putting it one step closer to its AI-powered search engine and smarter Siri

@www.marktechpost.com //
Apple is enhancing its developer tools to help developers build AI-powered applications. While Siri may not yet be the smart assistant Apple envisions, the company has significantly enriched its offerings for developers. A powerful update to Xcode, including ChatGPT integration, is set to transform app development. The move signals Apple's commitment to integrating AI capabilities into its ecosystem, even as challenges persist with its own AI assistant.

However, experts have voiced concerns about Apple's downbeat AI outlook, attributing it to a potential lack of high-powered hardware. Professor Seok Joon Kwon of Sungkyunkwan University suggests that Apple's research paper revealing fundamental reasoning limits of modern large reasoning models (LRMs) and large language models (LLMs) is flawed because Apple lacks the hardware to adequately test high-end LRMs and LLMs. The professor argues that Apple's hardware is unsuitable for AI development compared to the resources available to companies like Google, Microsoft, or xAI. If Apple wants to catch up with rivals, it will either have to buy a lot of Nvidia GPUs or develop its own AI ASICs.

Apple's much-anticipated Siri upgrade, powered by Apple Intelligence, is now reportedly targeting a "spring 2026" launch. According to Mark Gurman at Bloomberg, Apple has set an internal release target of spring 2026 for its delayed Siri upgrade, a key step in its artificial intelligence turnaround effort, with the update slated for iOS 26.4. The upgrade is expected to give Siri on-screen awareness and personal-context capabilities.

Recommended read:
References :
  • MarkTechPost: Apple Researchers Reveal Structural Failures in Large Reasoning Models Using Puzzle-Based Evaluation
  • www.techradar.com: Apple reportedly targets 'spring 2026' for launch of delayed AI Siri upgrade – but is that too late?
  • www.tomshardware.com: Expert pours cold water on Apple's downbeat AI outlook — says lack of high-powered hardware could be to blame

@www.marktechpost.com //
Apple researchers are challenging the perceived reasoning capabilities of Large Reasoning Models (LRMs), sparking debate within the AI community. A recent paper from Apple, titled "The Illusion of Thinking," suggests that these models, which generate intermediate thinking steps like Chain-of-Thought reasoning, struggle with fundamental reasoning tasks. The research indicates that current evaluation methods relying on math and code benchmarks are insufficient, as they often suffer from data contamination and fail to assess the structure or quality of the reasoning process.

To address these shortcomings, Apple researchers introduced controllable puzzle environments, including the Tower of Hanoi, River Crossing, Checker Jumping, and Blocks World, allowing for precise manipulation of problem complexity. These puzzles require diverse reasoning abilities, such as constraint satisfaction and sequential planning, and are free from data contamination. The Apple paper concluded that state-of-the-art LRMs ultimately fail to develop generalizable problem-solving capabilities, with accuracy collapsing to zero beyond certain complexities across different environments.
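
The appeal of these environments is that difficulty can be tuned with a single parameter while the logical structure stays fixed; in the Tower of Hanoi, for instance, the minimal solution grows as 2^n − 1 moves, doubling (plus one) with every added disk. A minimal Python sketch of such a complexity knob (illustrative only, not the paper's actual test harness):

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the minimal move sequence for an n-disk Tower of Hanoi."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then restack.
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# Complexity is controlled by a single knob: the disk count n.
for n in (3, 7, 10):
    print(n, len(hanoi_moves(n)))  # minimal length is 2**n - 1
```

Because the optimal solution length is known in closed form, a model's output can be scored exactly at every complexity level, which is what makes the accuracy-collapse curves in the paper possible.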

However, the Apple research has faced criticism. Experts, like Professor Seok Joon Kwon, argue that Apple's lack of high-performance hardware, such as a large GPU-based cluster comparable to those operated by Google or Microsoft, could be a factor in their findings. Some argue that the models perform better on familiar puzzles, suggesting that their success may be linked to training exposure rather than genuine problem-solving skills. Others, such as Alex Lawsen and "C. Opus," argue that the Apple researchers' results don't support claims about fundamental reasoning limitations, but rather highlight engineering challenges related to token limits and evaluation methods.

Recommended read:
References :
  • TheSequence: The Sequence Research #663: The Illusion of Thinking, Inside the Most Controversial AI Paper of Recent Weeks
  • chatgptiseatingtheworld.com: Research: Did Apple researchers overstate “The Illusion of Thinking” in reasoning models? Opus, Lawsen think so.
  • www.marktechpost.com: Apple Researchers Reveal Structural Failures in Large Reasoning Models Using Puzzle-Based Evaluation
  • arstechnica.com: New Apple study challenges whether AI models truly “reason” through problems
  • 9to5Mac: New paper pushes back on Apple’s LLM ‘reasoning collapse’ study

nftjedi@chatgptiseatingtheworld.com //
Apple researchers recently published a study titled "The Illusion of Thinking," suggesting that advanced large language models (LLMs) struggle with true reasoning, relying instead on pattern matching. The study presented findings based on tasks like the Tower of Hanoi puzzle, where models purportedly failed as complexity increased, leading to the conclusion that these models possess limited problem-solving abilities. However, these conclusions are now under scrutiny, with critics arguing the experiments were not fairly designed.

Alex Lawsen of Open Philanthropy has published a counter-study challenging the foundations of Apple's claims. Lawsen argues that models like Claude, Gemini, and OpenAI's latest systems weren't failing due to cognitive limits, but rather because the evaluation methods didn't account for key technical constraints. One issue raised was that models were often cut off from providing full answers because they neared their maximum token limit, a built-in cap on output text, which Apple's evaluation counted as a reasoning failure rather than a practical limitation.

Another point of contention involved the River Crossing test, where models faced unsolvable problem setups. When the models correctly identified the tasks as impossible and refused to attempt them, they were still marked wrong. Furthermore, the evaluation system strictly judged outputs against exhaustive solutions, failing to credit models for partial but correct answers, pattern recognition, or strategic shortcuts. To illustrate, Lawsen demonstrated that when models were instructed to write a program to solve the Hanoi puzzle, they delivered accurate, scalable solutions even with 15 disks, contradicting Apple's assertion of limitations.
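
Lawsen's programmatic point is easy to reproduce: the standard recursive solution fits in a few lines, and replaying its move list on actual pegs confirms it scales to 15 disks (32,767 moves), far more than a token-limited transcript of individual moves could hold. A sketch in Python (names are illustrative; this is not Lawsen's actual code):

```python
def solve_hanoi(n, src, aux, dst, moves):
    """Append the standard recursive Tower of Hanoi solution to `moves`."""
    if n == 0:
        return
    solve_hanoi(n - 1, src, dst, aux, moves)
    moves.append((src, dst))
    solve_hanoi(n - 1, aux, src, dst, moves)

def verify(n):
    """Replay the move list on real pegs to confirm it is legal and complete."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # top of peg = end of list
    moves = []
    solve_hanoi(n, "A", "B", "C", moves)
    for src, dst in moves:
        disk = pegs[src].pop()
        assert not pegs[dst] or pegs[dst][-1] > disk  # never place large on small
        pegs[dst].append(disk)
    return pegs["C"] == list(range(n, 0, -1)), len(moves)

ok, count = verify(15)
print(ok, count)  # 15 disks solved in 2**15 - 1 = 32767 legal moves
```

The contrast is the crux of Lawsen's argument: judging a model only on exhaustive move-by-move transcripts conflates output-length limits with reasoning limits.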

Recommended read:
References :
  • chatgptiseatingtheworld.com: Research: Did Apple researchers overstate “The Illusion of Thinking” in reasoning models? Opus, Lawsen think so.
  • Digital Information World: Apple’s AI Critique Faces Pushback Over Flawed Testing Methods
  • NextBigFuture.com: Apple Researcher Claims Illusion of AI Thinking Versus OpenAI Solving Ten Disk Puzzle
  • Bernard Marr: Beyond The Hype: What Apple's AI Warning Means For Business Leaders

Mark Gurman@Bloomberg Technology //
Apple is facing delays in the release of its AI-powered Siri upgrade, now reportedly slated for Spring 2026 with the iOS 26.4 update. This news follows the recent WWDC 2025 event, where AI features were showcased across various Apple operating systems, but the highly anticipated Siri overhaul was notably absent. Sources indicate that the delay stems from challenges in integrating older Siri systems with newer platforms, forcing engineers to rebuild the assistant from scratch. Craig Federighi, Apple’s head of software engineering, explained that the previous V1 architecture was insufficient for achieving the desired quality, prompting a shift to a "deeper end-to-end architecture" known as V2.

This delay has also reportedly caused internal tensions within Apple, with the AI and marketing teams allegedly blaming each other for overpromising and failing to meet timelines. While no exact date has been finalized for the iOS 26.4 release, insiders suggest a spring timeframe, aligning with Apple's typical release schedule for ".4" updates. The upgraded Siri is expected to offer smarter responses, improved app control, and on-screen awareness, allowing it to tap into users' personal context and perform actions based on what's displayed on their devices.

Separately, Apple researchers have revealed structural failures in large reasoning models (LRMs) through puzzle-based evaluations. A recently released Apple research paper claimed that contemporary LLMs and LRMs fail to make sound judgments as problem complexity increases in controlled puzzle environments, revealing fundamental limitations and challenging the common belief that these models can think like a human. The work, conducted using puzzles like the Tower of Hanoi and River Crossing, aimed to assess the true reasoning capabilities of AI models by analyzing their performance on unfamiliar tasks free from data contamination. Professor Seok Joon Kwon of Sungkyunkwan University, however, believes Apple does not have enough high-performance hardware to test what high-end LRMs and LLMs are truly capable of.

Recommended read:
References :
  • Bloomberg Technology: Apple targets spring 2026 for release of delayed Siri AI upgrade
  • PCMag Middle East ai: Apple Explains Why It Delayed AI Siri, Confirms It Won't Arrive Until 2026
  • www.marktechpost.com: Apple Researchers Reveal Structural Failures in Large Reasoning Models Using Puzzle-Based Evaluation
  • www.techradar.com: Apple reportedly targets 'spring 2026' for launch of delayed AI Siri upgrade – but is that too late?
  • www.tomsguide.com: Siri may not get an AI upgrade until next Spring — what we know
  • thetechbasic.com: Apple AI Roadmap: Contextual Siri 2026, Knowledge Chatbot & Copilot
  • PCMag Middle East ai: Report: Siri's Long-Delayed AI Features May Arrive With iOS 26.4
  • AppleMagazine: Apple Targets iOS 26.4 for Siri AI Upgrade in March 2026
  • Mark Gurman: NEW: Apple has set an internal release target of spring 2026 for its delayed upgrade of Siri, marking a key step in its artificial intelligence turnaround effort. Plus, the latest on the company's efforts here.
  • www.eweek.com: ‘This Work Needed More Time’: Apple Delays Siri Upgrade to Spring 2026
  • Analytics India Magazine: Apple sets target to release delayed Siri AI update by Spring 2026
  • www.laptopmag.com: Apple’s AI-powered Siri reportedly has a new target date. Will it stick this time?

@www.sify.com //
Apple's Worldwide Developers Conference (WWDC) 2025, which opened on June 9, showcased a significant transformation in both user interface and artificial intelligence. A major highlight was the unveiling of "Liquid Glass," a new design language offering a "glass-like" experience with translucent layers, fluid animations, and spatial depth. This UI refresh, described as Apple's boldest in over a decade, touches core system elements like the lock screen, home screen, and apps such as Safari and Music, with floating controls and glassy visual effects. iPhones from the 15 series onward will support Liquid Glass, with public betas rolling out soon to deliver a more immersive, dynamic feel.

Apple also announced advancements in AI, positioning itself to catch up in the competitive landscape. Apple Intelligence, a system-wide, on-device AI layer, integrates with iOS 26, macOS Tahoe, and other platforms. It enables features such as summarizing emails and notifications, auto-completing messages, real-time call translation, and creating personalized emoji called Genmoji. Visual Intelligence allows users to extract text or gain information from photos, documents, and app screens. Siri is slated to receive intelligence upgrades as well, though its full capabilities may be slightly delayed.

In a significant shift, Apple has opened its foundational AI model to third-party developers, granting direct access to the on-device large language model powering Apple Intelligence. This move, announced at WWDC, marks a departure from Apple's traditionally closed ecosystem. The newly accessible three-billion parameter model operates entirely on-device, reflecting Apple’s privacy-first approach. The Foundation Models framework allows developers to integrate Apple Intelligence features with minimal code, offering privacy-focused AI inference at no cost. Xcode 26 now includes AI assistance, embedding large language models directly into the coding experience, and third-party developers can now leverage Visual Intelligence capabilities within their apps.


@felloai.com //
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.

Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning."

The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and stated that it raises serious questions about the path towards AGI. This research also arrives amid increasing scrutiny surrounding Apple's AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening up its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications and address limitations.

Recommended read:
References :
  • felloai.com: Apple’s Latest Research Exposed Shocking Flaw in Today’s Smartest AI Models
  • The Register - Software: Apple AI boffins puncture AGI hype as reasoning models flail on complex planning
  • www.livescience.com: AI reasoning models aren’t as smart as they were cracked up to be, Apple study claims
  • www.theguardian.com: Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds
  • www.computerworld.com: Apple warns: GenAI still isn’t very smart
  • futurism.com: Apple Researchers Just Released a Damning Paper That Pours Water on the Entire AI Industry

@www.artificialintelligence-news.com //
Apple has announced a significant shift in its approach to AI development by opening its foundational AI model to third-party developers. This move, unveiled at the Worldwide Developers Conference (WWDC), grants developers direct access to the on-device large language model that powers Apple Intelligence. The newly accessible three-billion parameter model operates entirely on the device, reflecting Apple’s commitment to user privacy. This on-device approach distinguishes Apple from competitors relying on cloud-based AI solutions, emphasizing privacy and user control.

The new Foundation Models framework enables developers to integrate Apple Intelligence features into their apps with minimal code, using just three lines of Swift. This framework offers guided generation and tool-calling capabilities, making it easier to add generative AI to existing applications. Automattic's Day One journaling app is already leveraging this framework to provide privacy-centric intelligent features. According to Paul Mayne, head of Day One at Automattic, the framework is helping them rethink what’s possible with journaling by bringing intelligence and privacy together in ways that deeply respect their users.

Apple is also enhancing developer tools within Xcode 26, which now embeds large language models directly into the coding environment. Developers can access ChatGPT without needing a personal OpenAI account and connect API keys from other providers or run local models on Apple silicon Macs. Furthermore, Apple has upgraded the App Intents interface to support visual intelligence, allowing apps to present visual search results directly within the operating system. Etsy is already exploring these features to improve product discovery, with CTO Rafe Colburn noting the potential to meet shoppers right on their iPhone with visual intelligence.

Recommended read:
References :
  • machinelearning.apple.com: With Apple Intelligence, we're integrating powerful generative AI right into the apps and experiences people use every day, all while protecting their privacy.
  • AI News: Apple has opened its foundational AI model to third-party developers for the first time, allowing direct access to the on-device large language model that powers Apple Intelligence.

@thetechbasic.com //
Apple has officially announced macOS Tahoe, version 26, at its annual WWDC event. The new operating system introduces a visually striking Liquid Glass design, offering a refreshed user experience with a cohesive design language spanning across Apple’s entire ecosystem, including iOS 26 and iPadOS 26. This marks the first time Apple has implemented a universal design philosophy across its platforms, aiming to bring a new level of vitality while maintaining the familiarity of Apple's software. The Liquid Glass aesthetic features translucent elements that dynamically reflect and refract their surroundings, creating a sense of depth and movement, enhancing the user experience.

The Liquid Glass design extends throughout the system, with glossy translucent menu bars, windows, and icons. The surfaces softly reflect light and display subtle color tints, allowing users to customize folders with various accent colors. Widgets and buttons now have a more three-dimensional feel while remaining crisp. The Dock appears to float on a frosted glass shelf, and Control Center icons animate with a soft glow when activated. These changes provide macOS Tahoe with a more modern look while keeping familiar layouts and workflows intact. Furthermore, macOS Tahoe includes a dedicated Phone app that mirrors the iPhone Phone app through Continuity integration. Users can see Live Activities directly on their Mac lock screen and screen unknown callers with Call Screening and Hold Assist.

In addition to the design overhaul, Apple is embedding generative AI models directly into Xcode and iOS apps, emphasizing privacy and user control. The company introduced the Foundation Models framework, allowing developers to add Apple's AI models to their apps with just three lines of Swift code. These models run entirely on the device, requiring no cloud connection and designed to protect user privacy. The framework includes features like "Guided Generation" and "Tool Calling," making it easier to add generative AI to existing apps. Additionally, Xcode 26 now allows developers to access ChatGPT directly inside the IDE, even without a personal OpenAI account.

Recommended read:
References :
  • thetechbasic.com: macOS Tahoe 26 Features: Liquid Glass Design, Apple Intelligence, Spotlight

Michael Nuñez@AI News | VentureBeat //
Apple is making significant strides in the field of artificial intelligence, particularly with its new image generation technology. Apple's machine learning research team has developed STARFlow, a breakthrough AI system that rivals the performance of popular image generators like DALL-E and Midjourney. STARFlow combines normalizing flows with autoregressive transformers to achieve what the team calls “competitive performance” with state-of-the-art diffusion models. This advancement comes at a critical time for Apple, which has faced increasing criticism over its progress in artificial intelligence, showcasing its broader effort to develop distinctive AI capabilities that differentiate its products from competitors.

This research tackles the challenge of scaling normalizing flows to work effectively with high-resolution images, something that has traditionally been overshadowed by diffusion models and generative adversarial networks. The STARFlow system demonstrates versatility across different types of image synthesis challenges, achieving competitive performance in both class-conditional and text-conditional image generation tasks. The research team includes Apple machine learning researchers along with academic collaborators, highlighting the company's commitment to pushing the boundaries of AI image generation.

Despite the image generation advancements, Apple Intelligence took a backseat at WWDC 2025. While Apple is giving developers access to Apple's on-device large language model (LLM) and introducing features like Live Translation in Messages, FaceTime, and the Phone app, the excitement around Apple Intelligence was more muted compared to previous years. Craig Federighi, Apple's SVP of Software Engineering, indicated that Siri needs "more time to reach a high-quality bar," suggesting that significant AI upgrades to Siri are still under development.

Recommended read:
References :
  • AI News | VentureBeat: Apple makes major AI advance with image generation technology rivaling DALL-E and Midjourney
  • www.laptopmag.com: Apple isn’t just sharing its AI. It’s betting developers will finish the job.

Amanda Caswell@Latest from Tom's Guide //
Apple's Worldwide Developers Conference (WWDC) 2025 highlighted the continued development of Apple Intelligence, despite initial delays and underwhelming features from the previous year. While the spotlight shifted towards software revamps and new apps, Apple reaffirmed its commitment to AI by unveiling a series of enhancements and integrations across its ecosystem. Notably, the company emphasized the progression of Apple Intelligence with more capable and efficient models, teasing additional features to be revealed throughout the presentation.

Apple is expanding Apple Intelligence through access to its on-device foundation model to third-party developers, allowing them to implement offline AI features. These AI features will be private and come without API fees. Users gain deeper access through new Shortcuts actions that offer direct access to Apple Intelligence models. The AI action will also include the option to use ChatGPT instead.

A key update is the introduction of Live Translation, integrated into Messages, FaceTime, and the Phone app. This feature facilitates real-time language translation, automatically translating texts and displaying captions during conversations. Visual Intelligence will allow users to select an object and search for similar products. These enhancements demonstrate Apple's focus on providing practical, user-friendly AI tools across its devices, aiming to streamline communication and improve the user experience.

Recommended read:
References :
  • PCMag Middle East ai: Apple Intelligence Takes a Backseat at WWDC 2025
  • THE DECODER: Here's every Apple Intelligence update Apple announced at WWDC 25
  • MacStories: Apple Intelligence Expands: Onscreen Visual Intelligence, Shortcuts, Third-Party Apps, and More
  • www.techradar.com: Apple Intelligence was firmly in the background at WWDC 2025 as iPad finally had its chance to shine
  • www.tomsguide.com: Everyone’s talking about 'Liquid Glass' — but these 5 WWDC 2025 AI features impressed me most
  • www.techradar.com: Apple Intelligence is a year old - here are 3 genuinely useful AI tools you should use on your Apple products
  • www.techradar.com: TechRadar and Tom's Guide sat down with Apple's Craig Federighi and Greg Joswiak to talk about the company's latest plans for integrating Siri and Apple Intelligence.
  • www.eweek.com: Visual intelligence will work across more apps this fall, among other AI features announced at Apple’s Worldwide Developers Conference.
  • www.laptopmag.com: Apple isn’t just sharing its AI. It’s betting developers will finish the job.

@machinelearning.apple.com //
Apple researchers have released a new study questioning the capabilities of Large Reasoning Models (LRMs), casting doubt on the industry's pursuit of Artificial General Intelligence (AGI). The research paper, titled "The Illusion of Thinking," reveals that these models, including those from OpenAI, Google DeepMind, Anthropic, and DeepSeek, experience a 'complete accuracy collapse' when faced with complex problems. Unlike existing evaluations primarily focused on mathematical and coding benchmarks, this study evaluates the reasoning traces of these models, offering insights into how LRMs "think".

Researchers tested various models, including OpenAI's o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet, using puzzles like the Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. These environments allowed for the manipulation of complexity while maintaining consistent logical structures. The team discovered that standard language models surprisingly outperformed LRMs in low-complexity scenarios, while LRMs only demonstrated advantages in medium-complexity tasks. However, all models experienced a performance collapse when faced with highly complex tasks.

The study suggests that the so-called reasoning of LRMs may be more akin to sophisticated pattern matching, which is fragile and prone to failure when challenged with significant complexity. Apple's research team identified three distinct performance regimes: low-complexity tasks where standard models outperform LRMs, medium-complexity tasks where LRMs show advantages, and high-complexity tasks where all models collapse. Apple has begun integrating powerful generative AI into its own apps and experiences. The new Foundation Models framework gives app developers access to the on-device foundation language model.

Recommended read:
References :
  • THE DECODER: LLMs designed for reasoning, like Claude 3.7 and Deepseek-R1, are supposed to excel at complex problem-solving by simulating thought processes.
  • machinelearning.apple.com: Apple machine learning discusses Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
  • PPC Land: PPC Land reports on Apple study exposes fundamental limits in AI reasoning models through puzzle tests.
  • the-decoder.com: The Decoder covers Apple's study, highlighting the limitation in thinking abilities of reasoning models.
  • felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal “thought processes” might be nothing more than performative illusions.
  • Gadgets 360: Apple Claims AI Reasoning Models Suffer From ‘Accuracy Collapse’ When Solving Complex Problems
  • futurism.com: Apple Researchers Just Released a Damning Paper That Pours Water on the Entire AI Industry
  • The Register - Software: Apple AI boffins puncture AGI hype as reasoning models flail on complex planning
  • www.theguardian.com: Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds
  • chatgptiseatingtheworld.com: Apple researchers cast doubt on AI reasoning models of other companies
  • www.livescience.com: AI reasoning models aren’t as smart as they were cracked up to be, Apple study claims
  • www.computerworld.com: Apple warns: GenAI still isn’t very smart
  • Fello AI: Apple's research paper, "The Illusion of Thinking," argues that large reasoning models face a complete accuracy collapse beyond certain complexities, highlighting limitations in their reasoning capabilities.
  • WIRED: Apple's research paper challenges the claims of significant reasoning capabilities in current AI models, particularly those relying on pattern matching instead of genuine understanding.
  • Analytics Vidhya: Apple Exposes Reasoning Flaws in o3, Claude, and DeepSeek-R1
  • www.itpro.com: ‘A complete accuracy collapse’: Apple throws cold water on the potential of AI reasoning – and it's a huge blow for the likes of OpenAI, Google, and Anthropic
  • www.tomshardware.com: Apple says generative AI cannot think like a human - research paper pours cold water on reasoning models
  • Digital Information World: Apple study questions AI reasoning models in stark new report
  • www.theguardian.com: A research paper by Apple has taken the AI world by storm, all but eviscerating the popular notion that large language models (LLMs, and their newest variant, LRMs, large reasoning models) are able to reason reliably.
  • AI Alignment Forum: Researchers at Apple released a paper provocatively titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”, which “challenge[s] prevailing assumptions about [language model] capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning”.
  • Ars OpenForum: New Apple study challenges whether AI models truly "reason" through problems
  • 9to5Mac: New paper pushes back on Apple’s LLM ‘reasoning collapse’ study
  • AI News | VentureBeat: Do reasoning models really "think" or not? Apple research sparks lively debate, response
  • www.marktechpost.com: Apple Researchers Reveal Structural Failures in Large Reasoning Models Using Puzzle-Based Evaluation

@www.theapplepost.com //
Apple's Worldwide Developers Conference (WWDC) 2025 is set to begin next week, and anticipation is building around the potential unveiling of iOS 26, alongside updates to macOS. Bloomberg's Mark Gurman has highlighted a new "digital glass" design expected to debut on the iPhone, drawing inspiration from the visionOS operating system used in the Apple Vision Pro headset. This design promises a fresh aesthetic for Apple's software, emphasizing light and transparency throughout the operating system. Gurman notes that redesigned icons for the default apps, more fluid and dynamic toolbars and tabs with intuitive pop-out menus, home screen widgets aligned with the visionOS look, and a streamlined Camera app are all expected.

Apple admins are also preparing for changes previewed at WWDC 2025 and the next macOS release. These updates are expected to span the Apple product suite, unifying user experience across devices and introducing Apple Intelligence features. Kandji, a company specializing in Apple device management, is offering resources to help IT teams navigate these changes and prepare for potential enterprise impacts. The conference is also expected to showcase how Apple plans to open up its on-device AI models to developers, allowing them to incorporate AI into their applications, while the Translate app may be revamped and integrated with AirPods.

Despite the expected design changes and potential AI advancements, some inside Apple believe WWDC 2025 may be a letdown from an AI perspective. Wall Street also seems to hold similar views, citing potential shortcomings in Apple Intelligence and a stalled Siri revamp as reasons for concern. Reports suggest that while several projects are underway at Apple, they may not be ready for this year's WWDC. However, the event remains a key opportunity for Apple to showcase its latest software innovations and provide a glimpse into the future of its product ecosystem.

Recommended read:
References :
  • www.laptopmag.com: Keep up with the latest events from Apple's WWDC event right here, before, during, and after the event.
  • www.theapplepost.com: With WWDC 2025 just around the corner, Bloomberg’s Mark Gurman has shared his insights on the upcoming "digital glass" design set to debut on the iPhone.

@felloai.com //
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.

Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning."
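The evaluation setup the paper describes can be sketched in a few lines. This is a minimal illustration, not Apple's actual harness: `model_solve` is a hypothetical stand-in for querying a reasoning model, replaced here by a classical recursive solver so the scaffold runs; the real study scores model-produced move sequences the same way, with a validity checker, as problem complexity grows.

```python
# Sketch of a puzzle-based evaluation in the spirit of "The Illusion of
# Thinking": generate Tower of Hanoi instances of increasing complexity and
# score a candidate move sequence with a validity checker.

def model_solve(n):
    """Hypothetical stand-in for an LRM call; returns a list of (src, dst)
    peg moves. Here: the optimal classical solution."""
    moves = []
    def hanoi(k, src, aux, dst):
        if k == 0:
            return
        hanoi(k - 1, src, dst, aux)
        moves.append((src, dst))
        hanoi(k - 1, aux, src, dst)
    hanoi(n, 0, 1, 2)
    return moves

def is_valid_solution(n, moves):
    """Replay the moves, rejecting any illegal state, and check that all
    disks end up on the target peg."""
    pegs = [list(range(n, 0, -1)), [], []]  # peg 0 holds disks n..1
    for src, dst in moves:
        if not pegs[src]:
            return False                     # move from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                     # larger disk onto smaller one
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))

# Accuracy as a function of complexity (disk count). With a perfect solver
# this is 1 everywhere; the paper reports real LRMs collapse past a threshold.
accuracy = {n: int(is_valid_solution(n, model_solve(n))) for n in range(1, 8)}
print(accuracy)
```

The key design point mirrored from the paper is that correctness is checked by replaying moves against the puzzle's rules, so no ground-truth answer key is needed and instance difficulty can be dialed up arbitrarily.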

The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and stated that it raises serious questions about the path towards AGI. This research also arrives amid increasing scrutiny surrounding Apple's AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening up its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications and address limitations.

Recommended read:
References :
  • www.theguardian.com: Apple researchers have found "fundamental limitations" in cutting-edge artificial intelligence models, in a paper raising doubts about the technology industry’s race to reach a stage of AI at which it matches human intelligence.
  • felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal "thought processes" might be nothing more than performative illusions.
  • www.computerworld.com: Filling the void in the few hours before WWDC begins, Apple’s machine learning team raced out of the gate with a research paper, arguing that while the intelligence is artificial, it’s only superficially smart.
  • www.livescience.com: A new study by Apple has ignited controversy in the AI field by showing how reasoning models undergo 'complete accuracy collapse' when overloaded with complex problems.

@www.theapplepost.com //
Sky, an innovative macOS application developed by the original creators of Shortcuts, is poised to revolutionize user experience by bringing AI-powered automation to Macs. This mind-blowing app, set to launch later this year, allows users to extend AI integration and automation capabilities system-wide. Early testers, including Federico Viticci, have expressed immense excitement, drawing parallels to the transformative impact of Editorial and Workflow, apps previously associated with members of the Sky team.

The developers behind Sky, Ari Weinstein, Conrad Kramer, and Kim Beverett at Software Applications Incorporated, have been secretly working on the project since 2023. Sky aims to be more intuitive than its predecessors, harnessing the power of AI and large language models (LLMs) to streamline tasks and workflows. Viticci, who has had exclusive access to Sky for the past two weeks, believes it will fundamentally change how users approach macOS automation. He felt the same excitement when first trying Editorial, Workflow, and Shortcuts.

Simultaneously, Apple is expected to unveil a major software rebrand at WWDC 2025, headlined by the introduction of iOS 26, iPadOS 26, macOS 26, visionOS 26, tvOS 26, and watchOS 26. This strategic shift involves moving away from traditional version numbers to a year-based system, mirroring a practice seen in the past with Windows. This change aims to reflect significant operating system redesigns and simplify branding for both users and developers. Bloomberg’s Mark Gurman suggests this would create consistency across Apple's platforms.

Recommended read:
References :
  • mastodon.macstories.net: For the past two weeks, I’ve been testing Sky, the new app by the original Shortcuts team. It’s a mind-blowing app that brings AI + automation everywhere on your Mac. Coming later this year. It’ll change how we think of AI features on macOS. My exclusive story:
  • MacStories: From the Creators of Shortcuts, Sky Extends AI Integration and Automation to Your Entire Mac

@gradientflow.com //
Apple is ramping up its efforts in the artificial intelligence space, focusing on efficiency, privacy, and seamless integration across its hardware and software. The tech giant is reportedly accelerating the development of its first AI-powered smart glasses, with a target release date of late 2026. These glasses, described as similar to Meta's Ray-Ban smart glasses but "better made," will feature built-in cameras, microphones, and speakers, enabling them to analyze the external world and respond to requests via Siri. This move positions Apple to compete directly with Meta, Google, and the emerging OpenAI/Jony Ive partnership in the burgeoning AI device market.

Apple also plans to open its on-device AI models to developers at WWDC 2025. This initiative aims to empower developers to create innovative AI-driven applications that leverage Apple's hardware capabilities while prioritizing user privacy. By providing developers with access to its AI models, Apple hopes to foster a vibrant ecosystem of AI-enhanced experiences across its product line. The company's strategy reflects a desire to integrate sophisticated intelligence deeply into its products without compromising its core values of user privacy and trust, distinguishing it from competitors who may have rapidly deployed high-profile AI models.

While Apple is pushing forward with its smart glasses, it has reportedly shelved plans for an Apple Watch with a built-in camera. This decision suggests a strategic shift in focus, with the company prioritizing the development of AI-powered wearables that align with its vision of seamless integration and user privacy. The abandonment of the camera-equipped watch may also reflect concerns about privacy implications or technical challenges associated with incorporating such features into a smaller wearable device. Ultimately, Apple's success in the AI arena will depend on its ability to deliver genuinely useful and seamlessly embedded AI experiences that enhance user experience.

Recommended read:
References :
  • eWEEK: Indicates that Apple is speeding up development on its first pair of AI-powered smart glasses for a late 2026 release.
  • Gradient Flow: Discusses Apple’s AI focus on efficiency, privacy, and seamless integration.
  • gradientflow.com: Apple’s success has been built upon a meticulous fusion of hardware, software, and services, consistently shaping how people interact with technology while championing user privacy.

Aminu Abdullahi@eWEEK //
Apple is accelerating its entry into the AI-powered wearable market with plans to launch its first smart glasses by late 2026. These glasses, codenamed "N401," will feature built-in cameras, microphones, and speakers, enabling users to interact with the Siri voice assistant for tasks such as making phone calls, playing music, conducting live translations, and receiving GPS directions. The company aims to compete with Meta's Ray-Ban smart glasses, which have seen significant success, but is initially focusing on simplicity by foregoing full augmented reality (AR) capabilities in the first iteration. Apple hopes that by defining their product vision and investing strategically, they can overcome their late start in the AI race and deliver a superior experience.

The move comes as Apple recognizes the growing importance of AI in wearable technology and seeks to catch up with competitors like Meta, Google, and OpenAI. While Meta is working on higher-end smart glasses with built-in displays, Apple is taking a different approach by prioritizing essential functionalities and a sleek design, similar to Meta's Ray-Bans. Google has also partnered with brands like Samsung, Warby Parker, and Gentle Monster to launch smart glasses using its Android XR system. Apple is looking to capture the AI devices market, which is set to become more crowded next year. OpenAI, the company behind popular AI chatbot ChatGPT, announced earlier this week that it was collaborating with former Apple designer Jony Ive on AI gadgets to be released beginning next year.

Amidst these developments, Apple has reportedly scrapped plans for a camera-equipped Apple Watch, signaling a shift in its wearables strategy. Sources indicate that the company had been actively working on releasing a camera-equipped Apple Watch and Apple Watch Ultra by 2027, but that project was recently shut down. Instead, Apple is concentrating its resources on smart glasses, with prototypes expected to be ordered in bulk for testing. Apple faces competitors like OpenAI, Meta, Google, and Amazon. Bloomberg reported that Apple was running multiple focus groups to find out what its employees liked about smart glasses from competitors.

Recommended read:
References :
  • 9to5Mac: Apple is in a tough spot. While the company is painfully behind the competition when it comes to getting a solid handle on AI development, it seems to have settled on a timeline for releasing its first AI-powered smart glasses.
  • www.engadget.com: Bloomberg's Mark Gurman reports that Apple aims to release smart glasses by the end of 2026.
  • Entrepreneur: Apple’s AI glasses are set to launch by the end of next year and will function like Meta's Ray-Bans.
  • eWEEK: Apple is speeding up development on its first pair of AI-powered smart glasses, aiming for a late 2026 release.
  • Spyglass: Apple Eyes 2026 for Smart Glasses
  • thetechbasic.com: Apple AI Smart Glasses Launch 2026 Features Siri Cameras Compete Meta

Josh Render@tomsguide.com //
Apple is reportedly undertaking a significant overhaul of Siri, rebuilding it from the ground up with a new AI-centric architecture. This move comes after earlier versions of Siri, which relied on AI, did not perform as desired, struggling to provide helpful and effective responses. Attempts to integrate AI capabilities into the older version only resulted in further complications for Apple, with employees noting that fixing one issue often led to additional problems. Recognizing their delayed start in the AI race compared to other tech companies, Apple is now aiming to create a smarter and more conversational Siri, potentially leveraging a large language model developed by its Zurich AI team.

In a notable shift, Apple is also considering opening its operating systems to allow iPhone users in the European Union to choose third-party AI assistants like ChatGPT or Gemini as their default option, effectively replacing Siri. This potential change is reportedly driven by regulatory pressures from the EU, which are pushing Apple to allow more flexibility in its ecosystem. If implemented, this move would align Apple more closely with competitors like Samsung and Google, who already offer more diverse AI options on their devices. The possibility of integrating external AI assistants could also provide Apple users with access to advanced AI features while the company continues to refine and improve its own Siri.

However, Apple's AI strategy is also facing scrutiny on other fronts. The Trump administration previously raised national security concerns over Apple's potential AI deal with Alibaba, specifically regarding the integration of Alibaba's AI technology into iPhones sold in China. These concerns center around the potential implications for national security, data privacy, and the broader geopolitical landscape, given the Chinese government's regulations on data sharing and content control. While Apple aims to comply with local regulations and compete more effectively in the Chinese market through this partnership, the US government worries that it could inadvertently aid China's AI development and expose user data to potential risks.

Recommended read:
References :
  • thetechbasic.com: Apple Is Rebuilding Siri from Scratch with Smarter AI
  • www.techradar.com: Apple could soon let iPhone owners use alternative voice assistants to Siri, but you can call up Gemini or ChatGPT right now with this simple hack
  • The Tech Basic: Apple Is Rebuilding Siri from Scratch with Smarter AI
  • www.tomsguide.com: Apple could soon allow iPhone users to ditch Siri as the default assistant for ChatGPT or Gemini — if you’re in the EU
  • www.techradar.com: Apple’s ‘AI crisis’ could mean EU users will have the option to swap Siri for another default voice assistant
  • The Tech Portal: Trump administration flags national security concerns over Apple’s AI deal with Alibaba: Report
  • Techloy: Apple might soon let users in Europe replace Siri

@www.eweek.com //
Apple is exploring groundbreaking technology to enable users to control iPhones, iPads, and Vision Pro headsets with their thoughts, marking a significant leap towards hands-free device interaction. The company is partnering with Synchron, a brain-computer interface (BCI) startup, to develop a universal standard for translating neural activity into digital commands. This collaboration aims to empower individuals with disabilities, such as ALS and severe spinal cord injuries, allowing them to navigate and operate their devices without physical gestures.

Apple's initiative involves Synchron's Stentrode, a stent-like implant placed in a vein near the brain's motor cortex. This device picks up neural activity and translates it into commands, enabling users to select icons on a screen or navigate virtual environments. The brain signals work in conjunction with Apple's Switch Control feature, a part of its operating system designed to support alternative input devices. While early users have noted the interface is slower compared to traditional methods, Apple plans to introduce a dedicated software standard later this year to simplify the development of BCI tools and improve performance.
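The interaction model described above, where decoded neural "intents" drive a scanning interface like Switch Control, can be illustrated with a toy sketch. Everything here is hypothetical: `ScanningInterface` and the two-intent vocabulary are illustrative stand-ins, not Apple's or Synchron's API; a real BCI would emit a stream of decoded signals in place of the hard-coded list.

```python
# Purely illustrative sketch of a switch-style control loop: a highlight
# cycles through on-screen items, and decoded intents ("advance"/"select")
# drive it, the way Switch Control supports alternative input devices.

class ScanningInterface:
    """Scanning UI: one intent moves the highlight, another activates the
    currently highlighted item."""
    def __init__(self, items):
        self.items = items
        self.index = 0  # position of the highlight

    def handle_intent(self, intent):
        if intent == "advance":
            self.index = (self.index + 1) % len(self.items)
            return None
        if intent == "select":
            return self.items[self.index]
        raise ValueError(f"unknown intent: {intent}")

ui = ScanningInterface(["Messages", "Safari", "Music"])
chosen = None
for intent in ["advance", "advance", "select"]:  # stand-in for a decoded stream
    result = ui.handle_intent(intent)
    if result is not None:
        chosen = result
print(chosen)  # prints: Music
```

The sketch also makes clear why early users found such interfaces slower than touch: every selection costs one scanning step per item traversed, which is the performance gap Apple's planned software standard is meant to narrow.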

In addition to BCI technology, Apple is also focusing on enhancing battery life in future iPhones through artificial intelligence. The upcoming iOS 19 is expected to feature an AI-powered battery optimization mode that learns user habits and manages app energy usage accordingly. This feature is particularly relevant for the iPhone 17 Air, where it will help offset the impact of a smaller battery. Furthermore, Apple is reportedly exploring the use of advanced memory technology and innovative screen designs for its 20th-anniversary iPhone in 2027, aiming for faster AI processing and extended battery life.

Recommended read:
References :
  • bsky.app: Do you want to control your iPhone with your brain? You might soon be able to. Apple has partnered with brain-computer interface startup Synchron to explore letting people with disabilities or diseases like ALS control their iPhones using decoded brain signals:
  • eWEEK: Apple is developing technology that will allow users to control iPhones, iPads, and Vision Pro headsets with their brain signals, marking a major step toward hands-free, thought-driven device interaction.
  • www.techradar.com: Apple’s move into brain-computer interfaces could be a boon for those with disabilities.

John-Anthony Disotto@techradar.com //
Apple is reportedly focusing on AI and design upgrades for its upcoming iPhone lineup. A new Apple Intelligence feature is being developed for iOS 19, set to launch this fall. This feature is an AI-powered battery optimization mode designed to extend battery life, especially for the iPhone 17 Air. This model is expected to utilize the feature to compensate for a smaller battery. The company appears to be heavily invested in AI, viewing it as a crucial element for future device enhancements.

Meanwhile, reports indicate Apple is contemplating a price increase for the next iPhone, possibly the iPhone 17 series. Sources suggest the company aims to avoid attributing the increase to tariffs. Apple has historically relied on Chinese manufacturers, and while past tariffs have been a concern, current trade conditions show a pause on tariffs between the US and China until early August. Apple is considering attributing the price hike to the inclusion of next-generation features and design changes.

In other news, Apple is gearing up for Global Accessibility Awareness Day with a preview of new accessibility features coming to its platforms later this year. This marks the 40th anniversary of Apple's dedicated accessibility office, with continuous development in this area. Upcoming features include App Store Accessibility Nutrition Labels, which will inform users about supported accessibility features like VoiceOver and Reduce Motion. Additionally, the Magnifier app, currently on the iPhone, will be introduced to the Mac, allowing users to manipulate images for better visibility using Continuity Camera or a USB-compatible camera.

Recommended read:
References :
  • Mark Gurman: Reports Apple is preparing a new Apple Intelligence feature for iOS 19 coming this fall — an AI-powered battery optimization mode to extend battery life.
  • www.techradar.com: Apple could be about to launch a new AI battery tool in iOS 19 to help improve your iPhone's run time.
  • The Tech Portal: Apple to introduce AI-powered battery management in iOS 19: Report
  • www.laptopmag.com: A potential pain with the iPhone 17 Air could be fixed with AI: report
  • Fello AI: Reports Is Google Cooked? Apple Exec Drops BOMBSHELL About AI Search Future!
  • MacStories: Source: Apple. With Global Accessibility Awareness Day coming up this Thursday, May 15, Apple is back with its annual preview of accessibility features coming to its platforms later in the year.
  • Bloomberg Technology: NEW: Apple prepares a new Apple Intelligence feature for iOS 19 coming this fall — an AI-powered battery optimization mode to extend battery life. This will be particularly aimed at the iPhone 17 Air, which will use the feature to offset a smaller battery.
  • thetechbasic.com: Apple’s 20th iPhone Upgrade to Boost AI Speed and Battery Life
  • thetechbasic.com: Apple’s iOS 19 AI: Your iPhone’s New Battery-Saving Superpower

@thetechbasic.com //
Apple is making significant investments in AI and expanding connectivity features across its devices. According to recent reports, the tech giant is developing specialized chips designed for a range of upcoming products, including non-AR glasses aimed at competing with the Meta Ray-Bans, new chips for AirPods and Apple Watches potentially equipped with cameras, a high-end AI server chip, and updated M-series chips for its Mac line. This move highlights Apple's commitment to pushing the boundaries of its hardware capabilities and integrating AI deeper into its ecosystem.

Apple is also working on streamlining the Wi-Fi connectivity experience for users in hotels, trains, and airplanes. The company aims to implement a system where users can log in to Wi-Fi networks once on their iPhone, and all their other Apple devices, such as iPads and Macs, will automatically connect without requiring repeated logins. This feature would significantly reduce the hassle of re-entering credentials on multiple devices, especially beneficial for families and business travelers who rely on seamless connectivity on the go.

While the expanded Wi-Fi sharing feature promises convenience, its effectiveness may be limited by the policies of individual establishments. Some hotels, for example, restrict the number of devices that can be connected simultaneously. Apple's new system might simplify device switching within those limits, but it won't bypass the overall restriction. The tech giant's focus on AI integration and enhanced connectivity underscores its dedication to improving user experience across its hardware and software offerings.


@www.artificialintelligence-news.com //
Apple is doubling down on its custom silicon efforts, developing a new generation of chips destined for future smart glasses, AI-capable servers, and the next iterations of its Mac computers. The company's hardware strategy continues to focus on in-house production, aiming to optimize performance and efficiency across its product lines. This initiative includes a custom chip for smart glasses, designed for voice commands, photo capture, and audio playback, drawing inspiration from the low-power components of the Apple Watch but with modifications to reduce energy consumption and support multiple cameras. Production for the smart glasses chip is anticipated to begin in late 2026 or early 2027, potentially bringing the device to market within two years, with Taiwan Semiconductor Manufacturing Co. expected to handle production, as they do for most Apple chips.

Apple is also exploring integrating cameras into devices like AirPods and Apple Watches, utilizing chips currently under development codenamed "Nevis" for Apple Watch and "Glennie" for AirPods, both slated for a potential release around 2027. In addition to hardware advancements, Apple is considering incorporating AI-powered search results in its Safari browser, potentially shifting away from reliance on Google Search. Eddy Cue, Apple's SVP of services, confirmed the company has engaged in discussions with AI companies like Anthropic, OpenAI, and Perplexity to ensure it has alternative options available, demonstrating a commitment to staying nimble in the face of technological shifts.

Apple is also planning AR and non-AR glasses under the codename N401, a market segment in which CEO Tim Cook hopes the company can take the lead. Cue went further, suggesting that in ten years you may not need an iPhone: AI is a new technology shift, he acknowledged, one that creates opportunities for new entrants and means Apple needs to stay open to future possibilities.

Recommended read:
References :
  • www.artificialintelligence-news.com: Apple developing custom chips for smart glasses and more
  • PCMag UK mobile-phones: The Way of the iPod: Apple's Eddy Cue Says the iPhone May Be Gone in 10 Years
  • www.verdict.co.uk: Apple plans new chips for smart glasses and AI
  • thetechbasic.com: Apple’s M6 and M7 Chips Set to Redefine Mac Performance by 2027
  • www.tomsguide.com: Apple M6 and M7 chips for Mac may already be in the works — to power future AI features
  • thetechbasic.com: Apple’s AR Glasses Target Late 2026 Launch to Challenge Meta and Google
  • THE DECODER: Google shares slide as Apple explores AI-powered search alternatives
  • thetechbasic.com: Apple Plans AI Search Overhaul for Safari Amid Google Antitrust Battle
  • Dataconomy: Apple develops new chips for AI smart glasses and Macs
  • Mark Gurman: NEW: Apple is working on a dedicated chip for upcoming non-AR glasses to rival the Meta Ray-Bans, new chips for AirPods and Apple Watches with cameras, a high-end AI server chip, as well as new M-series Mac chips.

Mark Gurman@Bloomberg Technology //
Apple is collaborating with Anthropic to integrate the Claude AI tool into its Xcode development environment. This partnership marks a significant shift for Apple, which typically prefers to develop its own internal tools. The goal is to enhance coding efficiency and help Apple engineers fix errors more quickly. By adding Claude to Xcode, the software used to create iPhone and Mac apps, Apple aims to streamline the development process and improve overall productivity.

The integration of Claude will allow engineers to receive code suggestions, automatically find and fix bugs, and test applications more efficiently. For example, an engineer can simply request "add a dark mode button," and Claude will recommend the appropriate code. This saves time, allowing engineers to focus on developing core concepts. Apple's previous attempt at building its own AI helper, Swift Assist, reportedly suffered from issues such as generating incorrect code and being too slow.

This collaboration with Anthropic allows Apple to catch up with other tech giants like Microsoft and Google, which already have robust AI tools. The plan is to combine Claude with Swift Assist and other tools like GitHub Copilot to improve Xcode. If the integration proves successful internally, Apple may release it to third-party developers in the future, potentially at the company's WWDC event in June. This new way of working signals a major change in Apple's approach to AI and software development.

Recommended read:
References :
  • Mark Gurman: Reports on Apple's partnership with Anthropic to build an AI-powered "vibe coding" platform, highlighting its potential impact on Apple's AI efforts and Swift Assist.
  • The Tech Basic: Details Apple's collaboration with Anthropic to integrate Claude AI into Xcode, enhancing coding efficiency and error correction for its engineers.
  • Maginative: The system will allow programmers to describe what they want in natural language through a chat interface, with the AI handling everything from writing code to testing user interfaces and managing bug fixes.
  • Miguel Afonso Caetano: Apple Inc. is teaming up with startup Anthropic PBC on a new "vibe-coding" software platform that will use artificial intelligence to write, edit and test code on behalf of programmers.
  • thetechbasic.com: Apple’s AI U-Turn: How Anthropic’s Claude Is Reshaping Xcode Development
  • www.computerworld.com: In the latest act of the Apple-does-AI drama, the company is allegedly working with Google/Amazon-backed start-up Anthropic PBC to build AI-powered coding tools for developers. The move leans into this year’s fastest-emerging AI buzzword, "vibe coding," and means developers will be able to get AI-equipped dev tools to write, edit, and test code on their behalf. It relies on AI agents to generate code.
  • www.verdict.co.uk: The new platform will be designed to assist programmers in writing, editing, and testing code.
  • Verdict: Apple reportedly partners with Anthropic to develop AI coding platform