News from the AI & ML world
@felloai.com
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a "complete accuracy collapse" when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.
Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning."
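The paper's actual evaluation harness isn't reproduced here, but the Tower of Hanoi illustrates why such puzzles make useful testbeds: the minimal solution for n disks takes 2^n − 1 moves, so adding disks systematically deepens the planning required while keeping the rules fixed. A minimal sketch of that scaling (illustrative only, not the researchers' setup):

```python
# Illustrative Tower of Hanoi solver, showing how puzzle complexity
# scales: the minimal solution for n disks is 2**n - 1 moves, so the
# disk count acts as a controllable difficulty knob of the kind the
# study uses to probe reasoning models.

def hanoi(n, src="A", aux="B", dst="C"):
    """Return the minimal move sequence for n disks from src to dst."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

for n in (3, 5, 10):
    moves = hanoi(n)
    assert len(moves) == 2**n - 1  # exponential growth in planning steps
    print(n, len(moves))
```

Because each added disk doubles the required move count, a model's accuracy can be tracked against a smoothly increasing complexity scale, which is how a sharp collapse threshold becomes visible.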
The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and said it raises serious questions about the path towards AGI. The research also arrives amid increasing scrutiny of Apple's own AI efforts, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to help address these shortcomings, opening its on-device AI models to third-party apps via the Foundation Models framework to encourage the building of AI applications.
References:
- www.theguardian.com: Apple researchers have found "fundamental limitations" in cutting-edge artificial intelligence models, in a paper raising doubts about the technology industry's race to reach a stage of AI at which it matches human intelligence.
- felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal "thought processes" might be nothing more than performative illusions.
- www.computerworld.com: Filling the void in the few hours before WWDC begins, Apple's machine learning team raced out of the gate with a research paper, arguing that while the intelligence is artificial, it's only superficially smart.
- www.livescience.com: A new study by Apple has ignited controversy in the AI field by showing how reasoning models undergo "complete accuracy collapse" when overloaded with complex problems.
Classification:
- HashTags: #AI #Apple #LRMs
- Company: Apple
- Target: Apple Intelligence
- Product: Apple Intelligence
- Type: Research
- Severity: Major