References: AIwire, www.aiwire.net
The Quantum Economic Development Consortium (QED-C) has released a report detailing the potential synergies between Quantum Computing (QC) and Artificial Intelligence (AI). The report, based on a workshop, highlights how these two technologies can work together to solve problems currently beyond the reach of classical computing. AI could be used to accelerate circuit design, application development, and error correction in QC. Conversely, QC offers the potential to enhance AI models by efficiently solving complex optimization and probabilistic tasks, which are infeasible for classical systems.
A hybrid approach, integrating the strengths of classical AI methods with QC algorithms, is expected to substantially reduce algorithmic complexity and improve the efficiency of computational processes and resource allocation. The report identifies key areas where this integration can yield significant benefits, including chemistry, materials science, logistics, energy, and environmental modeling. The applications could range from predicting high-impact weather events to improving the modeling of chemical reactions for pharmaceutical advancements. The report also acknowledges the necessity of cross-industry collaboration, expanded academic research, and increased federal support to advance QC + AI development. Celia Merzbacher, Executive Director of QED-C, emphasized the importance of collaboration between industry, academia, and governments to maximize the potential of these technologies. A House Science Committee hearing is scheduled to assess the progress of the National Quantum Initiative, underscoring the growing importance of quantum technologies in the U.S.
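To make the hybrid pattern concrete, here is a minimal sketch of the classical-optimizer-over-quantum-subroutine loop such approaches typically use. The quantum_expectation function is a stand-in of our own invention, simulating what a parameterized circuit run on real hardware would return; it is not from the QED-C report.

```python
# Minimal sketch of a hybrid classical/quantum loop.
# `quantum_expectation` is a placeholder for a parameterized quantum
# circuit evaluated on a QPU or simulator; here it is a toy classical
# cost landscape so the example runs as-is.
import numpy as np
from scipy.optimize import minimize

def quantum_expectation(params: np.ndarray) -> float:
    """Stand-in for <psi(params)|H|psi(params)> measured on a QPU."""
    return float(np.sum(np.sin(params) ** 2) + 0.1 * np.sum(params ** 2))

# The classical half proposes new circuit parameters; the (stubbed)
# quantum half scores each proposal.
x0 = np.random.default_rng(0).uniform(-np.pi, np.pi, size=4)
result = minimize(quantum_expectation, x0, method="COBYLA")
print("optimized parameters:", result.x)
print("estimated minimum energy:", result.fun)
```

In a real variational workflow the stub would be replaced by circuit execution on quantum hardware, while the classical optimization loop stays essentially unchanged.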
References: The Register - Software, www.microsoft.com
Microsoft is actively exploring the potential of artificial intelligence to revolutionize fusion energy research. This initiative aims to accelerate the development of a clean, scalable, and virtually limitless energy source. The first Microsoft Research Fusion Summit recently convened global experts to discuss and explore how AI can play a pivotal role in unlocking the secrets of fusion power. This summit fostered collaborations with leading institutions and researchers, with the ultimate goal of expediting progress toward practical fusion energy generation.
The summit showcased ongoing efforts to apply AI in various aspects of fusion research. Experts from the DIII-D National Fusion Program, North America's largest fusion facility, demonstrated how AI is already being used to advance reactor design and operations. These applications include using AI for active plasma control to prevent disruptive instabilities, implementing AI-controlled trajectories to avoid tearing modes, and utilizing machine learning-derived density limits for safer, high-density operations. Microsoft believes that AI can significantly shorten the timeline for realizing nuclear fusion as a viable energy source. This advancement, in turn, could provide the immense power required to fuel the growing demands of AI itself. Ashley Llorens, Corporate Vice President and Managing Director of Microsoft Research Accelerator, envisions a self-reinforcing system where AI drives sustainability, including the development of fusion energy. The challenge now lies in harnessing the combined power of AI and high-performance computing, along with international collaboration, to model and optimize future fusion reactor designs.
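As a rough illustration of the machine-learning side of this work, the sketch below trains a toy disruption-risk classifier on synthetic "plasma state" features. The features, labels, and model choice are all assumptions for illustration; the actual DIII-D systems rely on far richer diagnostics and real-time control.

```python
# Illustrative only: a toy disruption-risk classifier on synthetic
# plasma features. The goal is just to show the shape of the ML problem
# (predict an instability before it happens, so a controller can react).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 3))  # density, plasma current, normalized beta
# Hypothetical rule: high density plus high beta raises disruption risk.
y = ((X[:, 0] + X[:, 2] + 0.3 * rng.normal(size=n)) > 1.2).astype(int)

clf = GradientBoostingClassifier().fit(X[:1500], y[:1500])
risk = clf.predict_proba(X[1500:])[:, 1]
# A control loop could throttle density when predicted risk crosses a limit.
print("mean predicted disruption risk on held-out states:", risk.mean())
```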
References: Evan Ackerman, IEEE Spectrum
Amazon is enhancing its warehouse operations with the introduction of Vulcan, a new robot equipped with a sense of touch. This advancement is aimed at improving the efficiency of picking and handling packages within its fulfillment centers. The Vulcan robot, armed with gripping pincers, built-in conveyor belts, and a pointed probe, is designed to handle 75% of the package types encountered in Amazon's warehouses. This new capability represents a "fundamental leap forward in robotics," according to Aaron Parness, Amazon’s director of applied science, as it enables the robot to "feel" the objects it's interacting with, a feature previously unattainable for Amazon's robots.
Vulcan's sense of touch allows it to navigate the challenges of picking items from cluttered bins, mastering what some call 'bin etiquette'. Unlike older robots, which Parness describes as "numb and dumb" because of a lack of sensors, Vulcan can measure grip strength and gently push surrounding objects out of the way. This ensures that it remains below the damage threshold when handling items, a critical improvement for retrieving items from the small fabric pods Amazon uses to store inventory in fulfillment centers. These pods contain up to 10 items within compartments that are only about one foot square, posing a challenge for robots without the finesse to remove a single object without damaging others. Amazon claims that Vulcan's introduction is made possible through key advancements in robotics, engineering, and physical artificial intelligence. While the company did not specify the exact number of jobs Vulcan may create or displace, it emphasized that its robotics systems have historically led to the creation of new job categories focused on training, operating, and maintaining the robots. Vulcan, with its enhanced capabilities, is poised to significantly impact Amazon's ability to manage the 400 million SKUs at a typical fulfillment center, promising increased efficiency and reduced risk of damage to items.
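A hedged sketch of the grip-control idea described above: ramp grip force until firm contact is sensed, capped safely below a damage threshold. The sensor and actuator functions are hypothetical stand-ins, not Amazon's control stack, and the numbers are invented.

```python
# Toy force-feedback grasp loop. Hardware interfaces are simulated so the
# example runs as-is; in a real gripper they would talk to the motor and
# force sensor.
import random

DAMAGE_THRESHOLD_N = 15.0   # assumed maximum safe force, in newtons
STEP_N = 0.5                # force increment per control tick
_commanded = 0.0

def set_grip_force(newtons: float) -> None:
    """Hypothetical actuator command; would drive the gripper motor."""
    global _commanded
    _commanded = newtons

def read_force_sensor() -> float:
    """Hypothetical sensor read, simulated as command plus noise."""
    return _commanded + random.gauss(0.0, 0.1)

def grasp() -> float:
    target = 0.8 * DAMAGE_THRESHOLD_N   # settle safely below the limit
    while read_force_sensor() < target:
        set_grip_force(min(_commanded + STEP_N, target))
    return read_force_sensor()

print(f"settled grip force: {grasp():.1f} N")
```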
References: Evan Ackerman, IEEE Spectrum
Amazon has unveiled Vulcan, an AI-powered robot with a sense of touch, designed for use in its fulfillment centers. This groundbreaking robot represents a "fundamental leap forward in robotics," according to Amazon's director of applied science, Aaron Parness. Vulcan is equipped with sensors that allow it to "feel" the objects it is handling, enabling capabilities previously unattainable for Amazon robots. This sense of touch allows Vulcan to manipulate objects with greater dexterity and avoid damaging them or other items nearby.
Vulcan operates using "end of arm tooling" that includes force-feedback sensors. These sensors let the robot gauge how hard it is pushing or holding an object, ensuring it stays below the damage threshold. Amazon says Vulcan can easily manipulate objects to make room for whatever it is stowing, because it knows when it makes contact and how much force it is applying. Vulcan thus helps bridge the gap between humans and robots, bringing greater dexterity to such machines. The introduction of Vulcan addresses a significant challenge in Amazon's fulfillment centers, where the company handles a vast number of stock-keeping units (SKUs). While robots already play a crucial role in completing 75% of Amazon orders, Vulcan fills a capability gap left by previous generations of robots. According to Amazon, one business per second is adopting AI, and Vulcan demonstrates the potential for AI and robotics to transform warehouse operations. Amazon did not specify how many jobs the Vulcan model may create or displace.
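The contact-sensing half can be illustrated with a few lines of signal processing: calibrate a noise floor while the tool moves freely, then flag the first force sample that clearly exceeds it. The signal and thresholds below are synthetic, purely for illustration.

```python
# Toy contact detection from a simulated force trace.
import numpy as np

rng = np.random.default_rng(1)
force = np.concatenate([
    rng.normal(0.0, 0.05, 100),   # free motion: sensor noise only
    np.linspace(0.0, 4.0, 50),    # contact: force ramps up against the item
])

# Calibrate a noise floor from the free-motion segment, then flag the
# first sample that clearly exceeds it.
floor = force[:100].mean() + 5 * force[:100].std()
contact_idx = int(np.argmax(force > floor))
print(f"contact detected at sample {contact_idx}, "
      f"force {force[contact_idx]:.2f} N")
```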
References: news.microsoft.com, www.microsoft.com
Microsoft Research is delving into the transformative potential of AI as "Tools for Thought," aiming to redefine AI's role in supporting human cognition. At the upcoming CHI 2025 conference, researchers will present four new research papers and co-host a workshop exploring this intersection of AI and human thinking. The research includes a study on how AI is changing the way we think and work, along with three prototype systems designed to support different cognitive tasks.
As these tools grow more capable, Microsoft has also unveiled new AI agents designed to enhance productivity across domains, aimed at everyday tasks such as research and cybersecurity. The "Researcher" agent tackles complex research tasks by analyzing work data, emails, meetings, files, chats, and web information to deliver expertise on demand, while the "Analyst" agent functions as a virtual data scientist, capable of processing raw data from multiple spreadsheets to forecast demand or visualize customer purchasing patterns.
Separately, Johnson & Johnson has reportedly found that only 10% to 15% of its AI use cases deliver the vast majority (80%) of the value. After encouraging employees to experiment with AI and tracking the results of nearly 900 use cases over about three years, the company is now focusing resources on its highest-value projects. These include a generative AI copilot for sales representatives, an internal chatbot answering employee questions, and tools under development for drug discovery and for identifying and mitigating supply chain risks.
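For a sense of the kind of task the "Analyst" agent is said to automate, here is a minimal hand-written sketch: merge spreadsheet-style data and extrapolate a demand trend. The data and column names are invented, and nothing here reflects the agent's internals, which Microsoft has not published.

```python
# Toy "virtual data scientist" task: combine regional sales data and
# forecast demand with a simple least-squares trend.
import numpy as np
import pandas as pd

# Stand-in for reading several regional spreadsheet exports,
# e.g. pd.read_excel("region_*.xlsx") in a real workflow.
frames = []
for seed in (0, 1):
    noise = np.random.default_rng(seed).normal(0, 3, 12)
    frames.append(pd.DataFrame({"month": range(12),
                                "units": 100 + 5 * np.arange(12) + noise}))

sales = pd.concat(frames).groupby("month")["units"].sum()

# Fit a trend line and extrapolate one quarter ahead.
slope, intercept = np.polyfit(sales.index, sales.values, 1)
for m in (12, 13, 14):
    print(f"month {m}: forecast {slope * m + intercept:.1f} units")
```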
References: learn.aisingapore.org
MIT researchers have achieved a breakthrough in artificial intelligence, specifically aimed at enhancing the accuracy of AI-generated code. This advancement focuses on guiding large language models (LLMs) to produce outputs that strictly adhere to the rules and structures of various programming languages, preventing common errors that can cause system crashes. The new technique, developed by MIT and collaborators, ensures that the AI's focus remains on generating valid and accurate code by quickly discarding less promising outputs. This approach not only improves code quality but also significantly boosts computational efficiency.
This efficiency gain allows smaller LLMs to outperform larger models in producing accurate, well-structured outputs across diverse real-world scenarios, including molecular biology and robotics. The new method addresses shortcomings of existing approaches, which either distort the model's intended meaning or are too slow for complex tasks. In essence, the researchers developed a more efficient way to steer the outputs of a large language model, guiding it to generate text that adheres to a given structure, such as a programming language, and remains error-free. The implications extend beyond academic circles, potentially improving programming assistants, AI-driven data analysis, and scientific discovery tools. By enabling non-experts to control AI-generated content, such as business professionals creating complex SQL queries from natural language prompts, the approach could democratize access to advanced programming and data manipulation. The findings will be presented at the International Conference on Learning Representations.
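The summary above describes keeping generation on-grammar while discarding weak candidates early. The toy sketch below shows that general shape: a beam of partial outputs, a validity check for balanced parentheses standing in for a real grammar, and random scores standing in for model log-probabilities. It is a simplified illustration, not the authors' exact algorithm.

```python
# Toy constrained decoding: extend partial outputs token by token,
# immediately discard continuations that break the target structure,
# and keep only the highest-scoring survivors.
import heapq
import random

TOKENS = ["(", ")", "x"]

def valid_prefix(s: str) -> bool:
    depth = 0
    for ch in s:
        depth += {"(": 1, ")": -1}.get(ch, 0)
        if depth < 0:
            return False   # a ')' with no matching '(' can never recover
    return True

def step(beams, width=4):
    scored = []
    for prefix, logp in beams:
        for tok in TOKENS:
            cand = prefix + tok
            if not valid_prefix(cand):
                continue   # prune invalid continuations on the spot
            # random.random() stands in for the model's token log-prob
            scored.append((cand, logp + random.random()))
    return heapq.nlargest(width, scored, key=lambda b: b[1])

beams = [("", 0.0)]
for _ in range(6):
    beams = step(beams)
print("top structured candidates:", [b[0] for b in beams])
```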
References: Ryan Daws, AI News
OpenAI has secured a massive $40 billion funding round, led by SoftBank, catapulting its valuation to an unprecedented $300 billion. The investment makes OpenAI the world's second-most valuable private company, alongside TikTok parent ByteDance Ltd. and trailing only Elon Musk's SpaceX Corp. The deal marks one of the largest capital infusions in the tech industry and a major milestone for the company, underscoring the escalating significance of AI.
The fresh infusion of capital is expected to fuel several key initiatives at OpenAI. The funding will support expanded research and development, and upgrades to computational infrastructure. This includes the upcoming release of a new open-weight language model with enhanced reasoning capabilities. OpenAI said the funding round would allow the company to “push the frontiers of AI research even further” and “pave the way” towards AGI, or artificial general intelligence.
References: Michael Nuñez, AI News | VentureBeat
OpenAI, the company behind ChatGPT, has announced a significant strategic shift by planning to release its first open-weight AI model since 2019. This move comes amidst mounting economic pressures from competitors like DeepSeek and Meta, whose open-source models are increasingly gaining traction. CEO Sam Altman revealed the plans on X, stating that the new model will have reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's cloud-based subscription model.
This decision marks a notable change for OpenAI, which has historically defended closed, proprietary models. The company is now gathering developer feedback to make the new model as useful as possible, with events planned in San Francisco, Europe, and Asia-Pacific. According to OpenAI, as models improve, startups and developers increasingly want more tunable latency and on-premises deployments with full data control. The shift comes alongside a monumental $40 billion funding round led by SoftBank, which has catapulted OpenAI's valuation to $300 billion. SoftBank will initially invest $10 billion, with the remaining $30 billion contingent on OpenAI transitioning to a for-profit structure by the end of the year. The funding will help OpenAI continue building AI systems that drive scientific discovery, enable personalized education, enhance human creativity, and pave the way toward artificial general intelligence. The open-weight release is expected to help OpenAI compete with the growing number of efficient open-source alternatives and answer criticism of its closed-model approach.
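In practice, "run it on their own hardware" usually means loading published weights through an open-weights toolchain. A hedged sketch using Hugging Face transformers follows; the model id is a placeholder, since OpenAI had not yet published any weights at the time of this summary.

```python
# Sketch of local inference with an open-weight model via transformers.
# The repo id below is a placeholder, not a real published model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/<open-weight-model>"   # placeholder id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("Explain chain-of-thought reasoning in one sentence.",
             return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

The appeal for developers is exactly what the summary describes: inference stays on their own machines, so latency and data handling are fully under their control.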
References: Ryan Daws, AI News
OpenAI is set to release its first open-weight language model since 2019, marking a strategic shift for the company. This move comes amidst growing competition in the AI landscape, with rivals like DeepSeek and Meta already offering open-source alternatives. Sam Altman, OpenAI's CEO, announced the upcoming model will feature reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's traditional cloud-based approach.
This decision follows OpenAI securing a $40 billion funding round, although reports suggest a potential breakdown of $30 billion from SoftBank and $10 billion from Microsoft and venture capital funds. Despite the fresh funding, OpenAI also faces scrutiny over its training data. A recent study by the AI Disclosures Project suggests that OpenAI's GPT-4o model demonstrates "strong recognition" of copyrighted data, potentially accessed without consent. This raises ethical questions about the sources used to train OpenAI's large language models.
References: Maximilian Schreiner, THE DECODER
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.
Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.
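A minimal sketch of calling the model through Google's generative AI SDK (the google-generativeai package); the exact model id string is an assumption and may vary by release channel, and an API key from AI Studio is required.

```python
# Sketch of a simple Gemini API call. Reasoning models like this one
# work through multi-step problems internally, so a plain prompt
# suffices; no "think step by step" scaffolding is needed.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro")   # assumed model id

response = model.generate_content(
    "A train leaves at 3:40 pm and arrives at 6:05 pm. How long is the trip?"
)
print(response.text)
```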
References: Tris Warkentin, The Official Google Blog
Google AI has released Gemma 3, a new family of open-source AI models designed for efficient and on-device AI applications. Gemma 3 models are built with technology similar to Gemini 2.0, intended to run efficiently on a single GPU or TPU. The models are available in various sizes: 1B, 4B, 12B, and 27B parameters, with options for both pre-trained and instruction-tuned variants, allowing users to select the model that best fits their hardware and specific application needs.
Gemma 3 offers practical advantages in efficiency and portability. For example, the 27B version has demonstrated robust performance in evaluations while still being capable of running on a single GPU. The 4B, 12B, and 27B models can process both text and images, and they support more than 140 languages. The models have a context window of 128,000 tokens, making them well suited to tasks that require processing large amounts of information. Google has built safety protocols into Gemma 3, including an image safety checker called ShieldGemma 2.
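A quick back-of-envelope check of the "single GPU" claim: weight memory is roughly parameter count times bytes per parameter. The sketch below ignores activations, KV cache, and runtime overhead, so real requirements are somewhat higher.

```python
# Approximate weight-memory footprint for each Gemma 3 size at common
# precisions. Weights only; activations and KV cache add more.
BYTES_PER_PARAM = {"fp16/bf16": 2, "int8": 1, "int4": 0.5}

for params_b in (1, 4, 12, 27):
    estimates = ", ".join(
        f"{name} ~{params_b * 1e9 * bpp / 2**30:.1f} GiB"
        for name, bpp in BYTES_PER_PARAM.items()
    )
    print(f"{params_b:>2}B params: {estimates}")
```

At bf16 the 27B model needs roughly 50 GiB of weight memory, which fits a single 80 GB data-center GPU; quantized variants fit on much smaller cards, which is what makes the smaller sizes attractive for on-device use.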