@www.pcmag.com
//
References:
PCMag Middle East ai, Maginative
Amazon CEO Andy Jassy has delivered a candid message to employees, stating that the company's increased investment in artificial intelligence will lead to workforce reductions in the coming years. In an internal memo, Jassy outlined an aggressive generative-AI roadmap, highlighting projects like Alexa+ and the new Nova models. He bluntly predicted that software agents will take over rote work, resulting in a smaller corporate workforce. The company anticipates efficiency gains from AI will reduce the need for human workers in various roles.
Jassy emphasized that Amazon currently has over 1,000 generative AI services and applications in development across every business line. These AI agents are expected to contribute to innovation while simultaneously trimming corporate headcount. The company hopes to use agents that can act as "teammates that we can call on at various stages of our work," according to Jassy. He acknowledged that the company will "need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," though the specific departments impacted were not detailed. While Jassy didn't provide a precise timeline for the layoffs, he stated that efficiency gains from AI will reduce the company's total corporate workforce in the next few years. This announcement comes after Amazon has already eliminated approximately 27,000 corporate jobs since 2022. The company has also started testing humanoid robots, capable of moving, grasping, and handling items much as human workers do, at a Seattle warehouse. Similarly, the Prime Air drone service has already begun delivering packages in eligible areas. Recommended read:
References :
Fiona Jackson@eWEEK
//
Amazon CEO Andy Jassy has announced that the company anticipates a reduction in its corporate workforce as generative AI is integrated into various business operations. In an internal memo to employees, Jassy stated that Amazon expects to cut human workers and replace them with AI to achieve efficiency gains through automation. This decision stems from the company's aggressive push into AI, with over 1,000 generative AI services and applications already in development, including Alexa+ and the Nova foundation models. The use of AI agents is expected to accelerate internal processes and innovation across every business line.
Jassy emphasized that the deployment of AI will change the way work is done at Amazon. He noted that "we will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs." While the exact impact on specific departments remains unspecified, the memo highlights the critical role of AI agents in the company's future. These agents are capable of engaging in deep research, writing code, and ultimately transforming the speed and scope of innovation for customers. The announcement follows a history of workforce reductions at Amazon, with over 27,000 corporate jobs eliminated since 2022. Although previous layoffs were primarily attributed to economic uncertainty and organizational efficiency, Jassy's recent memo indicates that AI-driven automation will be a significant factor moving forward. Jassy acknowledged the need for Amazon to operate like the "world's largest startup" and stressed the importance of AI investment for internal productivity improvements. He expects that these changes will reduce the company's total corporate workforce in the next few years. Recommended read:
References :
@techstrong.ai
//
Amazon is making a substantial investment in artificial intelligence infrastructure, announcing plans to spend $10 billion in North Carolina. The investment will be used to build a cloud computing and AI campus just east of Charlotte, NC. This project is anticipated to create hundreds of good-paying jobs and provide a significant economic boost to Richmond County, positioning North Carolina as a hub for cutting-edge technology.
This investment underscores Amazon's commitment to driving innovation and advancing the future of cloud computing and AI technologies. The company plans to expand its AI data center infrastructure in North Carolina, following a trend among Big Tech companies that are building out infrastructure to meet escalating AI resource requirements. The new "innovation campus" will house data centers containing servers, storage drives, networking equipment, and other essential technology. Amazon is also working to improve warehouse operations through AI, unveiling upgrades that center on the development of "agentic AI" robots. These robots are designed to perform a variety of tasks, from unloading trailers to retrieving repair parts and lifting heavy objects, all based on natural language instructions. The goal is to create systems that can understand and act on commands, transforming robots into multi-talented helpers and ultimately leading to faster deliveries and improved efficiency. Recommended read:
References :
@techstrong.ai
//
Amazon is making a significant push into robotics with the development of humanoid robots designed for package delivery. According to reports, the tech giant is working on the AI software needed to power these robots and is constructing a dedicated "humanoid park" at its San Francisco facility. This indoor testing ground, roughly the size of a coffee shop, will serve as an obstacle course where the robots can practice the entire delivery process, including navigating sidewalks, climbing stairs, and handling packages. The initiative reflects Amazon's continued efforts to enhance efficiency and optimize its logistics operations through advanced automation.
Amazon envisions these humanoid robots eventually riding in its Rivian electric vans and independently completing the last leg of the delivery journey. The company is reportedly testing various robot models, including the Unitree G1, and focusing on developing AI software that will allow them to navigate real-world environments. This move comes as Amazon continues to invest heavily in AI and robotics, including the deployment of over 750,000 robots in its warehouses. The integration of humanoid robots into the delivery process has the potential to reduce physical strain on human workers and address labor shortages, especially during peak seasons. This initiative is part of a broader trend of leveraging AI and robotics to optimize supply chains and reduce operational costs. While there is no official rollout date for the humanoid delivery robots, Amazon's investment in this technology signals its commitment to exploring innovative solutions for package delivery. Furthermore, it coincides with Amazon investing $10 billion in North Carolina to build new data centers as part of a massive AI infrastructure expansion. Recommended read:
References :
@techxplore.com
//
The New York Times has entered into a multi-year licensing agreement with Amazon, marking its first generative AI licensing deal. This agreement allows Amazon to utilize summaries and excerpts of The Times' content, including articles and recipes, in products such as Alexa. Additionally, Amazon will use Times articles to train its artificial intelligence models. The financial terms of the deal were not disclosed, but it is understood that Amazon is paying a licensing fee for the use of the content.
This deal signifies a shift for The New York Times, which had previously resisted allowing its content to be used in the artificial intelligence race. Other media groups, including News Corp, Le Monde, The Washington Post, Axel Springer, the Associated Press, and Agence France-Presse, have already established similar agreements with major tech companies like OpenAI and Google. The Times' decision to partner with Amazon comes as the newspaper is engaged in a legal battle with OpenAI and Microsoft, whom it accuses of copyright infringement for using its content to train the ChatGPT chatbot without permission. According to Meredith Kopit Levien, chief executive of The Times, this deal aligns with the company's belief that "high-quality journalism is worth paying for." The agreement will enable Amazon customers to have direct access to The Times' journalism through Alexa and other connected devices. News of the partnership sent The New York Times' share price up 1.85%, nearing its all-time high. This agreement reflects the ongoing efforts of media outlets to navigate the evolving landscape of artificial intelligence and its impact on the global information environment. Recommended read:
References :
@learn.aisingapore.org
//
Amazon is expanding its AI capabilities, focusing on both customer-facing and internal operational improvements. A key development is the enhanced Amazon Q Business, a generative AI-powered assistant now supporting anonymous user access. This feature allows businesses to create public-facing applications, such as Q&A sections on websites, documentation portals, and self-service customer support, without requiring user authentication. This provides guest users with AI-driven assistance to quickly find product information, navigate documentation, and troubleshoot issues.
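As a rough illustration of how a guest-facing site backend might drive such an anonymous Amazon Q Business application, the sketch below calls the ChatSync API (discussed further below) with boto3; the application ID, region, and question are placeholders, and the exact request and response fields should be verified against the current AWS documentation.

```python
# Hedged sketch: querying an anonymous-access Amazon Q Business application with boto3.
# The application ID, region, and question are placeholders, not real values.
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="your-anonymous-q-business-app-id",  # app configured for anonymous access
    userMessage="How do I reset my device to factory settings?",
)

# The assistant's answer and any cited sources come back in the response payload.
print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```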
The anonymous Amazon Q Business applications can be integrated into websites using either an embedded web experience via an iframe or through customized interfaces built with Chat, ChatSync, and PutFeedback APIs. Amazon offers a consumption-based pricing model for these anonymous applications, charging based on the number of Chat or ChatSync API operations. This allows businesses to offer powerful AI assistance to a wider audience while maintaining control over costs and deployment. In addition to AI-powered customer service, Amazon is also enhancing its warehouse operations with the introduction of the Vulcan robot. Equipped with gripping pincers, built-in conveyor belts, and a pointed probe, Vulcan is designed to handle 75% of the package types in Amazon's fulfillment centers. This robot represents a significant advancement in robotics, as it can "feel" objects, enabling it to handle a variety of items with the necessary strength and agility. Amazon says this "touch" capability is a fundamental leap forward, differentiating Vulcan from previous robots that lacked the ability to sense contact. Recommended read:
References :
Evan Ackerman@IEEE Spectrum
//
Amazon is enhancing its warehouse operations with the introduction of Vulcan, a new robot equipped with a sense of touch. This advancement is aimed at improving the efficiency of picking and handling packages within its fulfillment centers. The Vulcan robot, armed with gripping pincers, built-in conveyor belts, and a pointed probe, is designed to handle 75% of the package types encountered in Amazon's warehouses. This new capability represents a "fundamental leap forward in robotics," according to Aaron Parness, Amazon’s director of applied science, as it enables the robot to "feel" the objects it's interacting with, a feature previously unattainable for Amazon's robots.
Vulcan's sense of touch allows it to navigate the challenges of picking items from cluttered bins, mastering what some call 'bin etiquette'. Unlike older robots, which Parness describes as "numb and dumb" because of a lack of sensors, Vulcan can measure grip strength and gently push surrounding objects out of the way. This ensures that it remains below the damage threshold when handling items, a critical improvement for retrieving items from the small fabric pods Amazon uses to store inventory in fulfillment centers. These pods contain up to 10 items within compartments that are only about one foot square, posing a challenge for robots without the finesse to remove a single object without damaging others. Amazon claims that Vulcan's introduction is made possible through key advancements in robotics, engineering, and physical artificial intelligence. While the company did not specify the exact number of jobs Vulcan may create or displace, it emphasized that its robotics systems have historically led to the creation of new job categories focused on training, operating, and maintaining the robots. Vulcan, with its enhanced capabilities, is poised to significantly impact Amazon's ability to manage the 400 million SKUs at a typical fulfillment center, promising increased efficiency and reduced risk of damage to items. Recommended read:
References :
Evan Ackerman@IEEE Spectrum
//
Amazon has unveiled Vulcan, an AI-powered robot with a sense of touch, designed for use in its fulfillment centers. This groundbreaking robot represents a "fundamental leap forward in robotics," according to Amazon's director of applied science, Aaron Parness. Vulcan is equipped with sensors that allow it to "feel" the objects it is handling, enabling capabilities previously unattainable for Amazon robots. This sense of touch allows Vulcan to manipulate objects with greater dexterity and avoid damaging them or other items nearby.
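Amazon has not published Vulcan's control software, but the force-feedback idea described below can be illustrated with a toy control loop that keeps advancing the gripper only while the measured contact force stays under a damage threshold; the sensor stub, threshold, and step size here are entirely hypothetical.

```python
# Purely illustrative, not Amazon's implementation: a toy force-limited stow loop.
import random

FORCE_LIMIT_N = 10.0   # hypothetical damage threshold, in newtons
STEP_MM = 2.0          # hypothetical advance per control tick, in millimetres


def read_contact_force() -> float:
    """Stand-in for an end-of-arm force/torque sensor reading."""
    return random.uniform(0.0, 12.0)


def stow_with_force_limit(max_depth_mm: float = 100.0) -> float:
    """Advance into the bin until contact force approaches the damage threshold."""
    depth = 0.0
    while depth < max_depth_mm:
        if read_contact_force() >= FORCE_LIMIT_N:
            depth = max(depth - STEP_MM, 0.0)  # back off rather than push through items
            break
        depth += STEP_MM
    return depth


if __name__ == "__main__":
    print(f"Stowed to {stow_with_force_limit():.1f} mm before reaching the force limit.")
```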
Vulcan operates using "end of arm tooling" that includes force feedback sensors. These sensors let the robot gauge how hard it is pushing or gripping an object, ensuring it stays below the damage threshold. Amazon says that Vulcan can easily manipulate objects to make room for whatever it’s stowing, because it knows when it makes contact and how much force it’s applying. In this way Vulcan helps bridge the gap between human and robotic dexterity. The introduction of Vulcan addresses a significant challenge in Amazon's fulfillment centers, where the company handles a vast number of stock-keeping units (SKUs). While robots already play a crucial role in completing 75% of Amazon orders, Vulcan fills a capability gap left by previous generations of robots. Amazon also points to rapid AI adoption among businesses, and Vulcan demonstrates the potential for AI and robotics to revolutionize warehouse operations. Amazon did not specify how many jobs the Vulcan model may create or displace. Recommended read:
References :
@techradar.com
//
References:
AWS News Blog, Data Phoenix
AI adoption is accelerating rapidly, with Amazon reporting that a UK business is adopting AI every 60 seconds. This surge is highlighted in a recent AWS report, which indicates a 33% increase in the past year, bringing the share of UK businesses utilizing AI to 52%. Startups appear to be leading the charge with a 59% adoption rate, and they are also more likely than larger enterprises to have comprehensive AI strategies in place (31% versus 15%, respectively). Benefit realization is also on the rise, with 92% of AI-adopting businesses reporting an increase in revenue, a substantial jump from 64% in 2024.
Amazon is also introducing new tools to assist developers in building and scaling AI solutions. Amazon Q Developer is now available in preview on GitHub, enabling developers to assign tasks to an AI agent directly within GitHub issues. This agent can develop features, conduct code reviews, enhance security, and migrate Java code. The tool aims to accelerate code generation and streamline the development process, allowing developers to quickly implement AI-driven functionalities within their projects. Installation is simple, and developers can begin using the application without connecting to an AWS account. Adding to its suite of AI offerings, Amazon has launched Nova Premier, its most capable foundation model, now generally available on Amazon Bedrock. Nova Premier is designed to handle complex workflows requiring multiple tools and data sources. It boasts a one-million token context window, enabling it to process lengthy documents and large codebases. One notable feature of Nova Premier is its model distillation capabilities, allowing users to transfer its advanced features to smaller, faster models for production deployment. Amazon is investing in AI training, with a UK initiative to train 100,000 people in AI skills by the end of the decade, collaborating with universities such as Exeter and Manchester. Recommended read:
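To make the Nova Premier description above concrete, here is a minimal sketch of invoking it through Amazon Bedrock's Converse API with boto3; the model identifier and region are assumptions that should be confirmed in the Bedrock console before use.

```python
# Sketch of calling Nova Premier via Amazon Bedrock's Converse API.
# The model ID and region below are assumptions; confirm them in the Bedrock console.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="us.amazon.nova-premier-v1:0",  # assumed cross-region inference profile ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize the main risks in the attached contract."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
)

print(response["output"]["message"]["content"][0]["text"])
```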
References :
Michael Nuñez@AI News | VentureBeat
//
References:
venturebeat.com, www.marktechpost.com
Amazon Web Services (AWS) has announced significant advancements in its AI coding and Large Language Model (LLM) infrastructure. A key highlight is the introduction of SWE-PolyBench, a comprehensive multi-language benchmark designed to evaluate the performance of AI coding assistants. This benchmark addresses the limitations of existing evaluation frameworks by assessing AI agents across a diverse range of programming languages like Python, JavaScript, TypeScript, and Java, using real-world scenarios derived from over 2,000 curated coding challenges from GitHub issues. The aim is to provide researchers and developers with a more accurate understanding of how well these tools can navigate complex codebases and solve intricate programming tasks involving multiple files.
The latest Amazon SageMaker Large Model Inference (LMI) container v15, powered by vLLM 0.8.4, further enhances LLM capabilities. This version supports a wider array of open-source models, including Meta’s Llama 4 models and Google’s Gemma 3, providing users with more flexibility in model selection. LMI v15 delivers significant performance improvements through an async mode and support for the vLLM V1 engine, resulting in higher throughput and reduced CPU overhead. This enables seamless deployment and serving of large language models at scale, with expanded API schema support and multimodal capabilities for vision-language models. AWS is also launching new Amazon EC2 Graviton4-based instances with NVMe SSD storage. These compute optimized (C8gd), general purpose (M8gd), and memory optimized (R8gd) instances offer up to 30% better compute performance and 40% higher performance for I/O intensive database workloads compared to Graviton3-based instances. They also include larger instance sizes with up to 3x more vCPUs, memory, and local storage. These instances are ideal for storage intensive Linux-based workloads including containerized and micro-services-based applications built using Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Registry (Amazon ECR), Kubernetes, and Docker, as well as applications written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP. Recommended read:
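For readers who want a feel for what deploying an open-weights model with the LMI container looks like, the sketch below uses the SageMaker Python SDK; the container framework name and version, the environment variables, the placeholder model, and the instance type are all assumptions modeled on the LMI documentation pattern and should be checked against current AWS docs.

```python
# Hedged sketch: deploying an open model on SageMaker with an LMI (vLLM-backed) container.
# Framework name/version, env vars, model, and instance type are assumptions to verify.
import sagemaker
from sagemaker import Model, image_uris

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

image_uri = image_uris.retrieve(
    framework="djl-lmi",               # assumed framework key for the LMI container
    region=session.boto_region_name,
    version="0.33.0",                  # assumed to correspond to LMI v15
)

model = Model(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
        "OPTION_ROLLING_BATCH": "vllm",
        "OPTION_ASYNC_MODE": "true",
        "OPTION_MAX_MODEL_LEN": "8192",
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.12xlarge",   # placeholder GPU instance type
    endpoint_name="lmi-vllm-demo",
)
```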
References :
@amazon.jobs
//
References:
IEEE Spectrum, Amazon Science homepage
Amazon is rapidly advancing the integration of robotics and artificial intelligence within its warehouses, marking a significant shift in how the e-commerce giant handles order fulfillment. This push for automation is not just about efficiency; it's about meeting the increasing demands of online shoppers. Amazon's fulfillment centers are becoming showcases for cutting-edge technology, demonstrating how robotics and AI can revolutionize warehouse operations. One example is the deployment of "Robin," a robotic arm capable of sorting packages for outbound shipping by moving them from conveyors to mobile robots, with over three billion successful package moves already completed across various Amazon facilities.
Amazon's robotics innovations are not limited to sorting packages. They are also focused on solving complex problems like robotic stowing, which involves intelligently placing items in cluttered storage bins. This requires robots to understand the three-dimensional world, manipulate a variety of objects, and even create space within bins by gently pushing items aside. Amazon's commitment to building safe and reliable technology that optimizes the supply chain is evident in its development of collaborative robots like Proteus, Cardinal, and Sparrow, as well as its new approach to inventory management through Containerized Storage. These systems are designed to work alongside humans safely, reducing physically demanding tasks and improving workplace ergonomics. The company has deployed more than 750,000 mobile robots across its global operations. Amazon's approach to robotics development involves rigorous testing in real-world environments, starting with small-scale implementations before wider deployment. Furthermore, Amazon is committed to upskilling its workforce. This commitment means that employees get the chance to learn new skills and use new innovative tools to deliver even more value for customers. Recommended read:
References :
Mike@marketingaiinstitute.com
//
References:
AWS News Blog, Bernard Marr
Amazon is aggressively pursuing advancements in artificial intelligence, marking a significant push into the AI agent arena. The company has unveiled Nova Act, an AI system designed to control web browsers and autonomously perform tasks such as booking reservations, ordering food, and filling out forms. This new AI has the potential to streamline and automate various online activities, reducing the need for human intervention. The integration of Nova Act into the upcoming Alexa+ upgrade could put this powerful AI agent into the hands of millions of users worldwide.
Amazon is also introducing Nova Sonic, a new foundation model aimed at creating human-like voice conversations for generative AI applications. Nova Sonic unifies speech recognition and generation into a single model. It enables developers to create natural, conversational AI experiences. This integrated approach streamlines development and reduces complexity when building voice-enabled applications. The model delivers expressive speech generation and real-time text transcription without requiring a separate model. These advancements reflect Amazon's commitment to investing in AI for future growth. CEO Andy Jassy highlighted the importance of aggressive AI investments in a recent shareholder letter, noting plans to spend over $100 billion on capital expenditure in 2025. He described AI as a "once-in-a-lifetime reinvention of everything we know". The move towards agentic AI, as demonstrated by Nova Act and Nova Sonic, is expected to revolutionize various aspects of customer experiences and workplace productivity. Recommended read:
References :
Danilo Poccia@AWS News Blog
//
Amazon has unveiled Nova Sonic, a new foundation model available on Amazon Bedrock, aimed at revolutionizing voice interactions within generative AI applications. This unified model streamlines the development of speech-enabled applications by integrating speech recognition and generation into a single system. This eliminates the traditional need for multiple fragmented models, reducing complexity and enhancing the naturalness of conversations. Nova Sonic seeks to provide more human-like interactions by understanding contextual nuances, tone, prosody, and speaking style.
Amazon Nova Sonic already powers Alexa+, Amazon’s upgraded voice assistant. Rohit Prasad, Amazon’s head of AI, explained that Nova Sonic is good at deciding when to pull information from the internet or other apps. For example, if you ask about the weather, it checks a weather website; if you want to order groceries, it connects to your shopping list. This integrated approach reduces complexity when building conversational applications and delivers expressive speech generation and real-time text transcription without requiring a separate model, resulting in adaptive speech responses. The model is designed to recognize when users pause, hesitate, or even interrupt, responding fluidly to mimic natural human conversation. Developers can leverage function calling and agentic workflows to connect Nova Sonic with external services and APIs. The model currently supports American and British English, with plans to add more languages soon. Amazon's commitment to responsible AI also includes built-in protections for content moderation and watermarking. Amazon claims that the new model is 80% cheaper to use than OpenAI’s GPT-4o and also faster. Recommended read:
References :
Allison Siu@NVIDIA Blog
//
References:
Data Phoenix, www.producthunt.com
Amazon has recently introduced two significant advancements in the realm of artificial intelligence: Nova Act, an AI model designed for browser-based task automation, and a testing phase for the ‘Buy for Me’ feature in its mobile shopping application. Nova Act, currently available as a research preview, prioritizes the reliable execution of simple commands over complex workflows. Amazon aims to unlock the potential of truly autonomous and capable AI agents. The Nova Act SDK allows developers to experiment with the model's capabilities, enabling agents to complete tasks such as submitting out-of-office requests and configuring automatic replies.
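The Nova Act SDK's documented usage pattern can be sketched roughly as follows; the package name, class, and the natural-language prompts shown are assumptions based on the research preview's README and may change as the preview evolves.

```python
# Minimal sketch of driving a browser task with the Nova Act SDK (research preview).
# Assumes `pip install nova-act` and an API key configured per the SDK's README.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.example.com") as nova:
    # Each act() call is a small, low-level instruction, the kind of step Nova Act
    # is said to handle most reliably (e.g., picking dates, navigating drop-downs).
    nova.act("open the time-off request form")
    nova.act("set the start date to the first Monday of next month")
    nova.act("submit an out-of-office request and confirm the dialog")
```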
The company stresses that genuine AI agents should not primarily focus on conversation or knowledge retrieval, differentiating them from current AI-powered assistants. According to Amazon, Nova Act is designed to complete tasks and act in digital and physical environments on behalf of the user. The potential applications extend to complex, multi-step workflows, such as organizing a wedding or handling involved IT tasks. Amazon has designed Nova Act to prioritize reliability by accurately completing the simpler, low-level actions that it says trip up rival models, such as picking dates or navigating drop-downs and pop-ups. Simultaneously, Amazon is testing the ‘Buy for Me’ feature, which integrates AI agents into the mobile shopping app to facilitate purchases from third-party brand websites, even for products not directly sold by Amazon. This feature, in limited beta for select iOS and Android users in the U.S., allows users to authorize Amazon to complete transactions on external brand sites, utilizing Amazon’s Nova AI, along with Anthropic’s Claude via Bedrock, to securely handle payment and shipping details. While the brand handles fulfillment, customer service, and returns, customers can track their purchases within the Amazon app, representing a narrowly scoped, highly specialized AI agent doing something useful. Recommended read:
References :
Allison Siu@NVIDIA Blog
//
Amazon is currently testing a new feature called "Buy for Me" within its mobile shopping app. This innovative tool allows users to purchase products from third-party brand websites that are not directly sold by Amazon, all without ever leaving the Amazon app environment. The feature leverages AI agents to seamlessly complete the purchase process on these external sites. "Buy for Me" is in a limited beta release for select iOS and Android users in the U.S.
When a customer searches for an item not available on Amazon, the app will display qualifying products from external brand sites in a dedicated section titled "Shop brand sites directly". Tapping on one of these items opens a product detail page within the Amazon app. From this page, users can select the "Buy for Me" option, granting Amazon permission to complete the transaction. Amazon's AI, combined with Anthropic's Claude, securely enters the payment and shipping information, while the brand handles fulfillment, customer service, and any potential returns. This initiative showcases the potential of narrowly scoped, highly specialized AI agents in providing useful services. It keeps customers within Amazon's ecosystem while extending functionality beyond its own inventory. Retailers can deepen customer engagement, enhance their offerings and maintain a competitive edge in a rapidly shifting digital marketplace by tapping into AI agents. Recommended read:
References :