News from the AI & ML world

DeeperML

@pcmag.com //
A recent Windows 11 update has inadvertently uninstalled the Copilot AI assistant from some users' PCs, causing frustration. The bug, affecting updates KB5053598, KB5053602, and KB5053606 across Windows 11 and Windows 10, removes the Copilot app and unpins it from the taskbar. Microsoft has acknowledged the issue and updated the release notes, confirming that Copilot for Microsoft 365 is not affected.

Users affected by this bug can manually reinstall the Copilot app from the Microsoft Store and repin it to their taskbar as a temporary solution. It's worth noting that some users on Reddit have expressed that they appreciate this accidental "feature," stating they would prefer the option to install Copilot rather than having it forced upon them. Microsoft is currently working on a permanent fix and is likely to issue an update soon.

Recommended read:
References :
  • futurism.com: Users Cheer as Microsoft Accidentally Removes Hated AI Feature From Windows 11
  • www.techrepublic.com: The Case of the Vanishing Copilot: Is Microsoft’s Update a Feature or a Bug?
  • www.zdnet.com: Windows 11 update accidentally erases Copilot for some users - here's how to get it back
  • PCMag Middle East ai: Oops: Microsoft Update Accidentally Removes Copilot From Windows
  • MSPoweruser: If you install KB5053598, you’ll delete all traces of Copilot in Windows 11
  • www.windowscentral.com: Is this Windows 11 'bug' the feature we've been waiting for? Say goodbye to Copilot (for now)
  • www.techradar.com: Windows 11 bug deletes Copilot from the OS – is this the first glitch ever some users will be happy to encounter?
  • PCWorld: Microsoft shot itself in the foot with its latest Windows update
  • Ars OpenForum: Report that a bug in the Windows 11 update caused Copilot to be removed from some devices.
  • How-To Geek: Explanation and guidance for reinstalling the Copilot app after a recent Windows update.
  • www.pcmag.com: Discussion of the Copilot uninstall issue and possible resolutions.
  • PCWorld: Article discussing the inadvertent uninstallation of the Copilot app in some Windows 11 installations due to a bug in the recent update.

@NVIDIA Newsroom //
Nvidia has launched its highly anticipated RTX 50 series GPUs at CES 2025, with aggressive pricing and a strong focus on AI integration. The lineup is led by the flagship RTX 5090 at $1,999, built on the new Blackwell architecture and using GDDR7 memory; Nvidia claims it delivers twice the performance of the 4090. The series also includes the RTX 5080 at $999, with the 5070 Ti and 5070 following in February. Most models offer enhanced performance without significant price increases: the RTX 5070 launches at $549, a price point that drew applause during the presentation, with Nvidia claiming it matches the performance of the previous-generation RTX 4090 while using half the power and costing a third of the price.

The key highlight of the RTX 50 series is the integration of AI into graphics processing. Nvidia is using AI to improve various aspects of rendering, including ray tracing and complex scenes, aiming for better visuals and performance while reducing artifacts and VRAM usage. New technologies such as RTX Neural Shaders, RTX Neural Texture Compression, RTX Neural Materials, and the Neural Radiance Cache use AI to optimize textures, compress shader code, and enhance lighting. Nvidia also announced new techniques for facial rendering that use AI to infer realistic facial features, including RTX Neural Faces and a Character Rendering SDK. With the series so heavily AI-enhanced, it is not surprising that Nvidia declares that "AI is the new graphics."

Recommended read:
References :
  • analyticsindiamag.com: NVIDIA Launches GeForce RTX 50 Series with Blackwell Architecture
  • NVIDIA Newsroom: New GeForce RTX 50 Series GPUs Double Creative Performance in 3D, Video and Generative AI
  • Pivot to AI: Nvidia unveils its flagship RTX 5090 card — with AI-juiced frame rates
  • blogs.nvidia.com: GeForce RTX 50 Series Desktop and Laptop GPUs, unveiled today at the CES trade show, are poised to power the next era of generative and agentic AI content creation, offering new tools and capabilities for video, livestreaming, 3D and more.
  • the-decoder.com: Nvidia is expanding its AI graphics technology with its new GeForce RTX 50 series cards, building on the foundation it laid years ago with DLSS. The latest desktop and laptop GPUs push even deeper into AI-assisted rendering.
  • www.digitimes.com: Nvidia CEO Jensen Huang captivated the CES 2025 audience with an unexpected revelation during his keynote address.
  • 9meters: Nvidia Launches RTX 5090: Unmatched Power and Performance For $1999
  • TechCrunch: Nvidia unveils $2,000 RTX 5090 GPU
  • 9meters: Nvidia RTX 50 Series GPU Pricing and Release Dates Announced
  • THE DECODER: Nvidia researcher says that 'AI is the new graphics'
  • Source: Exciting to see the new RTX 50-series GPUs and Windows Subsystem for Linux powering models from Nvidia NIM and Azure AI Foundry on Windows 11 PCs. A game-changer for running AI on the edge.
  • The Verge: Up close with the Nvidia GeForce RTX 5090 FE, an incredibly compact flagship video card
  • PCWorld: Interview: Nvidia explains RTX 5090 Founder’s Edition radical redesign
  • BigDATAwire: Nvidia announces a new $3000 desktop computer developed in collaboration with MediaTek, which is powered by a new cut-down Arm-based Grace CPU and Blackwell GPU
  • The Verge: Nvidia announces $3,000 personal AI supercomputer called Digits.

ChinaTechNews.com Staff@ChinaTechNews.com //
Nvidia Corp. has signaled a strong trajectory for AI-driven growth into 2025, bolstered by a solid fourth-quarter earnings and revenue beat. The company's revenue jumped 78% year-over-year, surpassing investor expectations, with earnings reaching $0.89 per share, exceeding estimates. Nvidia's guidance for the current quarter indicates continued growth, forecasting sales of $43 billion, which further demonstrates the company's confidence in sustained demand for its AI-related products.

Nvidia's success is attributed to the high demand for its GPUs, particularly for AI applications. The company has begun producing its next-generation Blackwell GPUs, with CEO Jensen Huang noting strong demand. Data Center revenue saw a remarkable increase of 93% year-over-year, reaching $35.6 billion. This performance underscores Nvidia's leadership in providing hardware for AI advancements and its pivotal role in the ongoing AI revolution.

Recommended read:
References :
  • NextBigFuture.com: Nvidia once again beat the quarterly earnings estimate and increased guidance more than expectations. Revenue: $39.3B vs. $38.1B est (+78% YoY) • EPS: $0.89 vs. $0.85 est • Data Center: $35.6B vs $33.5B est (+93% YoY)
  • www.theguardian.com: Nvidia beats Wall Street expectations in first earnings after DeepSeek's AI debut. Investors were eyeing the firm for signs of slowing demand but found few; the company surpassed expectations for the fourth quarter of 2024 with a 78% year-over-year jump in revenue.
  • Dataconomy: Quarterly earnings from Nvidia (NVDA.O) on Wednesday stand as a significant event for markets amid investor scrutiny regarding substantial spending in artificial intelligence (AI).
  • SiliconANGLE: Chipmaker Nvidia Corp. today signaled that it’s on course for yet more artificial intelligence-driven growth in 2025 after delivering a solid fourth-quarter earnings and revenue beat and offering strong guidance for the current quarter.
  • bsky.app: Nvidia’s Q4 revenue soared 78% YoY to $39.3B versus $38.05B expected, driven by strong demand for Blackwell AI chips.
  • ChinaTechNews.com: Nvidia posts $39B quarter: Has the AI chip giant defied market jitters over DeepSeek?
  • The Register - Software: Cash torrent pouring into Nvidia slows – despite booming Blackwell adoption
  • SiliconANGLE: Nvidia’s fine! Besides, who else is going to power all these new AI models?
  • insideAI News: Feb. 28, 2025: SoftBank Corp., ZutaCore and Hon Hai Technology Group (Foxconn) announced that they implemented ZutaCore's two-phase direct liquid cooling technology in an AI server using NVIDIA accelerated computing, which the companies say is the first such implementation using NVIDIA H200 GPUs.
  • THE DECODER: Chinese dealers advertise Nvidia's Blackwell processors despite strict US export controls

Esra Kayabali@AWS News Blog //
Anthropic has launched Claude 3.7 Sonnet, their most advanced AI model to date, designed for practical use in both business and development. The model is described as a hybrid system, offering both quick responses and extended, step-by-step reasoning for complex problem-solving. This versatility eliminates the need for separate models for different tasks. The company emphasized Claude 3.7 Sonnet's strength in coding tasks: the model's reasoning capabilities allow it to analyze and modify complex codebases more effectively than previous versions, and it can process up to 128K tokens.

Anthropic also introduced Claude Code, an agentic coding tool, currently in limited research preview. The tool promises to revolutionize coding by automating parts of a developer's job. Claude 3.7 Sonnet is accessible across all Anthropic plans, including Free, Pro, Team, and Enterprise, and via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. Extended thinking mode is reserved for paid subscribers. Pricing is set at $3 per million input tokens and $15 per million output tokens. Anthropic also stated that the model makes 45% fewer unnecessary refusals than its predecessor.
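Given the published rates, per-request cost is straightforward to estimate. A minimal sketch; the token counts in the example are illustrative, not from the announcement:

```python
# Estimate Claude 3.7 Sonnet API cost from the published rates:
# $3 per million input tokens, $15 per million output tokens.
INPUT_RATE = 3.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a full 128K-token context with a 4,000-token reply.
print(f"${estimate_cost(128_000, 4_000):.3f}")  # → $0.444
```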

Recommended read:
References :
  • AI & Machine Learning: Anthropic's Claude 3.7 Sonnet available on Vertex AI
  • Fello AI: Claude 3.7 Sonnet is a new release from Anthropic
  • PCMag Middle East ai: PCMag highlights the key features and trends embodied by Claude 3.7 Sonnet.
  • venturebeat.com: Claude 3.7 Sonnet aims to compete with other major AI models
  • Analytics Vidhya: Anthropic's new model can manage two types of information processing at once
  • Analytics Vidhya: Claude 3.7 Sonnet vs Grok 3: Which LLM is Better at Coding?
  • Digital Information World: Digital Information World reports on the launch of Claude 3.7 Sonnet and its competitive landscape.
  • Shelly Palmer: Claude 3.7 Sonnet: Coding Meets Reasoning
  • OODAloop: A new generation of AIs: Claude 3.7 and Grok 3
  • AWS News Blog: Anthropic’s Claude 3.7 Sonnet hybrid reasoning model is now available in Amazon Bedrock
  • Analytics Vidhya: Claude 3.7 Sonnet: The Best Coding Model Yet?
  • blog.jetbrains.com: Anthropic's Claude 3.7 Sonnet is a new AI reasoning model, described as a hybrid system blending fast responses with detailed reasoning, adjustable for various tasks. It is particularly strong in coding and demonstrates remarkable accuracy on real-world software tasks.
  • Analytics Vidhya: Overview of Anthropic's Claude 3.7 Sonnet, a sophisticated AI model built for creative, analytical, and coding tasks, along with the new Claude Code tooling for automating parts of development.
  • Towards AI: TAI #141: Claude 3.7 Sonnet; Software Dev Focus in Anthropic's First Thinking Model. The headline feature is its "extended thinking" mode, where the model explicitly shows multi-step reasoning before finalizing answers.

Megan Crouse@eWEEK //
Google has launched a free tier of its AI-powered coding assistant, Gemini Code Assist, for individual developers. This new tier provides access to AI-driven coding assistance, including code suggestions, debugging support, and error explanations. It supports 22 programming languages and offers up to 180,000 code completions per month, significantly exceeding the free tier of GitHub Copilot, which only provides 2,000 completions per month. The tool integrates seamlessly with popular IDEs like Visual Studio Code and JetBrains IDEs.

Google's Gemini Code Assist is built on the Gemini 2.0 model, fine-tuned for programming tasks by analyzing real-world coding use cases. According to Google, the quality of AI-generated recommendations is now "better than ever." In addition to the free tier, Google has also introduced Gemini Code Assist for GitHub, which automates parts of the code review workflow by summarizing pull requests. This free offering positions Gemini as a direct competitor to GitHub Copilot, expanding access to AI-assisted coding for hobbyists and startup developers.
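To put the free-tier quotas in perspective, a quick back-of-the-envelope comparison; only the monthly figures come from the announcements, and the daily allowance assumes a 30-day month:

```python
# Compare the free-tier completion quotas cited above:
# Gemini Code Assist: 180,000 completions/month; GitHub Copilot: 2,000/month.
GEMINI_MONTHLY = 180_000
COPILOT_MONTHLY = 2_000

ratio = GEMINI_MONTHLY // COPILOT_MONTHLY  # how many times larger
per_day = GEMINI_MONTHLY // 30             # rough daily allowance

print(ratio)    # → 90
print(per_day)  # → 6000
```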

Recommended read:
References :
  • SiliconANGLE: Google launches free Gemini Code Assist tier for individuals
  • THE DECODER: Google's Gemini Code Assist lets solo developers get free AI coding help right in their IDE
  • eWEEK: Google's Free Gemini Code Assist for Individual Developers Offers Recommendations That are "Better Than Ever"
  • AI News | VentureBeat: Google makes Gemini Code Assist free with 180,000 code completions per month as AI-powered dev race heats up

will@LearnAI //
AWS is enhancing its AI capabilities with the introduction of cost-effective AI inference solutions using Amazon Bedrock serverless features alongside Amazon SageMaker trained models. This advancement allows users to import their own custom fine-tuned models from SageMaker into Amazon Bedrock, providing access through a fully managed API. This new approach eliminates the need for self-managed infrastructure or costly provisioned throughput, making AI more accessible. Amazon Bedrock supports a variety of model architectures, including Mistral, Flan, Meta Llama 2 and Llama 3, which can be interacted with via the Bedrock playgrounds once imported.
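Once imported, such a model is called through the same fully managed InvokeModel API as Bedrock's built-in models. A hedged sketch, assuming a Llama-style request schema; the ARN, region, prompt, and response field are illustrative placeholders:

```python
import json

def build_invoke_body(prompt: str, max_gen_len: int = 256,
                      temperature: float = 0.5) -> str:
    """Build a Llama-style JSON request body for Bedrock's InvokeModel API."""
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

def invoke_imported_model(model_arn: str, prompt: str) -> str:
    """Call an imported custom model through the managed Bedrock runtime API."""
    import boto3  # deferred so the payload builder above stays dependency-free
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(modelId=model_arn,
                                   body=build_invoke_body(prompt))
    return json.loads(response["body"].read())["generation"]

# Example (placeholder ARN; requires AWS credentials and a completed import job):
# print(invoke_imported_model(
#     "arn:aws:bedrock:us-east-1:123456789012:imported-model/EXAMPLE",
#     "Summarize the benefits of serverless inference."))
```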

AWS also launched a new cloud region in Thailand, aimed at bolstering Southeast Asia’s digital economy and presenting opportunities for Indian businesses to serve a wider audience. This expansion provides a strategic gateway to neighbouring markets in the APAC region. The move highlights AWS's commitment to expanding its global infrastructure and enhancing the availability of cloud-based services. The Thailand cloud region adds to AWS’s growing list of services focused on supporting AI development and deployment.

Recommended read:
References :
  • ai-techpark.com: AWS Launches Infrastructure Region in Thailand
  • AWS Machine Learning Blog: Unlock cost-effective AI inference using Amazon Bedrock serverless capabilities with an Amazon SageMaker trained model
  • LearnAI: Unlock cost-effective AI inference using Amazon Bedrock serverless capabilities with an Amazon SageMaker trained model
  • learn.aisingapore.org: Unlock cost-effective AI inference using Amazon Bedrock serverless capabilities with an Amazon SageMaker trained model
  • analyticsindiamag.com: AWS Launches New Cloud Region in Thailand to Boost Southeast Asia’s Digital Economy
  • Analytics India Magazine: AWS Launches New Cloud Region in Thailand to Boost Southeast Asia’s Digital Economy
  • AWS Machine Learning Blog: Build AI-powered malware analysis using Amazon Bedrock with Deep Instinct

Ellie Ramirez-Camara@Data Phoenix //
OpenAI's new image generation tool, integrated into ChatGPT 4o, is experiencing immense popularity, leading to temporary limitations on GPU usage. CEO Sam Altman acknowledged the issue, stating that the company's GPUs are "melting" due to the overwhelming demand. This surge highlights the significant computational resources required for complex AI image generation, pushing the limits of current infrastructure. The new model within ChatGPT is designed to replace DALL·E as the default image generator and aims to provide superior realism and speed compared to its predecessor.

The company is implementing measures to manage the high demand, including rate-limiting image requests. OpenAI delayed the new GPT-4o image generation feature for free users due to high demand. While there's no word on specific limits for paid users, it's expected they are also experiencing slowdowns. The popularity of Ghibli-style images has even flooded social media, prompting discussions about artistic integrity and the physical constraints of supporting such intensive computational tasks at scale.

Recommended read:
References :
  • Shelly Palmer: OpenAI's image generation has gone so viral that even if you haven’t tried it, you probably know exactly what you’re missing — hyper-realistic AI images created in seconds, now throttled because the GPUs can’t take the heat.
  • Data Phoenix: OpenAI announced GPT-4o's new image generation capabilities this Tuesday. The GPT-4o is set to replace DALL·E as the default image generation model.
  • www.infoq.com: OpenAI released a new version of GPT-4o with native image generation capability. The model can modify uploaded images or create new ones from prompts and exhibits multi-turn consistency when refining images and improved generation of text in images.