@pcmag.com
//
A recent Windows 11 update has inadvertently uninstalled the Copilot AI assistant from some users' PCs, causing frustration. The bug, affecting updates KB5053598, KB5053602, and KB5053606 across Windows 11 and Windows 10, removes the Copilot app and unpins it from the taskbar. Microsoft has acknowledged the issue and updated the release notes, confirming that Copilot for Microsoft 365 is not affected.
Users affected by this bug can manually reinstall the Copilot app from the Microsoft Store and repin it to their taskbar as a temporary workaround. Notably, some users on Reddit have said they appreciate this accidental "feature," stating they would prefer the option to install Copilot rather than having it forced upon them. Microsoft is currently working on a permanent fix and is likely to issue an update soon.
@NVIDIA Newsroom
//
Nvidia has unveiled its highly anticipated RTX 50 series GPUs at CES 2025, with aggressive pricing and a strong focus on AI integration. The lineup is led by the flagship RTX 5090 at $1,999, built on the new Blackwell architecture and equipped with GDDR7 memory; Nvidia claims it delivers twice the performance of the RTX 4090. The RTX 5080 follows at $999, with the 5070 Ti and 5070 arriving in February. Most models offer improved performance without significant price increases: the RTX 5070 launches at $549, a price point that drew applause during the presentation, and Nvidia claims it can match the previous-generation RTX 4090 while using half the power, at roughly a third of the price.
The key highlight of the RTX 50 series is the integration of AI into graphics processing. Nvidia is using AI to improve many aspects of rendering, including ray tracing and complex scenes, aiming for better visuals and performance while reducing artifacts and VRAM usage. New technologies such as RTX Neural Shaders, Neural Texture Compression, Neural Materials, and the Neural Radiance Cache use AI to optimize textures, compress shader code, and enhance lighting. Nvidia also announced new AI techniques for facial rendering, including RTX Neural Faces and a Character Rendering SDK, which use AI to infer realistic facial features. With AI enhancement the focus of this generation, it is no surprise that Nvidia declares that "AI is the new graphics."
ChinaTechNews.com Staff@ChinaTechNews.com
//
Nvidia Corp. has signaled a strong trajectory for AI-driven growth into 2025, bolstered by a solid fourth-quarter beat on both earnings and revenue. Revenue jumped 78% year-over-year, surpassing investor expectations, and earnings of $0.89 per share exceeded estimates. Nvidia's guidance for the current quarter forecasts sales of $43 billion, further demonstrating the company's confidence in sustained demand for its AI-related products.
Nvidia's success is attributed to the high demand for its GPUs, particularly for AI applications. The company has begun producing its next-generation Blackwell GPUs, with CEO Jensen Huang noting strong demand. Data Center revenue saw a remarkable increase of 93% year-over-year, reaching $35.6 billion. This performance underscores Nvidia's leadership in providing hardware for AI advancements and its pivotal role in the ongoing AI revolution.
Esra Kayabali@AWS News Blog
//
Anthropic has launched Claude 3.7 Sonnet, its most advanced AI model to date, designed for practical use in both business and development. The model is described as a hybrid system, offering both quick responses and extended, step-by-step reasoning for complex problem-solving; this versatility eliminates the need for separate models for different tasks. The company emphasized Claude 3.7 Sonnet's strength in coding: its reasoning capabilities allow it to analyze and modify complex codebases more effectively than previous versions, and it supports outputs of up to 128K tokens.
Anthropic also introduced Claude Code, an agentic coding tool currently in a limited research preview. The tool promises to streamline development by automating parts of a developer's job. Claude 3.7 Sonnet is accessible across all Anthropic plans, including Free, Pro, Team, and Enterprise, and via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI; extended thinking mode is reserved for paid subscribers. Pricing is set at $3 per million input tokens and $15 per million output tokens. Anthropic also stated that it reduced unnecessary refusals by 45% compared to the model's predecessor.
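The published per-token prices translate directly into request costs. As a minimal sketch (estimate_cost is an illustrative helper, not part of Anthropic's SDK), the quoted rates can be applied like this:

```python
# Claude 3.7 Sonnet's published pricing, per million tokens.
INPUT_PRICE_PER_M = 3.00    # $3 per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # $15 per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated API cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A request with 10K input tokens and 2K output tokens:
print(round(estimate_cost(10_000, 2_000), 4))  # → 0.06
```

Note the 5x gap between input and output rates, which rewards keeping extended-thinking outputs short where possible.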
Megan Crouse@eWEEK
//
Google has launched a free tier of its AI-powered coding assistant, Gemini Code Assist, for individual developers. The new tier provides AI-driven coding assistance, including code suggestions, debugging support, and error explanations. It supports 22 programming languages and offers up to 180,000 code completions per month, far exceeding the free tier of GitHub Copilot, which provides only 2,000 completions per month. The tool integrates with popular IDEs such as Visual Studio Code and the JetBrains IDEs.
Google's Gemini Code Assist is built on the Gemini 2.0 model, fine-tuned for programming tasks by analyzing real-world coding use cases. According to Google, the quality of AI-generated recommendations is now "better than ever." In addition to the free tier, Google has also introduced Gemini Code Assist for GitHub, which automates parts of the code review workflow by summarizing pull requests. This free offering positions Gemini as a direct competitor to GitHub Copilot, expanding access to AI-assisted coding for hobbyists and startup developers.
will@LearnAI
//
AWS is enhancing its AI capabilities with cost-effective inference that combines Amazon Bedrock's serverless features with models trained in Amazon SageMaker. Users can import their own custom fine-tuned models from SageMaker into Amazon Bedrock and access them through a fully managed API, eliminating the need for self-managed infrastructure or costly provisioned throughput and making AI more accessible. Amazon Bedrock supports a variety of model architectures for import, including Mistral, Flan, and Meta's Llama 2 and Llama 3, and imported models can be exercised in the Bedrock playgrounds.
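Invoking an imported model goes through the same managed InvokeModel API as Bedrock's built-in models. The sketch below assumes a Llama-style request schema; the field names are an assumption for illustration, since the actual body schema depends on the imported architecture:

```python
import json

def build_llama_body(prompt: str, max_gen_len: int = 512,
                     temperature: float = 0.5) -> str:
    """Serialize a Llama-style invocation body for Bedrock's InvokeModel.

    The "prompt"/"max_gen_len"/"temperature" fields follow the Llama
    request format; other imported architectures expect different keys.
    """
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

# With boto3 configured and the ARN of an imported model, the managed
# API call would look roughly like this (not executed here):
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=imported_model_arn,
#                              body=build_llama_body("Hello"))
```

Because the endpoint is fully managed, there is no instance or endpoint to provision: you pay per invocation rather than for reserved throughput.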
AWS also launched a new cloud region in Thailand, aimed at bolstering Southeast Asia's digital economy and presenting opportunities for Indian businesses to serve a wider audience. This expansion provides a strategic gateway to neighbouring markets in the APAC region. The move highlights AWS's commitment to expanding its global infrastructure and enhancing the availability of cloud-based services. The Thailand cloud region adds to AWS's growing list of services focused on supporting AI development and deployment.
Ellie Ramirez-Camara@Data Phoenix
//
References : Shelly Palmer, Data Phoenix
OpenAI's new image generation tool, integrated into the GPT-4o model in ChatGPT, is experiencing immense popularity, leading to temporary limits on GPU usage. CEO Sam Altman acknowledged the issue, stating that the company's GPUs are "melting" under the overwhelming demand. The surge highlights the significant computational resources required for complex AI image generation, pushing the limits of current infrastructure. The new model is designed to replace DALL·E as ChatGPT's default image generator and aims to deliver superior realism and speed compared to its predecessor.
The company is implementing measures to manage the high demand, including rate-limiting image requests, and has delayed the rollout of GPT-4o image generation for free users. While there is no word on specific limits for paid users, they are likely experiencing slowdowns as well. The popularity of Ghibli-style images has flooded social media, prompting discussions about artistic integrity and the physical constraints of supporting such intensive computational tasks at scale.
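When a service starts rate-limiting requests, well-behaved clients typically retry with exponential backoff rather than hammering the endpoint. As a hedged sketch (RateLimitError and the request function are illustrative stand-ins, not OpenAI SDK names):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the error a rate-limited API would raise."""

def with_backoff(request_fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry request_fn on RateLimitError with jittered exponential delays."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Wait base_delay * 2^attempt plus a small random jitter so
            # that many clients don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

The jitter term matters under exactly the kind of demand spike described above: without it, throttled clients retry in lockstep and recreate the load spike they were throttled for.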