News from the AI & ML world
@the-decoder.com
//
DeepSeek's R1 model has garnered significant attention across the AI landscape. Perplexity AI has released R1 1776, a post-trained version of DeepSeek-R1 designed to remove Chinese censorship. The modification addresses the original model's habit of answering sensitive questions with pre-approved Communist Party talking points. Perplexity's post-training process involved gathering an extensive dataset of topics censored in China and building a multilingual censorship detection system to identify and correct censored responses.
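The article does not detail how Perplexity's multilingual censorship detector works. As a rough illustration of the general idea of screening model outputs for canned refusal language, the sketch below uses a simple substring check; the marker phrases, languages, and the check itself are illustrative assumptions, not Perplexity's actual method.

```python
# Toy refusal-phrase screen. The markers below are illustrative placeholders,
# not the phrases or languages Perplexity actually targets.
REFUSAL_MARKERS = {
    "en": ["cannot discuss this topic", "let's talk about something else"],
    "zh": ["无可奉告", "我们换个话题吧"],  # placeholder deflection phrases
}

def looks_censored(response: str) -> bool:
    """Flag a response that deflects with canned refusal language."""
    lowered = response.lower()
    return any(
        marker.lower() in lowered
        for markers in REFUSAL_MARKERS.values()
        for marker in markers
    )

if __name__ == "__main__":
    sample = "I cannot discuss this topic. Let's talk about something else."
    print(looks_censored(sample))  # True -> candidate for corrective post-training data
```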
Perplexity reports that the post-trained R1 1776 handles these previously censored topics comprehensively and without bias while retaining the original model's mathematical and reasoning capabilities.
Separately, IBM has confirmed that it is integrating distilled versions of DeepSeek's AI models into its WatsonX platform. The decision is driven by IBM's commitment to open-source innovation and by the high cost of US-developed AI models. IBM aims to broaden WatsonX's ability to perform secure reasoning by incorporating the "best open source models" available, including those from DeepSeek.
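As a rough illustration of what "distilled versions" of DeepSeek-R1 refers to, the sketch below loads one of the openly released distilled checkpoints with Hugging Face transformers. This is not IBM's WatsonX integration path, which the article does not describe, and the model ID and generation settings are assumptions chosen for demonstration.

```python
# Minimal sketch: running a distilled DeepSeek-R1 checkpoint locally with
# Hugging Face transformers. Illustrative only; not the WatsonX integration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small distilled variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

prompt = "Explain in one sentence what model distillation means."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```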
ImgSrc: the-decoder.com
References:
- techstrong.ai: IBM Distills Chinese DeepSeek AI Models Into WatsonX
- www.analyticsvidhya.com: Grok 3 vs DeepSeek R1: Which is Better?
- www.artificialintelligence-news.com: DeepSeek to open-source AGI research amid privacy concerns
- composio.dev: Grok 3 vs. Deepseek r1
- Fello AI: Grok 3 vs ChatGPT vs DeepSeek vs Claude vs Gemini – Which AI Is Best in February 2025?
- AI News: DeepSeek to open-source five repositories starting next week as part of its commitment to transparency and community-driven innovation
Classification: