@the-decoder.com
OpenAI is expanding its global reach through strategic partnerships with governments and the introduction of advanced model customization tools. The organization has launched the "OpenAI for Countries" program, an initiative designed to collaborate with governments worldwide on building robust AI infrastructure. The program aims to help nations set up data centers and adapt OpenAI's products to local languages and country-specific needs. OpenAI envisions this initiative as part of a broader global strategy to foster cooperation and advance AI capabilities on an international scale.
This expansion also includes technological advancements: OpenAI has released Reinforcement Fine-Tuning (RFT) for its o4-mini reasoning model. RFT enables enterprises to fine-tune their own versions of the model using reinforcement learning, tailoring it to their unique data and operational requirements. Developers can customize the model through OpenAI's platform dashboard, tweaking it for internal terminology, goals, processes, and more. Once deployed, an employee or leader at the company can use the RFT version through a custom internal chatbot or custom OpenAI GPT to pull up private, proprietary company knowledge, answer specific questions about company products and policies, or generate new communications and collateral in the company's voice.

The "OpenAI for Countries" program is slated to begin with ten international projects, supported by funding from both OpenAI and participating governments. Chris Lehane, OpenAI's vice president of global policy, said the program was inspired by the AI Action Summit in Paris, where several countries expressed interest in establishing their own "Stargate"-style projects.

The release of RFT on o4-mini marks a major step forward in custom model optimization, giving developers a powerful new technique for tailoring foundation models to specialized tasks. It allows fine-grained control over how models improve by letting developers define custom objectives and reward functions.
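To make that concrete, here is a minimal sketch of launching an RFT job through OpenAI's Python SDK. The shape of the request follows OpenAI's published fine-tuning API, but treat the details as illustrative: the training file ID and the `correct_label` field are hypothetical placeholders, and the simple string-equality grader stands in for the custom reward functions described above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Launch a reinforcement fine-tuning (RFT) job on o4-mini.
# "file-abc123" is a placeholder for an uploaded JSONL training file
# whose items each carry a hypothetical `correct_label` field.
job = client.fine_tuning.jobs.create(
    model="o4-mini-2025-04-16",
    training_file="file-abc123",
    method={
        "type": "reinforcement",
        "reinforcement": {
            # The grader acts as the reward function: here, a simple
            # string-equality check between the model's output and the
            # reference label stored with each training item.
            "grader": {
                "type": "string_check",
                "name": "label_match",
                "operation": "eq",
                "input": "{{sample.output_text}}",
                "reference": "{{item.correct_label}}",
            },
        },
    },
)
print(job.id, job.status)
```

In practice the grader can be far richer than an exact-match check (for example, a model-based scorer), which is where the "custom objectives and reward functions" come in: the grader's score is the reinforcement signal the fine-tuning run optimizes.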