Alexey Shabanov@TestingCatalog
//
Mistral AI is expanding its AI capabilities with a new Agents feature within Le Chat, offering users intuitive customization, advanced controls, and faster performance. The redesigned Agents feature replaces the earlier Agent Builder interface and integrates closely with the main chat experience. It lets users create and customize autonomous agents with functionality similar to OpenAI's GPT Builder, but with Mistral's own design choices and system integrations.
Mistral AI has also launched its Agents API, a framework designed to let developers build AI agents capable of executing a variety of tasks. These tasks include running Python code in a secure sandbox, generating images using the FLUX model, and performing retrieval-augmented generation (RAG). The Agents API provides a cohesive environment in which large language models interact with multiple tools and data sources, enabling efficient and versatile agent creation. The feature set converging across major LLM API vendors includes code execution (Python in a sandbox), web search (using Brave), a document library (hosted RAG), and image generation (FLUX in Mistral's case). The timing of MCP support is also similar across the major vendors: OpenAI added it on May 21st, Anthropic launched theirs on May 22nd, and Mistral launched theirs on May 27th. For professionals like Lead AI Engineers or Senior AI Engineers, the Mistral Agents API represents a powerful addition to their AI toolkit.
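To make the workflow concrete, the sketch below shows roughly how an agent with the sandboxed code-execution tool might be created and then queried over HTTP. This is a minimal illustration, not official documentation: the endpoint paths, field names (`instructions`, `tools`, `inputs`), the `code_interpreter` tool identifier, and the model alias are assumptions based on the general shape of such APIs, so the exact schema should be checked against Mistral's Agents API reference.

```python
import os
import requests

# Minimal sketch of creating and querying an agent via the Agents API.
# Endpoint paths, field names, tool identifiers, and the model alias are
# assumptions for illustration; verify them against Mistral's API reference.
API_KEY = os.environ["MISTRAL_API_KEY"]
BASE_URL = "https://api.mistral.ai/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# 1. Create an agent that can run Python in a sandbox (assumed tool name).
agent_resp = requests.post(
    f"{BASE_URL}/agents",
    headers=HEADERS,
    json={
        "model": "mistral-medium-latest",         # assumed model alias
        "name": "data-helper",
        "instructions": "Answer questions; use the code tool for calculations.",
        "tools": [{"type": "code_interpreter"}],  # assumed tool identifier
    },
    timeout=30,
)
agent_resp.raise_for_status()
agent_id = agent_resp.json()["id"]

# 2. Start a conversation with the agent (assumed endpoint and fields).
conv_resp = requests.post(
    f"{BASE_URL}/conversations",
    headers=HEADERS,
    json={"agent_id": agent_id, "inputs": "Compute the 20th Fibonacci number."},
    timeout=60,
)
conv_resp.raise_for_status()
print(conv_resp.json())
```

The same pattern would extend to the other built-in tools (web search, document library, image generation) by listing additional tool entries when the agent is created.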
Matthias Bastian@THE DECODER
//
Mistral AI, a French artificial intelligence startup, has launched Mistral Small 3.1, a new open-source language model boasting 24 billion parameters. According to the company, this model outperforms similar offerings from Google and OpenAI, specifically Gemma 3 and GPT-4o Mini, while operating efficiently on consumer hardware like a single RTX 4090 GPU or a MacBook with 32GB RAM. It supports multimodal inputs, processing both text and images, and features an expanded context window of up to 128,000 tokens, which makes it suitable for long-form reasoning and document analysis.
Mistral Small 3.1 is released under the Apache 2.0 license, promoting accessibility and competition within the AI landscape. Mistral AI aims to challenge the dominance of major U.S. tech firms by offering a high-performance, cost-effective AI solution. The model achieves inference speeds of 150 tokens per second and is designed for text and multimodal understanding, positioning itself as a powerful alternative to industry-leading models without requiring expensive cloud infrastructure.
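As a quick illustration of the multimodal interface, the sketch below sends a combined text-and-image prompt to Mistral's hosted chat completions endpoint. It is a minimal sketch under assumptions: the model alias `mistral-small-latest`, the image message format, and the response structure are illustrative and should be verified against Mistral's current API documentation; for fully local use, the Apache 2.0 weights can instead be downloaded and served with a compatible runtime.

```python
import os
import requests

# Minimal sketch of a multimodal (text + image) request to Mistral's hosted API.
# The model alias, image message format, and response parsing are assumptions
# for illustration; check Mistral's API documentation for the exact schema.
API_KEY = os.environ["MISTRAL_API_KEY"]

payload = {
    "model": "mistral-small-latest",  # assumed alias pointing at Mistral Small 3.1
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url", "image_url": "https://example.com/chart.png"},
            ],
        }
    ],
    "max_tokens": 512,
}

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# Assumes an OpenAI-style response layout with a choices list.
print(resp.json()["choices"][0]["message"]["content"])
```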