References: Ali Azhar @ AIwire
Meta has announced the creation of Meta Superintelligence Labs (MSL), a new division focused on long-horizon goals and foundational AI development. This strategic move consolidates Meta's core AI efforts, bringing together the Fundamental Artificial Intelligence Research (FAIR) group, the LLaMA model team, and key infrastructure units into a single entity. The lab aims to pursue the next generation of AI systems with greater focus and resources, signaling Meta's ambition to be a leader in artificial general intelligence (AGI). Alexandr Wang, former CEO of Scale AI, has been appointed as Meta's first Chief AI Officer and will co-lead MSL's research and product direction alongside Nat Friedman, former GitHub CEO. Meta is making substantial investments in compute infrastructure, including a large-scale facility equipped with over 1.3 million Nvidia GPUs, underscoring its commitment to advancing AI capabilities.
The formation of MSL represents a significant shift in Meta's AI strategy, moving from developing AI tools for short-term product features to concentrating on foundational advancements and scientific leadership. This reorganization suggests that Meta views superintelligence not as a distant aspiration, but as a near-term opportunity. Meta has been actively recruiting top AI talent, including key figures from competitors like Apple, highlighting a competitive landscape for AI expertise. The company's investment in infrastructure and its aggressive hiring strategy indicate a strong determination to lead in the rapidly evolving AI field.

In parallel with its AI research focus, Meta is also involved in initiatives to foster AI talent and its application for public good. The company is backing a £1 million 'Open Source AI Fellowship' in collaboration with the UK Government and the Alan Turing Institute. This program aims to embed AI experts within UK government departments to develop advanced tools for public services, utilizing open-source models such as Meta's Llama. This initiative demonstrates Meta's commitment to supporting the development of AI for societal benefit, alongside its ambitious internal research objectives.
References: ComputerWeekly.com
Meta and the UK Government have joined forces to launch a £1 million ‘Open Source AI Fellowship’ program. The goal is to embed some of the UK’s most promising AI experts within Whitehall, the UK government's administrative center, to develop advanced AI tools. These tools will aim to improve government agility and contribute to the delivery of the Plan for Change. The Alan Turing Institute is also backing the fellowship.
The program intends to harness the power of open source AI models, including Meta's Llama models. These models have shown great potential for scientific and medical breakthroughs and could transform public service delivery. Fellows will work within government departments, potentially contributing to high-security use cases like AI-powered language translation for national security, or speeding up the approval process for house building by leveraging construction planning data.

The fellowship is a practical response to the growing demand for generative AI talent. It will give engineers a chance to address high-impact public-sector challenges, with the aim of creating transparent, sovereign AI infrastructure that can scale across departments while reducing costs and enhancing productivity. Technology Secretary Peter Kyle emphasizes that the aim is to create open, practical AI tools "built for public good," focusing on delivery rather than just ideas, and developing sovereign capabilities in areas like national security and critical infrastructure.
References: marktechpost.com
Meta AI has announced the release of V-JEPA 2, an open-source world model designed to enhance robots' ability to understand and interact with physical environments. V-JEPA 2 builds upon the Joint Embedding Predictive Architecture (JEPA) and leverages self-supervised learning from over one million hours of video and images. This approach allows the model to learn abstract concepts and predict future states, enabling robots to perform tasks in unfamiliar settings and improving their understanding of motion and appearance. The model could be useful in manufacturing automation, surveillance analytics, in-building logistics, robotics, and other advanced use cases.
Meta researchers scaled JEPA pretraining by constructing a 22M-sample dataset (VideoMix22M) from public sources and expanded the encoder capacity to over 1B parameters. They also adopted a progressive resolution strategy and extended pretraining to 252K iterations, reaching 64 frames at 384x384 resolution. V-JEPA 2 avoids the inefficiencies of pixel-level prediction by focusing on predictable scene dynamics while disregarding irrelevant noise. This abstraction makes the system both more efficient and robust, requiring just 16 seconds to plan and control robots.

Meta's V-JEPA 2 represents a step toward achieving "advanced machine intelligence" by enabling robots to interact effectively in environments they have never encountered before. The model achieves state-of-the-art results on motion recognition and action prediction benchmarks and can control robots without additional training. By focusing on the essential and predictable aspects of a scene, V-JEPA 2 aims to provide AI agents with the intuitive physics needed for effective planning and reasoning in the real world, distinguishing itself from generative models that attempt to predict every detail.
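The core idea above — predicting in embedding space rather than pixel space — can be illustrated with a toy NumPy sketch. This is not V-JEPA 2's actual architecture (the real model uses large ViT encoders, an EMA target network, and masked video pretraining); the encoder, predictor, and shapes here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(frames, W):
    # Toy encoder: flatten each frame and project to a low-dim embedding.
    return np.tanh(frames.reshape(frames.shape[0], -1) @ W)

def predictor(context_emb, P):
    # Predict the embeddings of future frames from context embeddings.
    return context_emb @ P

# Hypothetical shapes: 8 context frames, 16x16 "pixels", 32-dim embeddings.
context_frames = rng.standard_normal((8, 16, 16))
future_frames = rng.standard_normal((8, 16, 16))
W = rng.standard_normal((256, 32)) * 0.1   # shared encoder weights
P = rng.standard_normal((32, 32)) * 0.1    # predictor weights

z_ctx = encoder(context_frames, W)   # context embeddings
z_tgt = encoder(future_frames, W)    # target embeddings (stop-gradient in practice)
z_pred = predictor(z_ctx, P)

# JEPA-style objective: regress predicted embeddings onto target embeddings.
# Pixels are never reconstructed, so noise that does not change the embedding
# (e.g. texture flicker) cannot dominate the loss.
latent_loss = np.mean((z_pred - z_tgt) ** 2)
print(float(latent_loss))
```

The contrast with generative world models is visible in the loss: a pixel-level model would penalize every unpredictable detail of `future_frames`, while this latent objective only penalizes errors in the abstract state the encoder retains.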
References: Latest news
Meta CEO Mark Zuckerberg is spearheading a new initiative to develop artificial general intelligence (AGI), recruiting top AI researchers to form an elite team. This push aims to create AI systems capable of performing any intellectual task that a human can, positioning Meta to compete directly with tech giants like Google and OpenAI. Zuckerberg's involvement includes personal recruitment efforts, indicating the high priority Meta is placing on this project. This signals a significant shift for Meta, aiming to lead in the rapidly evolving AI landscape.
Disappointment with the performance of Meta's LLaMA 4 model compared to competitors like OpenAI's GPT-4 and Google's Gemini spurred Zuckerberg's increased focus on AGI. Internally, LLaMA 4 was considered inadequate in real-world user experience, lacking coherence and usability. Furthermore, Meta's metaverse investments have not yielded the anticipated results, leading the company to redirect its focus and resources toward AI, aiming to recapture relevance and mindshare in the tech industry. With tens of billions already invested in infrastructure and foundational models, Meta is now fully committed to achieving AGI.

To further bolster its AI ambitions, Meta has invested €12 billion in AI start-up Scale AI, acquiring a 49% stake; the investment prompted Google to end its $200 million partnership with Scale AI. Zuckerberg has also offered large salaries to poach AI talent. These moves are part of Meta's broader strategy to build superintelligence and challenge the dominance of other AI leaders, and its aggressive pursuit of talent and strategic investments highlights its determination to become a frontrunner in the race to build AGI.
References: techhq.com, Tech News | Euronews RSS
Meta is making significant strides in powering its AI infrastructure with nuclear energy and revolutionizing advertising through AI-generated content. The company has entered into a 20-year agreement with Constellation Energy to secure 1,121 megawatts of emissions-free power from the Clinton Clean Energy Center in southern Illinois, starting in 2027. This move aims to address the surging energy demands of Meta's AI data centers and positions the company alongside other tech giants like Google, Microsoft, and Amazon in embracing nuclear power as a sustainable solution.
The partnership with Constellation not only ensures a reliable energy supply but also contributes to the local economy by preserving over 1,100 jobs and generating $13.5 million in annual tax revenue. Meta emphasizes nuclear power's role in providing firm electricity while supporting local economies and strengthening America's energy leadership. Addressing grid reliability concerns, the agreement maintains existing capacity and adds incremental power to the grid, enabling Constellation to explore further nuclear development at the site.

Simultaneously, Meta is betting big on AI-generated advertising, aiming to streamline the ad creation process for businesses. The goal is to enable users to input a product image and define a budget, then let AI handle everything else: generating copy, creating images and video, deploying the ads, targeting the audience, and even recommending spend. Meta's current tools already tweak and optimize ad variations, and the company believes small businesses will benefit the most, as generative tools could level the creative playing field, removing the need for agencies, studios, or in-house teams.
References: marktechpost.com
Meta is undergoing significant changes within its AI division, aiming to accelerate development and integrate AI more deeply into its advertising platform. The company is restructuring its AI organization into two teams: one focused on AI products and the other on advancing Artificial General Intelligence (AGI) research, particularly for its Llama models. This reorganization comes amidst a substantial talent exodus, with a significant portion of the original Llama research team having departed, many joining competitors like Mistral AI. Despite these challenges, Meta AI has reached a milestone of 1 billion monthly active users across Facebook, Instagram, WhatsApp, and Messenger, highlighting the broad reach of its AI initiatives.
Meta's focus is now shifting towards monetizing its AI capabilities, particularly through advertising. By the end of 2026, Meta intends to enable advertisers to fully create and target campaigns using AI, potentially disrupting traditional advertising agencies. Advertisers will be able to provide a product image and budget, and Meta's AI would generate the entire ad, including imagery, video, and text, while also targeting specific user demographics. This move aims to attract more advertisers, especially small and mid-sized businesses, by simplifying the ad creation process and leveraging Meta's extensive user data for targeted campaigns.

However, Meta's increased reliance on AI raises concerns regarding data privacy and ethical considerations. The company has begun using data from Facebook and Instagram users, including posts, photos, and interactions with Meta AI, to train its AI models. Furthermore, Meta is reportedly planning to automate up to 90% of its risk assessments across Facebook and Instagram, including product development and rule changes. This shift raises questions about potential oversights and the impact on user safety, given the reliance on AI to evaluate potential risks and enforce policies.
References: insideAI News, Ken Yeung
Meta is partnering with Cerebras to enhance AI inference speeds within Meta's new Llama API. This collaboration combines Meta's open-source Llama models with Cerebras' specialized inference technology, aiming to provide developers with significantly faster performance. According to Cerebras, developers building on the Llama 4 Cerebras model within the API can expect speeds up to 18 times quicker than traditional GPU-based solutions. This acceleration is expected to unlock new possibilities for building real-time and agentic AI applications, making complex tasks like low-latency voice interaction, interactive code generation, and real-time reasoning more feasible.
This partnership allows Cerebras to expand its reach to a broader developer audience, strengthening its existing relationship with Meta. Since launching its inference solutions in 2024, Cerebras has emphasized its ability to deliver rapid Llama inference, serving billions of tokens through its AI infrastructure. Andrew Feldman, CEO and co-founder of Cerebras, stated that the company is proud to make Llama API the fastest inference API available, empowering developers to create AI systems previously unattainable with GPU-based inference clouds. Independent benchmarks by Artificial Analysis support this claim, indicating that Cerebras achieves significantly higher token processing speeds compared to platforms like ChatGPT and DeepSeek. Developers will have direct access to the enhanced Llama 4 inference by selecting Cerebras within the Llama API.

Meta also continues to innovate with its AI app, testing new features such as "Reasoning" mode and "Voice Personalization," designed to enhance user interaction. The "Reasoning" feature could potentially offer more transparent explanations for the AI's responses, while voice settings like "Focus on my voice" and "Welcome message" could offer more personalized audio interactions, especially relevant for Meta's hardware ambitions in areas such as smart glasses and augmented reality devices.
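For a rough idea of what "selecting Cerebras within the Llama API" might look like to a developer, here is a hypothetical sketch. Many hosted inference services follow an OpenAI-style chat-completions request shape, but the endpoint URL, model identifier, and provider-selection mechanism below are assumptions, not Meta's documented API; the request is only constructed, never sent.

```python
import json
import urllib.request

# Placeholder endpoint and key -- NOT the real Llama API (assumption for illustration).
API_URL = "https://api.llama.example/v1/chat/completions"
API_KEY = "YOUR_KEY"

payload = {
    # Hypothetical provider-specific model id used to route inference to Cerebras.
    "model": "llama-4-cerebras",
    "messages": [{"role": "user", "content": "Summarize V-JEPA 2 in one sentence."}],
}

def build_request(url, key, body):
    # Assemble (but do not send) the POST request, keeping the sketch offline.
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(API_URL, API_KEY, payload)
print(req.get_method(), req.full_url)
```

The point of the sketch is the low switching cost such speedups imply: if provider choice is just a model identifier in an otherwise standard request, moving a latency-sensitive agentic workload onto faster hardware requires no application rewrite.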
References: Alexey Shabanov @ TestingCatalog
Meta is actively expanding the capabilities of its standalone Meta AI app, introducing new features focused on enhanced personalization and functionality. The company is developing a "Discover AIs" tab, which could serve as a hub for users to explore and interact with various AI assistants, potentially including third-party or specialized models. This aligns with Meta’s broader strategy to integrate personalized AI agents across its apps and hardware. Meta launched a dedicated Meta AI app powered by Llama 4 that focuses on offering more natural voice conversations and can leverage user data from Facebook and Instagram to provide tailored responses.
Meta is also testing a "reasoning" mode, suggesting the company aims to provide more transparent and advanced explanations in its AI assistant's responses. While the exact implementation remains unclear, the feature could emphasize structured logic or chain-of-thought capabilities, similar to developments in models from OpenAI and Google DeepMind. This would give users greater insight into how the AI derives its answers, potentially boosting trust and utility for complex queries.

Further enhancing user experience, Meta is working on new voice settings, including "Focus on my voice" and "Welcome message." "Focus on my voice" could improve the AI's ability to isolate and respond to the primary user's speech in environments with multiple speakers, while "Welcome message" might offer a customizable greeting or onboarding experience when the assistant is activated. These features are particularly relevant for Meta's hardware ambitions, such as its Ray-Ban smart glasses and future AR devices, where voice interaction plays a critical role. To ensure privacy, Meta is also developing Private Processing for AI tools on WhatsApp, allowing users to leverage AI in a secure way.
References: Kevin Okemwa @ windowscentral.com
Meta is aggressively pursuing the development of AI-powered "friends" to combat what CEO Mark Zuckerberg identifies as a growing "loneliness epidemic." Zuckerberg envisions these AI companions as social chatbots capable of engaging in human-like interactions. This initiative aims to bridge the gap in human connectivity, which Zuckerberg believes is lacking in today's fast-paced world. He suggests that virtual friends might help individuals who struggle to establish meaningful connections with others in real life.
Zuckerberg revealed that Meta is launching a standalone Meta AI app powered by the Llama 4 model. This app is designed to facilitate more natural voice conversations and provide tailored responses by leveraging user data from Facebook and Instagram. This level of personalization aims to create a more engaging and relevant experience for users seeking companionship and interaction with AI. Furthermore, the CEO indicated that Meta is also focusing on AI smart glasses, which he sees as a core element of the future of technology.

However, Zuckerberg acknowledged that the development of AI friends is still in its early stages, and there may be societal stigmas associated with forming connections with AI-powered chatbots. He also stated that while smart glasses are a point of focus for the company, it's unlikely they will replace smartphones. In addition to AI companions, Meta is pushing forward with other AI initiatives, including integrating the new Meta AI app with the Meta View companion app for its Ray-Ban Meta smart glasses and launching an AI assistant app that personalizes its responses to user data.
References: Meta Newsroom
Meta has launched its first dedicated AI application, directly challenging ChatGPT in the burgeoning AI assistant market. The Meta AI app, built on the Llama 4 large language model (LLM), aims to offer users a more personalized AI experience. The application is designed to learn user preferences, remember context from previous interactions, and provide seamless voice-based conversations, setting it apart from competitors. This move is a significant step in Meta's strategy to establish itself as a major player in the AI landscape, offering a direct path to its generative AI models.
The new Meta AI app features a 'Discover' feed, a social component allowing users to explore how others are utilizing AI and share their own AI-generated creations. The app also replaces Meta View as the companion application for Ray-Ban Meta smart glasses, enabling a fluid experience across glasses, mobile, and desktop platforms. Users will be able to initiate conversations on one device and continue them seamlessly on another. To use the application, a Meta products account is required, though users can sign in with their existing Facebook or Instagram profiles.

CEO Mark Zuckerberg emphasized that the app is designed to be a user's personal AI, highlighting the ability to engage in voice conversations. The app begins with basic information about a user's interests, evolving over time to incorporate more detailed knowledge about the user and their network. The launch comes as rival companies build out their own AI models, and it gives Meta a way to demonstrate the power and flexibility of its in-house Llama 4 models to both consumers and third-party software developers.
References: about.fb.com, techxplore.com
Meta has launched the Meta AI app, a standalone AI assistant designed to compete with ChatGPT. This marks Meta's initial step in building a more personalized AI experience for its users. The app is built with Llama 4, a model Meta touts as more cost-efficient than its competitors. Users can access Meta AI through voice conversations and interactions will be personalized over time as the AI learns user preferences and contexts across Meta's various apps.
The Meta AI app includes a Discover feed, enabling users to share and explore how others are utilizing AI. It also replaces Meta View as the companion app for Ray-Ban Meta smart glasses, allowing for seamless conversations across glasses, mobile app, and desktop interfaces. According to Meta CEO Mark Zuckerberg, the app is designed to be a personal AI, starting with basic context about user interests and evolving to incorporate more comprehensive knowledge from across Meta's platforms.

With the introduction of the Meta AI app, Meta aims to provide its users a direct path to its generative AI models, similar to the approach OpenAI has taken with ChatGPT. The release comes as OpenAI remains the leader in direct-to-user AI through its ChatGPT assistant, which is regularly updated with new capabilities. Zuckerberg noted that a billion people are already using Meta AI across Meta's apps.