Ali Azhar@AIwire
//
References: AIwire
Meta has announced the creation of Meta Superintelligence Labs (MSL), a new division focused on long-horizon goals and foundational AI development. This strategic move consolidates Meta's core AI efforts, bringing together the Fundamental Artificial Intelligence Research (FAIR) group, the LLaMA model team, and key infrastructure units into a single entity. The lab aims to pursue the next generation of AI systems with greater focus and resources, signaling Meta's ambition to be a leader in artificial general intelligence (AGI). Alexandr Wang, former CEO of Scale AI, has been appointed as Meta's first Chief AI Officer and will co-lead MSL's research and product direction alongside Nat Friedman, former GitHub CEO. Meta is making substantial investments in compute infrastructure, including a large-scale facility equipped with over 1.3 million Nvidia GPUs, underscoring its commitment to advancing AI capabilities.
The formation of MSL represents a significant shift in Meta's AI strategy, moving from developing AI tools for short-term product features to concentrating on foundational advancements and scientific leadership. This reorganization suggests that Meta views superintelligence not as a distant aspiration, but as a near-term opportunity. Meta has been actively recruiting top AI talent, including key figures from competitors like Apple, highlighting a competitive landscape for AI expertise. The company's investment in infrastructure and its aggressive hiring strategy indicate a strong determination to lead in the rapidly evolving AI field. In parallel with its AI research focus, Meta is also involved in initiatives to foster AI talent and its application for public good. The company is backing a £1 million 'Open Source AI Fellowship' in collaboration with the UK Government and the Alan Turing Institute. This program aims to embed AI experts within UK government departments to develop advanced tools for public services, utilizing open-source models such as Meta's Llama. This initiative demonstrates Meta's commitment to supporting the development of AI for societal benefit, alongside its ambitious internal research objectives.
References :
@ComputerWeekly.com
//
Meta and the UK Government have joined forces to launch a £1 million ‘Open Source AI Fellowship’ program. The goal is to embed some of the UK’s most promising AI experts within Whitehall, the UK government's administrative center, to develop advanced AI tools. These tools will aim to improve government agility and contribute to the delivery of the Plan for Change. The Alan Turing Institute is also backing the fellowship.
The program intends to harness the power of open source AI models, including Meta's Llama models. These models have shown great potential for scientific and medical breakthroughs and could transform public service delivery. Fellows will work within government departments, potentially contributing to high-security use cases like AI-powered language translation for national security, or speeding up the approval process for house building by leveraging construction planning data. The fellowship is a practical response to the growing demand for generative AI talent. It will give engineers a chance to address high-impact public sector challenges, and it aims to create transparent, sovereign AI infrastructure that can scale across departments while reducing costs and enhancing productivity. Technology Secretary Peter Kyle emphasizes the aim is to create open, practical AI tools "built for public good," focusing on delivery rather than just ideas and developing sovereign capabilities in areas like national security and critical infrastructure.
References :
Rory Greener@XR Today
//
Meta and Oakley have officially announced their collaboration to create a new line of AR smart glasses. This partnership marks an expansion of Meta's efforts in the augmented reality wearable market, building on the success of their existing collaboration with Ray-Ban. The Oakley Meta HSTN smart glasses, designed with a focus on athletic performance, are set to launch later this summer with a starting price of $399. These glasses represent an evolution in AI glasses technology, combining Meta's technological expertise with Oakley's renowned design and brand recognition.
The Oakley Meta HSTN smart glasses will feature several hardware and software upgrades over the existing Ray-Ban Meta smart glasses. These upgrades include an enhanced camera for capturing higher-quality photos and videos. The glasses also boast open-ear speakers, an IPX4 water resistance rating, and other advanced features. The collaboration aims to dominate the smart glasses market by providing cutting-edge technology in a stylish and performance-oriented design, targeting both athletes and everyday users. Meta's strategic investment in Reality Labs is evident in this partnership. While the Reality Labs division has historically operated at a loss, Meta views it as a crucial long-term investment in the future of computing and augmented reality. The success of the Ray-Ban Meta AI glasses, which have seen a threefold increase in sales and growing usage of voice commands, has fueled Meta's confidence in the potential of smart glasses. This partnership with Oakley is another step toward expanding Meta's presence in the XR market and driving further revenue growth within the Reality Labs segment.
References :
Chris McKay@Maginative
//
Meta is making a significant move in the artificial intelligence race, investing $14.3 billion for a 49% stake in data-labeling startup Scale AI. This deal is more than just a financial investment; it brings Scale AI's CEO, 28-year-old Alexandr Wang, into Meta to lead a new "superintelligence" lab. The move highlights Meta's ambition to develop AI that surpasses human capabilities across multiple domains and is a calculated gamble to regain momentum in the competitive AI landscape. Meta is aiming for an AI reset and hopes that Scale's Wang is the right partner.
This acquisition reflects Meta's strategic shift towards building partnerships and leveraging external talent. Scale AI isn't a well-known name to the general public, but it's a vital component in the AI industry, providing the labeled training data that powers many AI systems, including those used by OpenAI, Microsoft, Google, and even the U.S. Department of Defense. Meta has agreed to dramatically increase its spending with Scale, but according to one person, Scale expects that other companies, such as Google and OpenAI, will stop using its services for fear that Meta could use information about their usage to gain a competitive advantage. The "superintelligence" lab is part of a larger reorganization of Meta's AI divisions, aimed at sharpening the company's focus after facing internal challenges and criticism over its AI product releases. Meta, under CEO Mark Zuckerberg, has been heavily investing in AI infrastructure and product development since the rise of ChatGPT, launching its own large language model family, Llama. Zuckerberg has been personally recruiting top researchers to boost its AI efforts. The new lab will focus on developing a theoretical form of AI that surpasses human cognitive capabilities, a long-term and highly speculative goal that Meta is now seriously pursuing.
References :
@www.marktechpost.com
//
Meta AI has announced the release of V-JEPA 2, an open-source world model designed to enhance robots' ability to understand and interact with physical environments. V-JEPA 2 builds upon the Joint Embedding Predictive Architecture (JEPA) and leverages self-supervised learning from over one million hours of video and images. This approach allows the model to learn abstract concepts and predict future states, enabling robots to perform tasks in unfamiliar settings and improving their understanding of motion and appearance. The model can be useful in manufacturing automation, surveillance analytics, in-building logistics, robotics, and other advanced use cases.
Meta researchers scaled JEPA pretraining by constructing a 22M-sample dataset (VideoMix22M) from public sources and expanded the encoder capacity to over 1B parameters. They also adopted a progressive resolution strategy and extended pretraining to 252K iterations, reaching 64 frames at 384x384 resolution. V-JEPA 2 avoids the inefficiencies of pixel-level prediction by focusing on predictable scene dynamics while disregarding irrelevant noise. This abstraction makes the system both more efficient and robust, with planning for robot control requiring just 16 seconds. Meta's V-JEPA 2 represents a step toward achieving "advanced machine intelligence" by enabling robots to interact effectively in environments they have never encountered before. The model achieves state-of-the-art results on motion recognition and action prediction benchmarks and can control robots without additional training. By focusing on the essential and predictable aspects of a scene, V-JEPA 2 aims to provide AI agents with the intuitive physics needed for effective planning and reasoning in the real world, distinguishing itself from generative models that attempt to predict every detail.
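The core idea above, predicting in an embedding space rather than at the pixel level, can be sketched in a few lines of plain Python. This is an illustrative toy under stated assumptions (a hand-rolled summary "encoder" and a static scene), not Meta's implementation; the point is only that a loss computed on latent summaries ignores sensor noise that a pixel-reconstruction loss would pay for.

```python
# Toy contrast between latent-space prediction (JEPA-style) and pixel-level
# prediction. The "encoder" here is a stand-in that summarizes a frame by
# its mean and variance; real JEPA encoders are large vision transformers.
import random

def encoder(frame):
    # Map a "frame" (list of floats) to a 2-d latent summary.
    n = len(frame)
    mean = sum(frame) / n
    var = sum((x - mean) ** 2 for x in frame) / n
    return [mean, var]

def latent_loss(pred_z, target_z):
    # Squared error in embedding space: the quantity JEPA-style training minimizes.
    return sum((p - t) ** 2 for p, t in zip(pred_z, target_z))

def pixel_loss(pred_frame, target_frame):
    # For contrast: pixel-level reconstruction also penalizes unpredictable noise.
    return sum((p - t) ** 2 for p, t in zip(pred_frame, target_frame))

random.seed(0)
frame_t = [1.0] * 16  # current frame of a static scene
frame_next = [1.0 + random.gauss(0, 0.1) for _ in range(16)]  # next frame + sensor noise

# A predictor that has learned the (static) scene dynamics perfectly:
predicted_latent = encoder(frame_t)   # latent is unchanged for a static scene
predicted_frame = list(frame_t)       # a pixel predictor cannot know the noise

print(latent_loss(predicted_latent, encoder(frame_next)))  # small: noise averages out
print(pixel_loss(predicted_frame, frame_next))             # larger: pays for every noisy pixel
```

The latent loss stays small because the unpredictable per-pixel noise largely cancels in the summary statistics, which is the intuition behind "focusing on predictable scene dynamics while disregarding irrelevant noise."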
References :
@Latest news
//
Meta has made a significant move in the artificial intelligence race by acquiring a 49% stake in data-labeling startup Scale AI for a staggering $14.3 billion. This investment values Scale AI at over $29 billion and brings Scale AI's founder and CEO, Alexandr Wang, on board to lead a new "superintelligence" lab within Meta. The move underscores Meta's determination to accelerate its AI development and compete more effectively with industry leaders like OpenAI and Google.
This strategic acquisition signifies a shift in Meta's approach to AI development, where Zuckerberg has been personally recruiting top researchers from other companies. Scale AI, while not widely known to the public, plays a crucial role in the AI ecosystem by providing the labeled training data that powers large language models. They have a global workforce of over 200,000 contractors to label various forms of data. By bringing Wang and a portion of his team in-house, Meta aims to gain a competitive edge in building AI models that surpass human capabilities. Wang, who founded Scale AI in 2016 after dropping out of MIT, has grown the company into a major player in the AI industry. Scale AI works with businesses, governments, and labs to exploit the benefits of artificial intelligence, and has a client list that includes OpenAI, Microsoft, Meta, Google, and the U.S. Department of Defense. As Wang departs for Meta, Jason Droege, who founded Uber Eats and currently serves as Scale's Chief Strategy Officer, will step in as interim CEO to ensure that Scale AI continues to operate independently despite Meta's significant stake.
References :
@Latest news
//
Meta CEO Mark Zuckerberg is spearheading a new initiative to develop artificial general intelligence (AGI), recruiting top AI researchers to form an elite team. This push aims to create AI systems capable of performing any intellectual task that a human can, positioning Meta to compete directly with tech giants like Google and OpenAI. Zuckerberg's involvement includes personal recruitment efforts, indicating the high priority Meta is placing on this project. This signals a significant shift for Meta, aiming to lead in the rapidly evolving AI landscape.
Disappointment with the performance of Meta's LLaMA 4 model compared to competitors like OpenAI's GPT-4 and Google's Gemini spurred Zuckerberg's increased focus on AGI. Internally, LLaMA 4 was considered inadequate in real-world user experience, lacking coherence and usability. Furthermore, Meta's metaverse investments have not yielded the anticipated results, leading the company to redirect its focus and resources toward AI, aiming to recapture relevance and mindshare in the tech industry. With tens of billions already invested in infrastructure and foundational models, Meta is now fully committed to achieving AGI. To further bolster its AI ambitions, Meta is investing heavily in AI start-up Scale AI, spending $14.3 billion to acquire a 49% stake. The investment has caused Google to end its $200 million partnership with Scale AI. Zuckerberg has also offered large salaries to poach AI talent. This move is part of Meta's broader strategy to build superintelligence and challenge the dominance of other AI leaders. Meta's aggressive pursuit of AI talent and strategic investments highlight its determination to become a frontrunner in the race to build AGI.
References :
@techhq.com
//
References: techhq.com, Tech News | Euronews RSS
Meta is making significant strides in powering its AI infrastructure with nuclear energy and revolutionizing advertising through AI-generated content. The company has entered into a 20-year agreement with Constellation Energy to secure 1,121 megawatts of emissions-free power from the Clinton Clean Energy Center in southern Illinois, starting in 2027. This move aims to address the surging energy demands of Meta's AI data centers and positions the company alongside other tech giants like Google, Microsoft, and Amazon in embracing nuclear power as a sustainable solution.
The partnership with Constellation not only ensures a reliable energy supply but also contributes to the local economy by preserving over 1,100 jobs and generating $13.5 million in annual tax revenue. Meta emphasizes nuclear power's role in providing firm electricity while supporting local economies and strengthening America's energy leadership. Addressing grid reliability concerns, the agreement maintains existing capacity and adds incremental power to the grid, enabling Constellation to explore further nuclear development at the site. Simultaneously, Meta is betting big on AI-generated advertising, aiming to streamline the ad creation process for businesses. The goal is to enable users to input a product image and define a budget, then let AI handle everything else – generating copy, creating images and video, deploying the ads, targeting the audience, and even recommending spend. Meta's current tools already tweak and optimize ad variations, and Meta believes that small businesses will benefit the most, as generative tools could level the creative playing field, removing the need for agencies, studios, or in-house teams.
References :
Claire Prudhomme@marketingaiinstitute.com
//
References: shellypalmer.com, www.marketingaiinstitute.com
Meta is making a significant push towards fully automating ad creation, aiming to allow businesses to generate ads with minimal input by 2026. According to CEO Mark Zuckerberg, the goal is to enable advertisers to simply provide a product image and budget, then let AI handle the rest, including generating copy, creating images and video, deploying the ads, targeting the audience, and even recommending spend. This strategic move, described as a "redefinition" of the advertising industry, seeks to reduce friction for advertisers and scale performance using AI, which is particularly relevant given that over 97% of Meta's $134 billion in 2023 revenue is tied to advertising.
This level of automation goes beyond simply tweaking existing ads; it promises concept-to-completion automation with personalization built in. Meta's AI tools are expected to show users different versions of the same ad based on factors such as their location. The company believes small businesses will particularly benefit, as generative tools could level the creative playing field and remove the need for agencies, studios, or in-house teams. Alex Schultz, Meta’s chief marketing officer and vice president of Analytics, assures agencies that AI will enable them to focus precious time and resources on creativity. While Meta envisions a streamlined and efficient advertising process, some are concerned about the potential impact on brand standards and the resonance of AI-generated content compared to human-crafted campaigns. The move has also sent shock waves through the traditional marketing industry, with fears that agencies could lose control over the ad creation process. As competitors like Google also push similar tools, the trend suggests a shift where the creative brief becomes a prompt, the agency becomes an algorithm, and the marketer becomes a curator of generative campaigns.
References :
@www.marktechpost.com
//
Meta is undergoing significant changes within its AI division, aiming to accelerate development and integrate AI more deeply into its advertising platform. The company is restructuring its AI organization into two teams: one focused on AI products and the other on advancing Artificial General Intelligence (AGI) research, particularly for its Llama models. This reorganization comes amidst a substantial talent exodus, with a significant portion of the original Llama research team having departed, many joining competitors like Mistral AI. Despite these challenges, Meta AI has reached a milestone of 1 billion monthly active users across Facebook, Instagram, WhatsApp, and Messenger, highlighting the broad reach of its AI initiatives.
Meta's focus is now shifting towards monetizing its AI capabilities, particularly through advertising. By the end of 2026, Meta intends to enable advertisers to fully create and target campaigns using AI, potentially disrupting traditional advertising agencies. Advertisers will be able to provide a product image and budget, and Meta's AI would generate the entire ad, including imagery, video, and text, while also targeting specific user demographics. This move aims to attract more advertisers, especially small and mid-sized businesses, by simplifying the ad creation process and leveraging Meta's extensive user data for targeted campaigns. However, Meta's increased reliance on AI raises concerns regarding data privacy and ethical considerations. The company has begun using data from Facebook and Instagram users, including posts, photos, and interactions with Meta AI, to train its AI models. Furthermore, Meta is reportedly planning to automate up to 90% of its risk assessments across Facebook and Instagram, including product development and rule changes. This shift raises questions about potential oversights and the impact on user safety, given the reliance on AI to evaluate potential risks and enforce policies.
References :
@www.eweek.com
//
Meta is making a significant move into military technology, partnering with Anduril Industries to develop augmented and virtual reality (XR) devices for the U.S. Army. This collaboration reunites Meta with Palmer Luckey, the founder of Oculus who was previously fired from the company. The initiative aims to provide soldiers with enhanced situational awareness on the battlefield through advanced perception capabilities and AI-enabled combat tools. The devices, potentially named EagleEye, will integrate Meta's Llama AI models with Anduril's Lattice system to deliver real-time data and improve operational coordination.
The new XR headsets are designed to support real-time threat detection, such as identifying approaching drones or concealed enemy positions. They will also provide interfaces for operating AI-powered weapon systems. Anduril states that the project will save the U.S. military billions of dollars by using high-performance components and technology originally developed for commercial use. The partnership reflects a broader trend of Meta aligning more closely with national security interests. In related news, Meta's research team has made a surprising discovery that shorter reasoning chains can significantly improve AI accuracy. A study released by Meta and The Hebrew University of Jerusalem found that AI models achieve 34.5% better accuracy when using shorter reasoning processes. This challenges the conventional belief that longer, more complex reasoning chains lead to better results. The researchers developed a new method called "short-m@k," which runs multiple reasoning attempts in parallel, halting computation once the first few processes are complete and selecting the final answer through majority voting. This method could reduce computing costs by up to 40% while maintaining performance levels.
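The "short-m@k" selection rule described above can be sketched in a few lines. This is an illustrative reconstruction from the description, not code from the paper; modeling each attempt as a (chain length, answer) pair, where shorter chains finish first, is an assumption for the sketch.

```python
# Sketch of short-m@k: run k reasoning attempts in parallel, keep only the
# first m to finish (here: the m shortest chains), then majority-vote.
from collections import Counter

def short_m_at_k(attempts, m):
    """attempts: list of (chain_length, answer) pairs from k parallel runs.
    Returns the majority answer among the m attempts that finish first;
    ties are broken toward the answer backed by the shortest chain."""
    finished_first = sorted(attempts, key=lambda a: a[0])[:m]
    votes = Counter(ans for _, ans in finished_first)
    top = max(votes.values())
    tied = [ans for ans, count in votes.items() if count == top]
    return min(tied, key=lambda ans: min(l for l, a in finished_first if a == ans))

# k = 5 parallel attempts, each with a token count and a final answer:
runs = [(120, "42"), (95, "42"), (300, "17"), (88, "42"), (410, "17")]
print(short_m_at_k(runs, m=3))  # the three shortest chains all answer "42"
```

The cost saving comes from halting the longer attempts as soon as the first m finish, so the slowest (often least accurate) chains never run to completion.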
References :
staff@insideAI News
//
References: insideAI News, Ken Yeung
Meta is partnering with Cerebras to enhance AI inference speeds within Meta's new Llama API. This collaboration combines Meta's open-source Llama models with Cerebras' specialized inference technology, aiming to provide developers with significantly faster performance. According to Cerebras, developers building on the Llama 4 Cerebras model within the API can expect speeds up to 18 times faster than traditional GPU-based solutions. This acceleration is expected to unlock new possibilities for building real-time and agentic AI applications, making complex tasks like low-latency voice interaction, interactive code generation, and real-time reasoning more feasible.
This partnership allows Cerebras to expand its reach to a broader developer audience, strengthening its existing relationship with Meta. Since launching its inference solutions in 2024, Cerebras has emphasized its ability to deliver rapid Llama inference, serving billions of tokens through its AI infrastructure. Andrew Feldman, CEO and co-founder of Cerebras, stated that the company is proud to make Llama API the fastest inference API available, empowering developers to create AI systems previously unattainable with GPU-based inference clouds. Independent benchmarks by Artificial Analysis support this claim, indicating that Cerebras achieves significantly higher token processing speeds compared to platforms like ChatGPT and DeepSeek. Developers will have direct access to the enhanced Llama 4 inference by selecting Cerebras within the Llama API. Meta also continues to innovate with its AI app, testing new features such as "Reasoning" mode and "Voice Personalization," designed to enhance user interaction. The "Reasoning" feature could offer more transparent explanations for the AI's responses, while voice settings like "Focus on my voice" and "Welcome message" could enable more personalized audio interactions, especially relevant for Meta's hardware ambitions in areas such as smart glasses and augmented reality devices.
References :
@www.marktechpost.com
//
Meta is making significant strides in the AI landscape, highlighted by the release of Llama Prompt Ops, a Python package aimed at streamlining prompt adaptation for Llama models. This open-source tool helps developers enhance prompt effectiveness by transforming inputs to better suit Llama-based LLMs, addressing the challenge of inconsistent performance across different AI models. Llama Prompt Ops facilitates smoother cross-model prompt migration and improves performance and reliability, featuring a transformation pipeline for systematic prompt optimization.
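As a rough illustration of what a prompt-transformation pipeline of the kind described above can look like, here is a minimal sketch. Every name in it (`PromptPipeline`, the step functions, the preamble wording) is a hypothetical stand-in, not Llama Prompt Ops' actual API; the point is only the shape of the idea, chaining small transformations that adapt a source prompt for a target model.

```python
# Hypothetical prompt-transformation pipeline (illustrative names only, not
# the real Llama Prompt Ops API): each step rewrites the prompt, and the
# pipeline applies the steps in order.
from typing import Callable, List

Transform = Callable[[str], str]

def strip_redundant_whitespace(prompt: str) -> str:
    # Normalize runs of whitespace so downstream steps see clean input.
    return " ".join(prompt.split())

def add_llama_system_preamble(prompt: str) -> str:
    # Example adaptation step: prepend an instruction framing (wording is
    # an assumption for illustration, not a recommended Llama preamble).
    return "You are a helpful assistant. Answer concisely.\n" + prompt

class PromptPipeline:
    """Apply a fixed sequence of transformations to a source prompt."""
    def __init__(self, steps: List[Transform]):
        self.steps = steps

    def run(self, prompt: str) -> str:
        for step in self.steps:
            prompt = step(prompt)
        return prompt

pipeline = PromptPipeline([strip_redundant_whitespace, add_llama_system_preamble])
print(pipeline.run("  Summarize   this  article. "))
```

A pipeline structured this way makes each adaptation step testable in isolation, which is what makes systematic cross-model prompt migration tractable.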
Meanwhile, Meta is expanding its AI strategy with the launch of a standalone Meta AI app, powered by Llama 4, to compete with rivals like Microsoft’s Copilot and ChatGPT. This app is designed to function as a general-purpose chatbot and a replacement for the “Meta View” app used with Meta Ray-Ban glasses, integrating a social component with a public feed showcasing user interactions with the AI. Meta also previewed its Llama API, designed to simplify the integration of its Llama models into third-party products, attracting AI developers with an open-weight model that supports modular, specialized applications. However, Meta's AI advancements are facing legal challenges, as a US judge is questioning the company's claim that training AI on copyrighted books constitutes fair use. The case, focusing on Meta's Llama model, involves training data including works by Sarah Silverman. The judge raised concerns that using copyrighted material to create a product capable of producing an infinite number of competing products could undermine the market for original works, potentially obligating Meta to pay licenses to copyright holders.
References :
Alexey Shabanov@TestingCatalog
//
Meta is actively expanding the capabilities of its standalone Meta AI app, introducing new features focused on enhanced personalization and functionality. The company is developing a "Discover AIs" tab, which could serve as a hub for users to explore and interact with various AI assistants, potentially including third-party or specialized models. This aligns with Meta’s broader strategy to integrate personalized AI agents across its apps and hardware. Meta launched a dedicated Meta AI app powered by Llama 4 that focuses on offering more natural voice conversations and can leverage user data from Facebook and Instagram to provide tailored responses.
Meta is also testing a "reasoning" mode, suggesting the company aims to provide more transparent and advanced explanations in its AI assistant's responses. While the exact implementation remains unclear, the feature could emphasize structured logic or chain-of-thought capabilities, similar to developments in models from OpenAI and Google DeepMind. This would give users greater insight into how the AI derives its answers, potentially boosting trust and utility for complex queries. Further enhancing user experience, Meta is working on new voice settings, including "Focus on my voice" and "Welcome message." "Focus on my voice" could improve the AI's ability to isolate and respond to the primary user's speech in environments with multiple speakers. The "Welcome message" feature might offer a customizable greeting or onboarding experience when the assistant is activated. These features are particularly relevant for Meta’s hardware ambitions, such as its Ray-Ban smart glasses and future AR devices, where voice interaction plays a critical role. To ensure privacy, Meta is also developing Private Processing for AI tools on WhatsApp, allowing users to leverage AI in a secure way.
References :
Kevin Okemwa@windowscentral.com
//
Meta is aggressively pursuing the development of AI-powered "friends" to combat what CEO Mark Zuckerberg identifies as a growing "loneliness epidemic." Zuckerberg envisions these AI companions as social chatbots capable of engaging in human-like interactions. This initiative aims to bridge the gap in human connectivity, which Zuckerberg believes is lacking in today's fast-paced world. He suggests that virtual friends might help individuals who struggle to establish meaningful connections with others in real life.
Zuckerberg revealed that Meta is launching a standalone Meta AI app powered by the Llama 4 model. This app is designed to facilitate more natural voice conversations and provide tailored responses by leveraging user data from Facebook and Instagram. This level of personalization aims to create a more engaging and relevant experience for users seeking companionship and interaction with AI. Furthermore, the CEO indicated that Meta is also focusing on AI smart glasses. He sees these glasses as a core element of the future of technology. However, Zuckerberg acknowledged that the development of AI friends is still in its early stages, and there may be societal stigmas associated with forming connections with AI-powered chatbots. He also stated that while smart glasses are a point of focus for the company, it's unlikely they will replace smartphones. In addition to the development of AI companions, Meta is also pushing forward with other AI initiatives, including integrating the new Meta AI app with the Meta View companion app for its Ray-Ban Meta smart glasses and launching an AI assistant app that personalizes its responses to user data.