Chris McKay@Maginative
//
Meta has released Llama 4, its latest series of open-weight AI models and a significant advance in multimodal reasoning. The new lineup includes Llama 4 Scout and Llama 4 Maverick, designed to help developers build more personalized multimodal experiences. The models are natively multimodal: text and vision tokens are integrated into a unified model backbone, allowing them to understand text, images, and video and respond in text.
Developers can now access Llama 4 Scout and Llama 4 Maverick on GroqCloud™, gaining day-zero access to the new models. Llama 4 Scout offers an industry-leading 10 million token context window, letting it handle extensive codebases and lengthy documents. Llama 4 Maverick, meanwhile, outperforms GPT-4o on reasoning, coding, and vision benchmarks while using fewer active parameters than DeepSeek V3. Groq is offering Llama 4 Scout at $0.11 per million input tokens and $0.34 per million output tokens, and Llama 4 Maverick at $0.50 per million input tokens and $0.77 per million output tokens.
The Llama 4 models are the first in the family to use a mixture-of-experts (MoE) architecture, which improves computational efficiency during both training and inference. Llama 4 Scout has 17 billion active parameters across 16 experts (109 billion total), and Llama 4 Maverick has 17 billion active parameters across 128 experts (400 billion total). Meta is also previewing a roughly 2-trillion-parameter Llama 4 Behemoth model, continuing its commitment to open models that give developers helpful, safe, and adaptable experiences.
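To make the active-versus-total parameter distinction concrete, here is a minimal mixture-of-experts feed-forward layer in PyTorch. This is an illustrative sketch of the general technique, not Meta's implementation; the dimensions, expert design, and top-1 routing are assumptions.

import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    # Illustrative MoE block: a router picks the top-k experts per token,
    # so only a fraction of the total weights ("active parameters")
    # participate in any single forward pass.
    def __init__(self, d_model=64, d_ff=256, n_experts=16, top_k=1):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        weights, chosen = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, k] == e  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoEFeedForward()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64]); each token used 1 of 16 experts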
Jonathan Kemper@THE DECODER
//
References: Analytics India Magazine, THE DECODER
Meta is developing MoCha (Movie Character Animator), an AI system designed to generate complete character animations. MoCha takes natural language prompts describing the character, scene, and actions, along with a speech audio clip, and outputs a cinematic-quality video. This end-to-end model synchronizes speech with facial movements, generates full-body gestures, and maintains character consistency, even managing turn-based dialogue between multiple speakers. The system introduces a "Speech-Video Window Attention" mechanism to address a core challenge of AI video generation: it improves lip-sync accuracy by limiting each frame's access to a local window of audio tokens and adds tokens to create smoother transitions.
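The windowing idea can be sketched with a simple attention mask; the actual MoCha mechanism is more involved, and the normalized timestamps and window size below are assumptions.

import torch

def speech_video_window_mask(n_frames, n_audio_tokens, window=0.1):
    # Illustrative boolean mask: each video frame may attend only to audio
    # tokens within +/- `window` (in normalized time) of that frame,
    # mimicking the locality constraint described above.
    frame_t = torch.linspace(0, 1, n_frames).unsqueeze(1)        # (n_frames, 1)
    audio_t = torch.linspace(0, 1, n_audio_tokens).unsqueeze(0)  # (1, n_audio_tokens)
    return (frame_t - audio_t).abs() <= window                   # True = may attend

mask = speech_video_window_mask(n_frames=120, n_audio_tokens=500)
print(mask.shape)  # torch.Size([120, 500])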
MoCha runs on a diffusion transformer with 30 billion parameters and produces HD clips roughly five seconds long at 24 frames per second. For scenes with multiple characters, the team developed a streamlined prompt system: users define each character once and then reference them with simple tags throughout different scenes. Separately, Meta's AI research head, Joelle Pineau, has announced her resignation, effective at the end of May, vacating a high-profile position amid intense competition in AI development.
Facebook@Meta
//
References: Meta, TechInformed
Meta is expanding its AI assistant, Meta AI, to 41 European countries, including those in the European Union, and 21 overseas territories. This marks Meta's largest global AI rollout to date. However, the European version will initially offer limited capabilities, starting with an "intelligent chat function" available in six languages: English, French, Spanish, Portuguese, German, and Italian.
This rollout follows regulatory challenges with European privacy authorities. Notably, the European version of Meta AI has not been trained on local users' data, addressing earlier concerns about user consent and data privacy. Users can access the assistant through a blue circle icon across Meta's apps, including Facebook, Instagram, WhatsApp, and Messenger, with plans to expand the feature to group chats, starting with WhatsApp. Meta is also mulling ad-free subscriptions for users in the UK following legal challenges to its personalised advertising practices.
Alex Knapp@Alex Knapp
//
References: Meta, Alex Knapp
Meta's open-source large language model (LLM), Llama, has achieved a significant milestone, surpassing one billion downloads since its release in 2023. This achievement underscores the growing influence of Llama in the AI community, attracting both researchers and enterprises seeking to integrate it into various applications. The model's popularity has surged, with companies like Spotify, AT&T, and DoorDash adopting Llama-based models for production environments.
Meta views open sourcing its AI models as strategically crucial, and sees each Llama download as a step toward making the model an industry standard. Llama's widespread use hasn't been without challenges, however, including copyright lawsuits alleging the model was trained on copyrighted books without permission. The company plans to introduce multimodal models and improved reasoning capabilities, and has been working to incorporate innovations from competing models to enhance Llama's performance.
Dr. Hura@Digital Information World
//
Meta is planning to launch a standalone Meta AI app in the second quarter, according to a CNBC report. The move aims to make Meta AI, a ChatGPT-like chatbot service, more accessible and more competitive with other AI assistants such as OpenAI's ChatGPT and Google's Gemini. Currently, Meta AI is available within Facebook, Instagram, WhatsApp, and Messenger, and via a standalone website. Meta is also considering a paid subscription for Meta AI, though pricing and features have yet to be revealed.
Alongside its AI app development, Meta's Aria Gen 2 is advancing robotics. Aria Gen 2 is a next-generation research platform from Meta's Project Aria, built around egocentric AI and first-person perception. By capturing what a wearer sees, hears, and experiences through RGB cameras, SLAM sensors, IMUs, and eye-tracking, it provides real-time perception and on-device AI processing, so robots can be trained from human egocentric recordings. The approach reportedly makes training up to 400% faster while also making it more scalable and cost-efficient, giving robots a more human-like understanding of the world.
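As a rough illustration of what one egocentric training sample might bundle together, here is a hypothetical record type; the field names and shapes are assumptions, not Project Aria's actual data schema.

from dataclasses import dataclass
import numpy as np

@dataclass
class EgocentricFrame:
    # Hypothetical per-timestep record combining the sensor streams
    # listed above; the real Aria data format differs.
    timestamp_ns: int
    rgb: np.ndarray        # (H, W, 3) camera image
    slam_pose: np.ndarray  # (4, 4) world-from-device transform
    imu_accel: np.ndarray  # (3,) accelerometer reading, m/s^2
    imu_gyro: np.ndarray   # (3,) gyroscope reading, rad/s
    gaze_xy: np.ndarray    # (2,) eye-tracking point in image coordinates

frame = EgocentricFrame(
    timestamp_ns=0,
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    slam_pose=np.eye(4),
    imu_accel=np.zeros(3),
    imu_gyro=np.zeros(3),
    gaze_xy=np.array([320.0, 240.0]),
)
print(frame.rgb.shape)  # (480, 640, 3)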
Jean-marc Mommessin@MarkTechPost
//
Meta is making significant strides in artificial intelligence across different sectors. Its Aria Gen 2 platform is transforming robot learning by using egocentric AI and first-person perception, enabling robots to learn more efficiently: in a Georgia Tech demonstration, training was 400% faster than with traditional methods. By equipping robots with a human-like understanding of the world, Aria Gen 2 unlocks faster, more scalable, and cost-efficient robot training.
Beyond robotics, Meta's AI advancements are also reaching sports. Sevilla FC is leveraging Llama and IBM watsonx to scout soccer talent through Scout Advisor, a generative AI-driven scouting tool. Recruiters can ask Scout Advisor about the specific player attributes they're looking for and receive a list of matching players along with AI-generated performance summaries. Meta is also planning a standalone app for its AI assistant, Meta AI, to compete with chatbots like OpenAI's ChatGPT and Google's Gemini.
@www.reuters.com
//
References: Engineering at Meta, www.eweek.com
Meta is expanding its artificial intelligence research into the realm of humanoid robotics, aiming to develop AI-driven software and sensors. This initiative focuses on creating intelligent machines that can interact with the physical world, potentially powering consumer robots. The company's efforts are concentrated on "embodied AI," which combines intelligence with real-world interactions, enabling robots to move, sense, and make decisions in three-dimensional environments.
Meta is not initially planning to release its own branded robots. Instead, the company is concentrating on developing AI-powered software and sensor technology that can be utilized by other robotics manufacturers. This strategy positions Meta alongside tech giants like Tesla, Apple, and Google, all of which are also investing in the robotics sector. Meta is also prioritizing user data protection by using source code analysis to detect and prevent unauthorized data scraping across its platforms, including Facebook, Instagram, and Reality Labs.
Facebook@Meta
//
References: Meta, www.eweek.com
Meta is significantly expanding its focus on artificial intelligence by venturing into the realm of humanoid robotics. The company's efforts center on developing AI-driven software and sensor technology that could potentially power future consumer robots. Instead of building its own physical robots, Meta intends to create AI-powered software and sensors that can be integrated into robots manufactured by other companies. This approach aims to leverage the Llama AI platform to create intelligent machines capable of navigating and interacting within the physical world.
Meta is also prioritizing user data protection by implementing static analysis tools to prevent unauthorized data scraping. These tools are integrated into the development workflow to detect potential scraping vectors across the Facebook, Instagram, and parts of the Reality Labs codebases. Furthermore, Meta is updating its Facebook Live video storage policy: beginning February 19th, new live broadcasts can be replayed, downloaded, or shared for 30 days, after which they are automatically removed. Existing live videos older than 30 days will also be removed after a 90-day notice period in which users can download or transfer the content.
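A toy example shows the kind of pattern such a static check might flag; the endpoint names and the rule itself are invented for illustration, since Meta's tooling is not public.

import ast

SCRAPE_SINKS = {"get_user_profiles", "fetch_friend_list"}  # hypothetical bulk-data APIs

class ScrapeVectorChecker(ast.NodeVisitor):
    # Toy static-analysis pass: flag calls to bulk user-data endpoints
    # that appear inside loops, a classic scraping vector.
    def __init__(self):
        self.findings = []
        self._loop_depth = 0

    def visit_For(self, node):
        self._loop_depth += 1
        self.generic_visit(node)
        self._loop_depth -= 1

    visit_While = visit_For

    def visit_Call(self, node):
        name = getattr(node.func, "id", "") or getattr(node.func, "attr", "")
        if name in SCRAPE_SINKS and self._loop_depth > 0:
            self.findings.append((node.lineno, name))
        self.generic_visit(node)

source = """
for uid in all_user_ids:
    profile = get_user_profiles(uid)  # bulk fetch inside a loop
"""
checker = ScrapeVectorChecker()
checker.visit(ast.parse(source))
print(checker.findings)  # [(3, 'get_user_profiles')]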
@the-decoder.com
//
References: engineering.fb.com, Neoscope News - Futurism
Meta is making significant strides in artificial intelligence and global connectivity. Meta AI has achieved an 80% accuracy rate in reconstructing typed sentences from brain activity using MEG and EEG technology. This research, conducted in collaboration with the Basque Center on Cognition, Brain and Language, uses AI to decode brain signals captured as participants typed. The system shows potential for advancing our understanding of language processing in the brain.
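Stripped to its core, the decoding task is a classification problem: predict which key was pressed from a window of multichannel signal. The toy sketch below uses random data and assumed dimensions and is nothing like Meta's actual deep model; it only illustrates the framing.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 500, 32, 100  # hypothetical recording dimensions
X = rng.normal(size=(n_trials, n_channels * n_times))  # flattened signal windows
y = rng.integers(0, 26, size=n_trials)                 # which key (a-z) was pressed

clf = LogisticRegression(max_iter=200).fit(X[:400], y[:400])
print("toy accuracy:", clf.score(X[400:], y[400:]))  # near chance on random data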
Meta is also expanding its global infrastructure with Project Waterworth, a massive subsea cable endeavor. This project involves laying over 50,000 km of cable to connect the U.S., India, Brazil, South Africa, and other key regions. Project Waterworth aims to provide industry-leading connectivity, promoting economic cooperation, digital inclusion, and AI innovation globally.
@the-decoder.com
//
Meta is significantly increasing its investment in the development of AI-powered humanoid robots, with plans to create a universal hardware and software platform. The company's Reality Labs division has formed a team focused on building robots capable of handling everyday tasks in both home and work environments. Meta aims to develop the fundamental building blocks, including AI systems, sensors, and software, that other companies can use to manufacture and sell their own robots, aiming for its platform to become the 'Android' of the humanoid robot industry.
The company believes its experience with virtual and augmented reality gives it a unique advantage, allowing it to accelerate robot development with data collected from its AR and VR devices. Bloomberg reports that Meta has already started discussions with robotics manufacturers such as Unitree Robotics and Figure AI. Meta says it won't release dangerous AI systems, and its investment signals a major push into the rapidly evolving field of humanoid robotics, where it will compete with Google DeepMind, OpenAI, and Tesla.
@docs.google.com
//
References: PCMag Middle East ai, techcrunch.com
Meta is partnering with UNESCO to launch the Language Technology Partner Program, aiming to incorporate lesser-known Indigenous languages into Meta AI. The program seeks contributors to provide speech recordings, transcriptions, pre-translated sentences, and written work in target languages. This data will be used to build Meta’s AI systems with the goal of creating systems that can understand and respond to complex human needs, regardless of language or cultural background. Applications to join the program will be open until March 7, 2025.
The government of Nunavut, a territory in northern Canada, has already signed up for the program. Meta also released an open-source machine translation benchmark to evaluate the performance of language translation models. Separately, CEO Mark Zuckerberg announced that Meta plans to end 2025 with "more than 1.3 million GPUs," roughly doubling its current GPU capacity to power its AI assistants and the company's upcoming Llama 4 model.
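The benchmark's contents aren't detailed here, but evaluating a translation model generally means scoring system outputs against reference translations. A minimal sketch with the sacreBLEU library follows; the sentences are invented and the metric choice is an assumption.

import sacrebleu

hypotheses = ["the cat sits on the mat", "he reads a book"]        # system outputs
references = [["the cat sat on the mat", "he is reading a book"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")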
@docs.google.com
//
References: PCMag Middle East ai, techcrunch.com
Meta is partnering with UNESCO to incorporate lesser-known Indigenous languages into Meta AI. The initiative, called the Language Technology Partner Program, seeks contributors to provide over 10 hours of speech recordings with transcriptions, large amounts of written text, and sets of translated sentences. Meta's AI teams will integrate these languages into AI speech recognition and translation models, which will be open-sourced upon completion.
The program aims to support UNESCO's work in preserving linguistic diversity and promoting digital inclusivity. The government of Nunavut in northern Canada, where endangered Native Inuit languages are spoken, has already joined as a partner. Meta believes the collaboration will help create intelligent systems that understand and respond to complex human needs regardless of language or cultural background, and is accepting applications from collaborators until March 7, 2025.
@techcrunch.com
//
References: www.techmeme.com, techcrunch.com
Meta is actively developing AI safety processes to mitigate potential misuse of its models. The company is defining the types of AI systems it deems too risky to release publicly, including systems that could be used to aid cyberattacks or chemical and biological attacks. Meta will flag such systems and may halt their development altogether if the risks are judged too high.
To determine the risk level, Meta will rely on input from internal and external researchers, reviewed by senior-level decision-makers, rather than solely on empirical tests. If a system is deemed high-risk, access will be limited, and it won't be released until mitigations reduce the risk to moderate levels. For critical-risk AI, which could lead to catastrophic outcomes, Meta will implement more stringent measures. Anthropic is addressing AI safety similarly through its Constitutional Classifiers, designed to guard against jailbreaks and monitor content for harmful outputs, and leading tech groups, including Microsoft, are investing in comparable safety systems.
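The tiered policy can be rendered schematically as below; the tier names follow this article, while the encoded actions are a simplification of a process that in practice relies on expert review rather than a lookup.

from enum import Enum

class RiskTier(Enum):
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"

def release_decision(tier: RiskTier) -> str:
    # Simplified rendering of the policy described above.
    if tier is RiskTier.CRITICAL:
        return "halt development; apply stringent safeguards"
    if tier is RiskTier.HIGH:
        return "limit access; release only after mitigations reduce risk to moderate"
    return "eligible for release"

print(release_decision(RiskTier.HIGH))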
Kyle Wiggers@TechCrunch
//
Meta, led by CEO Mark Zuckerberg, is significantly increasing its investment in AI infrastructure, projecting capital expenditure of $60 to $80 billion in 2025. The investment is primarily targeted at building new data centers and expanding GPU capacity, with the goal of owning over 1.3 million GPUs by the end of the year. The aggressive move responds to intensifying competition in the AI sector, fueled by the rise of open-source models and rapid advances from Chinese AI companies, as well as an infrastructure race with rivals such as Microsoft, which is also spending billions on AI.
The expansion is intended to support Meta's generative AI ambitions for Llama 4: developing an "AI engineer" and becoming a leading AI assistant serving billions of users. The company is bringing online a massive data center with over 1 gigawatt of compute this year, eventually rising to 2 gigawatts, a buildout it expects to drive core product development and extend American technology leadership. Meta's planned investment is part of a broader industry trend in which rivals like Microsoft and OpenAI are making similarly large financial commitments to AI infrastructure.
@www.cnbc.com
//
References: africa.businessinsider.com, techcrunch.com
Meta is significantly increasing its investment in artificial intelligence, with CEO Mark Zuckerberg pledging "hundreds of billions" of dollars in long-term spending. This strategic move comes as Meta reports a strong fourth quarter, boasting a 21% year-over-year revenue increase to $48.4 billion and a 49% jump in net income to $20.8 billion. Zuckerberg views this massive investment in AI infrastructure as a crucial "strategic advantage" for Meta's future, enabling them to compete effectively and serve their billions of users. This move is in part a response to the emergence of new competitors like DeepSeek.
Meta's Reality Labs, while still operating at a loss of $4.97 billion in Q4, showed positive signs, with revenue up 1% year-over-year to $1.1 billion. Internal memos also reveal that Reality Labs surpassed nearly all of its sales and user targets for 2024, with 40% overall sales growth. Meta remains focused on developing open-source AI models, aiming to make Llama 4 the most competitive in the world; this open-source strategy is seen as a way for Meta to innovate and compete with established AI leaders despite recent market anxieties over DeepSeek.
@www.meta.com
//
Meta has unveiled SeamlessM4T, an AI model for real-time speech translation. The system translates speech and text across 101 languages and offers direct speech-to-speech translation for 36 of them. Unlike traditional pipelines that chain multiple steps, SeamlessM4T performs near-instant translation, much like the fictional Babel Fish. Trained on millions of hours of multilingual audio, the model surpasses existing systems in both accuracy and noise resilience.
SeamlessM4T also excels at automatic speech recognition, demonstrating human-level performance even in noisy environments, according to studies. The model translates text with up to 23% more accuracy than existing systems, and it can filter out background noise and adapt to different speakers, a substantial advance for machine translation. Meta intends to make resources publicly available for non-commercial use to encourage further research in inclusive speech translation technology. The technology is being rolled out across Meta products such as Ray-Ban Meta glasses and Instagram to enable real-time communication.
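The model weights are publicly available, so text-to-text translation can be tried directly; a minimal sketch using the Hugging Face transformers library and the SeamlessM4T v2 checkpoint follows (whether this checkpoint matches the exact variant described above is an assumption).

from transformers import AutoProcessor, SeamlessM4Tv2Model

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

# Translate English text to French; language codes use three-letter tags.
text_inputs = processor(text="How is the weather today?", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))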