Claire Prudhomme@marketingaiinstitute.com
//
References: shellypalmer.com, www.marketingaiinstitute.com
Meta is making a significant push towards fully automating ad creation, aiming to allow businesses to generate ads with minimal input by 2026. According to CEO Mark Zuckerberg, the goal is to enable advertisers to simply provide a product image and budget, then let AI handle the rest, including generating copy, creating images and video, deploying the ads, targeting the audience, and even recommending spend. This strategic move, described as a "redefinition" of the advertising industry, seeks to reduce friction for advertisers and scale performance using AI, which is particularly relevant given that over 97% of Meta's $134 billion in 2023 revenue was tied to advertising.
This level of automation goes beyond simply tweaking existing ads; it promises concept-to-completion automation with personalization built in. Meta's AI tools are expected to show users different versions of the same ad based on factors such as their location. The company believes small businesses will particularly benefit, as generative tools could level the creative playing field and remove the need for agencies, studios, or in-house teams. Alex Schultz, Meta’s chief marketing officer and vice president of Analytics, assures agencies that AI will enable them to focus precious time and resources on creativity. While Meta envisions a streamlined and efficient advertising process, some are concerned about the potential impact on brand standards and the resonance of AI-generated content compared to human-crafted campaigns. The move has also sent shock waves through the traditional marketing industry, with fears that agencies could lose control over the ad creation process. As competitors like Google also push similar tools, the trend suggests a shift where the creative brief becomes a prompt, the agency becomes an algorithm, and the marketer becomes a curator of generative campaigns. Recommended read:
References :
@www.marktechpost.com
//
Meta is undergoing significant changes within its AI division, aiming to accelerate development and integrate AI more deeply into its advertising platform. The company is restructuring its AI organization into two teams: one focused on AI products and the other on advancing Artificial General Intelligence (AGI) research, particularly for its Llama models. This reorganization comes amidst a substantial talent exodus, with a significant portion of the original Llama research team having departed, many joining competitors like Mistral AI. Despite these challenges, Meta AI has reached a milestone of 1 billion monthly active users across Facebook, Instagram, WhatsApp, and Messenger, highlighting the broad reach of its AI initiatives.
Meta's focus is now shifting towards monetizing its AI capabilities, particularly through advertising. By the end of 2026, Meta intends to enable advertisers to fully create and target campaigns using AI, potentially disrupting traditional advertising agencies. Advertisers will be able to provide a product image and budget, and Meta's AI would generate the entire ad, including imagery, video, and text, while also targeting specific user demographics. This move aims to attract more advertisers, especially small and mid-sized businesses, by simplifying the ad creation process and leveraging Meta's extensive user data for targeted campaigns. However, Meta's increased reliance on AI raises concerns regarding data privacy and ethical considerations. The company has begun using data from Facebook and Instagram users, including posts, photos, and interactions with Meta AI, to train its AI models. Furthermore, Meta is reportedly planning to automate up to 90% of its risk assessments across Facebook and Instagram, including product development and rule changes. This shift raises questions about potential oversights and the impact on user safety, given the reliance on AI to evaluate potential risks and enforce policies. Recommended read:
References :
@www.eweek.com
//
Meta is making a significant move into military technology, partnering with Anduril Industries to develop augmented and virtual reality (XR) devices for the U.S. Army. This collaboration reunites Meta with Palmer Luckey, the founder of Oculus who was previously fired from the company. The initiative aims to provide soldiers with enhanced situational awareness on the battlefield through advanced perception capabilities and AI-enabled combat tools. The devices, potentially named EagleEye, will integrate Meta's Llama AI models with Anduril's Lattice system to deliver real-time data and improve operational coordination.
The new XR headsets are designed to support real-time threat detection, such as identifying approaching drones or concealed enemy positions. They will also provide interfaces for operating AI-powered weapon systems. Anduril states that the project will save the U.S. military billions of dollars by using high-performance components and technology originally developed for commercial use. The partnership reflects a broader trend of Meta aligning more closely with national security interests. In related news, Meta's research team has made a surprising discovery that shorter reasoning chains can significantly improve AI accuracy. A study released by Meta and The Hebrew University of Jerusalem found that AI models achieve 34.5% better accuracy when using shorter reasoning processes. This challenges the conventional belief that longer, more complex reasoning chains lead to better results. The researchers developed a new method called "short-m@k," which runs multiple reasoning attempts in parallel, halting computation once the first few processes are complete and selecting the final answer through majority voting. This method could reduce computing costs by up to 40% while maintaining performance levels. Recommended read:
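The "short-m@k" procedure described above lends itself to a compact sketch: launch k reasoning attempts concurrently, keep only the first m to finish (which, for autoregressive models, tend to be the shortest chains), and majority-vote over their answers. Here `generate` is a hypothetical stand-in for a model call; the paper's actual sampling and stopping details are not reproduced.

```python
import collections
from concurrent.futures import ThreadPoolExecutor, as_completed

def short_m_at_k(generate, prompt, k=8, m=3):
    """Run k reasoning attempts in parallel, keep the first m to
    complete (in practice the shortest chains), and majority-vote."""
    answers = []
    with ThreadPoolExecutor(max_workers=k) as pool:
        futures = [pool.submit(generate, prompt) for _ in range(k)]
        for future in as_completed(futures):
            answers.append(future.result())
            if len(answers) == m:       # halt once the first m finish
                for f in futures:
                    f.cancel()          # abandon the longer chains
                break
    # majority vote over the earliest (shortest) completions
    return collections.Counter(answers).most_common(1)[0][0]
```

Because computation stops as soon as the first m attempts return, the longer chains are never waited on, which is where the reported compute savings come from.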
References :
staff@insideAI News
//
References: insideAI News, Ken Yeung
Meta is partnering with Cerebras to enhance AI inference speeds within Meta's new Llama API. This collaboration combines Meta's open-source Llama models with Cerebras' specialized inference technology, aiming to provide developers with significantly faster performance. According to Cerebras, developers building on the Llama 4 Cerebras model within the API can expect speeds up to 18 times quicker than traditional GPU-based solutions. This acceleration is expected to unlock new possibilities for building real-time and agentic AI applications, making complex tasks like low-latency voice interaction, interactive code generation, and real-time reasoning more feasible.
This partnership allows Cerebras to expand its reach to a broader developer audience, strengthening its existing relationship with Meta. Since launching its inference solutions in 2024, Cerebras has emphasized its ability to deliver rapid Llama inference, serving billions of tokens through its AI infrastructure. Andrew Feldman, CEO and co-founder of Cerebras, stated that the company is proud to make Llama API the fastest inference API available, empowering developers to create AI systems previously unattainable with GPU-based inference clouds. Independent benchmarks by Artificial Analysis support this claim, indicating that Cerebras achieves significantly higher token processing speeds compared to platforms like ChatGPT and DeepSeek. Developers will have direct access to the enhanced Llama 4 inference by selecting Cerebras within the Llama API. Meta also continues to innovate with its AI app, testing new features such as "Reasoning" mode and "Voice Personalization," designed to enhance user interaction. The “Reasoning” feature could potentially offer more transparent explanations for the AI’s responses, while voice settings like "Focus on my voice" and "Welcome message" could offer more personalized audio interactions, especially relevant for Meta's hardware ambitions in areas such as smart glasses and augmented reality devices. Recommended read:
References :
@www.marktechpost.com
//
Meta is making significant strides in the AI landscape, highlighted by the release of Llama Prompt Ops, a Python package aimed at streamlining prompt adaptation for Llama models. This open-source tool helps developers enhance prompt effectiveness by transforming inputs to better suit Llama-based LLMs, addressing the challenge of inconsistent performance across different AI models. Llama Prompt Ops facilitates smoother cross-model prompt migration and improves performance and reliability, featuring a transformation pipeline for systematic prompt optimization.
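Llama Prompt Ops' own API is not detailed above, so the following is only a generic illustration of the transformation-pipeline idea it describes: an ordered series of rewrites applied to a prompt when migrating it to a Llama-style model. The tag mapping and the transforms themselves are hypothetical, not the library's actual interface.

```python
def migrate_prompt(prompt, transforms):
    """Apply a sequence of prompt transformations in order.
    A generic pipeline sketch, not Llama Prompt Ops' real API."""
    for transform in transforms:
        prompt = transform(prompt)
    return prompt

# Hypothetical transforms one might register when targeting Llama-style models
transforms = [
    lambda p: p.replace("System:", "<|system|>"),   # illustrative tag mapping
    lambda p: p.strip() + "\nRespond concisely.",   # illustrative style rule
]
```

The value of such a pipeline is that each model-specific quirk becomes one small, testable step, so the same source prompt can be re-targeted systematically rather than rewritten by hand.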
Meanwhile, Meta is expanding its AI strategy with the launch of a standalone Meta AI app, powered by Llama 4, to compete with rivals like Microsoft’s Copilot and ChatGPT. This app is designed to function as a general-purpose chatbot and a replacement for the “Meta View” app used with Meta Ray-Ban glasses, integrating a social component with a public feed showcasing user interactions with the AI. Meta also previewed its Llama API, designed to simplify the integration of its Llama models into third-party products, attracting AI developers with an open-weight model that supports modular, specialized applications. However, Meta's AI advancements are facing legal challenges, as a US judge is questioning the company's claim that training AI on copyrighted books constitutes fair use. The case, focusing on Meta's Llama model, involves training data including works by Sarah Silverman. The judge raised concerns that using copyrighted material to create a product capable of producing an infinite number of competing products could undermine the market for original works, potentially obligating Meta to pay licenses to copyright holders. Recommended read:
References :
Alexey Shabanov@TestingCatalog
//
Meta is actively expanding the capabilities of its standalone Meta AI app, introducing new features focused on enhanced personalization and functionality. The company is developing a "Discover AIs" tab, which could serve as a hub for users to explore and interact with various AI assistants, potentially including third-party or specialized models. This aligns with Meta’s broader strategy to integrate personalized AI agents across its apps and hardware. Meta launched a dedicated Meta AI app powered by Llama 4 that focuses on offering more natural voice conversations and can leverage user data from Facebook and Instagram to provide tailored responses.
Meta is also testing a "reasoning" mode, suggesting the company aims to provide more transparent and advanced explanations in its AI assistant's responses. While the exact implementation remains unclear, the feature could emphasize structured logic or chain-of-thought capabilities, similar to developments in models from OpenAI and Google DeepMind. This would give users greater insight into how the AI derives its answers, potentially boosting trust and utility for complex queries. Further enhancing user experience, Meta is working on new voice settings, including "Focus on my voice" and "Welcome message." "Focus on my voice" could improve the AI's ability to isolate and respond to the primary user's speech in environments with multiple speakers. The "Welcome message" feature might offer a customizable greeting or onboarding experience when the assistant is activated. These features are particularly relevant for Meta’s hardware ambitions, such as its Ray-Ban smart glasses and future AR devices, where voice interaction plays a critical role. To ensure privacy, Meta is also developing Private Processing for AI tools on WhatsApp, allowing users to leverage AI in a secure way. Recommended read:
References :
Kevin Okemwa@windowscentral.com
//
Meta is aggressively pursuing the development of AI-powered "friends" to combat what CEO Mark Zuckerberg identifies as a growing "loneliness epidemic." Zuckerberg envisions these AI companions as social chatbots capable of engaging in human-like interactions. This initiative aims to bridge the gap in human connectivity, which Zuckerberg believes is lacking in today's fast-paced world. He suggests that virtual friends might help individuals who struggle to establish meaningful connections with others in real life.
Zuckerberg revealed that Meta is launching a standalone Meta AI app powered by the Llama 4 model. This app is designed to facilitate more natural voice conversations and provide tailored responses by leveraging user data from Facebook and Instagram. This level of personalization aims to create a more engaging and relevant experience for users seeking companionship and interaction with AI. Furthermore, the CEO indicated that Meta is also focusing on AI smart glasses. He sees these glasses as a core element of the future of technology. However, Zuckerberg acknowledged that the development of AI friends is still in its early stages, and there may be societal stigmas associated with forming connections with AI-powered chatbots. He also stated that while smart glasses are a point of focus for the company, it's unlikely they will replace smartphones. In addition to the development of AI companions, Meta is also pushing forward with other AI initiatives, including integrating the new Meta AI app with the Meta View companion app for its Ray-Ban Meta smart glasses and launching an AI assistant app that personalizes its responses to user data. Recommended read:
References :
@cyberinsider.com
//
WhatsApp has unveiled 'Private Processing', a new AI infrastructure designed to enable AI features while maintaining user privacy. This technology allows users to utilize advanced AI capabilities, such as message summarization and composition tools, by offloading tasks to privacy-preserving cloud servers. The system is designed to ensure that neither Meta nor WhatsApp can access the content of end-to-end encrypted chats during AI processing. This move comes as messaging platforms seek to integrate AI capabilities without compromising secure communications, addressing user concerns about the privacy implications of AI integration within the popular messaging app.
WhatsApp already includes a light blue circle that gives users access to the Meta AI assistant, but interactions with that assistant are not shielded from Meta the way end-to-end encrypted WhatsApp chats are. Private Processing is meant to address this gap with what the company describes as a carefully architected, purpose-built platform that processes data for AI tasks without making the information accessible to Meta, WhatsApp, or any other party. This is achieved by processing messages in Trusted Execution Environments (TEEs), which keep the data confidential and secure even while it resides on cloud servers, and by ensuring that end-to-end encrypted chats themselves remain invisible to Meta or anyone else. The feature will be optional, giving users complete control over how and when they choose to use it. Meta security engineering director Chris Rohlf says the work was not just about managing the expanded threat model and meeting expectations for privacy and security; it was also about careful consideration of the user experience and making the feature opt-in. Although initial reviews of the scheme's integrity by outside researchers have been positive, some note that the move toward AI features could ultimately put WhatsApp on a slippery slope. Private Processing is not yet available to WhatsApp users and will roll out gradually over the coming weeks. Recommended read:
References :
@about.fb.com
//
References: about.fb.com, techxplore.com
Meta has launched the Meta AI app, a standalone AI assistant designed to compete with ChatGPT. This marks Meta's initial step in building a more personalized AI experience for its users. The app is built with Llama 4, a model Meta touts as more cost-efficient than its competitors. Users can access Meta AI through voice conversations and interactions will be personalized over time as the AI learns user preferences and contexts across Meta's various apps.
The Meta AI app includes a Discover feed, enabling users to share and explore how others are utilizing AI. It also replaces Meta View as the companion app for Ray-Ban Meta smart glasses, allowing for seamless conversations across glasses, mobile app, and desktop interfaces. According to Meta CEO Mark Zuckerberg, the app is designed to be a personal AI, starting with basic context about user interests and evolving to incorporate more comprehensive knowledge from across Meta's platforms. With the introduction of the Meta AI app, Meta aims to provide a direct path to its generative AI models for its users, similar to the approach taken by OpenAI with ChatGPT. The release comes as OpenAI leads the straight-to-user AI market with its ChatGPT assistant, which is regularly updated with new capabilities. Zuckerberg noted that a billion people are already using Meta AI across Meta's apps. Recommended read:
References :
Michael Nuñez@AI News | VentureBeat
//
Meta has officially launched its Llama API, marking a significant entry into the commercial AI services market. This move allows developers to easily explore and fine-tune artificial intelligence models, accessing inference speeds up to 18 times faster than traditional GPU-based solutions. Meta's partnership with Cerebras is central to this achievement, enabling processing speeds of 2,648 tokens per second for Llama 4. This partnership transforms Meta’s popular open-source Llama models into a commercial service and positions them to compete directly with OpenAI, Anthropic, and Google.
The Llama API leverages Cerebras’ specialized AI chips to deliver unprecedented speed increases. This breakthrough is a result of Meta's collaboration with Cerebras, whose system outperforms competitors like SambaNova and Groq, as well as traditional GPU-based services. The API provides access to a lightweight software development kit (SDK) compatible with OpenAI, streamlining the process for developers to convert models. As part of the launch, Meta is also providing partners with tools to detect and prevent threats such as phishing attacks and various types of online fraud created using AI technologies. Furthermore, Meta is working with Groq to accelerate the official Llama API, serving it on the world’s most efficient inference chip. Meta is also launching the Meta AI app, its first dedicated app for its AI assistant, powered by Llama 4. The app will include a discover feed so people can share and explore how others are using the AI. It is intended to provide a more personalized experience by adapting to user preferences and maintaining context across conversations. The Meta AI app represents a new way for users to interact with Meta's AI assistant beyond existing integrations with WhatsApp, Instagram, Facebook, and Messenger. Recommended read:
References :
Facebook@Meta Newsroom
//
Meta has launched its first dedicated AI application, directly challenging ChatGPT in the burgeoning AI assistant market. The Meta AI app, built on the Llama 4 large language model (LLM), aims to offer users a more personalized AI experience. The application is designed to learn user preferences, remember context from previous interactions, and provide seamless voice-based conversations, setting it apart from competitors. This move is a significant step in Meta's strategy to establish itself as a major player in the AI landscape, offering a direct path to its generative AI models.
The new Meta AI app features a 'Discover' feed, a social component allowing users to explore how others are utilizing AI and share their own AI-generated creations. The app also replaces Meta View as the companion application for Ray-Ban Meta smart glasses, enabling a fluid experience across glasses, mobile, and desktop platforms. Users will be able to initiate conversations on one device and continue them seamlessly on another. To use the application, a Meta products account is required, though users can sign in with their existing Facebook or Instagram profiles. CEO Mark Zuckerberg emphasized that the app is designed to be a user’s personal AI, highlighting the ability to engage in voice conversations. The app begins with basic information about a user's interests, evolving over time to incorporate more detailed knowledge about the user and their network. The launch comes as rival companies race to release their own AI models, and it lets Meta demonstrate the power and flexibility of its in-house Llama 4 models to both consumers and third-party software developers. Recommended read:
References :
@techcrunch.com
//
References: techcrunch.com, Last Week in AI
Meta's release of Llama 4, a multimodal LLM, has stirred controversy in the AI community. While it boasts multimodality and a large context window, the model has faced criticism due to its performance on a popular chat benchmark, LM Arena. Specifically, the "vanilla" version of the Maverick AI model, a variant of Llama 4, ranked below competitors like OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and Google's Gemini 1.5 Pro, despite these models being several months old. This poor ranking raises questions about the model's reliability and the validity of the evaluation methodologies used.
Meta's initial strategy of using an experimental, unreleased version of Llama 4 Maverick to achieve a high score on LM Arena further exacerbated the issue. This prompted LM Arena maintainers to change their policies and re-evaluate the unmodified version, revealing its comparatively weak performance. Meta explained that the experimental version was optimized for conversationality, which may have artificially inflated its score on LM Arena. However, experts caution that tailoring a model to a specific benchmark can be misleading and may not accurately reflect its performance in real-world applications. The controversy surrounding Llama 4's benchmark results highlights the challenges in evaluating and comparing large language models. While benchmarks like LM Arena can provide some insights, they may not fully capture the nuances of model performance across different contexts. Meta's spokesperson stated that they experiment with "all types of custom variants" and are excited to see how developers customize Llama 4 for their own use cases, emphasizing the open-source nature of the release and the potential for future improvements based on community feedback. Recommended read:
References :
Mia Sato@The Verge
//
Meta, the parent company of Instagram, is intensifying its efforts to use AI to identify teenagers using adult accounts on its platform. The initiative aims to ensure that young users are placed into the more restrictive "Teen Account" settings, which offer enhanced protections and address child safety concerns. Instagram is actively working to enroll more teens into these accounts, automatically moving suspected minors into Teen Account settings to provide a safer online experience for younger users.
As part of this effort, Instagram will begin sending notifications to parents, providing information on the importance of ensuring their teens provide accurate age information online. The notifications will also include tips for parents to check and confirm their teens' ages together. Meta is using AI to proactively look for teen accounts that have an adult birthday and change settings for users it suspects are kids. This AI-driven age detection system analyzes various signals to determine if a user is under 18, such as messages from friends containing birthday wishes. In addition to age detection enhancements, Meta AI is introducing Collaborative Reasoner (Coral), an AI framework designed to evaluate and improve collaborative reasoning skills in large language models (LLMs). Coral reformulates traditional reasoning problems into multi-agent tasks, where two agents must reach consensus through natural conversation. This framework aims to emulate real-world social dynamics, requiring agents to challenge incorrect conclusions and negotiate conflicting viewpoints to arrive at joint decisions, furthering Meta's investment into responsible AI development. Recommended read:
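Coral's actual interface is not described above, but the consensus-through-conversation idea can be illustrated with a toy loop: two agents alternate proposals, and the dialogue ends only when one agent repeats, i.e. agrees with, the other's answer. The function signatures and stopping rule here are assumptions for illustration, not Coral's real design.

```python
def collaborative_solve(agent_a, agent_b, problem, max_turns=6):
    """Toy two-agent consensus loop in the spirit of Coral: agents take
    turns proposing answers and must agree before the dialogue ends."""
    transcript, last_answer = [], None
    agents = [agent_a, agent_b]
    for turn in range(max_turns):
        proposal = agents[turn % 2](problem, transcript)
        transcript.append(proposal)
        if proposal == last_answer:      # both agents now agree
            return proposal, transcript
        last_answer = proposal           # the other agent may challenge this
    return last_answer, transcript       # no consensus within the turn budget
```

Each agent sees the full transcript, so a disagreeing agent can challenge an incorrect conclusion on its turn, which is the social dynamic the framework is meant to evaluate.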
References :
@www.quantamagazine.org
//
References: finance.yahoo.com, Quanta Magazine
Researchers are exploring innovative methods to enhance the performance of artificial intelligence language models by minimizing their reliance on direct language processing. This approach involves enabling models to operate more within mathematical or "latent" spaces, reducing the need for constant translation between numerical representations and human language. Studies suggest that processing information directly in these spaces can improve efficiency and reasoning capabilities, as language can sometimes constrain and diminish the information retained by the model. By sidestepping the traditional language-bound processes, AI systems may achieve better results by "thinking" independently of linguistic structures.
Meta has announced plans to resume training its AI models using publicly available content from European users. This move aims to improve the capabilities of Meta's AI systems by leveraging a vast dataset of user-generated information. The decision comes after a period of suspension prompted by concerns regarding data privacy, which were raised by activist groups. Meta is emphasizing that the training will utilize public posts and comments shared by adult users within the European Union, as well as user interactions with Meta AI, such as questions and queries, to enhance model accuracy and overall performance. A new method has been developed to efficiently safeguard sensitive data used in AI model training, reducing the traditional tradeoff between privacy and accuracy. This innovative framework maintains an AI model's performance while preventing attackers from extracting confidential information, such as medical images or financial records. By focusing on the stability of algorithms and utilizing a metric called PAC Privacy, researchers have shown that it's possible to privatize almost any algorithm without needing access to its internal workings, potentially making privacy more accessible and less computationally expensive in real-world applications. Recommended read:
References :
@www.thecanadianpressnews.ca
//
Meta is resuming its AI training program using public content shared by adult users in the European Union. This decision follows earlier delays due to regulatory concerns and aims to improve the understanding of European cultures, languages, and history within Meta's AI models. The data utilized will include public posts and comments from platforms like Facebook and Instagram, helping the AI to better reflect the nuances and complexities of European communities. Meta believes this is crucial for developing AI that is not only available to Europeans but is specifically tailored for them.
Meta will begin notifying EU users this week through in-app notifications and email, explaining the types of data they plan to use and how it will enhance AI functionality and the overall user experience. These notifications will include a direct link to an objection form, allowing users to easily opt out of having their data used for AI training purposes. Meta emphasizes that they will honor all objection forms, both those previously received and any new submissions. This approach aims to balance AI development with individual privacy rights under the stringent data privacy rules in the EU. The move comes after Meta had to previously shelve its European AI rollout plans following concerns raised about the privacy implications of its AI tools. Meta also faces ongoing legal challenges related to the use of copyright-protected material in its large language model development. The company maintains that access to EU user data is essential for localizing its AI tools, enabling them to understand everything from dialects and colloquialisms to hyper-local knowledge and unique cultural expressions like humor and sarcasm. Without this data, Meta argues, the region risks being left behind in AI development, particularly as AI models become more advanced and multi-modal. Recommended read:
References :
@www.thecanadianpressnews.ca
//
Meta Platforms, the parent company of Facebook and Instagram, has announced it will resume using publicly available content from European users to train its artificial intelligence models. This decision comes after a pause last year following privacy concerns raised by activists. Meta plans to use public posts, comments, and interactions with Meta AI from adult users in the European Union to enhance its generative AI models. The company says this data is crucial for developing AI that understands the nuances of European languages, dialects, colloquialisms, humor, and local knowledge.
Meta emphasizes that it will not use private messages or data from users under 18 for AI training. To address privacy concerns, Meta will notify EU users through in-app and email notifications, providing them with a way to opt out of having their data used. These notifications will include a link to a form allowing users to object to the use of their data, and Meta has committed to honoring all previously and newly submitted objection forms. The company states its AI is designed to cater to diverse perspectives and to acknowledge the distinctive attributes of various European communities. Meta claims its approach aligns with industry practices, noting that companies like Google and OpenAI have already utilized European user data for AI training. Meta defends its actions as necessary to develop AI services that are relevant and beneficial to European users, and highlights that a panel of EU privacy regulators "affirmed" that its original approach met legal obligations. Groups like NOYB, however, had previously complained and urged regulators to intervene, advocating for an opt-in system in which users actively consent to the use of their data for AI training. Recommended read:
References :
Carl Franzen@AI News | VentureBeat
//
Meta has recently unveiled its Llama 4 AI models, marking a significant advancement in the field of open-source AI. The release includes Llama 4 Maverick and Llama 4 Scout, with Llama 4 Behemoth and Llama 4 Reasoning expected to follow. These models are designed to be more efficient and capable than their predecessors, with a focus on improving reasoning, coding, and creative writing abilities. The move is seen as a response to the growing competition in the AI landscape, particularly from models like DeepSeek, which have demonstrated impressive performance at a lower cost.
The Llama 4 family employs a Mixture of Experts (MoE) architecture for enhanced efficiency. Llama 4 Maverick is a 400 billion parameter sparse model with 17 billion active parameters and 128 experts, making it suitable for general assistant and chat use cases. Llama 4 Scout, with 109 billion parameters and 17 billion active parameters across 16 experts, stands out with its 10 million token context window, enabling it to handle extensive text and large documents effectively, making it suitable for multi-document summarization and parsing extensive user activity. Meta's decision to release these models before LlamaCon gives developers ample time to experiment with them. While Llama 4 Maverick shows strength in areas such as large context retrieval and writing detailed responses, benchmarks indicate that DeepSeek v3 0324 outperforms it in coding and common-sense reasoning. Meta is also exploring the intersection of neuroscience and AI, with researchers like Jean-Rémi King investigating cognitive principles in artificial architectures. This interdisciplinary approach aims to further improve the reasoning and understanding capabilities of AI models, potentially leading to more advanced and human-like AI systems. Recommended read:
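The sparse-versus-active parameter split quoted above (400 billion total, 17 billion active) is the defining property of a Mixture of Experts layer: a router scores all experts, but only a few run per token, so the compute per token stays small while total capacity is large. A minimal NumPy sketch of top-k routing, with dimensions chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, expert_weights, router_weights, top_k=2):
    """Toy Mixture-of-Experts forward pass: the router scores every
    expert, but only the top_k experts actually run for this token."""
    scores = x @ router_weights                   # one score per expert
    top = np.argsort(scores)[-top_k:]             # indices of chosen experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax gates
    # Only the selected experts' parameters are "active" for this token.
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gates, top))

d, num_experts = 8, 16
x = rng.normal(size=d)                            # one token's hidden state
experts = rng.normal(size=(num_experts, d, d))    # all experts' weights
router = rng.normal(size=(d, num_experts))
y = moe_layer(x, experts, router, top_k=2)
```

With 16 experts and top-2 routing, only 2/16 of the expert parameters touch any given token, which is how a model like Maverick keeps 17B active parameters out of 400B total.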
References :
Carl Franzen@AI News | VentureBeat
//
Meta has unveiled its latest advancements in AI with the Llama 4 family of models, consisting of Llama 4 Scout, Maverick, and the upcoming Behemoth. These models are designed for a variety of AI tasks, ranging from general chat to document summarization and advanced reasoning. Llama 4 Maverick, with 17 billion active parameters, is positioned as a general-purpose model for image and text understanding, making it suitable for chat applications and AI assistants, while Llama 4 Scout is geared toward document summarization and other long-context work.
Meta is emphasizing efficiency and accessibility with Llama 4. Both the Maverick and Scout models are designed to run efficiently, even on a single NVIDIA H100 GPU, showcasing Meta's dedication to balancing high performance with reasonable resource consumption. TheSequence #530 highlights that Llama 4 brings unquestionable technical innovations, with its three distinct models (Scout, Maverick, and Behemoth) covering use cases from general-purpose reasoning to long-context and multimodal applications.

The release of Llama 4 also includes enhancements beyond its technical capabilities. In the UK, Ray-Ban Meta glasses are receiving an upgrade that integrates Meta AI features, enabling users to ask questions about their surroundings and receive intelligent, context-aware responses. Live translation on these glasses will follow soon, offering real-time speech translation between English, Spanish, Italian, and French, further enhancing the user experience and accessibility.
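A back-of-the-envelope weight-memory calculation shows why the single-H100 claim above is plausible for Scout's 109 billion parameters. This is our own rough arithmetic, assuming roughly 4-bit quantized weights and ignoring activations and KV cache; it is not an official spec.

```python
# Rough weight-memory estimate (illustrative; weights only).
def weight_gb(n_params_billion, bits_per_weight):
    """Approximate GB needed to store the model weights alone."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

h100_gb = 80  # memory of a single H100 (80 GB variant)
print(weight_gb(109, 16))  # bf16: ~218 GB -> does not fit on one H100
print(weight_gb(109, 4))   # ~4-bit quantized: ~54.5 GB -> fits, with headroom
```

At 16-bit precision the weights alone would need several H100s, which is why low-bit quantization is the usual assumption behind "runs on a single GPU" claims.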
References :
Jonathan Kemper@THE DECODER
//
References:
Analytics India Magazine, THE DECODER
Meta is developing MoCha (Movie Character Animator), an AI system designed to generate complete character animations. MoCha takes natural language prompts describing the character, scene, and actions, along with a speech audio clip, and outputs a cinematic-quality video. This end-to-end model synchronizes speech with facial movements, generates full-body gestures, and maintains character consistency, even managing turn-based dialogue between multiple speakers. The system introduces a "Speech-Video Window Attention" mechanism to solve challenges in AI video generation, improving lip sync accuracy by limiting each frame's access to a specific window of audio data and adding tokens to create smoother transitions.
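The windowing idea described above can be pictured as a banded attention mask: each video frame may attend only to the audio tokens near its own position on the timeline, rather than the whole clip. The sketch below is an illustrative reconstruction of that masking pattern, not Meta's code; the alignment rule and window size are our assumptions.

```python
# Illustrative sketch of a speech-video attention window mask.
def speech_window_mask(n_frames, n_audio, window=3):
    """Row i lists which audio tokens frame i may attend to."""
    mask = []
    for i in range(n_frames):
        # Align frame i to a position on the audio-token timeline.
        center = round(i * (n_audio - 1) / max(n_frames - 1, 1))
        lo, hi = max(0, center - window), min(n_audio, center + window + 1)
        mask.append([lo <= j < hi for j in range(n_audio)])
    return mask

m = speech_window_mask(n_frames=10, n_audio=40, window=3)
# Each frame attends to at most 2*window+1 = 7 audio tokens instead of all 40,
# keeping lip movements tied to the locally relevant stretch of speech.
```

Restricting each frame to a local audio window is what ties mouth shapes to the sounds being spoken at that instant, which is the lip-sync benefit the MoCha team reports.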
MoCha runs on a diffusion transformer model with 30 billion parameters and produces HD video clips of around five seconds at 24 frames per second. For scenes with multiple characters, the team developed a streamlined prompt system that lets users define characters once and reference them with simple tags throughout different scenes. Separately, Meta's AI research head, Joelle Pineau, announced her resignation, effective at the end of May, vacating a high-profile position amid intense competition in AI development.
References :
@tomshardware.com
//
References:
Jon Keegan, www.tomshardware.com
Ant Group has announced a significant AI breakthrough, achieving a 20% reduction in AI training costs by training models on domestically produced Chinese chips. According to reports, the company utilized chips from Chinese tech giants Alibaba and Huawei, reaching performance levels comparable to those obtained with Nvidia's H800 chips. The resulting models, named Ling-Plus and Ling-Lite, are said to match or even outperform leading alternatives, with Ant Group claiming they beat Meta's models on benchmarks while cutting inference costs.
This accomplishment signals a potential leap forward in China's AI development efforts and a move towards self-reliance in semiconductor technology. While Ant Group still uses Nvidia hardware for some tasks, it now relies more on alternatives, including chips from AMD and Chinese manufacturers, driven in part by U.S. sanctions that limit access to Nvidia's advanced GPUs. The shift could lessen China's dependence on foreign technology.
References :
Facebook@Meta Newsroom
//
References:
Meta Newsroom, TechInformed
Meta is expanding its AI assistant, Meta AI, to 41 European countries, including those in the European Union, and 21 overseas territories. This marks Meta's largest global AI rollout to date. However, the European version will initially offer limited capabilities, starting with an "intelligent chat function" available in six languages: English, French, Spanish, Portuguese, German, and Italian.
This rollout comes after regulatory challenges with European privacy authorities. Notably, the European version of Meta AI has not been trained on local users' data, addressing previous concerns about user consent and data privacy. Users can access the assistant through a blue circle icon across Meta's apps like Facebook, Instagram, WhatsApp, and Messenger, with plans to expand the feature to group chats, starting with WhatsApp. Meta is also mulling ad-free subscriptions for users in the UK following legal challenges to its personalised advertising practices.
References :
Alex Knapp,@Alex Knapp
//
References:
Meta Newsroom, Alex Knapp
Meta's open-source large language model (LLM), Llama, has achieved a significant milestone, surpassing one billion downloads since its release in 2023. This achievement underscores the growing influence of Llama in the AI community, attracting both researchers and enterprises seeking to integrate it into various applications. The model's popularity has surged, with companies like Spotify, AT&T, and DoorDash adopting Llama-based models for production environments.
Meta views open sourcing AI models as crucial, and each download of Llama brings it closer to that goal. However, Llama's widespread use hasn't been without challenges, including copyright lawsuits alleging training on copyrighted books without permission. The company plans to introduce multimodal models and improved reasoning capabilities, and it has been working to incorporate innovations from competing models to enhance Llama's performance.