Alex Knapp
//
Meta's open-source large language model (LLM), Llama, has achieved a significant milestone, surpassing one billion downloads since its release in 2023. This achievement underscores the growing influence of Llama in the AI community, attracting both researchers and enterprises seeking to integrate it into various applications. The model's popularity has surged, with companies like Spotify, AT&T, and DoorDash adopting Llama-based models for production environments.
Meta views open-sourcing AI models as central to its strategy, and each Llama download marks progress toward that goal. Llama's widespread use hasn't been without challenges, however, including copyright lawsuits alleging that the model was trained on copyrighted books without permission. Looking ahead, the company plans to introduce multimodal models and improved reasoning capabilities, and it has been working to incorporate innovations from competing models to enhance Llama's performance.
@docs.google.com
//
Meta is significantly expanding its AI initiatives, partnering with UNESCO to incorporate lesser-known Indigenous languages into Meta AI. This collaboration aims to support linguistic diversity and inclusivity in the digital world. The Language Technology Partner Program seeks contributors to provide speech recordings, transcriptions, pre-translated sentences, and written works in target languages, which will then be used to build Meta's AI systems. The government of Nunavut, a territory in northern Canada that speaks Native Inuit languages, has already signed up for the program.
Meta's investment in AI extends to tools like Automated Compliance Hardening (ACH), an LLM-powered bug catcher designed to improve software testing and identify potential privacy regressions. ACH automates the search for privacy-related faults and prevents them from reentering systems, ultimately hardening code bases to reduce risk. On the safety side, Meta focuses on catastrophic outcomes, using threat modeling to identify the capabilities a threat actor would need to realize a given threat scenario. However, the framework's consideration of only "unique" risks, and its exclusion of the potential acceleration of AI R&D, has raised concerns.
@gbhackers.com
//
A critical vulnerability has been discovered in Meta's Llama framework, a popular open-source tool for developing generative AI applications. The flaw, identified as CVE-2024-50050, allows remote attackers to execute arbitrary code on servers running the Llama Stack framework. It arises from unsafe deserialization of Python objects via the 'pickle' module: the framework's default Python inference server receives serialized data over network sockets through the 'recv_pyobj' method, and because 'pickle' is inherently insecure with untrusted sources, crafted data can trigger arbitrary code execution during deserialization. The risk is compounded by the framework's rapidly growing popularity, with thousands of stars on GitHub.
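To illustrate the attack pattern, here is a minimal, harmless sketch (not the actual exploit): 'pickle' lets any object's '__reduce__' method name a callable to invoke during deserialization, so unpickling untrusted bytes hands the sender arbitrary code execution.

```python
import pickle

# Minimal illustration of the CVE-2024-50050 attack pattern.
# __reduce__ tells pickle which callable to invoke at load time;
# here the payload harmlessly calls eval("6 * 7"), but an attacker
# could name os.system or any other function instead.
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

malicious_bytes = pickle.dumps(Payload())

# The victim side: pyzmq's recv_pyobj ultimately does the
# equivalent of pickle.loads on bytes read from the socket,
# executing the attacker's chosen callable.
result = pickle.loads(malicious_bytes)
print(result)  # -> 42
```

This is exactly why Python's own documentation warns that pickle should never be used with data from untrusted sources.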
Exploitation could have severe consequences, including resource theft, data breaches, and manipulation of the hosted AI models; an attacker who sends malicious serialized data over the network can potentially gain full control of the server. The pyzmq library, which Llama uses for messaging, is at the root of the issue: its 'recv_pyobj' method is known to be unsafe when fed untrusted data. Severity assessments vary: some sources rate the flaw at CVSS 9.3, while others score it as low as 6.3 out of 10.
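A common defensive pattern, sketched below under the assumption that pickled input must be accepted at all (the more robust fix, and reportedly the direction the patch took, is switching to a type-checked format such as JSON), is a restricted Unpickler that refuses to resolve globals, so plain data deserializes but a smuggled callable cannot:

```python
import io
import pickle

# Hypothetical mitigation sketch, not Llama Stack's actual patch:
# a restricted Unpickler that refuses to resolve any global,
# blocking the callable-injection pattern behind CVE-2024-50050.
class NoGlobalsUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted data")

def safe_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()

# Plain data (dicts, lists, numbers, strings) still round-trips,
# because no global lookup is needed to rebuild it:
ok = safe_loads(pickle.dumps({"tokens": [1, 2, 3]}))

# A payload smuggling a callable is rejected at load time:
class Evil:
    def __reduce__(self):
        return (eval, ("6 * 7",))

try:
    safe_loads(pickle.dumps(Evil()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

The design choice here is an allow-nothing policy: 'find_class' is the single hook pickle uses to turn names back into objects, so overriding it to raise closes the code-execution path while leaving pure-data payloads intact.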