@Latest news
//
Meta CEO Mark Zuckerberg is spearheading a new initiative to develop artificial general intelligence (AGI), recruiting top AI researchers to form an elite team. This push aims to create AI systems capable of performing any intellectual task that a human can, positioning Meta to compete directly with tech giants like Google and OpenAI. Zuckerberg's involvement includes personal recruitment efforts, indicating the high priority Meta is placing on this project. This signals a significant shift for Meta, aiming to lead in the rapidly evolving AI landscape.
Disappointment with the performance of Meta's LLaMA 4 model compared to competitors like OpenAI's GPT-4 and Google's Gemini spurred Zuckerberg's increased focus on AGI. Internally, LLaMA 4 was considered inadequate in real-world use, lacking coherence and usability. Meta's metaverse investments have also not yielded the anticipated results, leading the company to redirect its focus and resources toward AI in a bid to recapture relevance and mindshare in the tech industry. With tens of billions already invested in infrastructure and foundational models, Meta is now fully committed to achieving AGI.
To further bolster its AI ambitions, Meta has invested €12 billion (roughly $14 billion) in AI start-up Scale AI, acquiring a 49% stake. The investment prompted Google to end its $200 million partnership with Scale AI. Zuckerberg has also offered large salaries to poach AI talent. These moves are part of Meta's broader strategy to build superintelligence and challenge the dominance of other AI leaders, and its aggressive pursuit of talent and strategic investments underline its determination to become a frontrunner in the race to build AGI. Recommended read:
References :
@siliconangle.com
//
OpenAI is facing increased scrutiny over its data retention policies following a recent court order in the high-profile copyright lawsuit filed by The New York Times in 2023. The lawsuit alleges that OpenAI and Microsoft Corp. used millions of the Times' articles without permission to train their AI models, including ChatGPT, and that ChatGPT reproduced Times content verbatim without attribution. As a result, OpenAI has been ordered to retain all ChatGPT logs, including deleted conversations, indefinitely, to ensure that potentially relevant evidence is not destroyed. This move has sparked debate over user privacy and data security.
OpenAI COO Brad Lightcap announced that while users' deleted ChatGPT prompts and responses are typically erased after 30 days, this practice will be suspended to comply with the court order. The retention policy affects users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), but not Enterprise or Edu customers or those with a Zero Data Retention agreement. The company asserts that the retained data will be stored separately in a secure system accessible only by a small, audited OpenAI legal and security team, and solely to meet legal obligations. The court order was granted within one day of the NYT's request, owing to concerns that users might delete chats if they were using ChatGPT to bypass paywalls.
OpenAI CEO Sam Altman has voiced strong opposition to the court order, calling it an "inappropriate request" and stating that OpenAI will appeal the decision. He argues that AI interactions should be treated with privacy protections similar to those for conversations with a lawyer or doctor, suggesting the need for "AI privilege". The company also expressed concern about its ability to comply with the European Union's General Data Protection Regulation (GDPR), which grants users the right to be forgotten. Altman pledged to fight any demand that compromises user privacy, which he considers a core principle, promising customers that the company will fight to protect their privacy at every step if the plaintiffs continue to push for access. Recommended read:
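The tiering described above can be sketched as a small lookup. This is an illustrative model of the reported policy, not OpenAI code; the plan names are labels taken from the reporting:

```python
# Illustrative sketch of which tiers fall under the court-ordered retention,
# per the reporting above: Free, Plus, Pro, and API users are covered, while
# Enterprise, Edu, and Zero Data Retention (ZDR) customers are exempt.
# This models a news summary, not any actual OpenAI API or schema.

RETAINED_PLANS = {"free", "plus", "pro", "api"}
EXEMPT_PLANS = {"enterprise", "edu"}

def logs_retained(plan: str, zero_data_retention: bool = False) -> bool:
    """Return True if deleted chats would be retained under the order."""
    plan = plan.lower()
    if zero_data_retention or plan in EXEMPT_PLANS:
        return False
    if plan in RETAINED_PLANS:
        return True
    raise ValueError(f"unknown plan: {plan}")
```

Under this reading, a ZDR agreement exempts even API traffic that would otherwise be retained.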
References :
Megan Crouse@eWEEK
//
OpenAI's ChatGPT is expanding its reach with new integrations, allowing users to connect directly to tools like Google Drive and Dropbox. This update allows ChatGPT to access and analyze data from these cloud storage services, enabling users to ask questions and receive summaries with cited sources. The platform is positioning itself as a user interface for data, offering one-click access to files, effectively streamlining the search process for information stored across various documents and spreadsheets. In addition to cloud connectors, ChatGPT has also introduced a "Record" feature for Team accounts that can record meetings, generate summaries, and offer action items.
These new features come with data-privacy considerations. OpenAI states that files accessed through the Google Drive or Dropbox connectors are not used to train its models for ChatGPT Team, Enterprise, and Education accounts, but concerns remain about data usage for free users and ChatGPT Plus subscribers. OpenAI confirms that audio recorded by the Record tool is deleted immediately after transcription, that transcripts are subject to workspace retention policies, and that content from Team, Enterprise, and Edu workspaces, including audio recordings and transcripts from ChatGPT record, is excluded from model training by default.
Meanwhile, Reddit has filed a lawsuit against Anthropic, alleging that the AI company scraped Reddit's data without permission to train its Claude AI models. Reddit accuses Anthropic of accessing its servers over 100,000 times after promising to stop scraping, and claims Anthropic intentionally trained on the personal data of Reddit users without requesting their consent. Reddit has licensing deals with OpenAI and Google, but Anthropic does not. Reddit seeks an injunction to force Anthropic to stop using any Reddit data immediately, and is also asking the court to prohibit Anthropic from selling or licensing any product built using that data. Despite these controversies, Microsoft CEO Satya Nadella has stated that Microsoft profits from every ChatGPT usage, highlighting the success of its investment in OpenAI. Recommended read:
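The training-data defaults described above can be summarized as a lookup, with an explicit "unclear" result where the reporting does not state a policy. This is an illustrative model of the news summary, not OpenAI's actual data-handling code:

```python
# Illustrative model of the training-data defaults described above:
# connector content from Team, Enterprise, and Edu workspaces is excluded
# from model training by default, while the policy for Free and Plus users
# is not clearly stated in the reporting summarized here.

EXCLUDED_BY_DEFAULT = {"team", "enterprise", "edu"}
UNCLEAR = {"free", "plus"}

def used_for_training(workspace: str):
    """True/False where the reported policy is stated, None where unclear."""
    w = workspace.lower()
    if w in EXCLUDED_BY_DEFAULT:
        return False
    if w in UNCLEAR:
        return None  # not clearly stated in the reporting
    raise ValueError(f"unknown workspace type: {workspace}")
```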
References :
Ashutosh Singh@The Tech Portal
//
Google has launched AI Edge Gallery, an open-source platform aimed at developers who want to deploy AI models directly on Android devices. This new platform allows for on-device AI execution using tools like LiteRT and MediaPipe, supporting models from Hugging Face. With future support for iOS planned, AI Edge Gallery emphasizes data privacy and low latency by eliminating the need for cloud connectivity, making it ideal for industries that require local processing of sensitive data.
The AI Edge Gallery app, released under the Apache 2.0 license and hosted on GitHub, is currently in an experimental Alpha release, and Google is actively seeking feedback from developers and users. The app integrates Gemma 3 1B, a compact 529MB language model capable of processing up to 2,585 tokens per second on mobile GPUs, enabling tasks like text generation and image analysis in under a second. Through Google’s AI Edge platform, developers can leverage tools like MediaPipe and TensorFlow Lite to optimize model performance on mobile devices.
AI Edge Gallery contains categories like ‘AI Chat’ and ‘Ask Image’ to guide users to relevant tools, as well as a ‘Prompt Lab’ for testing and refining prompts. On-device AI processing ensures that complex AI tasks can be performed without transmitting data to external servers, reducing potential security risks and improving response times. While newer devices with high-performance chips can run models smoothly, older phones may experience lag. Google also plans to launch the app on iOS soon. Recommended read:
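The sub-second claim follows directly from the quoted throughput figure. A quick back-of-the-envelope check, using only the 2,585 tokens-per-second number from the announcement (model-load time and prefill/decode asymmetry are ignored, so this is a rough throughput-limited bound):

```python
# Back-of-the-envelope latency estimate from the quoted figure of
# 2,585 tokens/second for Gemma 3 1B on a mobile GPU. This ignores
# model-load time and any prefill/decode asymmetry, so it only bounds
# the throughput-limited portion of the latency.

TOKENS_PER_SECOND = 2585

def estimated_seconds(num_tokens: int, tps: float = TOKENS_PER_SECOND) -> float:
    """Seconds to process num_tokens at a fixed tokens/second rate."""
    return num_tokens / tps

# At this rate, ~1,000 tokens take well under a second to process.
```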
References :
Ashutosh Singh@The Tech Portal
//
Google has launched the 'AI Edge Gallery' app for Android, with plans to extend it to iOS soon. This innovative app enables users to run a variety of AI models locally on their devices, eliminating the need for an internet connection. The AI Edge Gallery integrates models from Hugging Face, a popular AI repository, allowing for on-device execution. This approach not only enhances privacy by keeping data on the device but also offers faster processing speeds and offline functionality, which is particularly useful in areas with limited connectivity.
The app uses Google’s AI Edge platform, which includes tools like MediaPipe and TensorFlow Lite, to optimize model performance on mobile devices. A key model is Gemma 3 1B, a compact language model designed for mobile platforms that can process data rapidly. The AI Edge Gallery features an interface with categories like ‘AI Chat’ and ‘Ask Image’ to help users find the right tools, and a ‘Prompt Lab’ lets users experiment with and refine prompts. Google emphasizes that the AI Edge Gallery is currently an experimental Alpha release and is encouraging user feedback. The app is open-source under the Apache 2.0 license, allowing free use, including for commercial purposes. Performance varies with the device's hardware capabilities: newer phones with advanced processors can run models smoothly, while older devices may experience lag, particularly with larger models.
In related news, Google Cloud has introduced advancements to BigLake, its storage engine for building open data lakehouses on Google Cloud that are compatible with Apache Iceberg. These enhancements aim to eliminate the trade-off between open-format flexibility and high-performance, enterprise-grade storage management. The updates include:
- Open interoperability across analytical and transactional systems: the BigLake metastore provides the foundation for interoperability, giving access to all Cloud Storage and BigQuery storage data across multiple runtimes, including BigQuery, AlloyDB (preview), and open-source, Iceberg-compatible engines such as Spark and Flink.
- New, high-performance Iceberg-native Cloud Storage: lakehouse management is simplified with automatic table maintenance (including compaction and garbage collection) and integration with Google Cloud Storage management tools, including auto-class tiering and encryption.
Recommended read:
References :