Ashutosh Singh@The Tech Portal
//
Google has launched AI Edge Gallery, an open-source platform aimed at developers who want to deploy AI models directly on Android devices. The platform enables on-device AI execution using tools like LiteRT and MediaPipe, and supports models from Hugging Face. By eliminating the need for cloud connectivity, AI Edge Gallery emphasizes data privacy and low latency, making it well suited to industries that require local processing of sensitive data; support for iOS is planned.
The AI Edge Gallery app, released under the Apache 2.0 license and hosted on GitHub, is currently an experimental Alpha release. The app integrates Gemma 3 1B, a compact 529MB language model that can process up to 2,585 tokens per second on mobile GPUs, enabling tasks like text generation and image analysis in under a second. Through Google’s AI Edge platform, developers can use tools like MediaPipe and TensorFlow Lite to optimize model performance on mobile devices, and the company is actively seeking feedback from developers and users. AI Edge Gallery organizes its tools into categories like ‘AI Chat’ and ‘Ask Image’, and includes a ‘Prompt Lab’ for testing and refining prompts. Because on-device processing never transmits data to external servers, it reduces potential security risks and improves response times. While newer devices with high-performance chips can run models smoothly, older phones may experience lag. Google also plans to launch the app on iOS soon.
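To put the quoted throughput in perspective, a rough back-of-the-envelope latency estimate can be done in pure Python. The 2,585 tokens-per-second figure comes from the paragraph above; the prompt sizes are illustrative, and real on-device latency also depends on prefill versus decode phases, which this sketch ignores:

```python
# Rough latency estimate at the quoted Gemma 3 1B throughput of
# ~2,585 tokens/s on a mobile GPU (figure taken from the article above).
TOKENS_PER_SECOND = 2585

def estimated_latency_seconds(num_tokens: int,
                              tokens_per_second: float = TOKENS_PER_SECOND) -> float:
    """Time to process `num_tokens` at a fixed throughput, in seconds."""
    return num_tokens / tokens_per_second

# A short chat reply (~200 tokens) and a longer summary (~1,000 tokens):
for n in (200, 1000):
    print(f"{n} tokens -> {estimated_latency_seconds(n):.2f} s")
```

Even the 1,000-token case comes out under half a second at this rate, consistent with the "under a second" claim for typical tasks.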
Ashutosh Singh@The Tech Portal
//
Google has launched the 'AI Edge Gallery' app for Android, with plans to extend it to iOS soon. This innovative app enables users to run a variety of AI models locally on their devices, eliminating the need for an internet connection. The AI Edge Gallery integrates models from Hugging Face, a popular AI repository, allowing for on-device execution. This approach not only enhances privacy by keeping data on the device but also offers faster processing speeds and offline functionality, which is particularly useful in areas with limited connectivity.
The app uses Google’s AI Edge platform, which includes tools like MediaPipe and TensorFlow Lite, to optimize model performance on mobile devices. A key model is Gemma 3 1B, a compact language model designed for mobile platforms that can process data rapidly. The AI Edge Gallery features an interface with categories like ‘AI Chat’ and ‘Ask Image’ to help users find the right tools, and a ‘Prompt Lab’ lets users experiment with and refine prompts. Google emphasizes that the AI Edge Gallery is currently an experimental Alpha release and is encouraging user feedback. The app is open-source under the Apache 2.0 license, allowing free use, including for commercial purposes. Performance may vary with the device’s hardware: newer phones with advanced processors can run models smoothly, while older devices might experience lag, particularly with larger models.
In related news, Google Cloud has introduced advancements to BigLake, its storage engine for building open data lakehouses on Google Cloud that are compatible with Apache Iceberg. These enhancements aim to eliminate the trade-off between open-format flexibility and high-performance, enterprise-grade storage management. The updates include open interoperability across analytical and transactional systems: the BigLake metastore provides the foundation for interoperability, allowing access to all Cloud Storage and BigQuery storage data across multiple runtimes, including BigQuery, AlloyDB (preview), and open-source, Iceberg-compatible engines such as Spark and Flink. They also include new, high-performance Iceberg-native Cloud Storage: Google is simplifying lakehouse management with automatic table maintenance (including compaction and garbage collection) and integration with Google Cloud Storage management tools, including auto-class tiering and encryption.
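A hedged sketch of what the described interoperability might look like from an open-source engine such as Spark: the catalog classes below follow the Apache Iceberg Spark and GCP modules, but the project, location, and warehouse values are placeholders, and the exact property keys should be verified against current Iceberg and BigLake documentation before use.

```python
# Illustrative Spark configuration for accessing BigLake metastore-managed
# Iceberg tables from an Iceberg-compatible engine. Catalog class names
# follow the Apache Iceberg Spark/GCP modules; the project, location, and
# warehouse values are placeholders, not real resources.
catalog = "biglake"
spark_conf = {
    f"spark.sql.catalog.{catalog}":
        "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{catalog}.catalog-impl":
        "org.apache.iceberg.gcp.biglake.BigLakeCatalog",
    f"spark.sql.catalog.{catalog}.gcp_project": "my-project",        # placeholder
    f"spark.sql.catalog.{catalog}.gcp_location": "us-central1",      # placeholder
    f"spark.sql.catalog.{catalog}.warehouse": "gs://my-bucket/wh",   # placeholder
}

for key, value in sorted(spark_conf.items()):
    print(f"{key}={value}")
```

In a real job these pairs would be passed to the Spark session builder, after which tables in the BigLake metastore become queryable as `biglake.<database>.<table>`.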
Ken Yeung@Ken Yeung
//
Microsoft is aggressively pushing AI innovation to the edge, a key theme highlighted at Microsoft Build 2025. The company is developing and integrating AI capabilities into various platforms, aiming to create smarter, faster experiences across devices. This initiative involves not only expanding cloud capabilities but also embedding AI agents into browsers, websites, the operating system, and everyday workflows. Microsoft envisions a future of AI-powered productivity where human workers partner with autonomous agents to streamline tasks and enhance efficiency.
Microsoft is also making strides in AI-driven weather forecasting with its latest AI model, Aurora. The model promises accurate 10-day forecasts in seconds, a significant improvement over traditional models that take hours. Aurora isn't limited to weather; it can model any Earth system with available data, opening possibilities for forecasting air pollution, cyclones, and other environmental factors. While the model "doesn't know the laws of physics," its data-driven approach delivers detailed, fast forecasts, demonstrating AI's potential to change how we understand and predict environmental systems.
A core component of Microsoft's AI strategy is the integration of the Model Context Protocol (MCP) into Windows 11. This integration aims to transform the operating system into an "agentic" platform, where AI agents can securely interact with apps and system tools to carry out tasks across files and services. MCP acts as a standardized communication protocol that facilitates interaction between AI agents, applications, and services. With security measures in place, MCP enables powerful AI integrations while mitigating risks, opening new forms of AI-driven experiences for Windows 11 users.
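MCP messages are JSON-RPC 2.0 under the hood, and tool invocation uses the protocol's `tools/call` method. A minimal sketch of the kind of request an agent might send to an app's MCP server; the tool name and arguments here are hypothetical, not part of any real Windows integration:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 `tools/call` request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP's standard tool-invocation method
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical file-search tool exposed by an app's MCP server:
request = make_tool_call(1, "search_files", {"query": "quarterly report"})
print(request)
```

The server would reply with a JSON-RPC response carrying the tool's result, which is what lets agents drive apps and system tools through one uniform wire format rather than per-app plumbing.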