References: AI & Machine Learning (cloud.google.com), Ken Yeung
Google has launched the Agent2Agent (A2A) protocol, a new standard aimed at fostering interoperability between AI agents. Unveiled at Cloud Next 2025, A2A gives agents a common language, standardizing how they send tasks, stream updates, share resources, and discover each other. The initiative complements the Model Context Protocol (MCP), further normalizing agent interactions across platforms. The goal is to enable seamless communication and collaboration between agents built on different frameworks and by different vendors, paving the way for more reliable and efficient AI systems.
Google's A2A protocol is designed to address the growing need for multi-agent systems in the enterprise. As AI agents take on more complex tasks, the ability for them to communicate and collaborate becomes crucial. A2A defines how agents exchange structured messages with task states such as working, input-required, or completed. By establishing a standardized protocol, Google aims to eliminate the headaches of integrating agents built with different frameworks such as LangChain, AutoGen, and Pydantic, which in theory allows for more robust and effective AI solutions.
To further support the development of multi-agent systems, Google is also introducing the Agent Development Kit (ADK), an open-source framework that simplifies building agents and sophisticated multi-agent systems while giving developers precise control over agent behavior. Additionally, Google's Vertex AI platform offers tools to manage these systems, including Agent Engine, a fully managed runtime that helps deploy custom agents to production with built-in testing and release capabilities. Google is partnering with over 50 industry leaders to promote the adoption of A2A and advance the vision of seamless multi-agent collaboration.
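To make that task lifecycle concrete, here is a minimal sketch of what an A2A-style exchange might look like; the field names and message structure below are illustrative assumptions based on the description above, not the published A2A schema.

```python
# Illustrative sketch of an A2A-style task exchange. Field names and
# structure are assumptions based on the description above, not the
# official A2A specification.
import json

# A client agent submits a task to a remote agent it has discovered.
task_request = {
    "task_id": "task-001",
    "message": {
        "role": "user",
        "parts": [{"type": "text", "text": "Compile a competitor summary."}],
    },
}

# The remote agent streams structured status updates as the task progresses,
# moving through states such as working, input-required, and completed.
status_updates = [
    {"task_id": "task-001", "state": "working"},
    {"task_id": "task-001", "state": "input-required",
     "message": "Which market segment should the summary cover?"},
    {"task_id": "task-001", "state": "completed",
     "artifacts": [{"type": "text", "text": "Competitor summary: ..."}]},
]

print(json.dumps(task_request, indent=2))
for update in status_updates:
    print(json.dumps(update))
```

The point of the standard is that any compliant agent, regardless of the framework it was built with, can produce and consume messages of this general shape.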
References: Ken Yeung
Google has launched a new feature called "Discover Sources" for NotebookLM, its AI-powered tool designed to organize and analyze information. Rolling out to all users starting April 2, 2025, the new feature automatically curates relevant websites on a specified topic, recommending up to ten sources accompanied by AI-generated summaries. This enhancement streamlines research by allowing users to quickly surface relevant content from the internet.
NotebookLM, initially launched in 2023 as an AI-powered alternative to Evernote and Microsoft OneNote, previously relied on manual uploads of documents, articles, and notes. "Discover Sources" automates the process of pulling in information from the internet with a single click. The curated sources remain accessible within NotebookLM notebooks, allowing users to leverage them within Briefing Docs, FAQs, and Audio Overviews without repeatedly scouring the internet. This enhancement highlights the growing trend of AI-driven research tools shaping how we work and learn.
References: Jonathan Kemper (THE DECODER), Analytics India Magazine
Meta is developing MoCha (Movie Character Animator), an AI system designed to generate complete character animations. MoCha takes natural language prompts describing the character, scene, and actions, along with a speech audio clip, and outputs a cinematic-quality video. This end-to-end model synchronizes speech with facial movements, generates full-body gestures, and maintains character consistency, even managing turn-based dialogue between multiple speakers. The system introduces a "Speech-Video Window Attention" mechanism to solve challenges in AI video generation, improving lip sync accuracy by limiting each frame's access to a specific window of audio data and adding tokens to create smoother transitions.
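As a rough illustration of the windowing idea, the following sketch builds an attention mask in which each video frame may only attend to audio tokens near its own position in time; the window size and token rates are invented values for clarity, not MoCha's actual configuration.

```python
# Sketch of a speech-video window attention mask: each video frame attends
# only to audio tokens near its own position in time. Window size and token
# rates are illustrative assumptions, not MoCha's published settings.
import numpy as np

num_frames = 120          # e.g. 5 seconds at 24 fps
audio_tokens_per_frame = 4
num_audio_tokens = num_frames * audio_tokens_per_frame
window = 8                # extra audio tokens visible on each side of a frame

mask = np.zeros((num_frames, num_audio_tokens), dtype=bool)
for f in range(num_frames):
    center = f * audio_tokens_per_frame
    lo = max(0, center - window)
    hi = min(num_audio_tokens, center + audio_tokens_per_frame + window)
    mask[f, lo:hi] = True  # frame f may attend only to this audio span

# In attention, positions where the mask is False would be set to -inf
# before the softmax, so lip movements track nearby speech.
print(mask.shape, mask[0].sum(), mask[60].sum())
```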
MoCha runs on a diffusion transformer model with 30 billion parameters and produces HD video clips around five seconds long at 24 frames per second. For scenes with multiple characters, the team developed a streamlined prompt system that lets users define characters once and reference them with simple tags throughout different scenes. Separately, Meta's AI research head, Joelle Pineau, announced her resignation, effective at the end of May, vacating a high-profile position amid intense competition in AI development.
References: Maria Deutscher (SiliconANGLE)
Isomorphic Labs, an Alphabet spinout focused on AI-driven drug design, has secured $600 million in its first external funding round. The investment, led by Thrive Capital with participation from Alphabet and GV, will fuel the advancement of Isomorphic Labs' AI drug design engine and therapeutic programs. The company aims to leverage artificial intelligence, including its AlphaFold technology, to revolutionize drug discovery across various therapeutic areas, including oncology and immunology. This funding is expected to accelerate research and development efforts, as well as facilitate the expansion of Isomorphic Labs' team with top-tier talent.
Isomorphic Labs, founded in 2021 by Sir Demis Hassabis, seeks to reimagine and accelerate drug discovery by applying AI. Its AI-powered engine streamlines the design of small molecules with therapeutic applications and can predict the effectiveness of a small molecule's attachment to a protein. The company's software also eases other aspects of the drug development workflow. Isomorphic Labs has already established collaborations with pharmaceutical companies like Eli Lilly and Novartis, and the new funding will support the progression of its own drug programs into clinical development.
References: Divya (gbhackers.com), Talkback Resources
Researchers from Duke University and Carnegie Mellon University have successfully jailbroken several leading AI language models, including OpenAI’s o1/o3, DeepSeek-R1, and Google’s Gemini 2.0 Flash. The team developed a novel attack method called Hijacking Chain-of-Thought (H-CoT), which exploits the reasoning processes of these models to bypass safety mechanisms designed to prevent harmful outputs. This research highlights significant security vulnerabilities in advanced AI systems and raises concerns about their potential misuse.
The researchers introduced the Malicious-Educator benchmark, which utilizes seemingly harmless educational prompts to mask dangerous requests. They found that all tested models failed to consistently recognize these contextual deceptions. For example, DeepSeek-R1 proved particularly susceptible to financial crime queries, providing actionable money laundering steps in a high percentage of test cases. The team has shared mitigation strategies with affected vendors.
References: blog.google, Google Workspace Updates, AI & Machine Learning
Google is launching Deep Research, a new feature for Gemini Advanced, aimed at enhancing research capabilities for Google Workspace users. This tool leverages web data analysis and experimental models to provide in-depth reports on complex topics, ultimately boosting productivity and insight generation. Gemini Advanced users can now access Deep Research to save time by automating the processes of browsing the web, analyzing real-time information, and compiling comprehensive research reports within minutes.
This new functionality allows users to quickly gain expertise on various subjects, offering significant benefits for businesses and educational institutions. Deep Research can be used to understand emerging industry trends, conduct competitive analysis, assist in customer research by preparing reports on prospective clients, and aid educators in grant writing, lesson planning, and creating presentations. After entering a prompt, users can review and revise the proposed research plan, and the tool then analyzes relevant information, refining its analysis continuously. The generated report is organized with links to original sources and can be exported to Google Docs.
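The workflow described above (propose a plan, let the user revise it, then execute it and compile a sourced report) can be sketched generically as follows; the function names and data structures are hypothetical and meant only to illustrate the flow, not Google's implementation.

```python
# Generic sketch of a plan-then-research flow: propose a plan, let the user
# revise it, then gather findings and compile a report. Names and structures
# are hypothetical illustrations, not Google's implementation.
from dataclasses import dataclass, field


@dataclass
class ResearchPlan:
    topic: str
    steps: list[str] = field(default_factory=list)


def propose_plan(topic: str) -> ResearchPlan:
    # In a real system an LLM would draft these steps from the user's prompt.
    return ResearchPlan(topic, ["survey recent coverage",
                                "compare key players",
                                "summarize open questions"])


def revise_plan(plan: ResearchPlan, user_edits: list[str]) -> ResearchPlan:
    # The user can reorder, remove, or add steps before execution.
    return ResearchPlan(plan.topic, user_edits or plan.steps)


def execute_plan(plan: ResearchPlan) -> str:
    # Each step would trigger web browsing and analysis; here we just echo.
    sections = [f"{step}: (findings with source links)" for step in plan.steps]
    return f"Report on {plan.topic}\n" + "\n".join(sections)


plan = propose_plan("emerging battery chemistries")
plan = revise_plan(plan, ["survey recent coverage", "summarize open questions"])
print(execute_plan(plan))
```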
References: the-decoder.com
Google has developed an AI Co-Scientist tool built on the Gemini 2.0 model to accelerate scientific discoveries, especially in biomedical research. This virtual research partner assists human researchers by generating and testing scientific hypotheses, synthesizing extensive literature, and identifying new connections. The AI Co-Scientist mimics a real-world research team with specialized AI agents that brainstorm, edit, translate, fact-check, debate, plan, and report, accelerating the research process.
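As a loose illustration of how specialized agents might divide that work, the sketch below runs a brainstorming role, a critique role, and a ranking step over candidate hypotheses; the roles and scoring are hypothetical simplifications, not Google's actual Co-Scientist design.

```python
# Hypothetical simplification of a multi-agent hypothesis pipeline:
# one role proposes hypotheses, another critiques them, and a ranking
# step orders the survivors. Not Google's actual Co-Scientist design.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    support: int = 0   # crude stand-in for evidence found during critique


def brainstorm(topic: str) -> list[Hypothesis]:
    # A generator agent would draft candidates from the literature.
    return [Hypothesis(f"{topic}: mechanism A"),
            Hypothesis(f"{topic}: mechanism B")]


def critique(h: Hypothesis) -> Hypothesis:
    # A reviewer agent would fact-check against papers; here we fake a score.
    h.support = len(h.text) % 5
    return h


def rank(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    # A tournament or debate step would compare pairs; we simply sort.
    return sorted(hypotheses, key=lambda h: h.support, reverse=True)


candidates = [critique(h) for h in brainstorm("bacterial DNA transfer")]
for h in rank(candidates):
    print(h.support, h.text)
```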
The AI Co-Scientist has demonstrated its capabilities by replicating 10 years of antibiotic resistance research in under 72 hours. The system analyzed over 28,000 studies, proposed 143 mechanisms for bacterial DNA transfer, and simulated experiments, matching findings by researchers at Imperial College London. Google has also introduced 'Career Dreamer,' an AI tool that uses Gemini to analyze past experience and suggest potential career paths, as well as to assist in drafting cover letters and resumes.
References: Alyssa Hughes (Microsoft Research)
Microsoft has announced two major advancements, one in quantum computing and one in artificial intelligence. The company unveiled Majorana 1, a new chip built around topological qubits, a key milestone in its pursuit of stable, scalable quantum computers. Topological qubits are less susceptible to environmental noise, an approach aimed at overcoming the long-standing instability issues that have challenged the development of reliable quantum processors. The company says it is on track to build a new kind of quantum computer based on this design.
Microsoft is also introducing Muse, a generative AI model designed for gameplay ideation. Described as a first-of-its-kind World and Human Action Model (WHAM), Muse can generate game visuals and controller actions, and Microsoft's research team is developing insights to support creative uses of generative AI models.
References: the-decoder.com
Perplexity AI has launched its Deep Research tool, joining Google and OpenAI in offering advanced AI-powered research capabilities. This new feature aims to provide more in-depth answers with real citations, catering to professional use cases. Deep Research is currently available on the web and will soon be added to Mac, iOS, and Android apps. Users can access it by selecting "Deep Research" from a drop-down menu when submitting a query in Perplexity.
Perplexity's Deep Research leverages DeepSeek-R1, allowing the service to be offered at a significantly lower price than competitors like OpenAI. While OpenAI charges $200 per month for 100 queries, Perplexity offers 500 queries per day for $20 per month on its pro subscription, with basic access available for free with daily query limits for non-subscribers. The tool works by iteratively searching, reading documents, and reasoning about what to do next, much as a human researcher would approach a new topic, producing detailed reports in one to two minutes.
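That iterative approach can be sketched as a simple loop: search, read, decide whether more evidence is needed, then write up. The sketch below is a generic illustration of the pattern with hypothetical helper functions, not Perplexity's implementation.

```python
# Generic sketch of an iterative deep-research loop: search, read, reason
# about what is still missing, and stop when enough evidence is gathered.
# Helper functions are hypothetical stand-ins, not Perplexity's internals.

def search(query: str) -> list[str]:
    return [f"doc about {query} #{i}" for i in range(3)]   # stand-in for web search


def read(doc: str) -> str:
    return f"notes from {doc}"                             # stand-in for parsing a page


def needs_more(notes: list[str], max_rounds: int, round_no: int) -> bool:
    return round_no < max_rounds                           # a model would judge coverage


def deep_research(topic: str, max_rounds: int = 3) -> str:
    notes, query, round_no = [], topic, 0
    while needs_more(notes, max_rounds, round_no):
        for doc in search(query):
            notes.append(read(doc))
        query = f"{topic} follow-up {round_no}"            # a model would refine the query
        round_no += 1
    return f"Report on {topic}:\n" + "\n".join(notes)      # citations would accompany notes


print(deep_research("solid-state batteries"))
```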
References: analyticsvidhya.com, techxplore.com
DeepMind has unveiled AlphaGeometry2, a significant upgrade to its AlphaGeometry system. This new iteration achieves gold-medal level performance in solving challenging Olympiad geometry problems, surpassing the abilities of the average gold medalist. Researchers from Google DeepMind, along with collaborators from the University of Cambridge, Georgia Tech, and Brown University, enhanced the system's domain language, enabling it to handle more complex geometric concepts and increasing its coverage of IMO problems from 66% to 88%.
AlphaGeometry2 integrates a Gemini-based language model with a more efficient symbolic engine and a novel search algorithm. These improvements boost its solving rate to 84% on IMO geometry problems from 2000-2024, and the work is advancing toward a fully automated system that interprets problems stated in natural language. Prior research suggests that AI capable of solving geometry problems could lead to more sophisticated applications, since the task requires both a high level of reasoning ability and the ability to choose among possible steps in working toward a solution.
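At a high level, the AlphaGeometry line of work alternates between a language model that proposes auxiliary constructions and a symbolic engine that exhausts deductions from the current diagram. The sketch below is a heavily simplified, hypothetical rendering of that loop, not DeepMind's code.

```python
# Heavily simplified sketch of a neuro-symbolic proof loop: a symbolic
# engine deduces what it can, and when it stalls a language model proposes
# an auxiliary construction to unlock further deduction. Hypothetical
# stand-ins only, not DeepMind's AlphaGeometry2 implementation.

def symbolic_deduce(facts: set[str]) -> set[str]:
    # A real engine applies geometry axioms until a fixed point is reached.
    if "midpoint M of AB" in facts:
        facts = facts | {"AM = MB"}
    return facts


def propose_construction(facts: set[str]) -> str:
    # A real system samples constructions from a trained language model.
    return "midpoint M of AB"


def prove(goal: str, facts: set[str], max_steps: int = 5) -> bool:
    for _ in range(max_steps):
        facts = symbolic_deduce(facts)
        if goal in facts:
            return True
        facts.add(propose_construction(facts))   # add an auxiliary point and retry
    return False


print(prove("AM = MB", {"segment AB"}))
```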