Google Strikes Back at OpenAI: Launches Interactions API and Enhanced Deep Research

January 2, 2026

Google has introduced a new unified API designed for interacting with its AI models and agents, alongside significant improvements to Deep Research, its autonomous search agent.

Just weeks after launching Gemini 3, and a few days following the roll-out of managed MCP servers for its tools, Google is advancing its AI agent technology. On December 11, 2025, the tech giant from Mountain View unveiled the Interactions API, a new interface for developers, as well as an updated version of its search agent, Deep Research.

These announcements coincided with the release of GPT-5.2 by OpenAI, highlighting the fierce technological rivalry between the two companies. This dual release signifies a shift: Google is no longer just offering language models but is also providing broader access to comprehensive agent systems that can autonomously perform complex tasks.

The Interactions API: A Unified Foundation for Google’s Models and Agents

The Interactions API represents a significant evolution of the Gemini ecosystem. Unlike the generateContent API, which Google originally designed for single-turn text generation, the Interactions API introduces a single endpoint that can handle both direct calls to Gemini models and interactions with autonomous agents like Deep Research (detailed below). This unified approach simplifies the architecture for developers, who can now switch between a traditional model and a complex agent without needing to alter their infrastructure.
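To make the "single endpoint for models and agents" idea concrete, here is a minimal sketch in Python of what such a unified call shape could look like. The field names (`target`, `input`) and the identifiers used below are illustrative assumptions, not Google's published schema; consult the Google AI Studio documentation for the real one.

```python
# Hypothetical sketch of a unified request payload for an Interactions-style API.
# Field names and identifiers are illustrative assumptions, not the real schema.

def build_interaction_request(target: str, prompt: str) -> dict:
    """Build one request shape that serves both plain models and agents."""
    return {
        "target": target,  # a Gemini model name OR an agent name
        "input": [{"role": "user", "text": prompt}],
    }

# The same call shape addresses a traditional model...
model_req = build_interaction_request("gemini-3-pro", "Summarize this paper.")
# ...or an autonomous agent such as Deep Research, with no infrastructure change.
agent_req = build_interaction_request("deep-research", "Survey MCP adoption.")
```

The point of the sketch is the symmetry: the client code does not branch on whether the target is a model or an agent, which is what lets developers swap one for the other without touching their infrastructure.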

The architecture inherently includes server-side state management, background task execution for lengthy processes, and a data schema optimized for managing interleaved messages, reasoning, tool calls, and their results. Google has also integrated direct support for MCP (Model Context Protocol) servers, allowing models to seamlessly call external tools.
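Server-side state plus background execution typically implies a submit-then-poll pattern for long-running agent tasks: the client hands off the work, receives a handle, and checks back until the task completes. A self-contained sketch of that pattern, using an in-memory stand-in for the server (all class and method names here are illustrative, not the real API):

```python
import itertools

class FakeInteractionServer:
    """In-memory stand-in for a server running long agent tasks in the background."""
    def __init__(self, steps_until_done: int = 3):
        self._ticks = itertools.count(1)  # simulates time passing between polls
        self._steps = steps_until_done

    def submit(self, prompt: str) -> str:
        self._prompt = prompt            # state lives server-side, not client-side
        return "interaction-001"         # server-issued handle for later polling

    def poll(self, interaction_id: str) -> dict:
        tick = next(self._ticks)
        if tick < self._steps:
            return {"id": interaction_id, "status": "in_progress"}
        return {"id": interaction_id, "status": "completed",
                "output": f"report for: {self._prompt}"}

server = FakeInteractionServer()
handle = server.submit("Research agent interoperability")
status = server.poll(handle)
while status["status"] != "completed":
    status = server.poll(handle)       # client keeps no intermediate state
print(status["output"])
```

The design benefit the article describes follows from this shape: because context lives on the server, the client only needs to retain the handle, not the full conversation or reasoning trace.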

Available in public beta through Google AI Studio, the Interactions API aims to ease the development of agent-based applications by offloading the complexity of context management to the server. Google has announced that the Agent Development Kit (ADK) and the Agent2Agent (A2A) protocol already support this new API, with plans for upcoming deployment on Vertex AI for enterprise clients. The goal is to enable developers to connect Gemini models, native Google agents, and custom agents through a single interface, establishing the foundation for an interoperable agent ecosystem.

Deep Research: A Redesigned and Developer-Friendly Search Agent

Gemini Deep Research has also been revamped in this update. Powered by Gemini 3 Pro, which Google describes as the most factual model in its lineup, this autonomous agent is specifically trained to minimize hallucinations during complex search tasks. It operates iteratively: formulating queries, analyzing results, identifying gaps, and conducting new searches.
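The iterative loop described above — formulate a query, analyze results, identify gaps, search again — can be sketched generically. The search function below is a mock with canned data, not Deep Research itself; the "gap analysis" is deliberately simplistic to keep the control flow visible.

```python
def mock_search(query: str) -> list[str]:
    """Stand-in for a real web search; returns canned snippets per query."""
    corpus = {
        "agent APIs": ["Interactions API unifies models and agents"],
        "agent APIs pricing": ["beta access is available in AI Studio"],
    }
    return corpus.get(query, [])

def research(topic: str, max_rounds: int = 3) -> list[str]:
    """Iterate: formulate a query, analyze results, find gaps, search again."""
    findings: list[str] = []
    query = topic
    for _ in range(max_rounds):
        findings.extend(mock_search(query))
        # Toy gap analysis: consider the research done once availability is known.
        if any("available" in f for f in findings):
            break                         # no gaps remain, stop searching
        query = topic + " pricing"        # refine the query to close the gap
    return findings

report = research("agent APIs")
```

Running this yields two findings across two rounds: the first search surfaces a partial answer, the gap check fails, and a refined query closes the gap — the same query/analyze/refine cycle the agent performs at much larger scale.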

This version also significantly enhances web navigation capabilities, allowing for deep exploration of websites to extract specific data. The agent can now analyze both personal documents (PDF, CSV, docs) and public web data simultaneously, before synthesizing everything into structured reports with what are called “granular” citations.
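"Granular" citations suggest that each claim in the synthesized report is tied back to its exact source, whether that source is a personal document or a public web page. A minimal illustration of that idea — the two source pools below are simulated, and the citation format is an assumption for illustration:

```python
# Simulated source pools: a personal document and a public web result.
local_doc = {"source": "notes.pdf", "facts": ["Q3 revenue grew 12%"]}
web_page = {"source": "example.com/report", "facts": ["industry average was 8%"]}

def synthesize(pools: list[dict]) -> str:
    """Join facts from all pools, attaching a citation at the claim level."""
    lines = []
    for pool in pools:
        for fact in pool["facts"]:
            lines.append(f"{fact} [{pool['source']}]")  # per-claim citation
    return "\n".join(lines)

report = synthesize([local_doc, web_page])
```

The contrast with document-level citation is that a reader can verify each individual claim, rather than being pointed at an entire source for a multi-claim paragraph.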

For the first time, developers can integrate these advanced search capabilities directly into their applications using the Interactions API. Google plans to integrate this version of Deep Research into Google Search, Google Finance, the Gemini app, and NotebookLM soon. To showcase its advancements, Google has released DeepSearchQA, a new benchmark consisting of 900 tasks that evaluate agents on complex multi-step searches, on which Deep Research scored 66.1%.
