Google's Gemini API and AI Studio Now Integrated with Google Search.

This will allow developers using the Google Gemini API and Google AI Studio to start building AI-based services and bots that ground the results of their prompts in data from Google Search. That should provide more accurate answers based on fresher data.

Developers can try grounding for free in AI Studio, essentially Google's playground for testing and refining prompts and accessing its latest large language models (LLMs). Gemini API users will have to be on the paid tier, where they will pay $35 per 1,000 grounded queries.

AI Studio also launched a built-in compare mode that makes it easy to see how the results of grounded queries differ from those that rely on the model's data alone.

At its core, grounding connects a model to verifiable data, which also helps keep systems from hallucinating. In an example Google showed me ahead of today's launch, a prompt asked who won the Emmy for best comedy series in 2024. Without grounding, the model said it was "Ted Lasso." But that was a hallucination: "Ted Lasso" won the award, but in 2022. With grounding on, the model provided the correct result ("Hacks"), included additional context, and cited its sources.

Turning on grounding is just a matter of flipping a switch and choosing how frequently the API should use grounding through the "dynamic retrieval" setting. This can be as simple as turning it on for every prompt, or a more subtle setting that uses a smaller model to evaluate each prompt and decide whether it would benefit from being augmented with data from Google Search.
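To make that concrete, here is a minimal sketch of a grounded request against the Gemini API's REST endpoint. The tool and dynamic-retrieval field names follow the launch documentation as best I can reconstruct it; the model name, placeholder API key, and threshold value are illustrative assumptions, not a definitive integration.

```python
# Sketch: enabling Grounding with Google Search on a Gemini API request.
# Field names reflect the launch-era docs and may change in later versions.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"gemini-1.5-flash:generateContent?key={API_KEY}"
)

body = {
    "contents": [
        {"parts": [{"text": "Who won the Emmy for best comedy series in 2024?"}]}
    ],
    "tools": [
        {
            "google_search_retrieval": {
                "dynamic_retrieval_config": {
                    # MODE_DYNAMIC asks a smaller model to score each prompt;
                    # grounding only fires when the score clears the threshold.
                    "mode": "MODE_DYNAMIC",
                    # A threshold near 0.0 grounds almost every prompt, while
                    # values closer to 1.0 reserve grounding for prompts that
                    # clearly need fresh data from Google Search.
                    "dynamic_threshold": 0.3,
                }
            }
        }
    ],
}

response = requests.post(URL, json=body, timeout=30)
response.raise_for_status()
print(response.json()["candidates"][0]["content"]["parts"][0]["text"])
```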

"Grounding can help … when you ask a very recent question that's beyond the model's knowledge cutoff, but it could also help with a question which is not as recent … but you may want richer detail," explained Shrestha Basu Mallick, Google's group product manager for the Gemini API and AI Studio. "There may be developers who say we only want to ground on recent facts, and they would set this [dynamic retrieval value] higher. And there may be developers who say: No, I want the rich detail of Google Search on everything."

According to Logan Kilpatrick, who joined Google earlier this year after previously leading developer relations at OpenAI, adding data from Google Search enriches results with supporting links back to the underlying sources, and the Gemini license requires anyone using this feature to display those links.
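For developers wiring this into an app, those citations arrive as grounding metadata on each response candidate. The sketch below assumes the groundingChunks and searchEntryPoint field names from the launch documentation, which may not match every API version exactly; it shows one way to pull out the title and URL pairs an app is expected to display.

```python
# Sketch: extracting the required source links from a grounded response.
# The groundingMetadata shape here is an assumption based on launch docs.
def extract_grounding_links(response_json: dict) -> list[tuple[str, str]]:
    """Return (title, uri) pairs for the sources a grounded answer cites."""
    candidate = response_json["candidates"][0]
    metadata = candidate.get("groundingMetadata", {})
    links = []
    for chunk in metadata.get("groundingChunks", []):
        web = chunk.get("web", {})
        links.append((web.get("title", ""), web.get("uri", "")))
    return links

# The metadata also carries searchEntryPoint.renderedContent, an HTML
# snippet of Google Search suggestions that, per the license terms the
# article describes, apps should display alongside the grounded answer.
```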

"It is very important for us for two reasons: one, we want to make sure our publishers get the credit and the visibility," Basu Mallick added. "But second, this also is something that users like. When I get an LLM answer, I often go to Google Search and check that answer. We're providing a way for them to do this easily, so this is much valued by users."

"It's very much a proof of concept," Kilpatrick said of AI Studio in this context. "But in fairness, it was not just a proof of concept. It was more like a prompt-tuning tool when we first released it, and it's a lot more now. There's a bunch we do to surface potentially interesting use cases to developers front and center in the UI, but the goal is not to keep you in AI Studio and just have you sort of play around with the models. The goal is to get you to code. You click 'Get Code' in the top right corner, start building something, and end up coming back to AI Studio to experiment with a future model."

Blog | 2024-11-01 17:24:05