Google has announced Gemma 2, a 27-billion-parameter version of its open model, set to launch in June.

In a flurry of announcements at its Google I/O 2024 developer conference on Tuesday, Google expanded its family of open (but not open source) models that compete with Meta's Llama and Mistral's open models. The release to note here is Gemma 2: the next generation of Google's open-weights Gemma models, which will begin rolling out in June with a 27-billion-parameter model.

The pre-trained variant of Gemma, PaliGemma, is already available. Google describes it as "the first vision language model in the Gemma family," built for use cases such as image captioning, image labeling, and visual Q&A.
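PaliGemma's weights are published on Hugging Face, so trying a visual Q&A prompt takes only a few lines. Here's a minimal sketch assuming the transformers library's PaliGemma support (version 4.41 or later) and the google/paligemma-3b-mix-224 checkpoint; the image URL is a placeholder, and downloading the weights requires accepting Google's license on the model page.

```python
# Minimal sketch: visual Q&A with PaliGemma via Hugging Face transformers.
# Assumes transformers >= 4.41 and accepted access to the gated weights.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Any RGB image works; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)

prompt = "answer en What is in this image?"
inputs = processor(text=prompt, images=image, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=20)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```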

Until now, the standard Gemma models, released earlier this year, were only available in 2-billion- and 7-billion-parameter versions, so this new 27-billion-parameter model is a significant step up.

Josh Woodward, Google's VP of Google Labs, said in a briefing ahead of Tuesday's announcement that the Gemma models have been downloaded "millions of times" across the various services where they're available. He also noted that Google optimized the 27-billion-parameter model to run on Nvidia's next-generation GPUs, as well as on a single Google Cloud TPU host and the managed Vertex AI service.

Size doesn't matter, though, if the model isn't any good. Google hasn't released a lot of information about Gemma 2 so far, so we'll have to see how it performs when developers get their hands on it. "We're already seeing some great quality. It's outperforming models two times bigger than it already," Woodward said.
