Gemma comes in two sizes — Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), which both ship with pre-trained and instruction-tuned variants and are built from the same research that informed Gemini’s development.
According to the announcement, Gemma isn’t meant to replace Gemini, Google’s competitor to GPT-4. Rather, the models are designed as lightweight alternatives suited to simpler tasks like chatbots and text summarization.
Don’t balk at the thought of a more lightweight AI just yet, though. What Gemma lacks in size, it makes up for in speed and accessibility. The company states that “Gemma 2B and 7B achieve best-in-class performance for their sizes compared to other open models,” and they can even be run directly on your laptop using Hugging Face, Kaggle, and Google’s Vertex AI.
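To give a flavor of what “run it on your laptop” looks like, here’s a minimal sketch using the Hugging Face `transformers` library. It assumes you have `transformers` and `torch` installed, have accepted Gemma’s license on Hugging Face, and are logged in via `huggingface-cli login`; the model ID and prompt are illustrative.

```python
# Minimal sketch: running Gemma locally via Hugging Face transformers.
# Assumes: `pip install transformers torch`, Gemma license accepted on
# huggingface.co, and credentials set up with `huggingface-cli login`.

MODEL_ID = "google/gemma-2b-it"  # instruction-tuned 2B variant

def summarize(text: str, max_new_tokens: int = 128) -> str:
    """Ask Gemma for a one-sentence summary of `text`."""
    # Deferred import: transformers is a heavyweight dependency and the
    # first call downloads several GB of weights.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    prompt = f"Summarize the following text in one sentence:\n{text}\n"
    result = generator(prompt, max_new_tokens=max_new_tokens)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(summarize("Google released Gemma, a pair of open models "
                    "built from the same research as Gemini."))
```

The 2B model fits comfortably in memory on a recent laptop, which is exactly the accessibility pitch Google is making here.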
Why it matters: While developers can build on Gemini through APIs, Gemma is open, meaning developers have more freedom to experiment with the same research that informed the company’s flagship product and even contribute to its direction.
Of course, this comes with its own concerns, as open models are harder to place guardrails on. So Gemma ships with “responsible AI toolkits,” and the models underwent extensive testing by “red-teamers” — experts in areas like misinformation, hate content, and bias.
Developers can use Gemma for free on Kaggle, and first-time Google Cloud users will receive $300 in credits to use the models. In the meantime, I’m placing a bet that the next Google AI will be named Gary.
This article originally appeared in the Product Hunt Daily Digest. Subscribe here.