Google unveiled its latest initiative in the realm of artificial intelligence (AI) on Wednesday, introducing a series of “open” AI models dubbed Gemma. In a move akin to Meta Platforms’ recent strategy, the models are now available for external developers to use and adapt to their needs.
Under the umbrella of its parent company, Alphabet, Google announced that it is granting individuals and businesses access to its Gemma family of AI models at no cost. By providing crucial technical data, such as model weights, to the public, Google aims to foster a community where AI software can be constructed and customized by a diverse range of developers.
This strategic maneuver not only has the potential to draw software engineers toward Google’s technology but also to boost the utilization of its burgeoning cloud division, which has recently become a significant source of revenue for the tech giant. The Gemma models are tailored to run well on Google Cloud, and first-time cloud customers receive $300 in credits to sweeten the offer.
However, Google has opted to retain some control over the Gemma models, refraining from making them fully “open source.” This decision allows Google to maintain influence over usage terms and ownership rights, a move that has been met with mixed reactions from experts. While some argue that open-source AI could be susceptible to misuse, others view it as a means to democratize AI technology, enabling a broader spectrum of individuals to contribute to and benefit from its advancements.
It’s worth noting that Google’s larger, premier models, known as Gemini, have not been made openly accessible like Gemma. The Gemma models come in two sizes, with two billion and seven billion parameters, the learned variables a model uses to produce its outputs.
In comparison, Meta’s Llama 2 models boast sizes ranging from seven billion to a whopping 70 billion parameters. Google has refrained from disclosing the size of its largest Gemini models, though it’s evident that they are substantial. For context, OpenAI’s GPT-3 model, introduced in 2020, comprised a staggering 175 billion parameters.
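To put those figures in context, a model’s parameter count is simply the number of learned values it contains, most of them weights in its layers. A minimal sketch of how such counts add up, using a toy two-layer network (the layer sizes here are illustrative and have nothing to do with Gemma’s actual architecture):

```python
# Illustrative only: how parameter counts are tallied for dense layers.
# Layer sizes are made up; real models like Gemma use far larger, more
# complex architectures.

def linear_params(d_in: int, d_out: int) -> int:
    """Parameters in one dense layer: one weight per input-output pair,
    plus one bias per output."""
    return d_in * d_out + d_out

# Toy two-layer network: 512 inputs -> 1024 hidden units -> 512 outputs
total = linear_params(512, 1024) + linear_params(1024, 512)
print(total)  # 1,050,112 learned values adjusted during training
```

Scaling this tallying up across dozens of much wider layers is how models reach billions of parameters, which is why parameter count is the headline figure used to compare model sizes.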
In tandem with this announcement, chipmaker Nvidia revealed its collaboration with Google to ensure seamless integration of Gemma models with its chips. Additionally, Nvidia announced plans to adapt its chatbot software, currently under development for Windows PCs, to be compatible with Gemma, further expanding the reach and applicability of Google’s AI models.