Introducing Gemini 1.5, Google’s next-generation AI model


Introducing Gemini 1.5

By Demis Hassabis, CEO of Google DeepMind, on behalf of the Gemini team

This is an exciting time for AI. New advances in the field have the potential to make AI more helpful for billions of people over the coming years. Since introducing Gemini 1.0, we’ve been testing, refining and enhancing its capabilities.

Today, we’re announcing our next-generation model: Gemini 1.5.

Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5 more efficient to train and serve, with a new Mixture-of-Experts (MoE) architecture.
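Gemini’s MoE implementation is not public, but the general idea behind Mixture-of-Experts layers can be sketched briefly: a learned gating function routes each input to a small subset of expert sub-networks, so only a fraction of the model’s parameters are active for any given token. Below is a minimal illustrative sketch in plain NumPy, with made-up dimensions and a simple top-k softmax gate; it is not Gemini’s actual routing.

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Route a token vector to its top-k experts and combine their outputs,
    weighted by renormalized gate scores.
    x: (d,) token vector; gate_w: (d, n_experts); experts: list of callables."""
    logits = x @ gate_w                      # one gating score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the chosen experts run, which is what makes MoE cheaper to train and serve.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts, each a small linear map over an 8-dimensional vector.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
out = moe_layer(rng.normal(size=d), gate_w, experts)
print(out.shape)  # (8,)
```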

The first Gemini 1.5 model we’re releasing for early testing is Gemini 1.5 Pro. It’s a mid-size multimodal model, optimized for scaling across a wide range of tasks, and performs at a similar level to 1.0 Ultra, our largest model to date. It also introduces a breakthrough experimental feature in long-context understanding.

Gemini 1.5 Pro comes with a standard 128,000 token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via AI Studio and Vertex AI in private preview.
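For developers admitted to the preview, a long-context request looks like any other Gemini API call, just with a much larger prompt. Here is a minimal sketch using the google-generativeai Python SDK; the API key, file name and model identifier are placeholders and assumptions (check AI Studio for the exact model name available to your account), and Vertex AI uses its own client libraries.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Read a very long document (potentially hundreds of thousands of tokens)
# directly into the prompt.
with open("long_transcript.txt") as f:
    long_document = f.read()

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model identifier
response = model.generate_content(
    [long_document,
     "Summarize the key decisions made across this entire transcript."]
)
print(response.text)
```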

As we roll out the full 1 million token context window, we’re actively working on optimizations to improve latency, reduce computational requirements and enhance the user experience. We’re excited for people to try this breakthrough capability, and we share more details on future availability below.

These continued advances in our next-generation models will open up new possibilities for people, developers and enterprises to create, discover and build using AI.



